url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/coleifer/peewee/issues/614 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/614/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/614/comments | https://api.github.com/repos/coleifer/peewee/issues/614/events | https://github.com/coleifer/peewee/issues/614 | 83,773,182 | MDU6SXNzdWU4Mzc3MzE4Mg== | 614 | Aggregate Multiple Joins to Same Table | {
"login": "arrowgamer",
"id": 779694,
"node_id": "MDQ6VXNlcjc3OTY5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/779694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arrowgamer",
"html_url": "https://github.com/arrowgamer",
"followers_url": "https://api.github.com/users/arrowgamer/followers",
"following_url": "https://api.github.com/users/arrowgamer/following{/other_user}",
"gists_url": "https://api.github.com/users/arrowgamer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arrowgamer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arrowgamer/subscriptions",
"organizations_url": "https://api.github.com/users/arrowgamer/orgs",
"repos_url": "https://api.github.com/users/arrowgamer/repos",
"events_url": "https://api.github.com/users/arrowgamer/events{/privacy}",
"received_events_url": "https://api.github.com/users/arrowgamer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I didn't realize this broke with the latest changes, that sucks should've had a unit test in place to catch it. I'll look into it, but I'm thinking maybe I need to just scrap `AggregateQueryResultWrapper` and try a rewrite with some new ideas, it's becoming a bit of an eye-sore.\n",
"Did you try `on=SomethingWithTwoDifferentThings.owner`, i.e. passing just the field object rather than using a boolean expression?\n",
"Yeah, that's what I had initially, I put what I did above because I figured it was more implicit. Both ways yield the same error:\n\n```\nError\nTraceback (most recent call last):\n File \"/Users/arrow/dev/test/peewee_test.py\", line 37, in test_load_my_stuff_please\n .aggregate_rows())[0]\n File \"/Users/arrow/dev/test/venv/lib/python2.7/site-packages/peewee.py\", line 2703, in __getitem__\n res.fill_cache(index)\n File \"/Users/arrow/dev/test/venv/lib/python2.7/site-packages/peewee.py\", line 1912, in fill_cache\n self.next()\n File \"/Users/arrow/dev/test/venv/lib/python2.7/site-packages/peewee.py\", line 1898, in next\n obj = self.iterate()\n File \"/Users/arrow/dev/test/venv/lib/python2.7/site-packages/peewee.py\", line 2197, in iterate\n instance._data[metadata.src_fk.name]]\nKeyError: 2\n```\n",
"I believe this one should now be fixed in master.\n"
] | 2015-06-02T01:07:18 | 2015-06-05T04:36:01 | 2015-06-05T04:36:01 | CONTRIBUTOR | null | I don't appear able to load a join via `aggregate_rows()` when multiple foreign keys point to the same object. This used to be possible.
Consider the below:
``` python
from peewee import *
from unittest import TestCase
db = SqliteDatabase(':memory:')
class Thing(Model):
name = CharField()
class Meta:
database = db
class SomethingWithTwoDifferentThings(Model):
name = CharField()
creator = ForeignKeyField(Thing, related_name='creations')
owner = ForeignKeyField(Thing, related_name='owned_stuff')
class Meta:
database = db
class EagerLoadingIsHard(TestCase):
@classmethod
def setUpClass(cls):
SomethingWithTwoDifferentThings.create_table()
Thing.create_table()
thing1 = Thing.create(name='Thing 1')
thing2 = Thing.create(name='Thing 2')
SomethingWithTwoDifferentThings.create(name='Something', owner=thing1, creator=thing2)
def test_load_my_stuff_please(self):
something = (SomethingWithTwoDifferentThings
.select(SomethingWithTwoDifferentThings, Thing)
.join(Thing, on=(SomethingWithTwoDifferentThings.owner == Thing.id).alias('owner'))
.where(SomethingWithTwoDifferentThings.name == 'Something')
.aggregate_rows())[0]
self.assertIn('owner', something._obj_cache)
self.assertEqual(something._obj_cache['owner'].name, 'Thing 1')
```
I'm able to load the creator via simple `.join(Thing)`, though there doesn't appear to be a way to load the owner.
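For illustration, a hedged sketch (not from the original report) of how the two joins might be spelled with a model alias so each foreign key gets its own join — whether `aggregate_rows()` can reconstruct both relations this way is exactly what is at issue here:
``` python
# Hypothetical sketch only: alias the second Thing join; .switch() returns the
# join context to the left-hand model before joining again.
Owner = Thing.alias()
query = (SomethingWithTwoDifferentThings
         .select(SomethingWithTwoDifferentThings, Thing, Owner)
         .join(Thing, on=(SomethingWithTwoDifferentThings.creator == Thing.id).alias('creator'))
         .switch(SomethingWithTwoDifferentThings)
         .join(Owner, on=(SomethingWithTwoDifferentThings.owner == Owner.id).alias('owner'))
         .aggregate_rows())
```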
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/614/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/613 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/613/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/613/comments | https://api.github.com/repos/coleifer/peewee/issues/613/events | https://github.com/coleifer/peewee/issues/613 | 83,756,855 | MDU6SXNzdWU4Mzc1Njg1NQ== | 613 | Aggregate Self-Referencing Joins via Child to Parent | {
"login": "arrowgamer",
"id": 779694,
"node_id": "MDQ6VXNlcjc3OTY5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/779694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arrowgamer",
"html_url": "https://github.com/arrowgamer",
"followers_url": "https://api.github.com/users/arrowgamer/followers",
"following_url": "https://api.github.com/users/arrowgamer/following{/other_user}",
"gists_url": "https://api.github.com/users/arrowgamer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arrowgamer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arrowgamer/subscriptions",
"organizations_url": "https://api.github.com/users/arrowgamer/orgs",
"repos_url": "https://api.github.com/users/arrowgamer/repos",
"events_url": "https://api.github.com/users/arrowgamer/events{/privacy}",
"received_events_url": "https://api.github.com/users/arrowgamer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Shoot...be great if in the future the test-cases could use models in the existing tests, as much as I enjoy your 'bro' models lol.\n\nHave you tried using `prefetch` instead of `aggregate_rows()`? I think that there are enough complexities to `aggregate_rows()` that making it a general-purpose does-everything kind of method will require some rethinking from the ground-up.\n",
"So this case is ambiguous and when using `aggregate_rows()` peewee assumes that you want the 1->N relationship, i.e. the category and all it's children, rather than the category and it's parent category.\n\nThat's kind of the point of using aggregate_rows() and is the reason I added this assumption. If you have a self-reference and want to get the parent, you can always write:\n\n``` python\n Grandparent = Category.alias()\n Parent = Category.alias()\n sq = (Category\n .select(Category, Parent, Grandparent)\n .join(Parent, on=(Category.parent == Parent.id))\n .join(Grandparent, on=(Parent.parent == Grandparent.id))\n .where(Grandparent.name == 'old granddaddy')\n .order_by(Category.name))\n for cat in sq:\n print cat.name, cat.parent.name, cat.parent.parent.name\n```\n",
"I can see how this would be problematic if you have multiple joins though, i.e. you want to select the category and it's parent, but also some other objects that have a foreign key to category.\n",
"In my actual use case I am eagerly loading a table that joins to a few other tables including one table that references itself, twice, as well as joining those self references with another table.\n\nSo it's a pretty complex query, haha. Loading data from several tables to be serialized to json is basically the crux of the project, though, and so `aggregate_rows()` was an important factor in choosing to use peewee =(\n\nThanks for the suggestion, though!\n",
"Yeah, it's just a limitation right now of `aggregate_rows()` that when you have a self-referential join it will assume you want the 1->N relationship rather than the parent object. Have you tried using `prefetch()`? I feel like I've asked you that before a couple times.\n"
] | 2015-06-02T00:15:50 | 2015-06-04T23:26:56 | 2015-06-04T23:26:56 | CONTRIBUTOR | null | Hi there. I tested out your changes in version 2.6.1, and while we're now able to use aggregate_rows() to load a record with its children via a self-referencing join, loading a child with its parent does not seem possible.
Below is the code I'm using to test. The first test now passes. The second test does not pass due to ModelAlias being passed to rel_for_model, though altering the code to pass the alias' model_class causes the first test to fail.
Do you plan on fixing this issue? Is there any workaround?
``` python
from peewee import *
from unittest import TestCase
db = SqliteDatabase(':memory:')
class SomethingThatReferencesItself(Model):
name = CharField()
bro = ForeignKeyField('self', null=True, related_name='bros')
class Meta:
database = db
class EagerLoadSelfReferencingJoinPleaseYesOkay(TestCase):
@classmethod
def setUpClass(cls):
SomethingThatReferencesItself.create_table()
bob = SomethingThatReferencesItself.create(name='bob')
SomethingThatReferencesItself.create(name='joe', bro=bob)
SomethingThatReferencesItself.create(name='rofl guy', bro=bob)
def test_get_bros(self):
Bro = SomethingThatReferencesItself.alias()
bob = (SomethingThatReferencesItself
.select(SomethingThatReferencesItself, Bro)
.join(Bro, on=(SomethingThatReferencesItself.id == Bro.bro).alias('bros'))
.where(SomethingThatReferencesItself.name == 'bob')
.aggregate_rows())[0]
self.assertIn('bros', bob._meta.reverse_rel.keys())
self.assertIsInstance(bob.bros, list)
self.assertEqual(len(bob.bros), 2)
def test_get_aggregate_bro(self):
Bro = SomethingThatReferencesItself.alias()
joe = (SomethingThatReferencesItself
.select(SomethingThatReferencesItself, Bro)
.join(Bro, on=(SomethingThatReferencesItself.bro == Bro.id).alias('bro'))
.where(SomethingThatReferencesItself.name == 'joe')
.aggregate_rows())[0]
self.assertIn('bro', joe._obj_cache)
self.assertEqual(joe._obj_cache['bro'].name, 'bob')
```
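As an aside, a hedged sketch of the `prefetch()` alternative the maintainer suggests in the comments on this issue — it issues one query per model rather than a single self-join; the `*_prefetch` attribute name assumed below is the peewee 2.x convention:
``` python
# Sketch under the models above: prefetch the children instead of joining.
parents = (SomethingThatReferencesItself
           .select()
           .where(SomethingThatReferencesItself.name == 'bob'))
children = SomethingThatReferencesItself.select()
for parent in prefetch(parents, children):
    for bro in parent.bros_prefetch:
        print(parent.name + ' -> ' + bro.name)
```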
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/613/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/612 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/612/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/612/comments | https://api.github.com/repos/coleifer/peewee/issues/612/events | https://github.com/coleifer/peewee/issues/612 | 82,918,535 | MDU6SXNzdWU4MjkxODUzNQ== | 612 | Allow for using PEP8 compliant comparisons in boolean Fields | {
"login": "ezk84",
"id": 1305919,
"node_id": "MDQ6VXNlcjEzMDU5MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1305919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ezk84",
"html_url": "https://github.com/ezk84",
"followers_url": "https://api.github.com/users/ezk84/followers",
"following_url": "https://api.github.com/users/ezk84/following{/other_user}",
"gists_url": "https://api.github.com/users/ezk84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ezk84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ezk84/subscriptions",
"organizations_url": "https://api.github.com/users/ezk84/orgs",
"repos_url": "https://api.github.com/users/ezk84/repos",
"events_url": "https://api.github.com/users/ezk84/events{/privacy}",
"received_events_url": "https://api.github.com/users/ezk84/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"There is no way to override the `is` lookup, so this won't work. Same thing for `not`, `and` and `or`.\n",
"@coleifer Would adding something like `.equals(value)` or `.bool(value)` (to http://docs.peewee-orm.com/en/latest/peewee/querying.html#query-operators) be out of the question?",
"I came across this issue because `autopep8` was breaking my queries to be pep8 compliant. For instance, my original query\r\n\r\n```python\r\nprofile_image = Image.select().where(\r\n (Image.organization == org_id) &\r\n (Image.user == None) &\r\n (Image.id == image_id)\r\n).get()\r\n```\r\nwas having its `== None` replaced with `is None`, becoming\r\n\r\n```python\r\nprofile_image = Image.select().where(\r\n (Image.organization == org_id) &\r\n (Image.user is None) &\r\n (Image.id == image_id)\r\n).get()\r\n```\r\nwhich led to a `peewee.DoesNotExist` error. I'm not sure if something like @tuukkamustonen's suggestion was ever implemented (I didn't find it), but <strike> an alternative would be to define a constant like\r\n\r\n\r\n```python\r\npeewee.NULL = None\r\n```\r\nso comparison to null values can be compared with `==` in a `pep8` compliant way.\r\n\r\nFor instance,\r\n\r\n```python\r\nprofile_image = Image.select().where(\r\n (Image.organization == org_id) &\r\n (Image.user == peewee.NULL) &\r\n (Image.id == image_id)\r\n).get()\r\n```\r\n\r\n</strike>\r\n\r\n**Update:** use @coleifer's solution below. \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"To do null testing, use:\r\n\r\n```python\r\nImage.user.is_null(True)\r\n```\r\n\r\nOr\r\n\r\n```python\r\nImage.user.is_null(False) # IS NOT NULL\r\n```\r\n\r\nWhich is important because `IS NOT NULL` has different semantics than `NOT (... IS NULL)`."
] | 2015-05-30T21:33:12 | 2019-03-20T18:34:05 | 2015-05-31T21:38:05 | NONE | null | Current implementation necessitates this sort of syntax:
``` python
Model.select().where(Model.field == True)
```
This raises an `E712 comparison to True should be 'if cond is True:' or 'if cond:'` warning, and similarly for comparisons to `False`.
Would be nice to be able to do:
``` python
Model.select().where(Model.field is True)
```
or even perhaps:
``` python
Model.select().where(not Model.deleted)
```
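For what it's worth, a hedged sketch of linter-friendly spellings that should already work with the existing API (nothing new is assumed beyond a boolean field being usable as an expression and `~` negating a node):
``` python
Model.select().where(Model.field)           # the bare field as a boolean predicate
Model.select().where(~Model.deleted)        # ~ compiles to SQL NOT
Model.select().where(Model.field == True)   # noqa: E712 -- or just silence the linter
```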
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/612/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/612/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/611 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/611/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/611/comments | https://api.github.com/repos/coleifer/peewee/issues/611/events | https://github.com/coleifer/peewee/issues/611 | 81,672,075 | MDU6SXNzdWU4MTY3MjA3NQ== | 611 | Allow the use of connection pooling from within database url | {
"login": "kylef",
"id": 44164,
"node_id": "MDQ6VXNlcjQ0MTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/44164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylef",
"html_url": "https://github.com/kylef",
"followers_url": "https://api.github.com/users/kylef/followers",
"following_url": "https://api.github.com/users/kylef/following{/other_user}",
"gists_url": "https://api.github.com/users/kylef/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylef/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylef/subscriptions",
"organizations_url": "https://api.github.com/users/kylef/orgs",
"repos_url": "https://api.github.com/users/kylef/repos",
"events_url": "https://api.github.com/users/kylef/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylef/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Well, actually if you look at the docs, this is now supported. I don't believe I've pushed those changes to PyPI yet, but they will be included in the next release.\n\n\n",
"Oh nice @coleifer, making this issue has been on my todo list for months since you asked me to on IRC.\n"
] | 2015-05-27T23:35:25 | 2015-05-27T23:57:48 | 2015-05-27T23:46:53 | NONE | null | It would be nice if the [database URL](http://peewee.readthedocs.org/en/latest/peewee/playhouse.html#db-url) API would support connection pooling.
For example, an argument to the connect function which allows you to use pooling if available. `connect('uri', use_pooling=True)`
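A hedged sketch of the scheme-based form the maintainer notes is already in the docs (a `+pool` scheme suffix plus query-string options, rather than a `use_pooling` flag):
``` python
from playhouse.db_url import connect

# Pool options are passed through the query string.
db = connect('mysql+pool://user:passwd@localhost/my_db'
             '?max_connections=20&stale_timeout=300')
```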
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/611/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/610 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/610/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/610/comments | https://api.github.com/repos/coleifer/peewee/issues/610/events | https://github.com/coleifer/peewee/pull/610 | 79,709,388 | MDExOlB1bGxSZXF1ZXN0MzYwODE1MTM= | 610 | Added id accessor to foreign relations, similar to Django. | {
"login": "tals",
"id": 761863,
"node_id": "MDQ6VXNlcjc2MTg2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/761863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tals",
"html_url": "https://github.com/tals",
"followers_url": "https://api.github.com/users/tals/followers",
"following_url": "https://api.github.com/users/tals/following{/other_user}",
"gists_url": "https://api.github.com/users/tals/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tals/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tals/subscriptions",
"organizations_url": "https://api.github.com/users/tals/orgs",
"repos_url": "https://api.github.com/users/tals/repos",
"events_url": "https://api.github.com/users/tals/events{/privacy}",
"received_events_url": "https://api.github.com/users/tals/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Well, I originally didn't want to add this, but thinking about it, enough people have requested this that maybe it'd be a good idea. Thanks I'll go ahead and add some tests.\n"
] | 2015-05-23T03:28:30 | 2015-05-24T17:03:25 | 2015-05-24T17:03:24 | CONTRIBUTOR | null | Models with a ForeignKeyField instance now receive an additional attribute - attr_name_id. This allows you to access the foreign id without bringing in the entire model.
It's currently missing tests, but I figured I should see if you're interested in this first :)
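A hedged usage sketch of what the new accessor enables (model names are illustrative, borrowed from #609):
``` python
# post.author still resolves the related row via the descriptor (and may hit
# the database); post.author_id is just the stored key, with no extra query.
post = Post.get()
author_pk = post.author_id
```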
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/610/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/610",
"html_url": "https://github.com/coleifer/peewee/pull/610",
"diff_url": "https://github.com/coleifer/peewee/pull/610.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/610.patch",
"merged_at": "2015-05-24T17:03:24"
} |
https://api.github.com/repos/coleifer/peewee/issues/609 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/609/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/609/comments | https://api.github.com/repos/coleifer/peewee/issues/609/events | https://github.com/coleifer/peewee/issues/609 | 79,692,832 | MDU6SXNzdWU3OTY5MjgzMg== | 609 | Getting just the ID of a foreign key requires full object | {
"login": "tals",
"id": 761863,
"node_id": "MDQ6VXNlcjc2MTg2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/761863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tals",
"html_url": "https://github.com/tals",
"followers_url": "https://api.github.com/users/tals/followers",
"following_url": "https://api.github.com/users/tals/following{/other_user}",
"gists_url": "https://api.github.com/users/tals/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tals/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tals/subscriptions",
"organizations_url": "https://api.github.com/users/tals/orgs",
"repos_url": "https://api.github.com/users/tals/repos",
"events_url": "https://api.github.com/users/tals/events{/privacy}",
"received_events_url": "https://api.github.com/users/tals/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Poking around the code, it's possible to get something like the Django flavor by having a new descriptor that just returns `_data[self.att_name]` and add it to the object during the `_new_()` call of `ForeignKeyField`\n",
"This query will join `Post` and `User` and eagerly fetch the post's associated user object.\n\n``` python\nall_posts_with_user = Post.select(Post, User).join(User).execute()\n```\n\nThe other query you shared below will _not_ eagerly fetch the user, and will trigger a query when you attempt to access the `post.user` attribute.\n\n``` python\nall_posts = Post.select().execute()\n```\n\nAs you noted in your second comment, you can retrieve the related object's ID by looking at `post._data['user']`. Rather than modifying the code to add a new attribute, you can either subclass `ForeignKeyField` with your own implementation or write a `property` on your model class that will lookup the ID, e.g.\n\n``` python\nclass Post(Model):\n # ... fields ...\n @property\n def author_id(self):\n return self._data['author']\n```\n",
"Yeah, eagerly was the wrong word to use - I meant that it fetched the entire object before it had to :)\n\nAnyway, thanks for merging it in! :)\n"
] | 2015-05-23T02:19:04 | 2015-05-24T19:45:57 | 2015-05-24T17:02:05 | CONTRIBUTOR | null | Hey there,
For illustrative purposes, lets use the following model:
``` python
class User(Model):
name = CharField(max_length=50)
class Post(Model):
author = ForeignKeyField(User)
text = TextField()
all_posts_with_user = Post.select(Post, User).join(User).execute()
all_posts = Post.select().execute()
```
The issue I am seeing is that on `all_posts`, Peewee will do an eager lookup in the database as soon as `__getattr__` is hit on the author field's `RelationDescriptor`, which is unnecessary when you just want to work with the related object's id (like when doing any sort of client-side aggregate).
Django solves this by having an additional attribute of `modelname_id`. In our case, `Post` will have a `author_id` attribute in addition to the `author` attribute.
Would be nice to have something similar in Peewee :)
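In the meantime, a hedged sketch of the property-based workaround described in the maintainer's reply on this issue (reading the raw key out of `_data` instead of going through the relation descriptor):
``` python
class Post(Model):
    author = ForeignKeyField(User)
    text = TextField()

    @property
    def author_id(self):
        # Raw foreign-key value as stored on the row; no query is issued.
        return self._data['author']
```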
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/609/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/608 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/608/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/608/comments | https://api.github.com/repos/coleifer/peewee/issues/608/events | https://github.com/coleifer/peewee/issues/608 | 78,940,712 | MDU6SXNzdWU3ODk0MDcxMg== | 608 | Some issues with connection pooling | {
"login": "arnuschky",
"id": 179920,
"node_id": "MDQ6VXNlcjE3OTkyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/179920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnuschky",
"html_url": "https://github.com/arnuschky",
"followers_url": "https://api.github.com/users/arnuschky/followers",
"following_url": "https://api.github.com/users/arnuschky/following{/other_user}",
"gists_url": "https://api.github.com/users/arnuschky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnuschky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnuschky/subscriptions",
"organizations_url": "https://api.github.com/users/arnuschky/orgs",
"repos_url": "https://api.github.com/users/arnuschky/repos",
"events_url": "https://api.github.com/users/arnuschky/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnuschky/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"`lsof` confirms that the app opens only a single connection to the database.\n",
"This is getting stranger still. I have a different connection for every greenlet, and the low-level mysql code seems to create a new connection object to the database for each instance (as expected). However, `lsof` is showing only a single connection. \n\nIf I manually open two connections via `mysql.connect` using the python interpreter, `lsof` shows two connections.\n\nWhat am I missing here?\n\nBTW, we're running with `threadlocals=True`\n",
"> If I read the code correctly, it's more of a maximum age after which a connection is closed no matter its usage, right?\n\nYes.\n\n> multi-greenlet application\n\nI have to think this is the issue / cause. Can you share the relevant parts of your code? Are you calling `gevent.monkey.patch_all()`, or are you being more selective?\n",
"I'm just using gevent-socketio within gunicorn (both most recent). Gunicorn's gevent worker does the monkey patching: https://github.com/benoitc/gunicorn/blob/master/gunicorn/workers/ggevent.py#L61\n",
"Ahh, actually I think this is the problem. In your code, the peewee module's instance of the `threading` module may not be patched at the time it is imported. This means that peewee's `Database` classes will use a `threading.local` instead of a _greenlet local_.\n\nTo fix this, in your WSGI script or the entry point of your application, at the very top of the module, first import, try adding:\n\n``` python\nfrom gevent import monkey; monkey.patch_all()\n```\n\nLet me know if that fixes the issue.\n",
"I've worked many hours on playhouse. It needs many cares to continue\ndeveloping. Close all connection does not work correctly because of Id of\nresource not representing original mysql connection. To see MySQL\nconnections run command \"show full processlist;\"\nCertainly some connections are created with pool but are not closed with\nit. It means some connections stay in _in_use or don't close correctly! See\nsleep times in process list of MySQL. I'm using gevent pool for some tasks\nwith peewee and this is my observation. Configure your MySQL to close sleep\nconnections with more than some certain time .e.g 200 seconds. Playhouse\npicks up one connection from pool for each thread and if no connection is\navailable new connection will be created.\nOn May 21, 2015 6:31 PM, \"Charles Leifer\" [email protected] wrote:\n\n> Ahh, actually I think this is the problem. In your code, the peewee\n> module's instance of the threading module may not be patched at the time\n> it is imported. This means that peewee's Database classes will use a\n> threading.local instead of a _greenlet local_.\n> \n> To fix this, in your WSGI script or the entry point of your application,\n> at the very top of the module, first import, try adding:\n> \n> from gevent import monkey; monkey.patch_all()\n> \n> Let me know if that fixes the issue.\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/coleifer/peewee/issues/608#issuecomment-104286302.\n",
"Can't do, if I add `patch_all` to the WSGI script, the worker doesn't start at all with the most recent gunicorn 19.x. (We had `patch_all` in our WSGI script in a previous version.)\n",
"@arnuschky maybe `patch_thread` then instead?\n",
"@emamirazavi, sorry to hear you're having issues with the pool module. The module is unit-tested extensively (378 lines of tests compared to 185 of the module itself). It is also quite simple from an implementation standpoint. I'm inclined to believe that these bugs may be at the application level and not in peewee or the pool (especially because you mentioned using gevent and that requires some care to get right). If you can provide me a test-case showing otherwise that would be great.\n",
"@coleifer, thank you, that did the trick! It's now creating multiple real connections to the db.\n\nIt remains that the `_in_use` dictionary never shrinks, but this might be now due to my code. Peewee is creating many connections for a single socket of gevent-socketio: one for the request, one for the main greenlet, and then one for each greenlet that I launch. My problem is that I have troubles terminating all of them properly once the socket closes.\n\nIs it somehow possible to share a database connection among several greenlets, even when running with threadlocals? This all-or-nothing sharing is a bit limiting in my case...\n",
"I can confirm that connections are now properly removed from `_in_use` on close; it's just the combination of Peewee, gevent-socketio, and my code that causes only a single connection to be properly closed while the others linger.\n\nTo repeat my question above: is it somehow possible to share a database connection among several greenlets, even when running with threadlocals? This all-or-nothing sharing is a bit limiting in my case.\n",
"> Is it somehow possible to share a database connection among several greenlets, even when running with threadlocals? This all-or-nothing sharing is a bit limiting in my case...\n\nNot easily, no, though I'm not sure why you'd want to. The main greenlet really shouldn't need a connection unless of course you're making queries from it. I obviously don't know anything about your code or your app, so it's hard for me to speculate.\n\n> It remains that the _in_use dictionary never shrinks, but this might be now due to my code.\n\nThe `_in_use` should grow when you open a connection and shrink when you close one. The fact that you're using socketio may have something to do with the odd behavior, but I really don't know.\n\nI really like gevent, but unfortunately you need to know something of the implementation to reason about its behavior.\n\nI'm closing this issue as invalid, since the problem was monkey-patches not being applied correctly during import time, and this isn't an issue with peewee.\n",
"> To repeat my question above: is it somehow possible to share a database connection among several greenlets, even when running with threadlocals? This all-or-nothing sharing is a bit limiting in my case.\n\nAgain, I'm not sure how your app works, or when you are opening/closing connections, or the interaction between the socketio greenlets and the rest of the app.\n\nIn a typical web app (using gevent or otherwise), peewee stores the conn on a threadlocal (or greenlet-local). Additionally, typical web apps will spawn a thread or greenlet to handle each request. So request comes in to server, server spawns thread/greenlet, app opens connection and generates HTTP response, app closes DB connection after sending response, server shuts down greenlet. Socketio adds another layer of complexity and I'm just not familiar with it.\n",
"Ok, thank you for your answers. \n\nTo rephrase my last question more precisely: currently, when using threadlocals, Peewee automatically creates a new connection when I call `connect()` on my Peewee db object. So whenever I use Peewee with multiple greenlets, Peewee always creates a new connection for each greenlet. To get around this, my only choice is to pass around an already connected instance of my database object, right?\n",
"> currently, when using threadlocals, Peewee automatically creates a new connection when I call connect()\n\nYes, that is the correct behavior. If you are using a pool, then the connections will be managed by the pool, but the drift of your statement is correct. If you're using the pool, only `max_connections` will be opened.\n\n> To get around this, my only choice is to pass around an already connected instance of my database object, right?\n\nWait, I thought you were using the connection pool because you _wanted_ multiple connections? I'm confused... What exactly do you want?\n",
"Every thread has its own connection. You must have one connection for each\nthread. Thread local brings you this ability. Each time you call connect in\npool mode, one ready connection from pool will be returned to you or if no\nconnection exists, new connection will be created and will be returned.\nOn May 21, 2015 10:17 PM, \"Charles Leifer\" [email protected] wrote:\n\n> currently, when using threadlocals, Peewee automatically creates a new\n> connection when I call connect()\n> \n> Yes, that is the correct behavior. If you are using a pool, then the\n> connections will be managed by the pool, but the drift of your statement is\n> correct. If you're using the pool, only max_connections will be opened.\n> \n> To get around this, my only choice is to pass around an already connected\n> instance of my database object, right?\n> \n> Wait, I thought you were using the connection pool because you _wanted_\n> multiple connections? I'm confused... What exactly do you want?\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/coleifer/peewee/issues/608#issuecomment-104369523.\n",
"I'm using Flask, Gunicorn and Gevent thread pool. I'm just closing\nconnection on flask tears down, because pooling automatically open one\nconnection. My problem is that sometimes one connection in pool stay with\nno assignment and count of this [sleep] connections up to 150 after a day!\nConnection in playhouse.pool just is saved in thread local, and in use or\nconnections properties don't know MySQL connection resource to handle\nconnection correctly. I should tell that this observation read after\ninstalling Gunicorn and running my app as a preloaded application. Any\nideas?\n",
"> You must have one connection for each\n> thread.\n\nIf you specify `threadlocals=False` when initializing your database then this is not true, one conn will be shared.\n",
"What I want is basically this: each socket worker (which is a greenlet) has a single connection which is shared with its sub-greenlets.\n\nWhat I have is that my app sets up a new db connection before the request, as usual. However, this happens in the main gunicorn worker and not the socket worker greenlet. Hence, there's a new connection for the socket greenlet, which starts a few sub-greenlets that each get a new db instance as well. Closing connections later is even more messy depending on whether there was an exception and in which greenlet it occurred.\n\nThe first problem with the app creating a new connection is easy enough to fix but the second is harder: how can the sub-greenlets share a single connection with their parent, the socket greenlet?\n",
"I think you will have to write your own connection management code in this case, since it's rather unique.\n",
"It seems so. :) Thanks for all your help!\n"
] | 2015-05-21T10:50:33 | 2015-05-21T20:43:34 | 2015-05-21T17:33:00 | NONE | null | Using latest peewee with a pooled mysql db (I know, sorry... ;) ) It's a single-thread multi-greenlet application.
What we see is that the size of the connections `inuse` monotonically increases, until it hits `max_connections`. Still debugging that right now...
`stale_timeout` is somewhat of a misnomer, right? If I read the code correctly, it's more of a maximum age after which a connection is closed no matter its usage, right?
A curious observation is that the pool seems to use multiple connections to the database but the database reports only a single connection.
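A hedged sketch of the fix that resolved this thread — monkey-patching before peewee is imported, so its thread-locals become greenlet-locals (connection parameters below are illustrative):
``` python
# Must run before peewee/playhouse are imported anywhere.
from gevent import monkey; monkey.patch_all()

from playhouse.pool import PooledMySQLDatabase

db = PooledMySQLDatabase('my_db', max_connections=20, stale_timeout=300,
                         user='root', passwd='secret')
```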
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/608/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/607 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/607/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/607/comments | https://api.github.com/repos/coleifer/peewee/issues/607/events | https://github.com/coleifer/peewee/pull/607 | 78,610,050 | MDExOlB1bGxSZXF1ZXN0MzU4NTQ0MzA= | 607 | Introspector doing too much work | {
"login": "pilate",
"id": 131484,
"node_id": "MDQ6VXNlcjEzMTQ4NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/131484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pilate",
"html_url": "https://github.com/pilate",
"followers_url": "https://api.github.com/users/pilate/followers",
"following_url": "https://api.github.com/users/pilate/following{/other_user}",
"gists_url": "https://api.github.com/users/pilate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pilate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pilate/subscriptions",
"organizations_url": "https://api.github.com/users/pilate/orgs",
"repos_url": "https://api.github.com/users/pilate/repos",
"events_url": "https://api.github.com/users/pilate/events{/privacy}",
"received_events_url": "https://api.github.com/users/pilate/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Nice catch, thanks!\n"
] | 2015-05-20T17:08:39 | 2015-05-20T17:39:10 | 2015-05-20T17:39:10 | CONTRIBUTOR | null | Tables from arguments aren't being passed to introspector.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/607/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/607",
"html_url": "https://github.com/coleifer/peewee/pull/607",
"diff_url": "https://github.com/coleifer/peewee/pull/607.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/607.patch",
"merged_at": "2015-05-20T17:39:10"
} |
https://api.github.com/repos/coleifer/peewee/issues/606 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/606/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/606/comments | https://api.github.com/repos/coleifer/peewee/issues/606/events | https://github.com/coleifer/peewee/pull/606 | 77,164,297 | MDExOlB1bGxSZXF1ZXN0MzU1Nzk0MDI= | 606 | Self-referencing joins with aggregate_rows | {
"login": "arrowgamer",
"id": 779694,
"node_id": "MDQ6VXNlcjc3OTY5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/779694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arrowgamer",
"html_url": "https://github.com/arrowgamer",
"followers_url": "https://api.github.com/users/arrowgamer/followers",
"following_url": "https://api.github.com/users/arrowgamer/following{/other_user}",
"gists_url": "https://api.github.com/users/arrowgamer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arrowgamer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arrowgamer/subscriptions",
"organizations_url": "https://api.github.com/users/arrowgamer/orgs",
"repos_url": "https://api.github.com/users/arrowgamer/repos",
"events_url": "https://api.github.com/users/arrowgamer/events{/privacy}",
"received_events_url": "https://api.github.com/users/arrowgamer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Can you provide a test-case that shows the broken behavior?\n",
"``` python\nfrom peewee import *\nfrom unittest import TestCase\n\ndb = SqliteDatabase(':memory:')\n\n\nclass SomethingThatReferencesItself(Model):\n name = CharField()\n bro = ForeignKeyField('self', null=True, related_name='bros')\n\n class Meta:\n database = db\n\n\nclass EagerLoadSelfReferencingJoinPleaseYesOkay(TestCase):\n @classmethod\n def setUpClass(cls):\n SomethingThatReferencesItself.create_table()\n bob = SomethingThatReferencesItself.create(name='bob')\n SomethingThatReferencesItself.create(name='joe', bro=bob)\n SomethingThatReferencesItself.create(name='rofl guy', bro=bob)\n\n def test_get_aggregate_bro(self):\n Bro = SomethingThatReferencesItself.alias()\n\n joe = (SomethingThatReferencesItself\n .select(SomethingThatReferencesItself, Bro)\n .join(Bro, on=(SomethingThatReferencesItself.bro == Bro.id).alias('bro'))\n .where(SomethingThatReferencesItself.name == 'joe')\n .aggregate_rows()\n .get())\n\n self.assertIn('bro', joe._obj_cache)\n self.assertEqual(joe._obj_cache['bro'].name, 'bob')\n```\n\nI'm using Model._obj_cache to verify that the data was eagerly loaded. Because ModelOptions.rel_for_model does not support joining via ModelAlias, this will not work.\n\nI'm not aware of a way to eagerly load a parent with its children using a self-referencing join, for example, loading the above bob with his bros, but you're the master so perhaps you'd know how to write/test that.\n",
"Hmm, I think the fix for this may be more involved. I'm seeing some issues when self-references are queried using either `prefetch` or `aggregate_rows`. λ\n",
"Added support for self-joins in `prefetch` in 499bf2e81e66173eb4c50c5436ec43e88cbeb96b. Just need to get `aggregate_rows()` fixed now.\n",
"Awesome work! Looking forward to the next release! As you can tell, I use `aggregate_rows` a lot =P\n",
"@arrowgamer not quite fixed yet, still need to get the `aggregate_rows()` bit working. Also just FWIW don't mix calls to `aggregate_rows()` with `.get()`.\n",
"@coleifer Yes, I saw your commit was only for `prefetch`. I assume by next release you'll have it wrapped up? :laughing: And I'm aware of what `.get()` does to `aggregate_rows()` via `LIMIT`. Normally I use `SelectQuery[0]` in place of it when expecting more than one result returned from a join. Is there another way?\n",
"So from e5ce2bdf71f19bd5d4f8ab87b461bc23dea49da1 I think that the basic use-case should now work. Honestly due to the complexity of the `aggregate_rows()` implementation, plus mixing in `ModelAlias`, I don't know how much more I can do here.\n",
"So what would be the syntax of using `aggregate_rows()` to load a parent with its children or, using the above example, load a record with its bros?\n\nEdit: Nevermind, took a look at your testcase =) Thanks!\n",
"The example test case shows, but say you have `Category` which has a foreign key like so:\n\n``` python\nclass Category(Model):\n name = CharField()\n parent = ForeignKeyField('self', related_name='children')\n```\n\nYou would write:\n\n``` python\nChild = Category.alias()\nquery = (Category\n .select(Category, Child)\n .join(Child, JOIN.LEFT_OUTER, on=(Category.id == Child.parent).alias('kids'))\n .order_by(Category.id, Child.id)\n .aggregate_rows())\nfor category in query:\n print category.name\n for child in category.kids:\n print ' -', child.name\n```\n\nA caveat is that you must specify an alias in the join condition.\n",
"Thanks a bunch for the explanation! Once released, I'll report any issues if I find any, as I'm already using aggregated self-referencing joins.\n"
] | 2015-05-17T00:09:37 | 2015-05-20T02:10:38 | 2015-05-20T01:50:35 | CONTRIBUTOR | null | Hi again. I was trying to use aggregate_rows to eagerly load a model with a self referencing join, and I found that I could not do so unless the rel_for_model function was altered as per this pull request.
The reason is that rel_for_model doesn't appear to support joins via ModelAlias.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/606/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/606",
"html_url": "https://github.com/coleifer/peewee/pull/606",
"diff_url": "https://github.com/coleifer/peewee/pull/606.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/606.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/605 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/605/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/605/comments | https://api.github.com/repos/coleifer/peewee/issues/605/events | https://github.com/coleifer/peewee/issues/605 | 76,887,260 | MDU6SXNzdWU3Njg4NzI2MA== | 605 | Request for support for complex/null operators | {
"login": "johnboy2",
"id": 5591205,
"node_id": "MDQ6VXNlcjU1OTEyMDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5591205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnboy2",
"html_url": "https://github.com/johnboy2",
"followers_url": "https://api.github.com/users/johnboy2/followers",
"following_url": "https://api.github.com/users/johnboy2/following{/other_user}",
"gists_url": "https://api.github.com/users/johnboy2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnboy2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnboy2/subscriptions",
"organizations_url": "https://api.github.com/users/johnboy2/orgs",
"repos_url": "https://api.github.com/users/johnboy2/repos",
"events_url": "https://api.github.com/users/johnboy2/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnboy2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hmm...while I can see the benefit of allowing operations to map to callables, I do not have plans to add that currently. You can always subclass `QueryCompiler` and provide your own implementation for `_parse_expression` that does the right thing.\n"
] | 2015-05-15T23:11:32 | 2015-05-16T20:33:30 | 2015-05-16T20:33:30 | NONE | null | As requested in the online documentation, I am proposing a feature change for peewee, as described below.
While looking at subclassing peewee.Database for a project, I quickly realized that some operations that work on PostgreSQL or SQLite have no _direct_ analogue on some platforms I'm looking to use it with. Usually, that has been because peewee expects comparison clauses to always be expressed using _infix_ semantics (e.g. _<left_operand> <operator> <right_operand>_), which isn't always possible.
Similarly, if a given operation cannot be supported at all (e.g. regex matching), there is no way to _forbid_ use of the corresponding peewee expressions. I could let those operations fail in the database (leading to troubleshooting based on DBMS error text), but that's nowhere near as useful as a clearly-worded exception raised by peewee itself.
To address these, I want to modify `QueryCompiler()._parse_expression()` so that _overridden_ operators don't necessarily have to be strings, but can also be a callable or `None`. Use of `None` would indicate an unsupported operator whose use raises an exception, while a callable would be expected to assemble and return whatever complex clause is needed to get the job done.
Here is an example that would become possible with the proposed change:
``` python
class MyCustomDB(peewee.Database):
op_overrides = {
peewee.OP.REGEXP: None, # Raise ValueError if somebody tries this
peewee.OP.LIKE: lambda l, r: '%s LIKE %s COLLATE Latin1_General_BIN' % (l, r),
peewee.OP.ILIKE: 'LIKE',
peewee.OP.XOR: lambda l, r: '((%s AND NOT %s) OR (NOT %s AND %s))' % (l, r, l, r),
}
# ...
```
Thoughts?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/605/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/604 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/604/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/604/comments | https://api.github.com/repos/coleifer/peewee/issues/604/events | https://github.com/coleifer/peewee/issues/604 | 75,883,018 | MDU6SXNzdWU3NTg4MzAxOA== | 604 | Connection pooling: re-running failed query in sql_error_handler | {
"login": "tuukkamustonen",
"id": 94327,
"node_id": "MDQ6VXNlcjk0MzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/94327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuukkamustonen",
"html_url": "https://github.com/tuukkamustonen",
"followers_url": "https://api.github.com/users/tuukkamustonen/followers",
"following_url": "https://api.github.com/users/tuukkamustonen/following{/other_user}",
"gists_url": "https://api.github.com/users/tuukkamustonen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuukkamustonen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuukkamustonen/subscriptions",
"organizations_url": "https://api.github.com/users/tuukkamustonen/orgs",
"repos_url": "https://api.github.com/users/tuukkamustonen/repos",
"events_url": "https://api.github.com/users/tuukkamustonen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuukkamustonen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Have you tried telling the pool to close all connections in the error handler? This might work better:\n\n``` python\n def sql_error_handler(self, exception, sql, params, require_commit):\n\n if isinstance(exception, pymysql.err.OperationalError):\n\n if exception.args[0] in self.CONN_FAILURE_CODES:\n db.close_all() # Changed this line from `manual_close()`.\n\n return True\n```\n",
"I didn't try that, but while it would \"heal\" all the other connections, this one request would still fail (per process). It might not sound much, but...\n\nI also wonder: would `close_all()` really close all of the connections, even those in active use at the point of time in other threads, causing them to fail their requests?\n\nWell, I assume it doesn't matter as they would fail anyway (as the connections are broken).\n",
"> I also wonder: would close_all() really close all of the connections, even those in active use at the point of time in other threads, causing them to fail their requests?\n\nNo, those connections would not be closed and presumably they'd fail with the same `OperationalError`.\n",
"I think the `close_all()` solution is as good as it gets, so I'm going to close this for now.\n",
"@coleifer So, it's not possible to allow returning a new cursor? Would it be too complex, technically?\n",
"Yeah, returning the new cursor is the crux of the issue. I'm not sure of the cleanest way to do that with `sql_error_handler()`. I'd prefer not to break existing implementations, but at the same time I think it's failing to do what I had designed it for in the first place.\n\nI think I'll see about reworking the method.\n",
"I'm not positive of all the conditions that might cause an `OperationalError`, but I can imagine a max-recursion issue if the `connect()` after `close()` succeeded, and the subsequent `execute_sql()` raise an `OperationalError`:\n- `execute_sql(...)`\n- `OperationalError` occurs\n- call `sql_error_handler()`\n - `db.close()`\n - `db.connect()` - let's assume this succeeds\n - `db.execute_sql(...)` - retry the query\n - `OperationalError` occurs\n - call `sql_error_handler()` ...\n\nYou get the drift... If the issue was the db being unavailable this wouldn't be the case as the `connect()` would fail, but perhaps there are other reasons an `OperationalError` might be raised?\n",
"The more I think about it, the more I'm thinking that if the query fails with an `OperationalError`, it probably doesn't make sense to immediately retry. **Unless** you're using a connection pool, and there's a possibility the connection you just checked out is stale and all you need to do is reconnect to be good to go... Hmm...\n",
"I agree, this sort of re-connection logic makes sense only when using connection pooling. And even then, finding out all the situations where a re-connection might help (and not cause side effects) could be tricky... or not. I'm no expert with MySQL - the two failure codes (`2006`, `2013`) were just my first findings when I bumped into this issue.\n\nRegarding stale connections, I was in the assumption that `connect()` (which I'm calling for each requests) actually ensures that connection is not stale (and closes + opens a new one if it is). Or does that logic apply to just `stale_timeout`?\n\nI didn't realize the recursion issue. Wouldn't it be possible to catch that with a counter (or simply allow single retry)?\n\nThen, after thinking about this more, I don't think this is as critical as I initially thought. I mean, if the database is down for a moment, the milk is already spoiled, some requests will fail anyway. It doesn't make big difference if couple of requests fail even after DB is back up. `close_all()` helps a lot.\n\n(Though I would wish that was built-in to the pool somehow.)\n\nThe most annoying scenario is when DB temporarily goes away and comes back without any requests being processed between. As in my original post: _each open connection would break for DB going down temporarily, even if those connections were not used during the downtime._ That would mean a couple of failing requests, where in perfect world none would fail. I assume it's not that bad, either.\n",
"> Regarding stale connections, I was in the assumption that connect() (which I'm calling for each requests) actually ensures that connection is not stale (and closes + opens a new one if it is). Or does that logic apply to just stale_timeout?\n\nWhen a connection is checked out, it first checks if the connection is stale, and if not, it will check if the connection \"is closed\" by calling the `_is_closed()` method. The MySQL implementation looks like this:\n\n``` python\n def _is_closed(self, key, conn):\n is_closed = super(PooledMySQLDatabase, self)._is_closed(key, conn)\n if not is_closed:\n if hasattr(conn, 'open'):\n # MySQLdb `ping()` seems to always return `None` in my testing.\n # So the `open` attribute will be used instead.\n is_closed = not bool(conn.open)\n else:\n # pymysql `ping([reconnect=True])` will indicate if the conn\n # is closed or not.\n try:\n is_closed = not conn.ping(False)\n except:\n is_closed = True\n return is_closed\n```\n\nAs to the case where the connection goes away then comes back, I would have thought the logic in `_is_closed()` would catch those. Maybe something isn't working as I had thought?\n",
"What I did was simply restart MySQL (an AWS RDS instance) and throw in requests after it was back up. It was a master-standby setup, but I didn't really activate failover to standy - it was the same master going down and back up again. Of course, there's always also possibility of AWS RDS doing interesting things compared to simpler, local MySQL installation. Give me couple of days and I'll verify the behavior when I'm back at the office.\n\n> As to the case where the connection goes away then comes back, I would have thought the logic in _is_closed() would catch those. Maybe something isn't working as I had thought?\n\nLooking at pymysql's `ping()`, if `socket` exists, `pymysql` will ping the DB, and if that doesn't work, exception is thrown and peewee's `_is_closed()` will return `True` as expected.\n\nWhat makes me wonder, is pymysql's `Connection.open`:\n\n```\n@property\ndef open(self):\n return self.socket is not None\n```\n\nAs peewee relies solely on this (when available), maybe in some scenarios the socket _exists_, but it's actually in a failed state and actual operations will fail? Is `open` proper way to check the health of the connection?\n",
"Ok I tested it locally:\n- start app\n- make some requests to fill the connection pool\n- stop mysql\n- start mysql (no requests during downtime)\n- make a request\n\nThe result is:\n\n```\n File \"/home/musttu/Code/virtualenvs/cs/local/lib/python2.7/site-packages/pymysql/connections.py\", line 916, in _write_bytes\n raise err.OperationalError(2006, \"MySQL server has gone away (%r)\" % (e,))\n```\n\nI took a look at pymysql's `Connection.open` and indeed it returns `True` even after mysql restart. So I assume socket does not get closed, because it is not explicitly instructed to be closed by the driver.\n\nA quick googling gave me: http://stackoverflow.com/questions/3335342/how-to-check-if-a-mysql-connection-is-closed-in-python which actually states the same.\n\nUnfortunately, it doesn't sound like there is a way to directly know if the connection is functional or not. The expert there suggests a try/except block, which to me sounds like either failing that single request and callling `close_all()` (to play safe) or doing some sort of a retry logic.\n\nOf course, if pool always pinged the database, broken connections would be caught, but yeah that would also introduce latency, waste IO and such.\n",
"Thanks for looking into it some more. Well, I guess there's not much to do here. If you wanted to subclass the pool and modify the `_is_closed()` method to PING that'd be one option.\n",
"I agree, this is a corner case anyway and `close_all()` is good enough. Thanks for reviewing it.\n",
"In case you're interested, I've got a mixin for retrying failed queries in 017e4e4952f0b429dd21f0e90465a0f644cf6015 . Docs:\n\nhttp://docs.peewee-orm.com/en/latest/peewee/database.html#automatic-reconnect\n"
] | 2015-05-13T07:32:55 | 2015-09-30T03:39:59 | 2015-06-02T13:43:34 | NONE | null | Restructured from my email at https://groups.google.com/forum/#!topic/peewee-orm/LDuaQtT-ZTc
I find connection pooling somewhat unusable at the moment due to the following:
When using `playhouse.pool.PooledMySQLDatabase`, if the database is rebooted (non-HA), open connections in the pool fail the next time they are used.
Those failed connections can be closed by:
``` python
import playhouse.pool
import pymysql


class ConnectionClosingMySQLDatabase(playhouse.pool.PooledMySQLDatabase):
    CONN_FAILURE_CODES = [
        2006,  # MySQL server has gone away (error(32, 'Broken pipe'))
        2013,  # Lost connection to MySQL server during query
    ]

    def sql_error_handler(self, exception, sql, params, require_commit):
        if isinstance(exception, pymysql.err.OperationalError):
            if exception.args[0] in self.CONN_FAILURE_CODES:
                self.manual_close()  # close the broken pooled connection
        return True
```
However, after the database has come back online, the first query on each broken connection will still fail, as this logic only closes the broken connection.
(Also, it seemed like each open connection would break if the DB went down temporarily, even if that connection was not used during the downtime.)
In order to also re-run the failed query:
``` python
class ReconnectingMySQLDatabase(playhouse.pool.PooledMySQLDatabase):
    CONN_FAILURE_CODES = [
        2006,
        2013,
    ]

    def sql_error_handler(self, exception, sql, params, require_commit):
        if isinstance(exception, pymysql.err.OperationalError):
            if exception.args[0] in self.CONN_FAILURE_CODES:
                self.manual_close()
                self.connect()
                # Re-run the failed query on the fresh connection.
                self.execute_sql(sql, params or (), require_commit=require_commit)
                return False
        return True
```
But this results in the following error:
```
...
File "/home/musttu/Code/virtualenvs/cs/local/lib/python2.7/site-packages/pymysql/cursors.py", line 71, in _check_executed
raise ProgrammingError("execute() first")
ProgrammingError: execute() first
```
I believe this happens because `Database.execute_sql` returns the original `cursor` and not the cursor of the fixed connection:
```
....
cursor = self.get_cursor()
try:
cursor.execute(sql, params or ())
except Exception as exc:
if self.get_autocommit() and self.autorollback:
self.rollback()
if self.sql_error_handler(exc, sql, params, require_commit):
raise
else:
if require_commit and self.get_autocommit():
self.commit()
return cursor
```
If I could fetch and return a new cursor from `sql_error_handler` (which currently returns a bool), I assume this approach could work?
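For reference, below is a minimal sketch of the "close, reconnect, retry once" idea, written as an `execute_sql()` override instead of `sql_error_handler()`, so that the cursor which gets returned comes from the retried query. The class name is made up, and catching both pymysql's and peewee's `OperationalError` is an assumption on my part; this is not peewee's built-in reconnect support (that is the mixin referenced in the comments above).

``` python
import pymysql
from peewee import OperationalError
from playhouse.pool import PooledMySQLDatabase


class RetryMySQLDatabase(PooledMySQLDatabase):
    # Failure codes taken from the snippets above.
    CONN_FAILURE_CODES = (2006, 2013)

    def execute_sql(self, sql, params=None, require_commit=True):
        try:
            return super(RetryMySQLDatabase, self).execute_sql(
                sql, params, require_commit)
        except (pymysql.err.OperationalError, OperationalError) as exc:
            if not exc.args or exc.args[0] not in self.CONN_FAILURE_CODES:
                raise
            # Drop the broken connection and retry the query once on a fresh one.
            self.manual_close()
            self.connect()
            return super(RetryMySQLDatabase, self).execute_sql(
                sql, params, require_commit)
```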
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/604/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/603 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/603/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/603/comments | https://api.github.com/repos/coleifer/peewee/issues/603/events | https://github.com/coleifer/peewee/issues/603 | 75,732,042 | MDU6SXNzdWU3NTczMjA0Mg== | 603 | FTSModels for sqlite pass `options` that are not expected. | {
"login": "mklauber",
"id": 563721,
"node_id": "MDQ6VXNlcjU2MzcyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/563721?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mklauber",
"html_url": "https://github.com/mklauber",
"followers_url": "https://api.github.com/users/mklauber/followers",
"following_url": "https://api.github.com/users/mklauber/following{/other_user}",
"gists_url": "https://api.github.com/users/mklauber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mklauber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mklauber/subscriptions",
"organizations_url": "https://api.github.com/users/mklauber/orgs",
"repos_url": "https://api.github.com/users/mklauber/repos",
"events_url": "https://api.github.com/users/mklauber/events{/privacy}",
"received_events_url": "https://api.github.com/users/mklauber/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Doh, Found it in the closed issues. Sorry.\n"
] | 2015-05-12T20:41:47 | 2015-05-12T20:47:54 | 2015-05-12T20:47:54 | NONE | null | I'm having an issue with using the sqlite_ext FTS extension. I'm trying to follow the example from the documentation (reproduced below), but I'm getting the following stack trace. (reproduced _further_ below). Am I missing something, or is this part of the general experimental nature of the sqlite_ext?
``` python
from peewee import *
from playhouse.sqlite_ext import FTSModel

# `db` and the `Document` model are defined elsewhere and omitted from this report.

class Metadata(Model):
document = ForeignKeyField(Document)
field = CharField(index=True)
content = TextField()
class Meta:
database = db
class MetadataIndex(FTSModel):
value = TextField()
class Meta:
database = db
Metadata.create_table()
MetadataIndex.create_table(value=Metadata.content)
```
``` python
Traceback (most recent call last):
File "models.py", line 43, in <module>
MetadataIndex.create_table()
File "/home/mklauber/.virtualenvs/metalogos/local/lib/python2.7/site-packages/playhouse/sqlite_ext.py", line 107, in create_table
cls._meta.database.create_table(cls, options=options)
TypeError: create_table() got an unexpected keyword argument 'options'
```
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/603/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/602 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/602/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/602/comments | https://api.github.com/repos/coleifer/peewee/issues/602/events | https://github.com/coleifer/peewee/issues/602 | 75,437,483 | MDU6SXNzdWU3NTQzNzQ4Mw== | 602 | will peewee support table partition? | {
"login": "pyloque",
"id": 2040421,
"node_id": "MDQ6VXNlcjIwNDA0MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2040421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pyloque",
"html_url": "https://github.com/pyloque",
"followers_url": "https://api.github.com/users/pyloque/followers",
"following_url": "https://api.github.com/users/pyloque/following{/other_user}",
"gists_url": "https://api.github.com/users/pyloque/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pyloque/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pyloque/subscriptions",
"organizations_url": "https://api.github.com/users/pyloque/orgs",
"repos_url": "https://api.github.com/users/pyloque/repos",
"events_url": "https://api.github.com/users/pyloque/events{/privacy}",
"received_events_url": "https://api.github.com/users/pyloque/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You can probably build something _on top_ of peewee, but peewee itself does not support table partitioning.\n\nIf you want to change the table your query uses at run-time, you can set the `ModelClass._meta.db_table` attribute, e.g.\n\n``` python\nAccount._meta.db_table = 'account_123'\nquery = Account.select().where(Account.username == 'username_xxx')\nAccount._meta.db_table = 'account'\n```\n\nYou could write a context manager, for example:\n\n``` python\nwith Account.query_partition(123):\n query = Account.select().where(...)\n```\n",
"One potential solution that we're looking at: http://architect.readthedocs.io/features/partition/index.html\n"
] | 2015-05-12T02:53:36 | 2016-08-03T19:27:49 | 2015-05-12T03:54:06 | NONE | null | In our project, the table name is decided by an entity field value, so different entities of the same model class can be saved to different tables sharing the same table-name prefix.
I currently maintain a self-made ORM framework with fewer features. I wish peewee supported table partitioning, so that I could replace my framework with the more powerful peewee. For example:
```
class Account(Model):
__partition_key__ = 'username'
__partition_num__ = 256
username = StringField()
password = StringField()
Account.query_for('username_xxx')
Account.query_for_partition(123)
```
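A rough sketch of the context-manager idea from the comments, which temporarily points the model at a partition table by swapping `Model._meta.db_table`. The helper name and the "prefix + partition number" naming scheme are illustrative, not part of peewee:

``` python
from contextlib import contextmanager


@contextmanager
def partition(model_class, partition_id):
    """Temporarily point model_class at one of its partition tables."""
    original = model_class._meta.db_table
    model_class._meta.db_table = '%s_%s' % (original, partition_id)
    try:
        yield model_class
    finally:
        model_class._meta.db_table = original


# Usage: build *and execute* the query inside the block, since the table name
# is only rendered when the query actually runs.
# with partition(Account, 123):
#     rows = list(Account.select().where(Account.username == 'username_xxx'))
```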
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/602/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/601 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/601/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/601/comments | https://api.github.com/repos/coleifer/peewee/issues/601/events | https://github.com/coleifer/peewee/issues/601 | 74,917,693 | MDU6SXNzdWU3NDkxNzY5Mw== | 601 | Add method/hook for default table names | {
"login": "mikemill",
"id": 1652125,
"node_id": "MDQ6VXNlcjE2NTIxMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1652125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikemill",
"html_url": "https://github.com/mikemill",
"followers_url": "https://api.github.com/users/mikemill/followers",
"following_url": "https://api.github.com/users/mikemill/following{/other_user}",
"gists_url": "https://api.github.com/users/mikemill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mikemill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mikemill/subscriptions",
"organizations_url": "https://api.github.com/users/mikemill/orgs",
"repos_url": "https://api.github.com/users/mikemill/repos",
"events_url": "https://api.github.com/users/mikemill/events{/privacy}",
"received_events_url": "https://api.github.com/users/mikemill/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Nice idea, be happy to add that.\n",
"Wow, thanks for such a quick response. That solution is awesome!\n"
] | 2015-05-10T13:15:53 | 2015-05-10T15:53:07 | 2015-05-10T15:32:30 | NONE | null | In our project we use the singular form for model names and the plural form for table names. This means each of our models has to declare `class Meta: db_table = 'foos'`. It'd be nice if there were a method or hook for defining the default table-name algorithm, so that it could be defined once. Currently it is coded directly into `BaseModel.__new__`.
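Until such a hook exists, one possible workaround is to set `db_table` right after each class body runs, since the table name is looked up from `_meta.db_table` when queries and DDL are generated. This is only a sketch: the decorator below is not part of peewee, and the naive "add an s" pluralization is an assumption.

``` python
from peewee import CharField, Model, SqliteDatabase

db = SqliteDatabase(':memory:')  # stand-in database for the example


class BaseModel(Model):
    class Meta:
        database = db


def plural_table(model_class):
    # Naive pluralization ("foo" -> "foos"); swap in a real inflector if needed.
    model_class._meta.db_table = model_class.__name__.lower() + 's'
    return model_class


@plural_table
class Foo(BaseModel):
    name = CharField()


assert Foo._meta.db_table == 'foos'
```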
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/601/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/600 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/600/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/600/comments | https://api.github.com/repos/coleifer/peewee/issues/600/events | https://github.com/coleifer/peewee/issues/600 | 74,778,482 | MDU6SXNzdWU3NDc3ODQ4Mg== | 600 | Ordering of set in back reference from ForeignKeyField | {
"login": "josefdlange",
"id": 1062835,
"node_id": "MDQ6VXNlcjEwNjI4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1062835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josefdlange",
"html_url": "https://github.com/josefdlange",
"followers_url": "https://api.github.com/users/josefdlange/followers",
"following_url": "https://api.github.com/users/josefdlange/following{/other_user}",
"gists_url": "https://api.github.com/users/josefdlange/gists{/gist_id}",
"starred_url": "https://api.github.com/users/josefdlange/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josefdlange/subscriptions",
"organizations_url": "https://api.github.com/users/josefdlange/orgs",
"repos_url": "https://api.github.com/users/josefdlange/repos",
"events_url": "https://api.github.com/users/josefdlange/events{/privacy}",
"received_events_url": "https://api.github.com/users/josefdlange/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Well, let's say you have `User` and `Tweet` from the docs:\r\n\r\n``` python\r\nclass User(Model):\r\n username = CharField(unique=True)\r\n\r\nclass Tweet(Model):\r\n user = ForeignKeyField(User, related_name='tweets')\r\n content = TextField()\r\n timestamp = DateTimeField(default=datetime.datetime.now)\r\n```\r\n\r\nYou can observe that the `user.tweets` object is a `SelectQuery`:\r\n\r\n``` python\r\n>>> u = User.create(username='charlie')\r\n>>> u.tweets\r\n<class '__main__.Tweet'> SELECT \"t1\".\"id\", \"t1\".\"user_id\", \"t1\".\"content\", \"t1\".\"timestamp\" FROM \"tweet\" AS t1 WHERE (\"t1\".\"user_id\" = ?) [1]\r\n```\r\n\r\nSince it's just a query you can call `order_by()` on it:\r\n\r\n``` python\r\n>>> u.tweets.order_by(Tweet.timestamp.desc())\r\n<class '__main__.Tweet'> SELECT \"t1\".\"id\", \"t1\".\"user_id\", \"t1\".\"content\", \"t1\".\"timestamp\" FROM \"tweet\" AS t1 WHERE (\"t1\".\"user_id\" = ?) ORDER BY \"t1\".\"timestamp\" DESC [1]\r\n```\r\n\r\nIf you don't want to type this all the time, just create a `property` object on `User`:\r\n\r\n``` python\r\nclass User(Model):\r\n # ... fields ...\r\n\r\n @property\r\n def tweets_ordered(self):\r\n return self.tweets.order_by(Tweet.timestamp.desc())\r\n```\r\n\r\nThen use `tweets_ordered`.\r\n",
"The property solution is slick. Thanks!\n",
"@coleifer then how do you call `tweets_ordered`? since it would belong to User?\r\nIt must be `u.tweets_ordered` , but `tweets_ordered` is a property of Tweet?",
"I had made a mistake, that should have read class User. I've edited the comment. Hope that helps."
] | 2015-05-09T23:05:39 | 2020-11-05T12:44:52 | 2015-05-10T02:17:17 | CONTRIBUTOR | null | Is it possible, in a simple way, to declare the order in which the back reference created by a ForeignKeyField returns its rows? I can sort the items once I get them (in most cases, since the backref groups shouldn't be huge), but offloading this work to the DB would be nice. Is there something right in front of me that I am missing?
Thanks!
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/600/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/599 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/599/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/599/comments | https://api.github.com/repos/coleifer/peewee/issues/599/events | https://github.com/coleifer/peewee/issues/599 | 74,706,606 | MDU6SXNzdWU3NDcwNjYwNg== | 599 | KeyError with user-defined operations | {
"login": "coleifer",
"id": 119974,
"node_id": "MDQ6VXNlcjExOTk3NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/119974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coleifer",
"html_url": "https://github.com/coleifer",
"followers_url": "https://api.github.com/users/coleifer/followers",
"following_url": "https://api.github.com/users/coleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/coleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coleifer/subscriptions",
"organizations_url": "https://api.github.com/users/coleifer/orgs",
"repos_url": "https://api.github.com/users/coleifer/repos",
"events_url": "https://api.github.com/users/coleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/coleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Actually, I think this is acceptable behavior. It wouldn't be hard to use a second attribute to store instance-specific overrides and merge them when instantiating the `QueryCompiler`, but I think this is OK as-is.\n"
] | 2015-05-09T16:22:45 | 2015-05-10T15:52:34 | 2015-05-10T15:52:34 | OWNER | null | ``` python
from peewee import *
from peewee import OP
from peewee import Expression
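
# Note: `app` is an already-configured Flask application object; its creation
# and the MYSQL_DATABASE_* config values are omitted from this repro.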
db = MySQLDatabase(app.config['MYSQL_DATABASE_DB'], host=app.config['MYSQL_DATABASE_HOST'], user=app.config['MYSQL_DATABASE_USER'], passwd=app.config['MYSQL_DATABASE_PASSWORD'])
OP['MOD'] = 'mod'
def mod(lhs, rhs):
return Expression(lhs, OP.MOD, rhs)
MySQLDatabase.register_ops({OP.MOD: '%'})
class Base(Model):
class Meta:
database = db
class User(Base):
user_id = PrimaryKeyField()
first_name = CharField(max_length = 150)
last_name = CharField(max_length = 150)
@app.route('/')
def test():
query = User.select().where(mod(User.user_id, 2) == 0)
return "Query: %r" % query
```
---
If the call to `register_ops()` is moved before the DB is instantiated then everything works correctly.
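To make that concrete, a working ordering might look like the following (the connection parameters are placeholders); the key point is that `register_ops()` runs before the database object, and therefore its query compiler, is created:

``` python
from peewee import MySQLDatabase, Expression, OP

OP['MOD'] = 'mod'
MySQLDatabase.register_ops({OP.MOD: '%'})  # register the custom op first...

# ...then instantiate the database, so its compiler picks up OP.MOD.
db = MySQLDatabase('my_db', host='localhost', user='root', passwd='secret')


def mod(lhs, rhs):
    return Expression(lhs, OP.MOD, rhs)
```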
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/599/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/598 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/598/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/598/comments | https://api.github.com/repos/coleifer/peewee/issues/598/events | https://github.com/coleifer/peewee/issues/598 | 74,371,604 | MDU6SXNzdWU3NDM3MTYwNA== | 598 | Tests are not run for MySQL? | {
"login": "pmeinhardt",
"id": 706519,
"node_id": "MDQ6VXNlcjcwNjUxOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/706519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pmeinhardt",
"html_url": "https://github.com/pmeinhardt",
"followers_url": "https://api.github.com/users/pmeinhardt/followers",
"following_url": "https://api.github.com/users/pmeinhardt/following{/other_user}",
"gists_url": "https://api.github.com/users/pmeinhardt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pmeinhardt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pmeinhardt/subscriptions",
"organizations_url": "https://api.github.com/users/pmeinhardt/orgs",
"repos_url": "https://api.github.com/users/pmeinhardt/repos",
"events_url": "https://api.github.com/users/pmeinhardt/events{/privacy}",
"received_events_url": "https://api.github.com/users/pmeinhardt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"No they are not run for MySQL because there are some failures that I do not intend to fix. MySQL is supported, and most things work, but it is not tested as part of the continuous integration.\n"
] | 2015-05-08T13:52:44 | 2015-05-08T14:04:24 | 2015-05-08T14:04:24 | NONE | null | Tests are only run for SQLite and PostgreSQL at the moment. Is there any particular reason the suite is not run against MySQL? After all `peewee` claims to support MySQL in the README and documentation.
See [.travis.yml](https://github.com/coleifer/peewee/blob/65bf15f93f2f96abf8b216e6eea056726711a19f/.travis.yml#L8-L10)
Cheers,
Paul
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/598/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/597 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/597/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/597/comments | https://api.github.com/repos/coleifer/peewee/issues/597/events | https://github.com/coleifer/peewee/pull/597 | 74,137,018 | MDExOlB1bGxSZXF1ZXN0MzQ5NjExNjI= | 597 | Update quickstart.rst | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you!\n"
] | 2015-05-07T21:38:42 | 2015-05-07T23:17:37 | 2015-05-07T21:39:38 | NONE | null | Fix typo.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/597/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/597",
"html_url": "https://github.com/coleifer/peewee/pull/597",
"diff_url": "https://github.com/coleifer/peewee/pull/597.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/597.patch",
"merged_at": "2015-05-07T21:39:38"
} |
https://api.github.com/repos/coleifer/peewee/issues/596 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/596/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/596/comments | https://api.github.com/repos/coleifer/peewee/issues/596/events | https://github.com/coleifer/peewee/issues/596 | 74,114,107 | MDU6SXNzdWU3NDExNDEwNw== | 596 | Feature Request: N+1 query detection | {
"login": "josephschorr",
"id": 4073002,
"node_id": "MDQ6VXNlcjQwNzMwMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4073002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josephschorr",
"html_url": "https://github.com/josephschorr",
"followers_url": "https://api.github.com/users/josephschorr/followers",
"following_url": "https://api.github.com/users/josephschorr/following{/other_user}",
"gists_url": "https://api.github.com/users/josephschorr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/josephschorr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josephschorr/subscriptions",
"organizations_url": "https://api.github.com/users/josephschorr/orgs",
"repos_url": "https://api.github.com/users/josephschorr/repos",
"events_url": "https://api.github.com/users/josephschorr/events{/privacy}",
"received_events_url": "https://api.github.com/users/josephschorr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"That's a neat idea, but I'm not sure how to implement that without cluttering up the `ForeignKeyField` / `RelationDescriptor` classes.\n\nOne thing you can do currently, which is very easy, is to use the `assert_query_count` helper in `playhouse.test_utils` ([documentation link](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#assert_query_count)):\n\n``` python\nclass TestMyApp(unittest.TestCase):\n @assert_query_count(1)\n def test_get_popular_blogs(self):\n popular_blogs = Blog.get_popular()\n self.assertEqual(\n [blog.title for blog in popular_blogs],\n [\"Peewee's Playhouse!\", \"All About Huey\", \"Mickey's Adventures\"])\n\n def test_expensive_operation(self):\n with assert_query_count(1):\n perform_expensive_operation()\n```\n\nI think these are probably a better way of ensuring correct query behavior than checking for accidental foreign key resolutions anyways.\n"
] | 2015-05-07T20:21:00 | 2015-05-07T21:02:12 | 2015-05-07T21:02:12 | NONE | null | It would be really nice if there was a clean way to detect N+1 queries in tests, perhaps by having peewee (when initialized with a flag) raise an exception when accessing a non-cached ForeignKeyField value.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/596/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/595 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/595/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/595/comments | https://api.github.com/repos/coleifer/peewee/issues/595/events | https://github.com/coleifer/peewee/issues/595 | 73,798,673 | MDU6SXNzdWU3Mzc5ODY3Mw== | 595 | __init__() got an unexpected keyword argument 'abstract' on peewee 2.04 | {
"login": "jbpineiroc",
"id": 12285501,
"node_id": "MDQ6VXNlcjEyMjg1NTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/12285501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbpineiroc",
"html_url": "https://github.com/jbpineiroc",
"followers_url": "https://api.github.com/users/jbpineiroc/followers",
"following_url": "https://api.github.com/users/jbpineiroc/following{/other_user}",
"gists_url": "https://api.github.com/users/jbpineiroc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbpineiroc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbpineiroc/subscriptions",
"organizations_url": "https://api.github.com/users/jbpineiroc/orgs",
"repos_url": "https://api.github.com/users/jbpineiroc/repos",
"events_url": "https://api.github.com/users/jbpineiroc/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbpineiroc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Without a little more info I have no idea what to do with this.\n\nNext time take the two extra seconds to format your issue at least.\n"
] | 2015-05-07T01:38:43 | 2015-05-07T02:46:36 | 2015-05-07T02:46:36 | NONE | null | File "C:\Python27\lib\site-packages\peewee.py", line 1801, in __new__
cls._meta = ModelOptions(cls, **meta_options)
TypeError: Error when calling the metaclass bases
__init__() got an unexpected keyword argument 'abstract'
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/595/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/594 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/594/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/594/comments | https://api.github.com/repos/coleifer/peewee/issues/594/events | https://github.com/coleifer/peewee/issues/594 | 73,765,563 | MDU6SXNzdWU3Mzc2NTU2Mw== | 594 | Having a single multi-column index defined in Meta class breaks table/index creation | {
"login": "josefdlange",
"id": 1062835,
"node_id": "MDQ6VXNlcjEwNjI4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1062835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josefdlange",
"html_url": "https://github.com/josefdlange",
"followers_url": "https://api.github.com/users/josefdlange/followers",
"following_url": "https://api.github.com/users/josefdlange/following{/other_user}",
"gists_url": "https://api.github.com/users/josefdlange/gists{/gist_id}",
"starred_url": "https://api.github.com/users/josefdlange/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josefdlange/subscriptions",
"organizations_url": "https://api.github.com/users/josefdlange/orgs",
"repos_url": "https://api.github.com/users/josefdlange/repos",
"events_url": "https://api.github.com/users/josefdlange/events{/privacy}",
"received_events_url": "https://api.github.com/users/josefdlange/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This is not at all a peewee bug. You just forgot that if you want a tuple with one element you need a trailing comma:\n\n``` python\n class Meta:\n indexes = (\n (('something', 'name'), True), # Added trailing comma.\n )\n```\n",
"D'oh! I learn something new every day. Thanks for the tip.\n"
] | 2015-05-06T23:14:08 | 2015-05-06T23:23:53 | 2015-05-06T23:22:53 | CONTRIBUTOR | null | Given the following, table/index creation fails:
```
from peewee import Model, ForeignKeyField
from somewhere import SomeOtherModel
class SomeModel(Model):
something = ForeignKeyField(SomeOtherModel)
name = CharField()
class Meta:
indexes = (
(('something', 'name'), True)
)
```
This is because Python (as far as I can tell) throws away the outermost parentheses as extraneous, so when you iterate over `indexes` on line `4011`, it actually is iterating on `(('something','name'), True)`, meaning that `fields` and `unique` in this case are `'something'` and `'name'`, respectively. If I change the code to the following, it works just fine:
```
...
class Meta:
indexes = tuple([
(('something', 'name'), True)
])
```
If you like, I can try and figure out a way around this -- I think the parentheses-only code looks a lot better -- and submit a pull request. Or maybe we do want the outermost structure to be a list and not a tuple at all. Your call, just thought I'd report this.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/594/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/593 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/593/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/593/comments | https://api.github.com/repos/coleifer/peewee/issues/593/events | https://github.com/coleifer/peewee/issues/593 | 73,657,255 | MDU6SXNzdWU3MzY1NzI1NQ== | 593 | Query add_column method | {
"login": "stanep",
"id": 3240661,
"node_id": "MDQ6VXNlcjMyNDA2NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3240661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stanep",
"html_url": "https://github.com/stanep",
"followers_url": "https://api.github.com/users/stanep/followers",
"following_url": "https://api.github.com/users/stanep/following{/other_user}",
"gists_url": "https://api.github.com/users/stanep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stanep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stanep/subscriptions",
"organizations_url": "https://api.github.com/users/stanep/orgs",
"repos_url": "https://api.github.com/users/stanep/repos",
"events_url": "https://api.github.com/users/stanep/events{/privacy}",
"received_events_url": "https://api.github.com/users/stanep/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"No. You can muck around with `SelectQuery._select`, which is a list of the selected columns. Even better is to just use the `SelectQuery.select()` method to modify the list of columns.\n",
"Ok , fair enough\n\nOn Wed, May 6, 2015 at 11:46 AM, Charles Leifer [email protected]\nwrote:\n\n> Closed #593 https://github.com/coleifer/peewee/issues/593.\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/coleifer/peewee/issues/593#event-298363133.\n"
] | 2015-05-06T15:30:57 | 2015-05-06T15:51:58 | 2015-05-06T15:46:14 | NONE | null | Hi , is it possible to add method to query class so we can do something like this
qr = User.select(User.id)
later on
qr.add_column(User.first_name)
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/593/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/592 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/592/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/592/comments | https://api.github.com/repos/coleifer/peewee/issues/592/events | https://github.com/coleifer/peewee/issues/592 | 73,378,013 | MDU6SXNzdWU3MzM3ODAxMw== | 592 | Hybrid module | {
"login": "stanep",
"id": 3240661,
"node_id": "MDQ6VXNlcjMyNDA2NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3240661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stanep",
"html_url": "https://github.com/stanep",
"followers_url": "https://api.github.com/users/stanep/followers",
"following_url": "https://api.github.com/users/stanep/following{/other_user}",
"gists_url": "https://api.github.com/users/stanep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stanep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stanep/subscriptions",
"organizations_url": "https://api.github.com/users/stanep/orgs",
"repos_url": "https://api.github.com/users/stanep/repos",
"events_url": "https://api.github.com/users/stanep/events{/privacy}",
"received_events_url": "https://api.github.com/users/stanep/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Yeah, I haven't issued a new release yet.\n",
"Anyway great thing hybrid properties,thx\n\nOn Tue, May 5, 2015 at 1:37 PM, Charles Leifer [email protected]\nwrote:\n\n> Closed #592 https://github.com/coleifer/peewee/issues/592.\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/coleifer/peewee/issues/592#event-297451694.\n"
] | 2015-05-05T17:32:29 | 2015-05-05T18:15:39 | 2015-05-05T17:36:56 | NONE | null | Hi when you do
pip install -U peewee
playhouse get installed but hybrid.py is missing
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/592/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/591 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/591/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/591/comments | https://api.github.com/repos/coleifer/peewee/issues/591/events | https://github.com/coleifer/peewee/pull/591 | 73,083,846 | MDExOlB1bGxSZXF1ZXN0MzQ2NTgzODg= | 591 | fix when not using named cursor | {
"login": "eseom",
"id": 1251642,
"node_id": "MDQ6VXNlcjEyNTE2NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1251642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eseom",
"html_url": "https://github.com/eseom",
"followers_url": "https://api.github.com/users/eseom/followers",
"following_url": "https://api.github.com/users/eseom/following{/other_user}",
"gists_url": "https://api.github.com/users/eseom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eseom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eseom/subscriptions",
"organizations_url": "https://api.github.com/users/eseom/orgs",
"repos_url": "https://api.github.com/users/eseom/repos",
"events_url": "https://api.github.com/users/eseom/events{/privacy}",
"received_events_url": "https://api.github.com/users/eseom/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Interesting, what version of psycopg2 are you using?\n",
"missed my environment\nubuntu 14.04\npython 2.7.6\npsycopg2 is 2.4.5 from python-psycopg2 package\n\nIt could be this situation.\n\n```\nimport psycopg2\n\nname = None\nconnection = psycopg2.connect('dbname=test user=test')\ncursor = connection.cursor(name=name)\n```\n\n```\nTraceback (most recent call last):\n File \"test.py\", line 5, in <module>\n cursor = connection.cursor(name=name)\nTypeError: argument 1 must be string, not None\n```\n",
"Ew, just a tip but you should definitely be using pip and not the Ubuntu psycpog2 package.\n\nThat said, seems like a fine PR so I'll go ahead and merge.\n"
] | 2015-05-04T17:53:40 | 2015-05-05T16:25:15 | 2015-05-05T16:25:15 | CONTRIBUTOR | null | I'm using peewee on my projects. Thanks. :)
I wrote a simple connection to the PostgreSQL server without a named cursor.
```
from peewee import * # NOQA
from playhouse.postgres_ext import * # NOQA
database = PostgresqlExtDatabase('test', user='test')
class MyModel(Model):
name = CharField()
value = CharField()
class Meta:
database = database
MyModel.create_table()
```
An error occurred.
```
Traceback (most recent call last):
File "test.py", line 15, in <module>
MyModel.create_table()
File "/home/vagrant/src/peewee/peewee.py", line 4010, in create_table
db.create_table(cls)
File "/home/vagrant/src/peewee/peewee.py", line 3115, in create_table
return self.execute_sql(*qc.create_table(model_class, safe))
File "/home/vagrant/src/peewee/playhouse/postgres_ext.py", line 356, in execute_sql
cursor = self.get_cursor()
File "/home/vagrant/src/peewee/playhouse/postgres_ext.py", line 343, in get_cursor
return self.get_conn().cursor(name=name)
TypeError: argument 1 must be string, not None
```
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/591/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/591",
"html_url": "https://github.com/coleifer/peewee/pull/591",
"diff_url": "https://github.com/coleifer/peewee/pull/591.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/591.patch",
"merged_at": "2015-05-05T16:25:15"
} |
https://api.github.com/repos/coleifer/peewee/issues/590 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/590/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/590/comments | https://api.github.com/repos/coleifer/peewee/issues/590/events | https://github.com/coleifer/peewee/issues/590 | 72,314,052 | MDU6SXNzdWU3MjMxNDA1Mg== | 590 | Default for register_hstore in postgresql extensions should be False | {
"login": "elgow",
"id": 11529401,
"node_id": "MDQ6VXNlcjExNTI5NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/11529401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elgow",
"html_url": "https://github.com/elgow",
"followers_url": "https://api.github.com/users/elgow/followers",
"following_url": "https://api.github.com/users/elgow/following{/other_user}",
"gists_url": "https://api.github.com/users/elgow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elgow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elgow/subscriptions",
"organizations_url": "https://api.github.com/users/elgow/orgs",
"repos_url": "https://api.github.com/users/elgow/repos",
"events_url": "https://api.github.com/users/elgow/events{/privacy}",
"received_events_url": "https://api.github.com/users/elgow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2015-04-30T23:57:09 | 2015-05-01T01:29:25 | 2015-05-01T01:29:25 | NONE | null | Default usage of the pewee playhouse postgresql extension fails to work with an out-of-the-box postgresql database. The postgresql database ships with the hstore extension not installed, but the playhouse postgresql extension has this default setting.
kwargs.pop('register_hstore', True)
This causes the DB to fail to open if the user has not explicitly installed the hstore extension in their database. This is confusing if the user does not intend to use hstore.
At very least, this deserves a warning box in the documents to alert the user to open the PostgresqlExtDatabase with register_hstore=False unless the hstore extension is already installed in the database that they intend to use.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/590/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/589 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/589/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/589/comments | https://api.github.com/repos/coleifer/peewee/issues/589/events | https://github.com/coleifer/peewee/issues/589 | 72,106,179 | MDU6SXNzdWU3MjEwNjE3OQ== | 589 | Closing MySQL Database and / or Model creation decorator | {
"login": "conqp",
"id": 3766192,
"node_id": "MDQ6VXNlcjM3NjYxOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3766192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/conqp",
"html_url": "https://github.com/conqp",
"followers_url": "https://api.github.com/users/conqp/followers",
"following_url": "https://api.github.com/users/conqp/following{/other_user}",
"gists_url": "https://api.github.com/users/conqp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/conqp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conqp/subscriptions",
"organizations_url": "https://api.github.com/users/conqp/orgs",
"repos_url": "https://api.github.com/users/conqp/repos",
"events_url": "https://api.github.com/users/conqp/events{/privacy}",
"received_events_url": "https://api.github.com/users/conqp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I prefer to keep model creation APIs as-is. Of course if you like using this function to create tables at module-load time then you could release it as a peewee extension. I prefer to avoid any import-time side-effects and APIs that promote them.\n",
"Thanks for the feedback. But what about the first part of the patch regarding auto-closing MySQL connections?\nPatch for 911833b1b08a44041cf99cdc76117adec1a6832b\n\n```\n3473a3474,3479\n> def execute_sql(self, sql, params=None, require_commit=True):\n> \"\"\"Executes an SQL query with an explicit connection\"\"\"\n> with self.execution_context():\n> return super(MySQLDatabase, self).execute_sql(\n> sql, params=params, require_commit=require_commit)\n> \n```\n"
] | 2015-04-30T09:10:55 | 2015-04-30T15:04:47 | 2015-04-30T13:01:09 | CONTRIBUTOR | null | Hi coleifer,
I am working with peewee now for some time - mostly with MySQL databases - and have encountered several problems regarding the lack of automatic closing of database connections, particularly Error 2006 as described here: http://peewee.readthedocs.org/en/latest/peewee/database.html#using-mysql
I have modified the `peewee.MySQLDatabase` accordingly to always use execution contexts when handling queries.
This resolved the aforementioned issue for me and I do not need to worry about connection management (which is unlikely what you want to do when using an ORM).
Additionally I added a module-level function to be used as a decorator to create modules automatically on loading their respective module.
Below you'll find the patch for commit 911833b1b08a44041cf99cdc76117adec1a6832b
```
29a30
> from contextlib import suppress
3473a3475,3480
> def execute_sql(self, sql, params=None, require_commit=True):
> """Executes an SQL query with an explicit connection"""
> with self.execution_context():
> return super(MySQLDatabase, self).execute_sql(
> sql, params=params, require_commit=require_commit)
>
4159a4167,4181
>
>
> def create(model):
> """Decorator for peewee.Model definitions that
> actually should be created on load.
>
> Usage:
> @create
> class MyModel(peewee.Model):
> pass
> """
> with suppress(OperationalError):
> with model._meta.database.execution_context():
> model.create_table(fail_silently=True)
> return model
```
I'd be glad if you would consider my patch for upstream, or give me some feedback on why you wouldn't.
Cheers
Richard alias coNQP
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/589/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/588 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/588/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/588/comments | https://api.github.com/repos/coleifer/peewee/issues/588/events | https://github.com/coleifer/peewee/issues/588 | 71,552,112 | MDU6SXNzdWU3MTU1MjExMg== | 588 | Adding a column with SqliteMigrator fails on tables which already have an index | {
"login": "tfeldmann",
"id": 385566,
"node_id": "MDQ6VXNlcjM4NTU2Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/385566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tfeldmann",
"html_url": "https://github.com/tfeldmann",
"followers_url": "https://api.github.com/users/tfeldmann/followers",
"following_url": "https://api.github.com/users/tfeldmann/following{/other_user}",
"gists_url": "https://api.github.com/users/tfeldmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tfeldmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tfeldmann/subscriptions",
"organizations_url": "https://api.github.com/users/tfeldmann/orgs",
"repos_url": "https://api.github.com/users/tfeldmann/repos",
"events_url": "https://api.github.com/users/tfeldmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/tfeldmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Peewee is currently at 2.6.0, and this code runs correctly under 2.6.0, so I'm not sure what more I can tell you besides _upgrade_! :)\n",
"Ok, this is strange. I tested it in Ubuntu 15.04 and Windows 8.1 VMs and it works. But I can reproduce the error every time on my system in both python2 and python3.\n\npeewee==2.4.2 works\npeewee==2.4.3 error\npeewee==2.6.0 error\n\nI'm using Mac OS X 10.10.3\nSQLite 3.8.9, Python 3.4.3 installed with homebrew.\n\nDo you have any idea how to investigate this?\n",
"Are you sure about 2.6.0 being broken? With peewee 2.6.0, python 2 or 3, and sqlite 3.8.9 the script works as expected:\n\n``` python\nfrom peewee import *\nfrom peewee import create_model_tables\nfrom playhouse.migrate import SqliteMigrator, migrate\n\ndb = SqliteDatabase(':memory:')\n\nclass Measurement(Model):\n some_field = CharField(index=True)\n\n class Meta:\n database = db\n\ncreate_model_tables([Measurement])\n\nmigrator = SqliteMigrator(db)\nmigrate(\n migrator.add_column('measurement', 'location', CharField(default=''))\n)\n\nprint(db.get_columns('measurement'))\n```\n\nOutput:\n\n``` python\n[\n ColumnMetadata(name='id', data_type='INTEGER', null=False, primary_key=True, table='measurement'), \n ColumnMetadata(name='some_field', data_type='VARCHAR(255)', null=False, primary_key=False, table='measurement'), \n ColumnMetadata(name='location', data_type='VARCHAR(255)', null=False, primary_key=False, table='measurement')]\n```\n",
"If you're running the above \"test\" script you might also add something like the following, just to be sure:\n\n``` python\nfrom peewee import __version__ as peewee_version\nprint('Peewee version %s' % peewee_version)\n```\n",
"This returns:\n\n```\nPeewee version 2.6.0\nTraceback (most recent call last):\n File \"test 2.py\", line 22, in <module>\n migrator.add_column('measurement', 'location', CharField(default=''))\n File \"/usr/local/lib/python3.4/site-packages/playhouse/migrate.py\", line 575, in migrate\n operation.run()\n File \"/usr/local/lib/python3.4/site-packages/playhouse/migrate.py\", line 144, in run\n getattr(self.migrator, self.method)(*self.args, **kwargs))\n File \"/usr/local/lib/python3.4/site-packages/playhouse/migrate.py\", line 138, in _handle_result\n self._handle_result(item)\n File \"/usr/local/lib/python3.4/site-packages/playhouse/migrate.py\", line 135, in _handle_result\n result.run()\n File \"/usr/local/lib/python3.4/site-packages/playhouse/migrate.py\", line 144, in run\n getattr(self.migrator, self.method)(*self.args, **kwargs))\n File \"/usr/local/lib/python3.4/site-packages/playhouse/migrate.py\", line 135, in _handle_result\n result.run()\n File \"/usr/local/lib/python3.4/site-packages/playhouse/migrate.py\", line 144, in run\n getattr(self.migrator, self.method)(*self.args, **kwargs))\n File \"/usr/local/lib/python3.4/site-packages/playhouse/migrate.py\", line 152, in inner\n return fn(self, *args, **kwargs)\n File \"/usr/local/lib/python3.4/site-packages/playhouse/migrate.py\", line 449, in _update_column\n indexes = self.database.get_indexes(table)\n File \"/usr/local/lib/python3.4/site-packages/peewee.py\", line 3184, in get_indexes\n for _, name, is_unique in cursor.fetchall():\nValueError: too many values to unpack (expected 3)\n```\n\nIf I set `index=False`, it works as expected.\n",
"Ahh, I think this is due to changes in SQLite. This was just recently fixed in:\n\n6d616e25c9748abc22ecf3f413b81ee91c390699\n\nSo you might try out master for now until I release 2.6.1.\n",
"Thank you for the feedback, I thought there was something wrong with my system :+1: \n"
] | 2015-04-28T10:40:41 | 2015-04-29T15:35:14 | 2015-04-28T14:15:42 | NONE | null | Hello Charles,
I have a problem with adding a column to one of my tables. I created a short example program which reproduces the error.
I found the problem was introduced in peewee 2.4.3
Greetings,
Thomas
``` python
from peewee import *
from peewee import create_model_tables
from playhouse.migrate import SqliteMigrator, migrate
db = SqliteDatabase(':memory:')
class Measurement(Model):
some_field = CharField(index=True)
class Meta:
database = db
create_model_tables([Measurement])
migrator = SqliteMigrator(db)
migrate(
migrator.add_column('measurement', 'location', CharField(default=''))
)
```
Traceback:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-f9b29a548522> in <module>()
15 migrator = SqliteMigrator(db)
16 migrate(
---> 17 migrator.add_column('measurement', 'location', CharField(default=''))
18 )
/usr/local/lib/python3.4/site-packages/playhouse/migrate.py in migrate(*operations, **kwargs)
533
534 @operation
--> 535 def add_not_null(self, table, column):
536 def _add_not_null(column_name, column_def):
537 return column_def + ' NOT NULL'
/usr/local/lib/python3.4/site-packages/playhouse/migrate.py in run(self)
142 kwargs['generate'] = True
143 self._handle_result(
--> 144 getattr(self.migrator, self.method)(*self.args, **kwargs))
145
146
/usr/local/lib/python3.4/site-packages/playhouse/migrate.py in _handle_result(self, result)
136 elif isinstance(result, (list, tuple)):
137 for item in result:
--> 138 self._handle_result(item)
139
140 def run(self):
/usr/local/lib/python3.4/site-packages/playhouse/migrate.py in _handle_result(self, result)
133 self.execute(result)
134 elif isinstance(result, Operation):
--> 135 result.run()
136 elif isinstance(result, (list, tuple)):
137 for item in result:
/usr/local/lib/python3.4/site-packages/playhouse/migrate.py in run(self)
142 kwargs['generate'] = True
143 self._handle_result(
--> 144 getattr(self.migrator, self.method)(*self.args, **kwargs))
145
146
/usr/local/lib/python3.4/site-packages/playhouse/migrate.py in _handle_result(self, result)
133 self.execute(result)
134 elif isinstance(result, Operation):
--> 135 result.run()
136 elif isinstance(result, (list, tuple)):
137 for item in result:
/usr/local/lib/python3.4/site-packages/playhouse/migrate.py in run(self)
142 kwargs['generate'] = True
143 self._handle_result(
--> 144 getattr(self.migrator, self.method)(*self.args, **kwargs))
145
146
/usr/local/lib/python3.4/site-packages/playhouse/migrate.py in inner(self, *args, **kwargs)
150 generate = kwargs.pop('generate', False)
151 if generate:
--> 152 return fn(self, *args, **kwargs)
153 return Operation(self, fn.__name__, *args, **kwargs)
154 return inner
/usr/local/lib/python3.4/site-packages/playhouse/migrate.py in _update_column(self, table, column_to_update, fn)
434 index_to_sql = dict(cursor.fetchall())
435 indexed_columns = {}
--> 436 for index_name in sorted(index_to_sql):
437 cursor = self.database.execute_sql(
438 'PRAGMA index_info("%s")' % index_name)
/usr/local/lib/python3.4/site-packages/playhouse/migrate.py in _get_indexes(self, table)
426 ['table', table])
427 return res.fetchone()[0]
--> 428
429 def _get_indexes(self, table):
430 cursor = self.database.execute_sql(
/usr/local/lib/python3.4/site-packages/peewee.py in get_indexes(self, table, schema)
2944
2945 register_unicode = True
-> 2946
2947 def _connect(self, database, **kwargs):
2948 if not psycopg2:
ValueError: too many values to unpack (expected 3)
```
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/588/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/587 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/587/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/587/comments | https://api.github.com/repos/coleifer/peewee/issues/587/events | https://github.com/coleifer/peewee/issues/587 | 71,111,523 | MDU6SXNzdWU3MTExMTUyMw== | 587 | Support for "on conflict" clause | {
"login": "elya5",
"id": 4464481,
"node_id": "MDQ6VXNlcjQ0NjQ0ODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4464481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elya5",
"html_url": "https://github.com/elya5",
"followers_url": "https://api.github.com/users/elya5/followers",
"following_url": "https://api.github.com/users/elya5/following{/other_user}",
"gists_url": "https://api.github.com/users/elya5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elya5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elya5/subscriptions",
"organizations_url": "https://api.github.com/users/elya5/orgs",
"repos_url": "https://api.github.com/users/elya5/repos",
"events_url": "https://api.github.com/users/elya5/events{/privacy}",
"received_events_url": "https://api.github.com/users/elya5/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2015-04-26T19:46:01 | 2015-04-26T21:41:49 | 2015-04-26T21:41:49 | NONE | null | It would be great to have support for the `on conflict` clause as it is described [here](https://www.sqlite.org/lang_conflict.html) for sqlite.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/587/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/586 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/586/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/586/comments | https://api.github.com/repos/coleifer/peewee/issues/586/events | https://github.com/coleifer/peewee/issues/586 | 70,713,167 | MDU6SXNzdWU3MDcxMzE2Nw== | 586 | Class methods on model instances (why?) | {
"login": "foxx",
"id": 651797,
"node_id": "MDQ6VXNlcjY1MTc5Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/651797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/foxx",
"html_url": "https://github.com/foxx",
"followers_url": "https://api.github.com/users/foxx/followers",
"following_url": "https://api.github.com/users/foxx/following{/other_user}",
"gists_url": "https://api.github.com/users/foxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/foxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/foxx/subscriptions",
"organizations_url": "https://api.github.com/users/foxx/orgs",
"repos_url": "https://api.github.com/users/foxx/repos",
"events_url": "https://api.github.com/users/foxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/foxx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Classmethods operate on the table. Instance methods operate on the rows. Class represents table, instance represents row. It is just an inconvenient fact of python's `classmethod` decorator that you can call classmethods from an instance. I could write a special decorator that prevented this, e.g. `classonlymethod` but I don't think it's worth the time and more Pythonic anyways.\n",
"That makes sense, I had considered using a special decorator myself in the past but it always felt like a bit too much magic, and made it confusing for anyone new to the code. This has given me some food for thought anyway, thanks again for the quick reply.\n"
] | 2015-04-24T15:07:50 | 2015-04-24T15:13:12 | 2015-04-24T15:10:04 | CONTRIBUTOR | null | As per [documentation](1), models have class methods such as `get_or_create`, which can be used to create a new model instance. Merging together object and objectset methods doesn't feel like a clean abstraction (imho), e.g. the ability to `get` another object which is unrelated to the object we're calling `get` from. For example;
```
u = User.get_or_create(name='amber')
u.get_or_create(name='jessica')
```
Previously I've always followed the design approach of separating object and objectset, similar to the approach that [Django](2) uses. For example;
```
u = User.objects.get_or_create(name='amber')
```
Could you give a brief explanation behind your reasoning for having methods such as `get_or_create` as class methods on the object, rather than a method on a manager class (such as `User.objects`)? Is this a legacy design thing that won't change due to backwards compatibility, or is there some technical merit behind this approach?
This is more for my own knowledge/learning, as it's left me wondering if my design principles are flawed or if I've overlooked some detail, so your insight would be appreciated.
Many thanks
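For what it's worth, the kind of "class-only" decorator mentioned in the discussion is only a few lines of plain Python; a rough sketch (not part of peewee) is below:
```python
class classonlymethod(classmethod):
    """A classmethod that refuses to be looked up through an instance."""
    def __get__(self, instance, owner=None):
        if instance is not None:
            raise AttributeError(
                'This method is available on the class only, not on instances.')
        return super(classonlymethod, self).__get__(instance, owner)
```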
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/586/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/585 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/585/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/585/comments | https://api.github.com/repos/coleifer/peewee/issues/585/events | https://github.com/coleifer/peewee/issues/585 | 70,683,256 | MDU6SXNzdWU3MDY4MzI1Ng== | 585 | connect() doesn't allow sqlite://:memory: | {
"login": "foxx",
"id": 651797,
"node_id": "MDQ6VXNlcjY1MTc5Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/651797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/foxx",
"html_url": "https://github.com/foxx",
"followers_url": "https://api.github.com/users/foxx/followers",
"following_url": "https://api.github.com/users/foxx/following{/other_user}",
"gists_url": "https://api.github.com/users/foxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/foxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/foxx/subscriptions",
"organizations_url": "https://api.github.com/users/foxx/orgs",
"repos_url": "https://api.github.com/users/foxx/repos",
"events_url": "https://api.github.com/users/foxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/foxx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The correct way is to write:\n\n``` python\nconnect('sqlite:///:memory:')\n```\n",
"Doh, you are correct. Confirmed working, my apologies.\n"
] | 2015-04-24T12:56:43 | 2015-04-24T15:08:45 | 2015-04-24T14:30:14 | CONTRIBUTOR | null | The following code does not work;
``` py
db = connect('sqlite://:memory:')
File "/usr/lib/python2.7/urlparse.py", line 113, in port
port = int(port, 10)
ValueError: invalid literal for int() with base 10: 'memory'
```
This is because `urlparse` doesn't see the above as a valid connection string, and rightly so. One option would be to check for this specific use case and handle accordingly [here](1), however this doesn't feel very clean and would force people to build invalid URIs.
```
def connect(url):
    driver, target = url.split(":", 1)
    if driver in ('sqlite', 'sqliteext'):
        if target == ':memory:':
            return schemes[driver](':memory:')
```
It would seem `SqliteDatabase` already uses `:memory:` if no connection string is provided, and as such the following will work fine;
``` py
db = connect('sqlite://')
```
Although this feels like strange behaviour in some ways, it's actually correct, because `sqlite://:memory:` is not a valid connection string, as mentioned earlier. Perhaps a docs update for `connect()` would be most appropriate here?
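For quick reference, both of the forms below resolve to an in-memory database (the three-slash form is the one suggested in the discussion):
```python
from playhouse.db_url import connect

# Empty path: SqliteDatabase falls back to its default of ':memory:'.
db1 = connect('sqlite://')

# Three slashes: empty host, path of ':memory:' -- this parses cleanly.
db2 = connect('sqlite:///:memory:')
```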
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/585/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/584 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/584/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/584/comments | https://api.github.com/repos/coleifer/peewee/issues/584/events | https://github.com/coleifer/peewee/issues/584 | 70,667,789 | MDU6SXNzdWU3MDY2Nzc4OQ== | 584 | Lastly... thank you | {
"login": "foxx",
"id": 651797,
"node_id": "MDQ6VXNlcjY1MTc5Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/651797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/foxx",
"html_url": "https://github.com/foxx",
"followers_url": "https://api.github.com/users/foxx/followers",
"following_url": "https://api.github.com/users/foxx/following{/other_user}",
"gists_url": "https://api.github.com/users/foxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/foxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/foxx/subscriptions",
"organizations_url": "https://api.github.com/users/foxx/orgs",
"repos_url": "https://api.github.com/users/foxx/repos",
"events_url": "https://api.github.com/users/foxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/foxx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Glad you're enjoying using peewee!\n"
] | 2015-04-24T11:40:50 | 2015-04-24T14:29:08 | 2015-04-24T14:29:08 | CONTRIBUTOR | null | I've raised quite a few issues today, but I'd just like to take a moment to say a huge thank you to everyone involved in getting peewee where it is today. The code appears to be well written, I was able to easily follow the source code and understand what was happening, and overall is just a pleasure to use.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/584/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/583 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/583/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/583/comments | https://api.github.com/repos/coleifer/peewee/issues/583/events | https://github.com/coleifer/peewee/issues/583 | 70,667,387 | MDU6SXNzdWU3MDY2NzM4Nw== | 583 | Database router and connection manager support | {
"login": "foxx",
"id": 651797,
"node_id": "MDQ6VXNlcjY1MTc5Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/651797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/foxx",
"html_url": "https://github.com/foxx",
"followers_url": "https://api.github.com/users/foxx/followers",
"following_url": "https://api.github.com/users/foxx/following{/other_user}",
"gists_url": "https://api.github.com/users/foxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/foxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/foxx/subscriptions",
"organizations_url": "https://api.github.com/users/foxx/orgs",
"repos_url": "https://api.github.com/users/foxx/repos",
"events_url": "https://api.github.com/users/foxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/foxx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Extending the discussion from #582, I was unable to find a clean way to introduce module inspection into Peewee without also bringing in the concept of 'apps', which is way outside of its scope as an ORM.\n\nThe cleanest way I've found so far is to use a connection manager by wrapping each connection with a proxy and referencing it later on. The test case can then tell the database proxy to switch out the current connection for the test database connection.\n\nHowever this breaks because `create_model_tables()` also requires models to be defined explicitly, and due to module inspection not being a clean way forward, the next alternative would be to maintain some sort of global registry of models. This could perhaps be achieved by using the `BaseClass` metaclass to place itself into a list on the peewee module.\n\nFurther more, `create_model_tables()` is unable to distinguish which classes are \"base inheritance\" classes, e.g. used by other models but not actually a model themselves. The best option I can think of would be to introduce a new meta option called `abstract`, similar to the one in [Django](1), which would prevent this problem from happening.\n\nNaturally this would be quite a big change but none of these would be backwards incompatible. The proposal so far would be for;\n- Database router\n- Connection manager\n- Models registry\n- New meta option `abstract`\n",
"In order to keep peewee simple I have chosen not to implement this type of functionality. The database is simply an attribute on the `Model._meta` object, so tooling could be built to change this at run-time, but at no point do I think peewee will include routers/connection managers like Django for example.\n",
"Thanks for the quick reply, appreciated.\n"
] | 2015-04-24T11:37:53 | 2015-04-24T14:33:10 | 2015-04-24T14:28:52 | CONTRIBUTOR | null | As an extension from #582, although Playhouse has an existing class for [ReadSlave](3), it doesn't appear to be a suitable replacement for database routers, as seen in [django](4).
Admittedly, adding support for database routers would most likely require some sort of connection manager, and would need some careful thought on design decisions. For example, being able to automatically create/teardown test databases based on a database settings dictionary, similar to the [test db feature](5) in Django.
If such a feature would ever be considered, I will happily spend some time putting together a PR proposal, and would welcome comments/ideas beforehand.
Thoughts?
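For context, the closest thing available out of the box is that the database is simply an attribute on `Model._meta`, so tooling can swap it at run-time; a rough sketch is below, where `User`, `live_db` and `test_db` are all hypothetical:
```python
from peewee import CharField, Model, SqliteDatabase

live_db = SqliteDatabase('live.db')
test_db = SqliteDatabase(':memory:')

class User(Model):
    name = CharField()

    class Meta:
        database = live_db

# Point the model at a different database at run-time, e.g. from a test harness.
User._meta.database = test_db
test_db.connect()
User.create_table()
```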
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/583/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/582 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/582/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/582/comments | https://api.github.com/repos/coleifer/peewee/issues/582/events | https://github.com/coleifer/peewee/issues/582 | 70,667,073 | MDU6SXNzdWU3MDY2NzA3Mw== | 582 | test_database requires explicit list of models | {
"login": "foxx",
"id": 651797,
"node_id": "MDQ6VXNlcjY1MTc5Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/651797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/foxx",
"html_url": "https://github.com/foxx",
"followers_url": "https://api.github.com/users/foxx/followers",
"following_url": "https://api.github.com/users/foxx/following{/other_user}",
"gists_url": "https://api.github.com/users/foxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/foxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/foxx/subscriptions",
"organizations_url": "https://api.github.com/users/foxx/orgs",
"repos_url": "https://api.github.com/users/foxx/repos",
"events_url": "https://api.github.com/users/foxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/foxx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"As a temporary workaround, I was able to do the following;\n\n``` py\nimport unittest\nimport inspect\nimport models\nimport peewee\nfrom app import testdb\nfrom playhouse.test_utils import test_database\n\nclass PeeweeTestDatabaseMixin(object):\n def run(self, *args, **kwargs):\n classes = [ v for k,v in inspect.getmembers(models, inspect.isclass) \n if issubclass(v, peewee.Model) ]\n test_database(testdb, classes)\n return super(PeeweeTestDatabaseMixin, self).run(*args, **kwargs)\n```\n",
"I prefer keeping things explicit as it is easier to understand and the implementation is simpler.\n"
] | 2015-04-24T11:35:52 | 2015-04-24T14:32:56 | 2015-04-24T14:06:20 | CONTRIBUTOR | null | If you wish to use `test_database`, you have to give it an explicit list of models, as seen [here](1). This can lead to some surprising behaviour if you create a model class and forget to add it to the list.
One way around this, as seen [here](2), is to use module inspection to auto detect classes without having to explicitly define them. Now this may not be a suitable option for everyone, for example if your models are split across multiple databases.
My original proposal was to add module inspection into `test_database()`, however further testing of #583 has shown this is not feasible. Realistically there isn't much that can be done to improve this unless #583 is accepted, so I'm going to mark as closed.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/582/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/581 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/581/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/581/comments | https://api.github.com/repos/coleifer/peewee/issues/581/events | https://github.com/coleifer/peewee/issues/581 | 70,655,515 | MDU6SXNzdWU3MDY1NTUxNQ== | 581 | db_url.connect does not support pooling | {
"login": "foxx",
"id": 651797,
"node_id": "MDQ6VXNlcjY1MTc5Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/651797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/foxx",
"html_url": "https://github.com/foxx",
"followers_url": "https://api.github.com/users/foxx/followers",
"following_url": "https://api.github.com/users/foxx/following{/other_user}",
"gists_url": "https://api.github.com/users/foxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/foxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/foxx/subscriptions",
"organizations_url": "https://api.github.com/users/foxx/orgs",
"repos_url": "https://api.github.com/users/foxx/repos",
"events_url": "https://api.github.com/users/foxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/foxx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"In the mean time, for anyone who comes across this problem in the future, I was able to get a temporary monkeypatch working;\n\n``` py\nfrom playhouse import db_url\nfrom playhouse.pool import PooledMySQLDatabase\ndb_url.schemes['mysql'] = PooledMySQLDatabase\n```\n\nThe above will replace the MySQL backend with a pooled backend, whenever `connect()` is used. Replace classes accordingly for other databases, see [here](1).\n",
"You can use `db_url.parse(url)` to parse the data into a dictionary, then pass the dictionary values into a pooled database implementation. Because pooled databases require additional arguments (pool size, etc), they do not seem to me a good candidate for inclusion in the db_url module.\n",
"That works beautifully, thank you.\n"
] | 2015-04-24T10:44:45 | 2015-04-24T15:10:06 | 2015-04-24T14:26:45 | CONTRIBUTOR | null | As seen [here](1), there is no way to make `connect()` use a pooled connection. I've looked over the code and there doesn't appear to be any way to convert an existing DB object into a pooled one, at least short of using some horrible metaclass hacks or monkey patching.
Could someone clarify if this feature is on the roadmap, or whether a PR would be accepted if not?
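For reference, the `db_url.parse()` route suggested in the comments keeps the URL-based configuration while still allowing a pooled class; the MySQL URL below is made up:
```python
from playhouse import db_url
from playhouse.pool import PooledMySQLDatabase

# db_url.parse() returns a dict of connection parameters that can be fed to
# any database class, including the pooled implementations.
db = PooledMySQLDatabase(
    max_connections=32,
    stale_timeout=300,  # 5 minutes
    **db_url.parse('mysql://user:secret@localhost:3306/mydb'))
```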
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/581/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/580 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/580/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/580/comments | https://api.github.com/repos/coleifer/peewee/issues/580/events | https://github.com/coleifer/peewee/issues/580 | 70,641,084 | MDU6SXNzdWU3MDY0MTA4NA== | 580 | JSON support with MariaDB | {
"login": "foxx",
"id": 651797,
"node_id": "MDQ6VXNlcjY1MTc5Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/651797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/foxx",
"html_url": "https://github.com/foxx",
"followers_url": "https://api.github.com/users/foxx/followers",
"following_url": "https://api.github.com/users/foxx/following{/other_user}",
"gists_url": "https://api.github.com/users/foxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/foxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/foxx/subscriptions",
"organizations_url": "https://api.github.com/users/foxx/orgs",
"repos_url": "https://api.github.com/users/foxx/repos",
"events_url": "https://api.github.com/users/foxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/foxx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I have no plans myself for adding support for any MySQL-specific features since between Postgres and SQLite I never find myself using MySQL. That said, I would certainly be open to a well-tested pull-request (with documentation).\n",
"Thanks for the quick reply, if I end up using MariaDB JSON heavily then I'll put together a PR.\n"
] | 2015-04-24T09:39:47 | 2015-04-24T14:02:44 | 2015-04-24T14:01:18 | CONTRIBUTOR | null | It would seem that Playhouse has [support](1) for Postgres JSON, whereas it doesn't have the same for MySQL. MariaDB, which is backwards compatible, has [supported](1) JSON since 10.0.1, and it would be fantastic to see Playhouse support for this.
Are there any plans to put this on the roadmap?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/580/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/580/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/579 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/579/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/579/comments | https://api.github.com/repos/coleifer/peewee/issues/579/events | https://github.com/coleifer/peewee/pull/579 | 70,232,871 | MDExOlB1bGxSZXF1ZXN0MzM5MDAzMTc= | 579 | Log slow queries | {
"login": "koblas",
"id": 219934,
"node_id": "MDQ6VXNlcjIxOTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/219934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/koblas",
"html_url": "https://github.com/koblas",
"followers_url": "https://api.github.com/users/koblas/followers",
"following_url": "https://api.github.com/users/koblas/following{/other_user}",
"gists_url": "https://api.github.com/users/koblas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/koblas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/koblas/subscriptions",
"organizations_url": "https://api.github.com/users/koblas/orgs",
"repos_url": "https://api.github.com/users/koblas/repos",
"events_url": "https://api.github.com/users/koblas/events{/privacy}",
"received_events_url": "https://api.github.com/users/koblas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2015-04-22T20:57:04 | 2015-04-22T20:57:26 | 2015-04-22T20:57:26 | NONE | null | Test Plan: Local testing on dev box
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/579/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/579",
"html_url": "https://github.com/coleifer/peewee/pull/579",
"diff_url": "https://github.com/coleifer/peewee/pull/579.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/579.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/578 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/578/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/578/comments | https://api.github.com/repos/coleifer/peewee/issues/578/events | https://github.com/coleifer/peewee/pull/578 | 69,993,449 | MDExOlB1bGxSZXF1ZXN0MzM4MTg0NTA= | 578 | let the Meta class can extends from another class | {
"login": "anjianshi",
"id": 5005012,
"node_id": "MDQ6VXNlcjUwMDUwMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5005012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anjianshi",
"html_url": "https://github.com/anjianshi",
"followers_url": "https://api.github.com/users/anjianshi/followers",
"following_url": "https://api.github.com/users/anjianshi/following{/other_user}",
"gists_url": "https://api.github.com/users/anjianshi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anjianshi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anjianshi/subscriptions",
"organizations_url": "https://api.github.com/users/anjianshi/orgs",
"repos_url": "https://api.github.com/users/anjianshi/repos",
"events_url": "https://api.github.com/users/anjianshi/events{/privacy}",
"received_events_url": "https://api.github.com/users/anjianshi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I can see the reasoning behind this change, but I'm loth to introduce any changes that might break existing applications. I'm not sure off-hand, but given the way peewee munges `Meta` objects at present, there may be some edge-cases or situations where this breaks.\n\nWhat do you think?\n",
"I see, the `dir()` function may be unstable.\nCan we use `inspect.getmembers()` instead? \nhttp://stackoverflow.com/a/8529470/2815178\nI think it's more safety.\n\n``` python\nimport inspect\nfor k, v in inspect.getmembers(meta):\n if not k.startswith('_'):\n meta_options[k] = v\n```\n",
"Or I can just use a decorator:\n\n``` python\ndef with_database(cls):\n cls.database = db\n return cls\n\n\nclass BaseModel(Model):\n @with_database\n class Meta:\n pass\n\nclass ModelA(Model): \n pass\n\nclass ModelB(Model):\n @with_database\n class Meta:\n db_table = \"model_b\"\n```\n\nThen, nothing need changes in Peewee.\nIt's just not so nature.\n",
"You do know you can write this, right?\n\n``` python\ndb = PostgresqlDatabase(\"mydb\")\n\nclass BaseModel(Model):\n class Meta:\n database = db\n\nclass ModelA(BaseModel): \n # no need to define Meta / database = db, it is inherited.\n pass\n\nclass ModelB(BaseModel):\n class Meta(BaseMeta):\n # database = db --> this is inherited via BaseModel\n db_table = \"model_b\"\n```\n",
"Oh, I don't know this before. Thank you!\nPerhaps this pull request is not needed any more?\n"
] | 2015-04-22T01:19:05 | 2015-04-22T04:47:09 | 2015-04-22T04:47:09 | NONE | null | Example:
``` python
db = PostgresqlDatabase("mydb")
class BaseMeta:
    database = db

class BaseModel(Model):
    class Meta(BaseMeta):
        pass

class ModelA(Model):
    pass

class ModelB(Model):
    class Meta(BaseMeta):
        db_table = "model_b"
```
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/578/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/578",
"html_url": "https://github.com/coleifer/peewee/pull/578",
"diff_url": "https://github.com/coleifer/peewee/pull/578.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/578.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/577 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/577/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/577/comments | https://api.github.com/repos/coleifer/peewee/issues/577/events | https://github.com/coleifer/peewee/issues/577 | 69,896,638 | MDU6SXNzdWU2OTg5NjYzOA== | 577 | peewee.DataError: (1406, "Data too long for column ... | {
"login": "martinburch",
"id": 2335284,
"node_id": "MDQ6VXNlcjIzMzUyODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2335284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/martinburch",
"html_url": "https://github.com/martinburch",
"followers_url": "https://api.github.com/users/martinburch/followers",
"following_url": "https://api.github.com/users/martinburch/following{/other_user}",
"gists_url": "https://api.github.com/users/martinburch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/martinburch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/martinburch/subscriptions",
"organizations_url": "https://api.github.com/users/martinburch/orgs",
"repos_url": "https://api.github.com/users/martinburch/repos",
"events_url": "https://api.github.com/users/martinburch/events{/privacy}",
"received_events_url": "https://api.github.com/users/martinburch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Sorry, I think this was a database issue, peewee was just passing back the error.\n"
] | 2015-04-21T16:48:21 | 2015-04-21T16:57:39 | 2015-04-21T16:57:39 | NONE | null | I'm trying to create a custom field type
```
class JSONField(Field):
    def db_value(self, value):
        return json.dumps(value)

    def python_value(self, value):
        return json.loads(value)
```
And then use this to write some data. But I'm being told it's too long for the column. How do I explain to peewee what size of data the column can accept?
Edit: I have also tried changing to `class JSONField(TextField):` instead of just `Field` but no change in the error.
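For reference, a TEXT-backed sketch of the same field is below; note that the field class only determines the column type when the table is created, so an existing VARCHAR column keeps its old length limit regardless of what the model says:
```python
import json

from peewee import TextField


class JSONField(TextField):
    """Store JSON-serializable data in a TEXT column (no VARCHAR length limit)."""
    def db_value(self, value):
        if value is not None:
            return json.dumps(value)

    def python_value(self, value):
        if value is not None:
            return json.loads(value)
```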
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/577/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/576 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/576/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/576/comments | https://api.github.com/repos/coleifer/peewee/issues/576/events | https://github.com/coleifer/peewee/pull/576 | 69,109,649 | MDExOlB1bGxSZXF1ZXN0MzM1MzA5ODU= | 576 | Update database.rst | {
"login": "jiffies",
"id": 1257256,
"node_id": "MDQ6VXNlcjEyNTcyNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1257256?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiffies",
"html_url": "https://github.com/jiffies",
"followers_url": "https://api.github.com/users/jiffies/followers",
"following_url": "https://api.github.com/users/jiffies/following{/other_user}",
"gists_url": "https://api.github.com/users/jiffies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiffies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiffies/subscriptions",
"organizations_url": "https://api.github.com/users/jiffies/orgs",
"repos_url": "https://api.github.com/users/jiffies/repos",
"events_url": "https://api.github.com/users/jiffies/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiffies/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"thnx porco rosso\n"
] | 2015-04-17T10:14:28 | 2015-04-17T13:01:31 | 2015-04-17T13:01:22 | CONTRIBUTOR | null | The MySQL database URL format should be like the PostgreSQL one.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/576/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/576",
"html_url": "https://github.com/coleifer/peewee/pull/576",
"diff_url": "https://github.com/coleifer/peewee/pull/576.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/576.patch",
"merged_at": "2015-04-17T13:01:22"
} |
https://api.github.com/repos/coleifer/peewee/issues/575 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/575/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/575/comments | https://api.github.com/repos/coleifer/peewee/issues/575/events | https://github.com/coleifer/peewee/issues/575 | 68,942,152 | MDU6SXNzdWU2ODk0MjE1Mg== | 575 | peewee successfully updates model with existed unique fields | {
"login": "semolex",
"id": 7127330,
"node_id": "MDQ6VXNlcjcxMjczMzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7127330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/semolex",
"html_url": "https://github.com/semolex",
"followers_url": "https://api.github.com/users/semolex/followers",
"following_url": "https://api.github.com/users/semolex/following{/other_user}",
"gists_url": "https://api.github.com/users/semolex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/semolex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/semolex/subscriptions",
"organizations_url": "https://api.github.com/users/semolex/orgs",
"repos_url": "https://api.github.com/users/semolex/repos",
"events_url": "https://api.github.com/users/semolex/events{/privacy}",
"received_events_url": "https://api.github.com/users/semolex/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm not sure I understand what is happening. Can you share code to replicate the issue?\n",
"I feel sorry for disturbing you. It is not issue of peewee itself, it was at DB level...\n"
] | 2015-04-16T14:57:29 | 2015-04-17T19:53:05 | 2015-04-17T19:52:34 | NONE | null | I have such code:
``` python
class User(peewee.Model):
    username = peewee.CharField(unique=True, max_length=50)
    password = peewee.CharField()
    email = peewee.CharField(unique=True, max_length=80)
    status = peewee.IntegerField(choices=[(1, 'active'), (2, 'inactive'), (3, 'blocked')], default=1)

    class Meta:
        database = database
```
way of creating table:
``` python
User.create_table()
```
I have already created a record in my DB with username 'Alex'.
And after that, when I try to update some other record to the username 'Alex', it updates successfully the first time! It fails only when I try to do this twice!
Maybe it is an issue with something else?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/575/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/574 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/574/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/574/comments | https://api.github.com/repos/coleifer/peewee/issues/574/events | https://github.com/coleifer/peewee/pull/574 | 68,893,544 | MDExOlB1bGxSZXF1ZXN0MzM0MzcwNDI= | 574 | Support encoding in Postgres | {
"login": "klen",
"id": 90699,
"node_id": "MDQ6VXNlcjkwNjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/90699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klen",
"html_url": "https://github.com/klen",
"followers_url": "https://api.github.com/users/klen/followers",
"following_url": "https://api.github.com/users/klen/following{/other_user}",
"gists_url": "https://api.github.com/users/klen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klen/subscriptions",
"organizations_url": "https://api.github.com/users/klen/orgs",
"repos_url": "https://api.github.com/users/klen/repos",
"events_url": "https://api.github.com/users/klen/events{/privacy}",
"received_events_url": "https://api.github.com/users/klen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks!\n",
"Thank you! Can I hope on patch-release?\n",
"I'll probably wait a bit as I just released a new one not long ago. Maybe a couple weeks?\n",
"Up to you. But for now I have to use manual installation from github, it's not compatible.\n",
"I just released 2.6.0.\n",
"Great! Thank you!\n"
] | 2015-04-16T10:22:45 | 2015-04-22T07:09:31 | 2015-04-17T17:17:32 | CONTRIBUTOR | null | Hello,
In my case I work with a lot of unicode data. All my databases have a UTF-8 encoding, but I've got errors like this: `UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-5: ordinal not in range(128)`.
I knew about the trick:
``` python
database = PostgresqlDatabase(database, **connection_params)
conn = database.get_conn()
conn.set_client_encoding('UTF8')
```
But if you have multiple workers (or pool) it doesn't help and the error still appears.
So I'm providing this pull request:
```
database = PostgresqlDatabase(database, encoding='UTF8', **connection_params)
```
This solves the problem completely.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/574/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/574",
"html_url": "https://github.com/coleifer/peewee/pull/574",
"diff_url": "https://github.com/coleifer/peewee/pull/574.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/574.patch",
"merged_at": "2015-04-17T17:17:32"
} |
https://api.github.com/repos/coleifer/peewee/issues/573 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/573/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/573/comments | https://api.github.com/repos/coleifer/peewee/issues/573/events | https://github.com/coleifer/peewee/issues/573 | 68,882,251 | MDU6SXNzdWU2ODg4MjI1MQ== | 573 | postgres_ext do not support LTreeField? | {
"login": "dllhlx",
"id": 7111160,
"node_id": "MDQ6VXNlcjcxMTExNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7111160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dllhlx",
"html_url": "https://github.com/dllhlx",
"followers_url": "https://api.github.com/users/dllhlx/followers",
"following_url": "https://api.github.com/users/dllhlx/following{/other_user}",
"gists_url": "https://api.github.com/users/dllhlx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dllhlx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dllhlx/subscriptions",
"organizations_url": "https://api.github.com/users/dllhlx/orgs",
"repos_url": "https://api.github.com/users/dllhlx/repos",
"events_url": "https://api.github.com/users/dllhlx/events{/privacy}",
"received_events_url": "https://api.github.com/users/dllhlx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"i find define of LTreeFiled in peewee old version - 0.9.9 , why delete this logic in lastest version ?\n",
"> from the docs we can know peewee ext can support postgres ltree\n\nWhere in the docs does it say this?\n",
"I have no intentions of adding LTree back, but would accept pull-requests.\n"
] | 2015-04-16T09:24:13 | 2015-04-18T18:37:05 | 2015-04-18T18:37:05 | NONE | null | from the docs <http://docs.peewee-orm.com/en/1.0.0/peewee/playhouse.html#ltree > we can know peewee ext can support postgres ltree but in the source code in github we can not find the define of LTreeField. why?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/573/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/572 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/572/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/572/comments | https://api.github.com/repos/coleifer/peewee/issues/572/events | https://github.com/coleifer/peewee/issues/572 | 68,785,637 | MDU6SXNzdWU2ODc4NTYzNw== | 572 | PostgerSQL:how to add explicit type casts? | {
"login": "khahux",
"id": 4948118,
"node_id": "MDQ6VXNlcjQ5NDgxMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4948118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khahux",
"html_url": "https://github.com/khahux",
"followers_url": "https://api.github.com/users/khahux/followers",
"following_url": "https://api.github.com/users/khahux/following{/other_user}",
"gists_url": "https://api.github.com/users/khahux/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khahux/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khahux/subscriptions",
"organizations_url": "https://api.github.com/users/khahux/orgs",
"repos_url": "https://api.github.com/users/khahux/repos",
"events_url": "https://api.github.com/users/khahux/events{/privacy}",
"received_events_url": "https://api.github.com/users/khahux/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hey, if you are using the `PostgresqlExtDatabase` you can use `cls.created_at.cast('text') ** format`.\n\nHonestly why on earth are you doing your date comparison like that, though?! You can use the `between()` function, `fn.date_trunc()`, so many alternatives.\n\nFor example, I believe this should work:\n\n``` python\n@classmethod\ndef get_blog_by_date(cls, year, month):\n return cls.select(cls.created_at).where(\n fn.date_trunc('month', cls.created_at) == datetime.date(year, month, 1))\n```\n"
] | 2015-04-15T20:58:42 | 2015-04-16T00:59:45 | 2015-04-16T00:59:45 | NONE | null | ```
@classmethod
def get_blog_by_date(cls, year, month):
    format = '%%%s-%s%%' % (year, month)
    return cls.select(cls.created_at).where(cls.created_at ** format)
```
------ error ------
ProgrammingError: operator does not exist: timestamp without time zone ~~* unknown
LINE 1: ...ed_at" FROM "blog" AS t1 WHERE ("t1"."created_at" ILIKE '%20...
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
I found this: `select * from events where CAST(timestamp as TEXT) like '2010-01-26 10:%';` (http://www.question-defense.com/2010/02/08/no-operator-matches-the-given-name-and-argument-types-you-might-need-to-add-explicit-type-casts)
but I have not been able to solve this problem by searching.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/572/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/571 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/571/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/571/comments | https://api.github.com/repos/coleifer/peewee/issues/571/events | https://github.com/coleifer/peewee/issues/571 | 68,598,255 | MDU6SXNzdWU2ODU5ODI1NQ== | 571 | A more powerful get_or_create method | {
"login": "lsc20051426",
"id": 219287,
"node_id": "MDQ6VXNlcjIxOTI4Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/219287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lsc20051426",
"html_url": "https://github.com/lsc20051426",
"followers_url": "https://api.github.com/users/lsc20051426/followers",
"following_url": "https://api.github.com/users/lsc20051426/following{/other_user}",
"gists_url": "https://api.github.com/users/lsc20051426/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lsc20051426/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lsc20051426/subscriptions",
"organizations_url": "https://api.github.com/users/lsc20051426/orgs",
"repos_url": "https://api.github.com/users/lsc20051426/repos",
"events_url": "https://api.github.com/users/lsc20051426/events{/privacy}",
"received_events_url": "https://api.github.com/users/lsc20051426/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think I will pass on making changes to the implementation. The method has been deprecated for some time and there is documentation on implementing it yourself:\n\nhttp://docs.peewee-orm.com/en/latest/peewee/querying.html#get-or-create\n",
"These changes are now included in the new version 2.6.0. The function behaves just like it's Django equivalent.\n",
"Cool!\n"
] | 2015-04-15T07:17:42 | 2015-04-28T06:34:43 | 2015-04-15T20:09:15 | NONE | null | Currently the get_or_create is pretty weak.
Here is the example from django:
```
try:
    obj = Person.objects.get(first_name='John', last_name='Lennon')
except Person.DoesNotExist:
    obj = Person(first_name='John', last_name='Lennon', birthday=date(1940, 10, 9))
    obj.save()
```
This pattern gets quite unwieldy as the number of fields in a model goes up. The above example can be rewritten using get_or_create() like so:
```
obj, created = Person.objects.get_or_create(first_name='John', last_name='Lennon',
                                            defaults={'birthday': date(1940, 10, 9)})
```
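For comparison, the equivalent call that eventually shipped (the comments note it landed in peewee 2.6.0) has the same shape; below is a self-contained sketch with a made-up `Person` model:
```python
from datetime import date

from peewee import CharField, DateField, Model, SqliteDatabase

db = SqliteDatabase(':memory:')

class Person(Model):
    first_name = CharField()
    last_name = CharField()
    birthday = DateField(null=True)

    class Meta:
        database = db

db.connect()
Person.create_table()

# Returns (instance, created); `defaults` is only applied when a new row
# actually has to be created.
obj, created = Person.get_or_create(
    first_name='John', last_name='Lennon',
    defaults={'birthday': date(1940, 10, 9)})
```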
Thanks~
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/571/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/570 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/570/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/570/comments | https://api.github.com/repos/coleifer/peewee/issues/570/events | https://github.com/coleifer/peewee/issues/570 | 68,462,093 | MDU6SXNzdWU2ODQ2MjA5Mw== | 570 | joining is not allowed on UpdateQuery instances | {
"login": "arski",
"id": 904818,
"node_id": "MDQ6VXNlcjkwNDgxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/904818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arski",
"html_url": "https://github.com/arski",
"followers_url": "https://api.github.com/users/arski/followers",
"following_url": "https://api.github.com/users/arski/following{/other_user}",
"gists_url": "https://api.github.com/users/arski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arski/subscriptions",
"organizations_url": "https://api.github.com/users/arski/orgs",
"repos_url": "https://api.github.com/users/arski/repos",
"events_url": "https://api.github.com/users/arski/events{/privacy}",
"received_events_url": "https://api.github.com/users/arski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"``` python\nus_pub = Publisher.select().where(Publisher.country == 'US')\nBook.update(foo='bar').where(Book.publisher << us_pub)\n```\n\nIIRC only MySQL supports joins on update queries, while SQLite and Postgres do not. Subqueries are the way to go.\n",
"aha! thanks for the quick reply.\n",
"actually, what if i wanted to populatw my books with some value\nfrom the related publisher.. No way to do that i guess?\n",
"If you're using postgres you can theoretically use `update({Book.foo: Publisher.bar}).from_(Publisher)`",
"@GothAck is correct, and this is supported by peewee. Sqlite will be adding support for this in the next release as well, which is exciting.\r\n\r\nExample peewee usage:\r\n\r\nhttps://github.com/coleifer/peewee/blob/611a08987e8f08af1056ec679ea6cfdccb87c65b/tests/model_sql.py#L689-L743"
] | 2015-04-14T18:28:18 | 2020-07-28T14:09:28 | 2015-04-14T18:31:19 | NONE | null | This is an error message from peewee. May I ask why this is, though? Let's assume I want to run an update on Books where the Publisher of said book is based in the US. That is, in my opinion, a very common update query. Or is there just a different way of doing this during an update?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/570/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/569 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/569/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/569/comments | https://api.github.com/repos/coleifer/peewee/issues/569/events | https://github.com/coleifer/peewee/pull/569 | 65,986,903 | MDExOlB1bGxSZXF1ZXN0MzI1MzgxMTM= | 569 | Expose parsed db url | {
"login": "stt",
"id": 245985,
"node_id": "MDQ6VXNlcjI0NTk4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/245985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stt",
"html_url": "https://github.com/stt",
"followers_url": "https://api.github.com/users/stt/followers",
"following_url": "https://api.github.com/users/stt/following{/other_user}",
"gists_url": "https://api.github.com/users/stt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stt/subscriptions",
"organizations_url": "https://api.github.com/users/stt/orgs",
"repos_url": "https://api.github.com/users/stt/repos",
"events_url": "https://api.github.com/users/stt/events{/privacy}",
"received_events_url": "https://api.github.com/users/stt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Nice! Do you mind adding a test-case for the new function?\n",
"Sure, :+1: ?\n",
"Thanks!\n"
] | 2015-04-02T18:38:59 | 2015-04-03T14:09:14 | 2015-04-03T14:09:12 | CONTRIBUTOR | null | It'd be useful to get the parsed dict from db_url so it could be used with pooled connections as well.
E.g.:
```
db = PooledMySQLDatabase(
    max_connections=32,
    stale_timeout=300,  # 5 minutes.
    **db_url.parse(app.config['DATABASE_URL'])
)
```
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/569/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/569",
"html_url": "https://github.com/coleifer/peewee/pull/569",
"diff_url": "https://github.com/coleifer/peewee/pull/569.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/569.patch",
"merged_at": "2015-04-03T14:09:12"
} |
https://api.github.com/repos/coleifer/peewee/issues/568 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/568/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/568/comments | https://api.github.com/repos/coleifer/peewee/issues/568/events | https://github.com/coleifer/peewee/issues/568 | 65,792,741 | MDU6SXNzdWU2NTc5Mjc0MQ== | 568 | Snippet function on FTS tables. | {
"login": "leiserfg",
"id": 2947276,
"node_id": "MDQ6VXNlcjI5NDcyNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2947276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leiserfg",
"html_url": "https://github.com/leiserfg",
"followers_url": "https://api.github.com/users/leiserfg/followers",
"following_url": "https://api.github.com/users/leiserfg/following{/other_user}",
"gists_url": "https://api.github.com/users/leiserfg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leiserfg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leiserfg/subscriptions",
"organizations_url": "https://api.github.com/users/leiserfg/orgs",
"repos_url": "https://api.github.com/users/leiserfg/repos",
"events_url": "https://api.github.com/users/leiserfg/events{/privacy}",
"received_events_url": "https://api.github.com/users/leiserfg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The problem is that peewee assumes when it encounters a Model as part of a query, that it should alias it to the alias used for that table. In this case, however, we don't want that -- we simply want the table name to be passed in.\n\nTo remedy, you can rewrite this as:\n\n``` python\nfrom peewee import Entity\n(FTSBook\n .select(fn.snippet(Entity(FTSBook._meta.db_table)))\n .where(FTSBook.match('python')))\n```\n\nIf you are using peewee 2.5.0 or older, you can also use:\n\n``` python\n(FTSBook\n .select(fn.snippet(FTSBook._as_entity()))\n .where(FTSBook.match('python')))\n```\n\nFinally, if you are using master, I changed `_as_entity()` to a public method, so it is now `as_entity()`:\n\n``` python\n(FTSBook\n .select(fn.snippet(FTSBook.as_entity()))\n .where(FTSBook.match('python')))\n```\n",
"Hi,\r\nIt seems the as_entity() method does not exist any more.\r\nHow should I use fn.snippet now?",
"You can use `._meta.entity` property instead:\r\n\r\n```python\r\nFTSBook.select(fn.snippet(FTSBook._meta.entity))\r\n```"
] | 2015-04-01T22:28:14 | 2018-10-02T12:31:44 | 2015-04-02T00:08:23 | NONE | null | When I write
``` python
FTSBook.select(fn.snippet(FTSBook)).where(FTSBook.match('python'))
```
it returns:
``` sql
SELECT snippet("ftsbook" AS t1) FROM "ftsbook" AS t1 WHERE ("ftsbook" MATCH 'python')
```
But if I run this query, it fails:
```
near "AS": syntax error
```
Is it a bug in peewee, or is there a way to call a function that receives a table as an argument?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/568/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/567 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/567/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/567/comments | https://api.github.com/repos/coleifer/peewee/issues/567/events | https://github.com/coleifer/peewee/issues/567 | 65,690,955 | MDU6SXNzdWU2NTY5MDk1NQ== | 567 | Unable to create json or jsonb columns in Postgres 9.4 | {
"login": "wishabhilash",
"id": 831818,
"node_id": "MDQ6VXNlcjgzMTgxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/831818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wishabhilash",
"html_url": "https://github.com/wishabhilash",
"followers_url": "https://api.github.com/users/wishabhilash/followers",
"following_url": "https://api.github.com/users/wishabhilash/following{/other_user}",
"gists_url": "https://api.github.com/users/wishabhilash/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wishabhilash/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wishabhilash/subscriptions",
"organizations_url": "https://api.github.com/users/wishabhilash/orgs",
"repos_url": "https://api.github.com/users/wishabhilash/repos",
"events_url": "https://api.github.com/users/wishabhilash/events{/privacy}",
"received_events_url": "https://api.github.com/users/wishabhilash/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You need to use playhouse.postgres_ext.PostgresqlExtDatabase.\n",
"Ohh thanks... :+1: \n"
] | 2015-04-01T13:59:49 | 2015-04-07T19:44:48 | 2015-04-01T14:34:15 | NONE | null | I am trying to create a table with a jsonb column, but it keeps giving me an error.
```
from peewee import *
from playhouse.postgres_ext import BinaryJSONField
db = PostgresqlDatabase(database="test")
class JSONTest3(Model):
    username = BinaryJSONField()

    class Meta:
        database = db

if __name__ == '__main__':
    db.connect()
    db.create_tables([JSONTest3])
```
My lib versions are:
Postgres == 9.4
psycopg2 == 2.6
Error is:
```
Traceback (most recent call last):
  File "peeweepostgres.py", line 16, in <module>
    db.create_tables([JSONTest3])
  File "/home/wish/virtualenv/local/lib/python2.7/site-packages/peewee.py", line 3042, in create_tables
    create_model_tables(models, fail_silently=safe)
  File "/home/wish/virtualenv/local/lib/python2.7/site-packages/peewee.py", line 4163, in create_model_tables
    m.create_table(**create_table_kwargs)
  File "/home/wish/virtualenv/local/lib/python2.7/site-packages/peewee.py", line 3911, in create_table
    db.create_table(cls)
  File "/home/wish/virtualenv/local/lib/python2.7/site-packages/peewee.py", line 3039, in create_table
    return self.execute_sql(*qc.create_table(model_class, safe))
  File "/home/wish/virtualenv/local/lib/python2.7/site-packages/peewee.py", line 1718, in inner
    return self.parse_node(fn(*args, **kwargs))
  File "/home/wish/virtualenv/local/lib/python2.7/site-packages/peewee.py", line 1746, in _create_table
    columns.append(self.field_definition(field))
  File "/home/wish/virtualenv/local/lib/python2.7/site-packages/peewee.py", line 1696, in field_definition
    column_type = self.get_column_type(field.get_db_field())
  File "/home/wish/virtualenv/local/lib/python2.7/site-packages/peewee.py", line 1325, in get_column_type
    return self._field_map[f]
KeyError: 'json'
```
What am I doing wrong here?
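For reference, the fix pointed out in the accepted answer, using the extension database class that knows how to map the json/jsonb column types, looks roughly like this (assuming a local Postgres database named `test`):
```python
from peewee import Model
from playhouse.postgres_ext import PostgresqlExtDatabase, BinaryJSONField

# PostgresqlExtDatabase knows how to map json/jsonb field types to column types.
db = PostgresqlExtDatabase('test')

class JSONTest3(Model):
    username = BinaryJSONField()

    class Meta:
        database = db

if __name__ == '__main__':
    db.connect()
    db.create_tables([JSONTest3])
```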
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/567/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/566 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/566/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/566/comments | https://api.github.com/repos/coleifer/peewee/issues/566/events | https://github.com/coleifer/peewee/issues/566 | 65,562,653 | MDU6SXNzdWU2NTU2MjY1Mw== | 566 | UNION query | {
"login": "ybahador",
"id": 9665921,
"node_id": "MDQ6VXNlcjk2NjU5MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9665921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ybahador",
"html_url": "https://github.com/ybahador",
"followers_url": "https://api.github.com/users/ybahador/followers",
"following_url": "https://api.github.com/users/ybahador/following{/other_user}",
"gists_url": "https://api.github.com/users/ybahador/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ybahador/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ybahador/subscriptions",
"organizations_url": "https://api.github.com/users/ybahador/orgs",
"repos_url": "https://api.github.com/users/ybahador/repos",
"events_url": "https://api.github.com/users/ybahador/events{/privacy}",
"received_events_url": "https://api.github.com/users/ybahador/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Can you share your code and the SQL? Its hard to help without any clues you know...\n",
"Also what database are you using? If you can't share your exact query can you give me an example that fails in a similar way?\n",
"Please comment and I will re-open.\n",
"I'm using postgres as my database.\n\nHere is my code:\n\n``` python\nq1 = ASRank.select().where(ASRank.snapshot == '2014-02-04').order_by(ASRank.byte_rank).limit(10)\nq2 = ASRank.select().where(ASRank.snapshot == '2014-02-04').order_by(ASRank.flow_rank).limit(10)\nq = q1 | q2\n```\n\nwhich results in:\n\n``` sql\nSELECT \"t2\".\"id\", \"t2\".\"snapshot_id\", \"t2\".\"asn\", \"t2\".\"byte_rank\", \"t2\".\"flow_rank\", \"t2\".\"bytes\", \"t2\".\"flows\" \nFROM \"asrank\" AS t2 \nWHERE (\"t2\".\"snapshot_id\" = 2014-02-04) ORDER BY \"t2\".\"byte_rank\" LIMIT 10 \nUNION \nSELECT \"t2\".\"id\", \"t2\".\"snapshot_id\", \"t2\".\"asn\", \"t2\".\"byte_rank\", \"t2\".\"flow_rank\", \"t2\".\"bytes\", \"t2\".\"flows\" \nFROM \"asrank\" AS t2 \nWHERE (\"t2\".\"snapshot_id\" = 2014-02-04) ORDER BY \"t2\".\"flow_rank\" LIMIT 10 ORDER BY \"t1\".\"byte_rank\"\n```\n",
"Ouch, yes I've verified the bug on both Postgresql and SQLite.\n",
"I believe these problems should now be fixed in `master`. SQLite does not support limit/order by in the components of a compound query, but since you're using Postgres this ended up being a problem of fixing the parentheses. Your query should now be working if you pull the latest code. I'll make a new release in the next week or two.\n",
"Oh, and thank you very much for reporting this!!\n",
"Thanks for fixing the issue quickly.\n"
] | 2015-03-31T22:41:26 | 2015-04-03T04:20:51 | 2015-04-03T00:23:45 | NONE | null | I'm making a compound query by using the | operator, but the generated query does not have correct parentheses and results in a SQL error.
I'm using peewee version 2.5.0
I've also found a similar issue (#454), and based on the discussion it seems the issue was fixed in a previous version. It's strange that I'm still experiencing this problem.
Could you please look into it?
Thanks.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/566/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/565 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/565/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/565/comments | https://api.github.com/repos/coleifer/peewee/issues/565/events | https://github.com/coleifer/peewee/issues/565 | 64,942,927 | MDU6SXNzdWU2NDk0MjkyNw== | 565 | PrimaryKey issue with peewee and pymysql | {
"login": "ckoepp",
"id": 1830022,
"node_id": "MDQ6VXNlcjE4MzAwMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1830022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ckoepp",
"html_url": "https://github.com/ckoepp",
"followers_url": "https://api.github.com/users/ckoepp/followers",
"following_url": "https://api.github.com/users/ckoepp/following{/other_user}",
"gists_url": "https://api.github.com/users/ckoepp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ckoepp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ckoepp/subscriptions",
"organizations_url": "https://api.github.com/users/ckoepp/orgs",
"repos_url": "https://api.github.com/users/ckoepp/repos",
"events_url": "https://api.github.com/users/ckoepp/events{/privacy}",
"received_events_url": "https://api.github.com/users/ckoepp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"If you use `IntegerField` then it will not be auto-incrementing. Instead you should use `PrimaryKeyField` which contains the appropriate \"AUTO INCREMENT\" clause.\n"
] | 2015-03-28T13:31:29 | 2015-03-28T14:50:09 | 2015-03-28T14:50:09 | NONE | null | I ran into an issue with primary keys when using pymysql in combination with peewee. Somehow the default value for the integer PK is always 0, which of course leads to problems when more than one object is created. A minimal example is the following code using a (rather stupid) Cat object and adding two cats:
```
from peewee import *
db = MySQLDatabase(
    database = "test",
    user = "test",
    passwd = "test",
    host = "localhost",
    port = 3306
)

class Cat(Model):
    catid = IntegerField(primary_key=True)
    name = CharField()

    class Meta:
        database = db
db.create_tables((Cat,))
Cat.create(name="Winston")
Cat.create(name="Churchill")
```
When I run this code (using pymysql) I receive the following exception:
```
(...)/python3.4/site-packages/pymysql/cursors.py:134: Warning: Field 'catid' doesn't have a default value
result = self._query(query)
Traceback (most recent call last):
File "(...)/python3.4/site-packages/peewee.py", line 2869, in execute_sql
cursor.execute(sql, params or ())
File "(...)/python3.4/site-packages/pymysql/cursors.py", line 134, in execute
result = self._query(query)
File "(...)/python3.4/site-packages/pymysql/cursors.py", line 282, in _query
conn.query(q)
File "(...)/python3.4/site-packages/pymysql/connections.py", line 768, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "(...)/python3.4/site-packages/pymysql/connections.py", line 929, in _read_query_result
result.read()
(...)
File "(...)/python3.4/site-packages/pymysql/err.py", line 120, in raise_mysql_exception
_check_mysql_exception(errinfo)
File "(...)/python3.4/site-packages/pymysql/err.py", line 112, in _check_mysql_exception
raise errorclass(errno, errorvalue)
pymysql.err.IntegrityError: (1062, "Duplicate entry '0' for key 'PRIMARY'")
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./cats.py", line 22, in <module>
Cat.create(name="Churchill")
File "(...)/python3.4/site-packages/peewee.py", line 3755, in create
inst.save(force_insert=True)
File "(...)/python3.4/site-packages/peewee.py", line 3890, in save
pk_from_cursor = self.insert(**field_dict).execute()
File "(...)/python3.4/site-packages/peewee.py", line 2685, in execute
return self.database.last_insert_id(self._execute(), self.model_class)
File "(...)/python3.4/site-packages/peewee.py", line 2243, in _execute
return self.database.execute_sql(sql, params, self.require_commit)
File "(...)/python3.4/site-packages/peewee.py", line 2877, in execute_sql
self.commit()
(...)
File "(...)/python3.4/site-packages/pymysql/err.py", line 112, in _check_mysql_exception
raise errorclass(errno, errorvalue)
peewee.IntegrityError: (1062, "Duplicate entry '0' for key 'PRIMARY'")
```
Using sqlite3 with the very same model works, so I assume pymysql and peewee have a problem when it comes to primary keys. Instead of auto-incrementing the PK field, it just uses 0 as the default value. Thus my SQL table looks like this after running the code above:
```
mysql> select * from cat;
+-------+---------+
| catid | name |
+-------+---------+
| 0 | Winston |
+-------+---------+
1 row in set (0.00 sec)
```
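For reference, a minimal sketch of the fix pointed out in the comments: declare the key as a `PrimaryKeyField` so that peewee emits an auto-incrementing primary key column for MySQL (the rest of the model is unchanged from the example above):

``` python
from peewee import Model, MySQLDatabase, PrimaryKeyField, CharField

db = MySQLDatabase(database="test", user="test", passwd="test",
                   host="localhost", port=3306)

class Cat(Model):
    # PrimaryKeyField produces an auto-incrementing integer primary key.
    catid = PrimaryKeyField()
    name = CharField()

    class Meta:
        database = db
```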
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/565/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/564 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/564/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/564/comments | https://api.github.com/repos/coleifer/peewee/issues/564/events | https://github.com/coleifer/peewee/pull/564 | 64,866,576 | MDExOlB1bGxSZXF1ZXN0MzIxNTE3NjI= | 564 | Update pk seq | {
"login": "elgow",
"id": 11529401,
"node_id": "MDQ6VXNlcjExNTI5NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/11529401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elgow",
"html_url": "https://github.com/elgow",
"followers_url": "https://api.github.com/users/elgow/followers",
"following_url": "https://api.github.com/users/elgow/following{/other_user}",
"gists_url": "https://api.github.com/users/elgow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elgow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elgow/subscriptions",
"organizations_url": "https://api.github.com/users/elgow/orgs",
"repos_url": "https://api.github.com/users/elgow/repos",
"events_url": "https://api.github.com/users/elgow/events{/privacy}",
"received_events_url": "https://api.github.com/users/elgow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you for the effort you put into this, but I don't have any plans to merge this functionality, as it seems to me something that would be a better fit for your own library code.\n",
"The use case for this is actually very common. Many, if not most, DBs have some reference tables that are populated with an initial set of values and later added to, and those tables are usually the target of foreign key relations. The peewee docs even contain a section on how to do it, \"Manually specifying primary keys\". If a peewee user follows the docs, then after re-activating auto_increment in a Postgresql database, as instructed by the docs, they are going to encounter a duplicate key constraint violation, unless they do something like what I've done. If the sequence update is not done by peewee then the user must code it as a special case for Postgresql if they wish to target different DBs. \n\nFor a feature as frequently used as auto-increment, and with such a troublesome incompatibility between Postgresql and the other back-end DBs, isn't it worth one method on the model to keep peewee client code DB neutral? \n",
"> The use case for this is actually very common\n\nI'd beg to differ, as in the 4+ years peewee's been around, you're the first to ask for a sequence updating helper.\n\nMy stance is that relying on particular values for auto-incremented, sequence-backed columns is an anti-pattern. I understand that maybe you're restoring data, but in the very rare cases I've done that all I had to do was restore the data then set the currval of the sequence -- it was a one-time operation, just another part of restoring from backup.\n\nThe docs you've referenced even emphasize that this is a one-time type operation:\n\n> To handle this on a **one-off** basis, you can simply tell peewee to turn off auto_increment during the import\n",
"I agree that it's a one-off operation, though possibly one time for each new deployment. But when you are finished with it the DB might be broken (if it's postgresql), or it might not. The docs don't mention that your DB might be broken, and there's no database neutral code that you can write to make sure that it's fixed. That's all I'm trying to create, the simplest possible uniform way to fix any DB after inserting explicit keys into an auto-increment table. \n"
] | 2015-03-27T22:54:06 | 2015-03-28T21:04:13 | 2015-03-28T02:36:11 | NONE | null | New method to force the DB to ensure that the next auto-generated key will be greater than any existing key, including explicitly inserted keys. On sqlite and MySQL this happens automatically, so the method is a no-op. For Postgresql it provides a canned version of a nice query to update the sequence, saving peewee users from having to write it themselves.
This might possibly be simplified and better integrated by having a method to turn auto_increment on/off rather than just setting the attribute on the model. Then this method could be run whenever auto_increment is explicitly activated.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/564/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/564",
"html_url": "https://github.com/coleifer/peewee/pull/564",
"diff_url": "https://github.com/coleifer/peewee/pull/564.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/564.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/563 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/563/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/563/comments | https://api.github.com/repos/coleifer/peewee/issues/563/events | https://github.com/coleifer/peewee/issues/563 | 64,439,169 | MDU6SXNzdWU2NDQzOTE2OQ== | 563 | Does .dicts() or .tuples() force an eval? | {
"login": "syegulalp",
"id": 401657,
"node_id": "MDQ6VXNlcjQwMTY1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/401657?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/syegulalp",
"html_url": "https://github.com/syegulalp",
"followers_url": "https://api.github.com/users/syegulalp/followers",
"following_url": "https://api.github.com/users/syegulalp/following{/other_user}",
"gists_url": "https://api.github.com/users/syegulalp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/syegulalp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/syegulalp/subscriptions",
"organizations_url": "https://api.github.com/users/syegulalp/orgs",
"repos_url": "https://api.github.com/users/syegulalp/repos",
"events_url": "https://api.github.com/users/syegulalp/events{/privacy}",
"received_events_url": "https://api.github.com/users/syegulalp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Well, first of all, two queries is not N+1 queries. N+1 queries happens when you have a loop, for instance, and for every result in the outer query you execute an additional query.\n\nSo no, it's not N+1.\n\nIn fact, it is only 1 query, which you can verify by [enabling query logging](http://docs.peewee-orm.com/en/latest/peewee/database.html#logging-queries).\n\nCalling `dicts()` or `tuples()` will not evaluate the query. Select queries are evaluated when you call `execute()` on them, iterate over them, call `len()` on them, or index into them.\n",
"Excellent, just what I needed to know. Thanks.\n"
] | 2015-03-26T04:30:02 | 2015-03-26T16:02:06 | 2015-03-26T15:59:16 | NONE | null | I have the following code:
```
media_association = MediaAssociation.select(MediaAssociation.id).where(
    MediaAssociation.blog == self.id).dicts()
media = Media.select().where(Media.id << media_association)
```
Would this cause an `n+1` query, where `media_association` generates one query and then `media` another? My gut tells me that would be the case, but I wanted to be sure.
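One way to confirm how many statements actually run, per the maintainer's reply, is to enable peewee's query logging before iterating. A minimal sketch:

``` python
import logging

# peewee logs every executed query to the 'peewee' logger at DEBUG level.
logger = logging.getLogger('peewee')
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler())

# Iterating `media` should then log a single SELECT with the
# subquery inlined, rather than two separate statements.
for m in media:
    pass
```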
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/563/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/562 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/562/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/562/comments | https://api.github.com/repos/coleifer/peewee/issues/562/events | https://github.com/coleifer/peewee/pull/562 | 64,239,314 | MDExOlB1bGxSZXF1ZXN0MzE5MTAyMTk= | 562 | Update quickstart.rst | {
"login": "mozillazg",
"id": 485054,
"node_id": "MDQ6VXNlcjQ4NTA1NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/485054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mozillazg",
"html_url": "https://github.com/mozillazg",
"followers_url": "https://api.github.com/users/mozillazg/followers",
"following_url": "https://api.github.com/users/mozillazg/following{/other_user}",
"gists_url": "https://api.github.com/users/mozillazg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mozillazg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mozillazg/subscriptions",
"organizations_url": "https://api.github.com/users/mozillazg/orgs",
"repos_url": "https://api.github.com/users/mozillazg/repos",
"events_url": "https://api.github.com/users/mozillazg/events{/privacy}",
"received_events_url": "https://api.github.com/users/mozillazg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2015-03-25T10:40:29 | 2015-03-26T02:12:06 | 2015-03-25T18:39:15 | NONE | null | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/562/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/562",
"html_url": "https://github.com/coleifer/peewee/pull/562",
"diff_url": "https://github.com/coleifer/peewee/pull/562.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/562.patch",
"merged_at": null
} |
|
https://api.github.com/repos/coleifer/peewee/issues/561 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/561/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/561/comments | https://api.github.com/repos/coleifer/peewee/issues/561/events | https://github.com/coleifer/peewee/pull/561 | 64,188,107 | MDExOlB1bGxSZXF1ZXN0MzE4OTYwMzA= | 561 | insert_many() + insert_returning support | {
"login": "ianawilson",
"id": 831154,
"node_id": "MDQ6VXNlcjgzMTE1NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/831154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ianawilson",
"html_url": "https://github.com/ianawilson",
"followers_url": "https://api.github.com/users/ianawilson/followers",
"following_url": "https://api.github.com/users/ianawilson/following{/other_user}",
"gists_url": "https://api.github.com/users/ianawilson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ianawilson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ianawilson/subscriptions",
"organizations_url": "https://api.github.com/users/ianawilson/orgs",
"repos_url": "https://api.github.com/users/ianawilson/repos",
"events_url": "https://api.github.com/users/ianawilson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ianawilson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I didn't think it made a lot of sense to return the IDs for bulk inserts. If inserting a lot of rows, it could mean a lot of data being transferred back over the wire for all those inserts. Furthermore, insert_many() doesn't return instances, so what would be the point of having the IDs?\n\nAlso, the behavior prior to 2.5 was to only return the latest insert ID of a batch. \n",
"In my particular use case, I'm using peewee in a service that ingests large amounts of data and writes it to the db (postgres). We have another service that is responsible for updating the search index (elasticsearch), and it drastically simplifies things if I can pass the list of new IDs to the search index service after the bulk insert.\n\nI'm not convinced this behavior makes sense as a default, but I do think it's nice to have as an option, especially because the behavior in 2.5 is nearly there. Like I mentioned, I'd be happy to rewrite it to be a non-default option for `insert_many()` / `InsertQuery` -- shouldn't be hard.\n\nEDIT: The tests ran fine in my environment, so I'll have to look at why they're failing on Travis a little later.\n",
"> I do think it's nice to have as an option\n\nSounds good to me.\n",
"You can now call:\n\n``` python\ndata = [{'username': username} for username in list_of_usernames]\nuser_ids = User.insert_many(data).return_id_list().execute()\nprint user_ids\n```\n",
"http://docs.peewee-orm.com/en/latest/peewee/api.html#InsertQuery.return_id_list\n"
] | 2015-03-25T06:16:49 | 2015-03-27T03:47:40 | 2015-03-27T03:31:06 | NONE | null | I saw that there was support for RETURNING but only for single-row INSERTs, and I needed support for multi-row INSERTs with RETURNING as well.
Based on your tests, it looks like you may not have wanted to include this functionality. If that's the case, I'd be happy to make this a non-default option for `insert_many()` so that the default behavior is unchanged, but getting the primary keys of all new rows is available as an option for those who want it.
Otherwise, if this does seem like a good default behavior, then I've updated the tests and this should be good to merge.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/561/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/561",
"html_url": "https://github.com/coleifer/peewee/pull/561",
"diff_url": "https://github.com/coleifer/peewee/pull/561.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/561.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/560 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/560/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/560/comments | https://api.github.com/repos/coleifer/peewee/issues/560/events | https://github.com/coleifer/peewee/issues/560 | 64,087,478 | MDU6SXNzdWU2NDA4NzQ3OA== | 560 | ArrayField: to lower | {
"login": "havannavar",
"id": 1104650,
"node_id": "MDQ6VXNlcjExMDQ2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1104650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/havannavar",
"html_url": "https://github.com/havannavar",
"followers_url": "https://api.github.com/users/havannavar/followers",
"following_url": "https://api.github.com/users/havannavar/following{/other_user}",
"gists_url": "https://api.github.com/users/havannavar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/havannavar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/havannavar/subscriptions",
"organizations_url": "https://api.github.com/users/havannavar/orgs",
"repos_url": "https://api.github.com/users/havannavar/repos",
"events_url": "https://api.github.com/users/havannavar/events{/privacy}",
"received_events_url": "https://api.github.com/users/havannavar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I don't think `array_lower` does what you are expecting there. It returns the lower bound of the requested array dimension.\n",
"So, can you let me know, how can i resolve this issue ?\n",
"I am not sure. Can you express this query using plain SQL? If you can give me the correct SQL, then I will help you translate it to utilize peewee.\n"
] | 2015-03-24T20:04:16 | 2015-03-25T21:34:49 | 2015-03-25T18:52:26 | NONE | null | I am trying to convert ArrayField values to lower case, but have been unsuccessful.
Here is my query, with the exception it raises:
```
Tweets.select().where((fn.Lower(Tweets.preferred_city) == fn.Lower(city)) & (fn.array_lower(Tweets.tagging).contains_any(['foo','bar'])))
ProgrammingError: function contains_any(unknown) does not exist
LINE 1: ..."."PREFERRED_CITY") = Lower('NewYork')) AND contains_a...
```
I am using Postgresql 9.4, and in the documentation I can see this:
http://www.postgresql.org/docs/9.4/static/functions-array.html
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/560/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/559 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/559/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/559/comments | https://api.github.com/repos/coleifer/peewee/issues/559/events | https://github.com/coleifer/peewee/pull/559 | 63,569,984 | MDExOlB1bGxSZXF1ZXN0MzE2OTM5MTE= | 559 | Add pwiz option to preserve original DB column order. | {
"login": "elgow",
"id": 11529401,
"node_id": "MDQ6VXNlcjExNTI5NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/11529401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elgow",
"html_url": "https://github.com/elgow",
"followers_url": "https://api.github.com/users/elgow/followers",
"following_url": "https://api.github.com/users/elgow/following{/other_user}",
"gists_url": "https://api.github.com/users/elgow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elgow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elgow/subscriptions",
"organizations_url": "https://api.github.com/users/elgow/orgs",
"repos_url": "https://api.github.com/users/elgow/repos",
"events_url": "https://api.github.com/users/elgow/events{/privacy}",
"received_events_url": "https://api.github.com/users/elgow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hello Charles,\n\nThis is a minor modification to enable pwiz to create models that preserve the column ordering from the original source DB. I needed that in order to use dump files created with native Postgresql tools from the original DB. The new behavior an option so that default pwiz behavior is unchanged. I hope that you will find it to be a useful addition. \n\n Ed\n",
"Nice work. Could you add a unit test?\n",
"Hi Charles,\n\nAll fixes in. Please let me know if you think the way I've divided up the EXPECTED text is \"too clever by half\" and if you'd prefer just plain text for the new test. \n\n Ed\n",
"> Please let me know if you think the way I've divided up the EXPECTED text is \"too clever by half\" and if you'd prefer just plain text for the new test.\n\nerrr yeah\n",
"I switched to full explicit text for each expected value per your preference. Also prevented test failure of unsupported feature under Python 2.6\n",
"Thank you so much! I've mergd a very slightly modified version of your changes. I also caught an unreported bug while messing with the tests, so even better!\n"
] | 2015-03-22T20:06:37 | 2015-03-27T04:17:25 | 2015-03-27T04:16:38 | NONE | null | Add pwiz option to preserve original DB column order in generated model definitions. Very useful when a DB created by Peewee will be loaded from a dump file created with native DB tools (e.g. Postgresql INSERT type dump).
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/559/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/559",
"html_url": "https://github.com/coleifer/peewee/pull/559",
"diff_url": "https://github.com/coleifer/peewee/pull/559.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/559.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/558 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/558/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/558/comments | https://api.github.com/repos/coleifer/peewee/issues/558/events | https://github.com/coleifer/peewee/issues/558 | 63,284,332 | MDU6SXNzdWU2MzI4NDMzMg== | 558 | Get "raw" ForeignKeyField value without firing a query | {
"login": "rudyryk",
"id": 4500,
"node_id": "MDQ6VXNlcjQ1MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rudyryk",
"html_url": "https://github.com/rudyryk",
"followers_url": "https://api.github.com/users/rudyryk/followers",
"following_url": "https://api.github.com/users/rudyryk/following{/other_user}",
"gists_url": "https://api.github.com/users/rudyryk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rudyryk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rudyryk/subscriptions",
"organizations_url": "https://api.github.com/users/rudyryk/orgs",
"repos_url": "https://api.github.com/users/rudyryk/repos",
"events_url": "https://api.github.com/users/rudyryk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rudyryk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"`post._data['user']`\n",
"Thank you Charles! :)\n"
] | 2015-03-20T19:06:40 | 2015-03-20T22:23:54 | 2015-03-20T20:29:29 | NONE | null | I haven't found a way to simply get the related object's id for a ForeignKeyField without performing a query.
Is that currently possible? It would be convenient in cases where we only need the related object's id, e.g. when trying to read from a cache.
Here's what I mean:
``` python
import peewee
db = peewee.SqliteDatabase('test.db')
class User(peewee.Model):
    username = peewee.CharField(max_length=100)

    class Meta:
        database = db

User.create_table(True)

class Post(peewee.Model):
    user = peewee.ForeignKeyField(User)

    class Meta:
        database = db

Post.create_table(True)

user = User.create(username="John")
Post.create(user=user)

post = Post.get(Post.id == 1)
# This will fire an extra query to fetch the whole User object!
print(post.user.id)
```
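For reference, the accessor suggested in the comments reads the foreign-key value that is already stored on the instance's `_data` dict, so no extra query is issued. A minimal sketch using the models above:

``` python
post = Post.get(Post.id == 1)

# Reads the raw user id already loaded on the Post row; unlike
# `post.user.id`, this does not trigger a second SELECT on User.
user_id = post._data['user']
print(user_id)
```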
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/558/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/557 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/557/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/557/comments | https://api.github.com/repos/coleifer/peewee/issues/557/events | https://github.com/coleifer/peewee/issues/557 | 63,249,466 | MDU6SXNzdWU2MzI0OTQ2Ng== | 557 | playhouse.db_url shouldn’t remove the leading slash | {
"login": "bfontaine",
"id": 1334295,
"node_id": "MDQ6VXNlcjEzMzQyOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1334295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bfontaine",
"html_url": "https://github.com/bfontaine",
"followers_url": "https://api.github.com/users/bfontaine/followers",
"following_url": "https://api.github.com/users/bfontaine/following{/other_user}",
"gists_url": "https://api.github.com/users/bfontaine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bfontaine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bfontaine/subscriptions",
"organizations_url": "https://api.github.com/users/bfontaine/orgs",
"repos_url": "https://api.github.com/users/bfontaine/repos",
"events_url": "https://api.github.com/users/bfontaine/events{/privacy}",
"received_events_url": "https://api.github.com/users/bfontaine/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This follows the pattern used by SQLAlchemy:\n\nhttp://docs.sqlalchemy.org/en/rel_0_9/core/engines.html#sqlite\n\nThis translates to `sqlite://<no host, so empty>/<path to database>`.\n",
"See also: https://pythonhosted.org/Flask-SQLAlchemy/config.html\n",
"Thanks!\n"
] | 2015-03-20T15:29:20 | 2015-03-20T18:30:14 | 2015-03-20T17:23:45 | CONTRIBUTOR | null | [`playhouse.db_url` removes the first character of the path](https://github.com/coleifer/peewee/blob/master/playhouse/db_url.py#L45), which means that when you think you’re connecting to `/foo/bar/mydb.db` it will in fact try to connect to `foo/bar/mydb.db` and fail.
This means I have to use `sqlite3:////foo/bar/mydb.db` to get an absolute path. Why does it behave like this?
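For reference, a small sketch of the convention the maintainer points to (the SQLAlchemy-style URL scheme), using the documented `connect()` helper; the file paths here are just illustrative:

``` python
from playhouse.db_url import connect

# Three slashes: 'mydb.db' is resolved relative to the working directory.
relative_db = connect('sqlite:///mydb.db')

# Four slashes: everything after the empty host is kept, giving the
# absolute path '/foo/bar/mydb.db'.
absolute_db = connect('sqlite:////foo/bar/mydb.db')
```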
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/557/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/556 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/556/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/556/comments | https://api.github.com/repos/coleifer/peewee/issues/556/events | https://github.com/coleifer/peewee/issues/556 | 62,828,713 | MDU6SXNzdWU2MjgyODcxMw== | 556 | negative operator for descending order | {
"login": "langit",
"id": 3238258,
"node_id": "MDQ6VXNlcjMyMzgyNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3238258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/langit",
"html_url": "https://github.com/langit",
"followers_url": "https://api.github.com/users/langit/followers",
"following_url": "https://api.github.com/users/langit/following{/other_user}",
"gists_url": "https://api.github.com/users/langit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/langit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/langit/subscriptions",
"organizations_url": "https://api.github.com/users/langit/orgs",
"repos_url": "https://api.github.com/users/langit/repos",
"events_url": "https://api.github.com/users/langit/events{/privacy}",
"received_events_url": "https://api.github.com/users/langit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"- 08b3c0ee03576e16965b9003b38b4bad4b126fb3\n- 3ced2b6d3955243fb815250158c44af94b212781\n- 2f114136cb823f5a5b0c936dcc93d313c9e4f368\n"
] | 2015-03-18T23:41:41 | 2015-03-19T01:45:40 | 2015-03-19T01:45:40 | NONE | null | I'd like to propose some syntactic sugar for ordering. Instead of
```
.order_by(User.name.desc())
```
one can also do
```
.order_by( - User.name)
```
which simply requires a hook method in class Node:
```
def __neg__(self):
    return self.desc()
```
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/556/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/555 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/555/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/555/comments | https://api.github.com/repos/coleifer/peewee/issues/555/events | https://github.com/coleifer/peewee/issues/555 | 62,790,348 | MDU6SXNzdWU2Mjc5MDM0OA== | 555 | Delete with joins | {
"login": "jakedt",
"id": 2183986,
"node_id": "MDQ6VXNlcjIxODM5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2183986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jakedt",
"html_url": "https://github.com/jakedt",
"followers_url": "https://api.github.com/users/jakedt/followers",
"following_url": "https://api.github.com/users/jakedt/following{/other_user}",
"gists_url": "https://api.github.com/users/jakedt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jakedt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jakedt/subscriptions",
"organizations_url": "https://api.github.com/users/jakedt/orgs",
"repos_url": "https://api.github.com/users/jakedt/repos",
"events_url": "https://api.github.com/users/jakedt/events{/privacy}",
"received_events_url": "https://api.github.com/users/jakedt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This appears to not work at all with sqlite, and so is probably a non-starter. Feel free to close if you agree.\n",
"I don't believe it is supported by Postgres either, so yes I think I will close it out. If your deletes happen infrequently you could always recourse to having a \"raw\" SQL query to accomplish them, if that makes things easier.\n\nThanks for the good bug report and for being understanding about the wontfix.\n",
"This is supported in Postgresql with the [`USING` clause](https://www.postgresql.org/docs/current/static/sql-delete.html)\n\n> PostgreSQL lets you reference columns of other tables in the WHERE condition by specifying the other tables in the USING clause. For example, to delete all films produced by a given producer, one can do:\n> \n> `DELETE FROM films USING producers\n> WHERE producer_id = producers.id AND producers.name = 'foo';`\n\nBut since at the Peewee level you can just use s [raw query](http://stackoverflow.com/a/38068039/1161906), so I'm not sure this changes anything. \n"
] | 2015-03-18T20:23:53 | 2016-06-28T05:57:30 | 2015-03-18T23:02:35 | NONE | null | Currently peewee doesn't support joins in delete queries. We've worked around this by using where clauses and nested subqueries (to force mysql to create a temp table). This is causing our DB to have a lot of deadlocks. One way around this would be to allow deletes to use joins.
An example query:
``` python
subq = A.select(A.id).join(B).where(B.thing != example).alias('ps')
inner = A.select(subq.c.id).from_(subq)
A.delete().where(A.id << inner).execute()
```
could become:
``` python
A.delete(A).join(B).where(B.thing != example).execute()
```
Thoughts? Pointers on the implementation? Examples of where this fails horribly?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/555/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/554 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/554/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/554/comments | https://api.github.com/repos/coleifer/peewee/issues/554/events | https://github.com/coleifer/peewee/issues/554 | 62,788,832 | MDU6SXNzdWU2Mjc4ODgzMg== | 554 | SQLite journal mode hangs | {
"login": "wkschwartz",
"id": 1417749,
"node_id": "MDQ6VXNlcjE0MTc3NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1417749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wkschwartz",
"html_url": "https://github.com/wkschwartz",
"followers_url": "https://api.github.com/users/wkschwartz/followers",
"following_url": "https://api.github.com/users/wkschwartz/following{/other_user}",
"gists_url": "https://api.github.com/users/wkschwartz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wkschwartz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wkschwartz/subscriptions",
"organizations_url": "https://api.github.com/users/wkschwartz/orgs",
"repos_url": "https://api.github.com/users/wkschwartz/repos",
"events_url": "https://api.github.com/users/wkschwartz/events{/privacy}",
"received_events_url": "https://api.github.com/users/wkschwartz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2015-03-18T20:19:08 | 2015-03-18T20:47:28 | 2015-03-18T20:47:28 | NONE | null | ## Summary
The following code hangs indefinitely.
``` python
>>> import peewee
>>> db = peewee.SqliteDatabase(':memory:', journal_mode='WAL')
>>> db.connect()
```
## Diagnosis
When you hit Ctrl+C to trigger `KeyboardInterrupt` exception, you get the following traceback, indicating that `_add_conn_hooks`, which is called from `connect` (first line of the traceback), recursively calls back into `connect` (second to last line of the traceback), causing a deadlock with the database object's lock.
``` python
^CTraceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../venv/lib/python3.4/site-packages/peewee.py", line 2810, in connect
**self.connect_kwargs)
File ".../venv/lib/python3.4/site-packages/peewee.py", line 3018, in _connect
self._add_conn_hooks(conn)
File ".../venv/lib/python3.4/site-packages/peewee.py", line 3026, in _add_conn_hooks
self.execute_sql('PRAGMA journal_mode=%s;' % self._journal_mode)
File ".../venv/lib/python3.4/site-packages/peewee.py", line 2867, in execute_sql
cursor = self.get_cursor()
File ".../venv/lib/python3.4/site-packages/peewee.py", line 2833, in get_cursor
return self.get_conn().cursor()
File ".../venv/lib/python3.4/site-packages/peewee.py", line 2826, in get_conn
self.connect()
File ".../venv/lib/python3.4/site-packages/peewee.py", line 2803, in connect
with self._conn_lock:
KeyboardInterrupt
```
## Solution
- [ ] There appear to be no tests in the repo for `SqliteDatabase`'s `journal_mode` argument. One should be added.
- [ ] `_add_conn_hooks` needs to execute SQL directly on the `conn` connection argument it is passed, rather than going through the public `execute_sql` method. This will avoid the recursion (see the sketch below).
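A minimal sketch of the second item, assuming the 2.x `SqliteDatabase` internals visible in the traceback (a real fix would also keep any other setup the base method performs):

``` python
import peewee

class PatchedSqliteDatabase(peewee.SqliteDatabase):
    def _add_conn_hooks(self, conn):
        # Issue the pragma on the new connection directly; calling
        # self.execute_sql() here re-enters connect() and deadlocks.
        if self._journal_mode:
            cursor = conn.cursor()
            cursor.execute('PRAGMA journal_mode=%s;' % self._journal_mode)
            cursor.close()
```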
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/554/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/553 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/553/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/553/comments | https://api.github.com/repos/coleifer/peewee/issues/553/events | https://github.com/coleifer/peewee/issues/553 | 62,734,763 | MDU6SXNzdWU2MjczNDc2Mw== | 553 | implicit join via select function | {
"login": "langit",
"id": 3238258,
"node_id": "MDQ6VXNlcjMyMzgyNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3238258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/langit",
"html_url": "https://github.com/langit",
"followers_url": "https://api.github.com/users/langit/followers",
"following_url": "https://api.github.com/users/langit/following{/other_user}",
"gists_url": "https://api.github.com/users/langit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/langit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/langit/subscriptions",
"organizations_url": "https://api.github.com/users/langit/orgs",
"repos_url": "https://api.github.com/users/langit/repos",
"events_url": "https://api.github.com/users/langit/events{/privacy}",
"received_events_url": "https://api.github.com/users/langit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This is definitely a nice API and something I will consider for peewee 3.0 (whenever that might be, no timetable yet but I'm starting to think about it). I've added this to the TODOs for peewee 3. Closing for now.\n"
] | 2015-03-18T16:27:14 | 2015-03-19T01:50:10 | 2015-03-19T01:50:10 | NONE | null | I use django a lot and have just come to know about peewee, and I already like it. When I read of the N+1 behavior in the quickstart, an idea suddenly hit me: would it be nice if peewee could automatically figure out how to join tables behind the scenes, so that users are relieved of such SQL concepts? Here is what I mean in more detail:
```
for pet in (Pet
            .select(Pet.name, Pet.owner.name)
            .where(Pet.animal_type == 'cat')):
    print pet.name, pet.owner.name
```
From the fields provided in the select(...) function, it should be possible to infer what kind of join is needed to provide those selected fields. It's just a very rough idea; I hope you find it interesting and worth refining. Looking forward to hearing what you think about this idea.
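For comparison, the closest existing spelling uses an explicit join and selects both models, so that `pet.owner` is populated without extra queries. A sketch, assuming a `Person` model referenced by `Pet.owner` as in the quickstart:

``` python
query = (Pet
         .select(Pet, Person)
         .join(Person)
         .where(Pet.animal_type == 'cat'))

for pet in query:
    # One query; pet.owner comes from the joined Person columns.
    print pet.name, pet.owner.name
```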
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/553/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/552 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/552/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/552/comments | https://api.github.com/repos/coleifer/peewee/issues/552/events | https://github.com/coleifer/peewee/issues/552 | 62,542,810 | MDU6SXNzdWU2MjU0MjgxMA== | 552 | Errors when inserting explicit ID in Postgresql table with auto-generated primary key | {
"login": "elgow",
"id": 11529401,
"node_id": "MDQ6VXNlcjExNTI5NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/11529401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elgow",
"html_url": "https://github.com/elgow",
"followers_url": "https://api.github.com/users/elgow/followers",
"following_url": "https://api.github.com/users/elgow/following{/other_user}",
"gists_url": "https://api.github.com/users/elgow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elgow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elgow/subscriptions",
"organizations_url": "https://api.github.com/users/elgow/orgs",
"repos_url": "https://api.github.com/users/elgow/repos",
"events_url": "https://api.github.com/users/elgow/events{/privacy}",
"received_events_url": "https://api.github.com/users/elgow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think this is going to be a wontfix for me. If you are relying on specific autogenerated primary key values in your tests, that seems problematic -- perhaps they should have a separate unique identifier. Similarly, when restoring backups, I don't imagine you would do that using peewee but use postgresql pg_restore instead.\n",
"Hello Charles,\n\nI am relying on peewee to import data because I want to be able to use either Postgresql or Sqlite underneath my application. I thought that portability was one of the main value propositions of peewee.\n\nMore importantly, the way that peewee gets the id of inserted records is non-standard for Postgresql. Since version 8.2 Postgresql has had the INSERT ... RETURNING clause. With that clause Postgresql will support inserting either an implicit or explicit key into an auto-increment column and capturing it without generating an error. Peewee will blow up. Two of the DBs supported by peewee work flawlessly to support that usage pattern, and I don't think it's an accident that they do. Peewee could make it so that all three worked properly. By not using the RETURNING clause, peewee creates a problem for the user and also degrades the performance of every insert statement by requiring a second query to retrieve the key.\n\nAs I said in the issue, the integrity of the sequence value is trickier, and it could be left to the user to handle with an explicit update to the sequence. I hope, though, that you will reconsider at least switching to using RETURNING clause for inserts in Postgresql.\n\nThanks,\n\n```\n Ed\n```\n\nOn Wed, 18 Mar 2015, Charles Leifer wrote:\n\n> I think this is going to be a wontfix for me. If you are relying on specific autogenerated primary key values in your tests, that seems problematic -- perhaps they should have a separate unique identifier. Similarly, when restoring\n> backups, I don't imagine you would do that using peewee but use postgresql pg_restore instead.\n> \n> —\n> Reply to this email directly or view it on GitHub.[AK_suUhbUTiZFbmk5zpNl_SCzHs5T0iOks5n2Y23gaJpZM4DwV3M.gif]\n",
"Not only will the INSERT...RETURNING feature in 2.5.0 speed up inserts in Postgresql, but it also fixes the explicit key insert issue. \n\nThank you very much for this fix. \n\n Ed\n",
"Sure thing, thanks for suggesting I implement it! It definitely needed to happen :)\n"
] | 2015-03-17T23:34:21 | 2015-03-22T18:35:22 | 2015-03-18T15:13:54 | NONE | null | Postgresql behavior differs from Sqlite and Mysql for auto-generated keys. While Sqlite and Mysql handle it seamlessly, Postgresql will fail in two ways if an insert with an explicitly specified key value is made in a table with an auto-generated key.
1) If the explicit key insert is the first insert in a session, the insert will fail with an error of "currval of sequence "pk_sequence_name" is not yet defined in this session".
2) The key generation sequence is not updated by the explicit key insert so, unless the update is done, the sequence is guaranteed to produce a duplicate key at some point in the future.
Problem 2 poses problems to fix in a transparent way, though it could be done. Problem 1 can be easily fixed through use of the Postgresql "returning" clause in the explicit key insert SQL statement instead of a call to currval() on the sequence.
While auto-generated keys would seem to preclude it, explicit key insertion often happens when restoring databases from saved or imported data, and when loading test data.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/552/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/551 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/551/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/551/comments | https://api.github.com/repos/coleifer/peewee/issues/551/events | https://github.com/coleifer/peewee/issues/551 | 62,514,956 | MDU6SXNzdWU2MjUxNDk1Ng== | 551 | QueryResultWrapper iterator broken | {
"login": "wkschwartz",
"id": 1417749,
"node_id": "MDQ6VXNlcjE0MTc3NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1417749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wkschwartz",
"html_url": "https://github.com/wkschwartz",
"followers_url": "https://api.github.com/users/wkschwartz/followers",
"following_url": "https://api.github.com/users/wkschwartz/following{/other_user}",
"gists_url": "https://api.github.com/users/wkschwartz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wkschwartz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wkschwartz/subscriptions",
"organizations_url": "https://api.github.com/users/wkschwartz/orgs",
"repos_url": "https://api.github.com/users/wkschwartz/repos",
"events_url": "https://api.github.com/users/wkschwartz/events{/privacy}",
"received_events_url": "https://api.github.com/users/wkschwartz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This is a more correct fix, but I'll have to think through the edge cases:\n\n``` diff\ndiff --git a/peewee.py b/peewee.py\nindex 0a640c3..f630bce 100644\n--- a/peewee.py\n+++ b/peewee.py\n@@ -1837,6 +1837,8 @@ class QueryResultWrapper(object):\n inst = self._result_cache[self.__idx]\n self.__idx += 1\n return inst\n+ elif self._populated:\n+ raise StopIteration\n\n obj = self.iterate()\n self._result_cache.append(obj)\n```\n",
"Just out of curiosity, why are you wrapping the select call in `iter`, e.g.:\n\n``` python\niterator = iter(Table.select())\n```\n\nHow did you come about this bug in production?\n",
"The problem arose when I wrapped a query in `map` on Python 3. I had forgotten to wrap the whole thing in `list` so I could iterate over the object a second time.\n"
] | 2015-03-17T20:48:37 | 2015-03-19T19:25:36 | 2015-03-19T02:54:03 | NONE | null | ## Summary
Peewee's `QueryResultWrapper.__next__` does not raise `StopIteration` exceptions on subsequent calls after the first `StopIteration` in contrast to the Python iterator protocol (documentation quoted/linked below). Instead, `QueryResultWrapper` iterators raise a `sqlite3.ProgrammingError`. The following test case demonstrates this.
## Test case
``` python
import peewee
database = peewee.SqliteDatabase(":memory:")
class Table(peewee.Model):
    id = peewee.PrimaryKeyField()

    class Meta(object):
        database = database

database.create_tables([Table])

with database.atomic():
    Table.insert_many(({'id': i} for i in range(5))).execute()

iterator = iter(Table.select())

# The first use of `iterator` works fine
for table_instance in iterator:
    pass
# The next line produces the wrong error type in violation of the iterator protocol.
next(iterator)
```
## Traceback
``` python
---------------------------------------------------------------------------
ProgrammingError Traceback (most recent call last)
<ipython-input-9-edd1adac5cd0> in <module>()
----> 1 next(iterable)
venv/lib/python3.4/site-packages/peewee.py in next(self)
1795 return inst
1796
-> 1797 obj = self.iterate()
1798 self._result_cache.append(obj)
1799 self.__ct += 1
venv/lib/python3.4/site-packages/peewee.py in iterate(self)
1774
1775 def iterate(self):
-> 1776 row = self.cursor.fetchone()
1777 if not row:
1778 self._populated = True
ProgrammingError: Cannot operate on a closed cursor.
```
## Expected result: the iterator protocol
Subsequent calls to `next` (either explicitly or through a `for` loop) should raise `StopIteration` exceptions.
From the documentation for the [Python 3 iterator protocol](https://docs.python.org/3/library/stdtypes.html#iterator.__next__):
> Once an iterator’s [`__next__()`](https://docs.python.org/3/library/stdtypes.html#iterator.__next__) method raises [`StopIteration`](https://docs.python.org/3/library/exceptions.html#StopIteration), it must continue to do so on subsequent calls. Implementations that do not obey this property are deemed broken.
From the [Python 2 iterator protocol](https://docs.python.org/2/library/stdtypes.html#iterator.next):
> The intention of the protocol is that once an iterator’s next() method raises StopIteration, it will continue to do so on subsequent calls. Implementations that do not obey this property are deemed broken. (This constraint was added in Python 2.3; in Python 2.2, various iterators are broken according to this rule.)
## Suggested fix
Wrapping the `row = self.cursor.fetchone()` (see traceback above) line in
``` python
try:
    row = self.cursor.fetchone()
except sqlite3.ProgrammingError:
    raise StopIteration
```
would do the trick. A `try` block is fine since loops won't trigger the exception handling -- it's only if you erroneously keep using the iterator after the `StopIteration` that you'll get an exception, and `try` blocks are [efficient when there's no exception to catch](https://docs.python.org/3/faq/design.html#how-fast-are-exceptions).
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/551/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/550 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/550/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/550/comments | https://api.github.com/repos/coleifer/peewee/issues/550/events | https://github.com/coleifer/peewee/issues/550 | 62,153,445 | MDU6SXNzdWU2MjE1MzQ0NQ== | 550 | Foreign key enforcement in SQLite | {
"login": "wkschwartz",
"id": 1417749,
"node_id": "MDQ6VXNlcjE0MTc3NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1417749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wkschwartz",
"html_url": "https://github.com/wkschwartz",
"followers_url": "https://api.github.com/users/wkschwartz/followers",
"following_url": "https://api.github.com/users/wkschwartz/following{/other_user}",
"gists_url": "https://api.github.com/users/wkschwartz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wkschwartz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wkschwartz/subscriptions",
"organizations_url": "https://api.github.com/users/wkschwartz/orgs",
"repos_url": "https://api.github.com/users/wkschwartz/repos",
"events_url": "https://api.github.com/users/wkschwartz/events{/privacy}",
"received_events_url": "https://api.github.com/users/wkschwartz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You can subclass `SqliteDatabase` and override the `_add_conn_hooks` method. I don't think I will add this as a database option, but it should be easy to add yourself.\r\n\r\n**Edit**: this is out-dated, see bottom of page for more info.",
"Using `_add_conn_hooks` doesn't work. In fact, it doesn't even work for `journal_mode`. See #554.\n\nI created a wrapper connection function for my global database object `_db` that looks something like:\n\n``` python\ndef connect():\n _db.connect()\n if isinstance(_db, peewee.SqliteDatabase):\n if _db.execute_sql('PRAGMA foreign_keys).fetchone() is None:\n _db.close()\n raise RuntimeError('Your installed version of SQLite does not support foreign keys.')\n _db.execute_sql('PRAGMA foreign_keys = ON')\n```\n",
"I've gone ahead and addressed the issue in #554. I'm not sure why `_add_conn_hooks` won't work. Can you not write:\n\n``` python\ncursor = conn.cursor()\ncursor.execute('pragma foreign_keys=ON')\n```\n",
"I'm sure that would work. I just gave up too quickly.\n\nDoes the fact that you recommend `_add_conn_hooks` mean you intend on keeping its API stable? I assumed from the fact that it's a private method client code (such as mine) shouldn't use it.\n",
"I think it'd be good to add a public API for interacting with a new connection. I will add that, add docs, and update you with my progress.\n",
"Added 1040736. Added bonus, you can call `execute_sql()` from this hook.\n\nThanks for working with me on this. Good feature for sure.\n",
"Very cool.\n",
"I have a bit of feedback on this one, which I hope is helpful:\r\n\r\nIt strikes me that having to subclass SqliteDatabase is a little heavyweight for something that I would've expected to \"just work\". I just ran into this issue, and was quite surprised to find that Sqlite turns foreign-key behaviour off by default. Worse, it depends on compile-time options, etc, and even the Sqlite docs say not to depend on either behaviour as default.\r\n\r\nI think it's fairly important for users to be aware of this and make it easy to turn it on. It's simple (and understandable) to assume that if you set on_delete='CASCADE' then that's exactly what will happen. \r\n\r\nIn fact, this is a bigger problem than it first appears: because peewee does not specify AUTOINCREMENT on primary key fields, SQLite will feel free to re-use deleted ids. In combination with foreign keys not working, you can easily get nasty data bugs where an orphaned child will get re-linked to the next parent that comes along. http://sqlite.org/autoinc.html\r\n\r\nA couple suggestions that I think would help a lot:\r\n1. Mentioning this issue near the top of the Sqlite section of the docs, instead of being tucked away under \"Additional connection initialization\", would bring it to people's attention.\r\n2. Specifying AUTOINCREMENT on id fields seems like a good idea.\r\n3. Simpler code that doesn't require subclassing SqliteDatabase would be nice.\r\n4. In fact, you can already do this simply with `SqliteDatabase(..., pragmas=[('foreign_keys', 'ON')])`. I propose that should be the \"officially recommended\" solution at least if the code were to remain status quo.\r\n5. I think a `foreign_keys` parameter to SqliteDatabase() is a great idea! I'd be interested to know your rationale for not adding it.\r\n\r\nHope I'm not coming across as overly critical! Peewee's great, and I respect your work.\r\n\r\nIs it ok to comment on closed issues like this, or should I raise a new issue?",
"SqliteDatabase can now be instantiated with pragmas, so:\r\n\r\n```python\r\ndb = SqliteDatabase('filename.db', pragmas=(('foreign_keys', 'on'),))\r\n```",
"> Specifying AUTOINCREMENT on id fields seems like a good idea.\r\n\r\nThis has performance implications. Hence, we use the default SQLite behavior but allow you to use auto-increment if you really want that. http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#PrimaryKeyAutoIncrementField"
] | 2015-03-16T17:22:47 | 2016-12-11T21:23:47 | 2015-03-17T17:55:18 | NONE | null | Foreign keys are disabled by [default in SQLite3](http://www.sqlite.org/foreignkeys.html#fk_enable). It would be nice to have an optional `enable_foreign_keys` boolean argument to the `SqliteDatabase` constructor. It would have two effects. Suppose `db = SqliteDatabase(filename, enable_foreign_keys=True)`. Then
1. `db.foreign_keys is True`; and
2. `db.create_tables` would issue
``` SQL
PRAGMA foreign_keys = ON;
```
Issuing that `PRAGMA` in the middle of a multi-statement transaction has no effect and swallows the error. It is only possible to turn on `foreign_keys` in SQLite versions greater than 3.6.19 when SQLite is compiled with `SQLITE_OMIT_TRIGGER` and `SQLITE_OMIT_FOREIGN_KEY` turned _off_. The versions of SQLite compiled into Python versions 2.7.9 and 3.4.3 on Windows appear to support `foreign_keys`.
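For reference, a minimal sketch using only the standard-library `sqlite3` module (nothing peewee-specific) shows both effects: checking whether foreign key support is available and turning it on outside of any transaction:
``` python
import sqlite3

conn = sqlite3.connect(':memory:')
# Returns a row such as (0,) or (1,) when foreign key support is compiled
# in, or no row at all when it was omitted at compile time.
supported = conn.execute('PRAGMA foreign_keys').fetchone()
if supported is None:
    raise RuntimeError('This SQLite build does not support foreign keys.')
# Must be issued outside of a multi-statement transaction to take effect.
conn.execute('PRAGMA foreign_keys = ON')
```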
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/550/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/549 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/549/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/549/comments | https://api.github.com/repos/coleifer/peewee/issues/549/events | https://github.com/coleifer/peewee/pull/549 | 61,157,406 | MDExOlB1bGxSZXF1ZXN0MzExNDgyNzg= | 549 | Custom bindings for selection expressions | {
"login": "jhorman",
"id": 323697,
"node_id": "MDQ6VXNlcjMyMzY5Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/323697?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jhorman",
"html_url": "https://github.com/jhorman",
"followers_url": "https://api.github.com/users/jhorman/followers",
"following_url": "https://api.github.com/users/jhorman/following{/other_user}",
"gists_url": "https://api.github.com/users/jhorman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jhorman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jhorman/subscriptions",
"organizations_url": "https://api.github.com/users/jhorman/orgs",
"repos_url": "https://api.github.com/users/jhorman/repos",
"events_url": "https://api.github.com/users/jhorman/events{/privacy}",
"received_events_url": "https://api.github.com/users/jhorman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Related to topic here https://groups.google.com/forum/#!topic/peewee-orm/Ny-2UE6yPvs\n",
"Neat! Do you mind renaming `target` to `bind_to` and adding documentation on the `Node` class' api docs?\n",
"Sounds good. Made those changes @coleifer \n"
] | 2015-03-13T17:48:00 | 2015-03-13T22:21:14 | 2015-03-13T22:21:14 | CONTRIBUTOR | null | Currently there is no way to have the results of a selection sub expression land on a joined instance, instead of on the top level instance. Many times a model is expected to always have a field defined, even if it was inflated via a join.
Example:
``` python
BlogEntry.select(
BlogEntry,
User,
fn.Exists(Role.select(Role.id).where(
Role.user == User.id,
Role.role == 'admin'
)).alias('is_admin').bind_to(User)
).join(User)
```
Here, we expect the `User` objects to always have `is_admin` defined, even if inflated via a join.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/549/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/549",
"html_url": "https://github.com/coleifer/peewee/pull/549",
"diff_url": "https://github.com/coleifer/peewee/pull/549.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/549.patch",
"merged_at": "2015-03-13T22:21:14"
} |
https://api.github.com/repos/coleifer/peewee/issues/548 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/548/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/548/comments | https://api.github.com/repos/coleifer/peewee/issues/548/events | https://github.com/coleifer/peewee/pull/548 | 61,077,218 | MDExOlB1bGxSZXF1ZXN0MzExMzAyODA= | 548 | Error with long names in tables | {
"login": "sidan93",
"id": 9017730,
"node_id": "MDQ6VXNlcjkwMTc3MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9017730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sidan93",
"html_url": "https://github.com/sidan93",
"followers_url": "https://api.github.com/users/sidan93/followers",
"following_url": "https://api.github.com/users/sidan93/following{/other_user}",
"gists_url": "https://api.github.com/users/sidan93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sidan93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sidan93/subscriptions",
"organizations_url": "https://api.github.com/users/sidan93/orgs",
"repos_url": "https://api.github.com/users/sidan93/repos",
"events_url": "https://api.github.com/users/sidan93/events{/privacy}",
"received_events_url": "https://api.github.com/users/sidan93/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think the better fix is to just manually specify the sequence name to be whatever postgresql truncated it to. e.g.:\n\n``` python\ntabletabletable = PrimaryKeyField(sequence='tabletabletable_id_s')\n```\n"
] | 2015-03-13T14:31:57 | 2015-03-14T05:25:43 | 2015-03-14T05:25:43 | NONE | null | DATABASE: PostgreSQL
This only affects PostgreSQL.
I have a table called "TableTableTable" with primary key "@TableTableTable".
The generated CURRVAL call complains about the very long sequence name "TableTableTable_@TableTableTable_seq" and truncates it.
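As a workaround along the lines of the comment above, one can tell peewee the sequence name that actually exists in the database; a rough sketch (the field and sequence names here are placeholders, check what PostgreSQL really created):
``` python
from peewee import Model, PrimaryKeyField

class TableTableTable(Model):
    # Point peewee at the sequence PostgreSQL actually created so the
    # generated CURRVAL call uses the same (possibly truncated) name.
    id = PrimaryKeyField(sequence='tabletabletable_id_s')
```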
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/548/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/548",
"html_url": "https://github.com/coleifer/peewee/pull/548",
"diff_url": "https://github.com/coleifer/peewee/pull/548.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/548.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/547 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/547/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/547/comments | https://api.github.com/repos/coleifer/peewee/issues/547/events | https://github.com/coleifer/peewee/issues/547 | 60,686,396 | MDU6SXNzdWU2MDY4NjM5Ng== | 547 | pwiz takes a very long time | {
"login": "digi604",
"id": 25490,
"node_id": "MDQ6VXNlcjI1NDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/25490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/digi604",
"html_url": "https://github.com/digi604",
"followers_url": "https://api.github.com/users/digi604/followers",
"following_url": "https://api.github.com/users/digi604/following{/other_user}",
"gists_url": "https://api.github.com/users/digi604/gists{/gist_id}",
"starred_url": "https://api.github.com/users/digi604/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/digi604/subscriptions",
"organizations_url": "https://api.github.com/users/digi604/orgs",
"repos_url": "https://api.github.com/users/digi604/repos",
"events_url": "https://api.github.com/users/digi604/events{/privacy}",
"received_events_url": "https://api.github.com/users/digi604/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Uhhh this isn't a bug really. How many tables you got? What kind of database?\n",
"Also typically you'll only need to run pwiz once so even if it takes a while hopefully it's a one-time thing. pwiz takes a long time because it needs to traverse the graph of tables and introspect each one, finding col type info, constraints, indexes and foreign keys. There's no, afaik, way to speed that up. If you want to profile the code and try to find hotspots, by all means, but I think it's pretty much going to take time for large databases.\n"
] | 2015-03-11T16:28:18 | 2015-03-11T17:59:00 | 2015-03-11T17:59:00 | NONE | null | I have a database with many tables, and pwiz seems to take a very long time: up to 30 minutes to generate the model for a single table.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/547/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/546 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/546/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/546/comments | https://api.github.com/repos/coleifer/peewee/issues/546/events | https://github.com/coleifer/peewee/issues/546 | 60,372,364 | MDU6SXNzdWU2MDM3MjM2NA== | 546 | extending database.connect_kwargs in / with ExecutionContext | {
"login": "richey-v",
"id": 6144958,
"node_id": "MDQ6VXNlcjYxNDQ5NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6144958?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richey-v",
"html_url": "https://github.com/richey-v",
"followers_url": "https://api.github.com/users/richey-v/followers",
"following_url": "https://api.github.com/users/richey-v/following{/other_user}",
"gists_url": "https://api.github.com/users/richey-v/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richey-v/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richey-v/subscriptions",
"organizations_url": "https://api.github.com/users/richey-v/orgs",
"repos_url": "https://api.github.com/users/richey-v/repos",
"events_url": "https://api.github.com/users/richey-v/events{/privacy}",
"received_events_url": "https://api.github.com/users/richey-v/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Unfortunately I don't know how well this will work given the API of `ExecutionContext`. You might look into the new `Using` helper, as I think this is more what you want:\n\nhttp://docs.peewee-orm.com/en/latest/peewee/api.html#Using\n"
] | 2015-03-09T16:41:38 | 2015-03-09T23:36:01 | 2015-03-09T23:36:01 | NONE | null | I'd like the same application (in Flask) to be able to connect to the database (MySQL) with different arguments for some transactions. Specifically, I'm looking for a way to have transactions coming from certain URLs be read-only and others be read/write.
I thought the db.execution_context decorator would be a very cool way to annotate the handlers for the smaller set of read/write URLs (letting the default be read-only).
I started digging into extending peewee classes to do this, but wanted to check with you before spending much time. Maybe a similar feature is planned, or maybe there is a better way?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/546/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/545 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/545/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/545/comments | https://api.github.com/repos/coleifer/peewee/issues/545/events | https://github.com/coleifer/peewee/issues/545 | 60,267,259 | MDU6SXNzdWU2MDI2NzI1OQ== | 545 | Meta Class fields not inherited | {
"login": "stephenfin",
"id": 1690835,
"node_id": "MDQ6VXNlcjE2OTA4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1690835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stephenfin",
"html_url": "https://github.com/stephenfin",
"followers_url": "https://api.github.com/users/stephenfin/followers",
"following_url": "https://api.github.com/users/stephenfin/following{/other_user}",
"gists_url": "https://api.github.com/users/stephenfin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stephenfin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stephenfin/subscriptions",
"organizations_url": "https://api.github.com/users/stephenfin/orgs",
"repos_url": "https://api.github.com/users/stephenfin/repos",
"events_url": "https://api.github.com/users/stephenfin/events{/privacy}",
"received_events_url": "https://api.github.com/users/stephenfin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This doesn't look like peewee to me.\n",
"Nope - it's Marshmallow. My apologies.\n"
] | 2015-03-08T17:36:41 | 2015-03-08T18:55:01 | 2015-03-08T18:10:03 | NONE | null | Take two models that demonstrate inheritance:
```
class BaseSchema(Schema):
"""Base serializer."""
id = fields.Integer(dump_only=True)
created_at = fields.DateTime(dump_only=True)
class Meta:
ordered = True
class PersonSchema(BaseSchema):
"""Person serializer."""
updated_at = fields.DateTime(dump_only=True)
name = fields.String()
email = fields.Email(required=True)
```
The `PersonSchema` schema inherits all the attributes of the `BaseSchema`, as expected. However, it does not seem to inherit the meta attributes (i.e. `ordered`). To ensure ordered output, it is necessary to include these two lines in every model:
```
class Meta:
ordered = True
```
I don't know if this is something that could be solved or not (I'm not all that familiar with the implementation of these 'Meta' classes).
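For what it's worth, this turned out to be Marshmallow (not peewee) behavior, and a commonly used workaround is to make each subclass's inner `Meta` inherit from the base schema's `Meta`; a small sketch, assuming the classes and imports from the snippet above:
```
class PersonSchema(BaseSchema):
    """Person serializer."""
    updated_at = fields.DateTime(dump_only=True)
    name = fields.String()
    email = fields.Email(required=True)

    # Explicitly inherit the base Meta so options such as `ordered`
    # carry over to this schema as well.
    class Meta(BaseSchema.Meta):
        pass
```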
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/545/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/544 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/544/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/544/comments | https://api.github.com/repos/coleifer/peewee/issues/544/events | https://github.com/coleifer/peewee/pull/544 | 60,233,975 | MDExOlB1bGxSZXF1ZXN0MzA3MTIyNTI= | 544 | Max length reflection | {
"login": "garar",
"id": 220005,
"node_id": "MDQ6VXNlcjIyMDAwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/220005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garar",
"html_url": "https://github.com/garar",
"followers_url": "https://api.github.com/users/garar/followers",
"following_url": "https://api.github.com/users/garar/following{/other_user}",
"gists_url": "https://api.github.com/users/garar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/garar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/garar/subscriptions",
"organizations_url": "https://api.github.com/users/garar/orgs",
"repos_url": "https://api.github.com/users/garar/repos",
"events_url": "https://api.github.com/users/garar/events{/privacy}",
"received_events_url": "https://api.github.com/users/garar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think I'm going to pass on this, but thank you for the PR.\n",
"Hi!\n\nCan I get some arguments against it? Currently when you run generate_models from Introspector it will return invalid models. max_length with be incorrect for CHAR and VARCHAR fields. I would call it a bug.\n\nThanks!\n"
] | 2015-03-08T00:38:12 | 2015-03-08T18:21:05 | 2015-03-08T18:09:02 | NONE | null | Hello!
I've added support for max_length to reflection. Previously reflection didn't read max_length for varchar or char.
Let me know if you need anything else for this.
Thanks!
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/544/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/544",
"html_url": "https://github.com/coleifer/peewee/pull/544",
"diff_url": "https://github.com/coleifer/peewee/pull/544.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/544.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/543 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/543/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/543/comments | https://api.github.com/repos/coleifer/peewee/issues/543/events | https://github.com/coleifer/peewee/issues/543 | 60,152,907 | MDU6SXNzdWU2MDE1MjkwNw== | 543 | postgresql DEFAULT now() | {
"login": "mmongeon-aa",
"id": 9663290,
"node_id": "MDQ6VXNlcjk2NjMyOTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9663290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmongeon-aa",
"html_url": "https://github.com/mmongeon-aa",
"followers_url": "https://api.github.com/users/mmongeon-aa/followers",
"following_url": "https://api.github.com/users/mmongeon-aa/following{/other_user}",
"gists_url": "https://api.github.com/users/mmongeon-aa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmongeon-aa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmongeon-aa/subscriptions",
"organizations_url": "https://api.github.com/users/mmongeon-aa/orgs",
"repos_url": "https://api.github.com/users/mmongeon-aa/repos",
"events_url": "https://api.github.com/users/mmongeon-aa/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmongeon-aa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Peewee does not support database defaults out of the box. You can subclass `playhouse.postgres_ext.DateTimeTZField` and override the `__ddl__` method, however:\n\n``` python\nclass DefaultNowDateTimeTZField(DateTimeTZField):\n def __ddl__(self, column_type):\n ddl = super(DefaultNowDateTimeTZField, self).__ddl__(column_type)\n ddl.append(SQL('DEFAULT now()'))\n return ddl\n```\n",
"Peewee supports server-side constraints as of a while ago but I apologize as I never followed through updating this ticket. If you are still curious how to do this, you can:\n\n``` python\ntimestamp = DateTimeField(constraints=[SQL('DEFAULT now()')])\n```\n",
"@coleifer it doesn't look like it's working with `SqliteDatabase`\r\n\r\n```\r\nself = <peewee.SqliteDatabase object at 0x10522fd68>\r\nsql = 'CREATE TABLE \"my_table\" (\"id\" INTEGER NOT NULL PRIMARY KEY, \"asin\" VARCHAR(255) NOT NULL, \"cat... \"updated_at\" DATETIME NOT NULL DEFAULT now(), FOREIGN KEY (\"category_id\") REFERENCES \"category\" (\"id\"))'\r\nparams = [], require_commit = True\r\n\r\n def execute_sql(self, sql, params=None, require_commit=True):\r\n logger.debug((sql, params))\r\n with self.exception_wrapper:\r\n cursor = self.get_cursor()\r\n try:\r\n> cursor.execute(sql, params or ())\r\nE peewee.OperationalError: near \"(\": syntax error\r\n\r\nvenv/lib/python3.6/site-packages/peewee.py:3758: OperationalError\r\n```",
"In SQLite the expression is different:\r\n\r\n```sql\r\ntimestamp DATETIME DEFAULT CURRENT_TIMESTAMP\r\n```"
] | 2015-03-06T20:16:38 | 2017-04-15T17:08:43 | 2015-03-06T20:21:08 | NONE | null | I'm trying to create a table in PostgreSQL that uses a `timestamp with time zone` field with DEFAULT now().
```
from playhouse.postgres_ext import *
from peewee import *
from playhouse.csv_loader import load_csv
db = PostgresqlExtDatabase('mydb', host='127.0.0.1', user='user', password='pass')
class BaseModel(Model):
class Meta:
database = db
class my_table(BaseModel):
rpt_grp = TextField()
rpt_int_grp = TextField(null=True)
event_name = TextField()
form_id = TextField()
start_ts = DateField()
end_ts = DateField(null=True)
ts_insert = DateTimeTZField(default='now()')
ts_update = DateTimeTZField(default='now()')
db.connect()
my_table.create_table()
```
This creates a table successfully with the following DDL:
```
CREATE TABLE my_table
(
id serial NOT NULL,
rpt_grp text NOT NULL,
rpt_int_grp text,
event_name text NOT NULL,
form_id text NOT NULL,
start_ts date,
end_ts date,
ts_insert timestamp with time zone NOT NULL,
ts_update timestamp with time zone NOT NULL,
CONSTRAINT my_table_pkey PRIMARY KEY (id)
)
```
What I really need peewee to do is create a table with the following DDL:
```
CREATE TABLE my_table_now
(
id serial NOT NULL,
rpt_grp text NOT NULL,
rpt_int_grp text,
event_name text NOT NULL,
form_id text NOT NULL,
start_ts date,
end_ts date,
ts_insert timestamp with time zone NOT NULL DEFAULT now(),
ts_update timestamp with time zone NOT NULL DEFAULT now()
)
```
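As the comments above note, one way to get those server-side defaults (in sufficiently recent peewee versions) is to attach the DEFAULT clause as a raw constraint; a sketch along those lines, reusing the BaseModel and field classes from the snippet above:
``` python
from peewee import SQL

class my_table_now(BaseModel):
    rpt_grp = TextField()
    rpt_int_grp = TextField(null=True)
    event_name = TextField()
    form_id = TextField()
    start_ts = DateField()
    end_ts = DateField(null=True)
    # Server-side defaults supplied as raw constraints.
    ts_insert = DateTimeTZField(constraints=[SQL('DEFAULT now()')])
    ts_update = DateTimeTZField(constraints=[SQL('DEFAULT now()')])
```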
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/543/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/542 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/542/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/542/comments | https://api.github.com/repos/coleifer/peewee/issues/542/events | https://github.com/coleifer/peewee/issues/542 | 60,037,600 | MDU6SXNzdWU2MDAzNzYwMA== | 542 | UnsignedIntegerField? | {
"login": "nilp0inter",
"id": 1224006,
"node_id": "MDQ6VXNlcjEyMjQwMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1224006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nilp0inter",
"html_url": "https://github.com/nilp0inter",
"followers_url": "https://api.github.com/users/nilp0inter/followers",
"following_url": "https://api.github.com/users/nilp0inter/following{/other_user}",
"gists_url": "https://api.github.com/users/nilp0inter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nilp0inter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nilp0inter/subscriptions",
"organizations_url": "https://api.github.com/users/nilp0inter/orgs",
"repos_url": "https://api.github.com/users/nilp0inter/repos",
"events_url": "https://api.github.com/users/nilp0inter/events{/privacy}",
"received_events_url": "https://api.github.com/users/nilp0inter/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"http://docs.peewee-orm.com/en/latest/peewee/models.html#creating-a-custom-field\n",
"if you got here through google - like me - here is a quick solution which worked for my use case\r\n``\r\nuid = SmallIntegerField(null=True, constraints=[SQL(\"UNSIGNED\")])\r\n``",
":+1: thanks for sharing @sooslaca ",
"got this one worked:\r\n\r\n```\r\nclass UnsignedIntegerField(IntegerField):\r\n field_type = 'int unsigned'\r\n```\r\n\r\n(peewee 3.3.4)",
"@sooslaca \r\nit doesn't work as you said.",
"You can use @binderclip snippet, which works for Peewee 3.x and newer.",
"> got this one worked:\r\n> \r\n> ```\r\n> class UnsignedIntegerField(IntegerField):\r\n> field_type = 'int unsigned'\r\n> ```\r\n> \r\n> (peewee 3.3.4)\r\n\r\nGood luck using that with decimals."
] | 2015-03-06T00:01:34 | 2022-10-21T10:24:21 | 2015-03-06T00:17:32 | CONTRIBUTOR | null | Hi,
I'm trying to create a model with an unsigned integer field but I can't find how to do it with peewee.
Is this possible?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/542/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/541 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/541/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/541/comments | https://api.github.com/repos/coleifer/peewee/issues/541/events | https://github.com/coleifer/peewee/pull/541 | 59,728,928 | MDExOlB1bGxSZXF1ZXN0MzA0MjYzMDA= | 541 | In PostgresqlExtDatabase, respect the 'autorollback' option | {
"login": "davidmcclure",
"id": 814168,
"node_id": "MDQ6VXNlcjgxNDE2OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/814168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidmcclure",
"html_url": "https://github.com/davidmcclure",
"followers_url": "https://api.github.com/users/davidmcclure/followers",
"following_url": "https://api.github.com/users/davidmcclure/following{/other_user}",
"gists_url": "https://api.github.com/users/davidmcclure/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidmcclure/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidmcclure/subscriptions",
"organizations_url": "https://api.github.com/users/davidmcclure/orgs",
"repos_url": "https://api.github.com/users/davidmcclure/repos",
"events_url": "https://api.github.com/users/davidmcclure/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidmcclure/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for this!\n"
] | 2015-03-04T00:07:02 | 2015-03-04T00:53:14 | 2015-03-04T00:53:10 | CONTRIBUTOR | null | First of all, thanks so much for Peewee! I ran across the problem described in #240, which was fixed by the addition of the `autorollback` database option.
I was setting `autorollback=True`, but still hitting the problem, and realized that the implementation of `execute_sql` in `PostgresqlExtDatabase` (which I'm using) doesn't include the check to see if the transaction should be automatically rolled back if the query throws an exception. I just copied in the two lines from the parent `Database` class that do this.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/541/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/541",
"html_url": "https://github.com/coleifer/peewee/pull/541",
"diff_url": "https://github.com/coleifer/peewee/pull/541.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/541.patch",
"merged_at": "2015-03-04T00:53:10"
} |
https://api.github.com/repos/coleifer/peewee/issues/540 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/540/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/540/comments | https://api.github.com/repos/coleifer/peewee/issues/540/events | https://github.com/coleifer/peewee/pull/540 | 59,225,060 | MDExOlB1bGxSZXF1ZXN0MzAxNTk1OTQ= | 540 | Traverse multiple FKs to the same model with several joins | {
"login": "sangwa",
"id": 7068898,
"node_id": "MDQ6VXNlcjcwNjg4OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7068898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sangwa",
"html_url": "https://github.com/sangwa",
"followers_url": "https://api.github.com/users/sangwa/followers",
"following_url": "https://api.github.com/users/sangwa/following{/other_user}",
"gists_url": "https://api.github.com/users/sangwa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sangwa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sangwa/subscriptions",
"organizations_url": "https://api.github.com/users/sangwa/orgs",
"repos_url": "https://api.github.com/users/sangwa/repos",
"events_url": "https://api.github.com/users/sangwa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sangwa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The way to do this with peewee currently is to use aliases in the joins, which tell peewee what attribute to patch the related data onto.\n\nIf I modify your example,\n\n``` python\nrelations = (Relation\n .select(Relation, parents, children)\n .join(parents, on=(Relation.parent == parents.id).alias('parent'))\n .switch(Relation)\n .join(children, on=(Relation.child == children.id).alias('child')))\n```\n\nI can get the related values and ensure that they are being retrieve in a single query:\n\n``` python\nfrom playhouse.test_utils import assert_query_count\nwith assert_query_count(1):\n for rel in relations:\n print rel.id, rel.parent.name, rel.child.name\n```\n",
"Thanks! Tried aliases before but somehow it didnt work.\n"
] | 2015-02-27T11:34:54 | 2015-02-27T15:45:06 | 2015-02-27T15:45:06 | NONE | null | Consider the following many-to-many relationship:
``` python
from peewee import Model, PrimaryKeyField, ForeignKeyField, CharField
class Entity(Model):
id = PrimaryKeyField()
name = CharField(unique=True)
class Relation(Model):
class Meta:
indexes = ((('parent', 'child'), True),)
id = PrimaryKeyField()
parent = ForeignKeyField(Entity, related_name='children')
child = ForeignKeyField(Entity, related_name='parents')
data = CharField()
```
If I want to fetch the list of _relations_ and access their members of **both** sides ("parent" and "child") without additional queries, for example:
``` python
parents = Entity.alias()
children = Entity.alias()
relations = (Relation
.select(Relation, parents, children)
.join(parents, on=(Relation.parent == parents.id))
.switch(Relation)
.join(children, on=(Relation.child == children.id)))
for rel in relations:
    print rel.id, rel.parent.name, rel.child.name
```
this won't work, because the current implementation of joins only scans the list of fields up to the **first** foreign key to the joined model. As a result, the second `.join(children, ...)` in the code above stops scanning at the `parent` attribute and sets it a second time (after the first `.join(parents, ...)`), instead of setting the `child` attribute that is actually specified in the join expression, even though the generated query correctly selects its data.
To overcome this, the patch introduced here iterates over all the foreign keys to the same model and selects the appropriate target field based on the join expression, when one is provided, so the example code above works properly.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/540/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/540",
"html_url": "https://github.com/coleifer/peewee/pull/540",
"diff_url": "https://github.com/coleifer/peewee/pull/540.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/540.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/539 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/539/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/539/comments | https://api.github.com/repos/coleifer/peewee/issues/539/events | https://github.com/coleifer/peewee/issues/539 | 59,105,050 | MDU6SXNzdWU1OTEwNTA1MA== | 539 | Issue with database.transaction() for playhouse.flask_utils import FlaskDB | {
"login": "havannavar",
"id": 1104650,
"node_id": "MDQ6VXNlcjExMDQ2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1104650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/havannavar",
"html_url": "https://github.com/havannavar",
"followers_url": "https://api.github.com/users/havannavar/followers",
"following_url": "https://api.github.com/users/havannavar/following{/other_user}",
"gists_url": "https://api.github.com/users/havannavar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/havannavar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/havannavar/subscriptions",
"organizations_url": "https://api.github.com/users/havannavar/orgs",
"repos_url": "https://api.github.com/users/havannavar/repos",
"events_url": "https://api.github.com/users/havannavar/events{/privacy}",
"received_events_url": "https://api.github.com/users/havannavar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You need to access the `peewee` database associated with the Flask `Database` object. So,\n\n``` python\nwith database.database.transaction():\n ...\n```\n"
] | 2015-02-26T17:11:23 | 2015-02-26T18:01:30 | 2015-02-26T18:01:30 | NONE | null | I am using FlaskDB
```
from flask import Flask
from playhouse.flask_utils import FlaskDB
DATABASE = {
'name': 'testdb',
'engine': 'playhouse.pool.PooledPostgresqlExtDatabase',
'user': 'sats',
'max_connections': 32,
'stale_timeout': 600,
}
app = Flask(__name__)
app.config.from_object(__name__)
database = FlaskDB(app)
```
How can I use database.transaction() as shown below?
# Explicitly roll back a transaction.
```
with database.transaction() as txn:
do_some_stuff()
if something_bad_happened():
# Roll back any changes made within this block.
txn.rollback()
```
This fails because FlaskDB has no attribute 'transaction'.
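Based on the answer in the comments, the underlying peewee database is exposed as an attribute of the FlaskDB wrapper, so the snippet can be written like this (a sketch reusing the names defined above):
```
# FlaskDB wraps the real peewee database; reach it via `.database`.
with database.database.transaction() as txn:
    do_some_stuff()
    if something_bad_happened():
        # Roll back any changes made within this block.
        txn.rollback()
```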
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/539/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/538 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/538/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/538/comments | https://api.github.com/repos/coleifer/peewee/issues/538/events | https://github.com/coleifer/peewee/issues/538 | 59,097,240 | MDU6SXNzdWU1OTA5NzI0MA== | 538 | (2006, "MySQL server has gone away (error(32, 'Broken pipe'))") | {
"login": "emamirazavi",
"id": 3804866,
"node_id": "MDQ6VXNlcjM4MDQ4NjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3804866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emamirazavi",
"html_url": "https://github.com/emamirazavi",
"followers_url": "https://api.github.com/users/emamirazavi/followers",
"following_url": "https://api.github.com/users/emamirazavi/following{/other_user}",
"gists_url": "https://api.github.com/users/emamirazavi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emamirazavi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emamirazavi/subscriptions",
"organizations_url": "https://api.github.com/users/emamirazavi/orgs",
"repos_url": "https://api.github.com/users/emamirazavi/repos",
"events_url": "https://api.github.com/users/emamirazavi/events{/privacy}",
"received_events_url": "https://api.github.com/users/emamirazavi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"http://docs.peewee-orm.com/en/latest/peewee/database.html#error-2006-mysql-server-has-gone-away\n",
"For an alternate fix, see also 017e4e4952f0b429dd21f0e90465a0f644cf6015\n",
"For a new alternate fix:\r\n```python\r\nfrom playhouse.shortcuts import ReconnectMixin\r\n\r\n\r\nclass MyRetryDB(ReconnectMixin, MySQLDatabase):\r\n pass\r\n```"
] | 2015-02-26T16:24:44 | 2021-04-28T07:32:33 | 2015-02-26T18:02:31 | NONE | null | When I try to save my model, the connection breaks and raises an exception with the title of this issue, but the MySQL log file shows:
INSERT INTO `inboundmsg` (`tojid`, `fromjid`, `type`, `datetime`, `content`, `url`, `size`, `local`, `status`) VALUES ('989393310102', '989102260264', 'image', '2015-02-26 19:43:11.789153', '/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAYEBQYFBAYGBQYHBwYIChAKCgkJChQODwwQFxQYGBcUFhYaHSUfGhsjHBYWICwgIyYnKSopGR8tMC0oMCUoKSj/2wBDAQcHBwoIChMKChMoGhYaKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCj/wAARCABKAGQDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwDz22+K15Y2i21jahURQqtJMztgflUA+JN9ePGl7a2kgjyUDKcjPvnNb1p4Y0e78T6bpFnbIryNh2xn8zWr49+Hmk22u2wtYwgwATK2xGXnOeOuRU+2jJuI50fZw55Ir6B4xfV5FtY7Rllx2YspA9M8gj8RXpelagEtJC0rqVA4KkfnXD6NpdtbacYNOkihTISSVFAAPUDd3OAeldZopSFyMmUFArIzevTB6YIrlm02c103oXIWstVkdhBALtRsJTkHI9ccGs+x8MvpsVxOkELu2XELg7uv8/aqw8R6X4cMttBuiJZmJxk/UH25/SqEHjGS+fy4fMmVGV87sEDPJ9/Wtoo1VJW1FntYdZ0+VUkjtrsSbWSYYPX0xmsq8QCyhF/cs0yv8209D05P4ZxzXYahbR67ZrJax7NUjbcSp2vL7ZHf3rmx4Y1a+QzXNvcvbwSFGM4Clm5zjoTjHNVFW0MZwaMmQpuYeZHORwMnkA+39Ks6dLbRh4XJbIO87tuPpUdv4ZvILGXXN0YtLfDGOY43ewzycZql9pjZpg5U7v8AVbX4UkngnrxSlEzSOqtIrGfDi5AgUryAMjHb3611Vpdw2jvDAxZV4JVRwPrXl1pZmC/vI7a4W5g2I25flJJXkD2Bzg10lpdQQssNtHcbwdrt5mS5B4O3t+dYyitildHqcMU13BHNCdyso/i6H0NFddoUCtpVu0oHmMoLfL3orVQVjoUD578NWkJ8dxywptNvcKikHqOhruPjBY6XP5NrcPFFeSwN5bv1xnkCuI03Tru08c2slv5rIZ8yRqM9+tbnx30nXda1TTItAVpQlq8kwB+b7w2gDvn5vyrhwjcrns5rBKEbHll5LY6fbR2M3nQxct58nKyem3b0OK1f7T/srRZGt55X8xUMZGQWXB5+b6Yz71BoVv48+02+njTZJEVsGK4t1TA6H5mAwORzWrN4R13x1o19drcpbR6ZI8OwoyhnGAyDPXp1/Cutqz948WlBuSseb3niea4hfeY2nZsfOchR24rPh1K7jmUiR5COiodoPvUOt+FvEMetNC+l3Ad2yhVCVx9elbdx4fl8PeFnup7iOTUbicRxRlc4CjLADuSSBntj3rR1qcbJPc640Kkru2iN7TfHJto1KuVlTb5mDkYHHTrXr8dhqGraXb3SXbpFJGZUAIUsGHXkjg9enNeX+Evh/wCRp9vqGrwpcX98qi2t3GyGFs/dbn5m9q6zWdVbw5NaLqkFvLM8jRrHb/MRj7x+gwOPcVN1U+Exq03HRnR3d9p2k+HJtBvZ+ZUO8RMJGAbucZrjLb4bagJfN00q8DxrIDMfJOD04J4rofDtzY2t7Lq0N5eWf2yIQSzeRuEJDZHXjkHGfUD1rsPttnql28kBunlS3d5Gl2+cqFsBUYcLggnBz1oU7S5GhPDp0+dM4vRPAl4l1LHfS6bFHwdqXQMgOO49MV6L4Q0rTvDDNIzQSNPwdqEkAfd5P/Av0rn4/h617cyXb38tz5jnY8kOGjUDgM2Rzz9OK6aDw7pmnXiX2ppbPIjAW0e0M3HTn06VScY+8QqMr2O4+0rgbeBjpRXEX/iLFy3O3POM0VHtDsVIbp9vBBq6TOgBOQxrF+I3h/UNbmjuNCcW2prCEiumfYoXnK7vetjw039pGMZ3uvfv0rb1Oyt45IJL9naOElhbglgzDkMBn9Kyp0pUbqXc1rVo1bNPocj4Y0/UdE0mU6/LBJrM2FWRXz5iIBt9OeefXg1Ym8Rmws2N1ExhJLNGFHJ74I710J0aDxPbSyXdpqNjJuzG0xVT9QAens1ZnjfQdTi8Lxz2cEeoahax7JolOwToBjdzxnAyR35qZqbvJbGcOVaPc8Q+LXxCitrc/wBmoMyDahx0968ZstTu9c8Q2Kxxy3N20ipFGMkkk9ABXrNlOmqeK4bfUNIa03Dy5baaDzFlQnBPI+VgMcj0rr9L8F+A/Dc8XiuK+Nl5ayQfZZJNjGdcggA9CfyGQaKEYRTSV2zWtUnK13ZI77wXpMFzo6W99L5k0ADAFExC/Q4J6/U+nFc54o8GWGj6dq+r2trcT6iJxma8bKqpPJiA4we561J4c8eeHo7d/Kg86GYBZHyzbSclhlvvDJ5x0GKzfjD4502LwHc2Gl3Ucl1KFEdvE+9gNwP4cetVGXJNQREo+0g5s5jT/GqWd7a210mJJ28tAoGAff2rv9N8d2NnFKTp8QkIDOVUAMemTgdeK+TbbU57jUFe5uBAwPDOQpX8+/vXrXwn1aKLxjZt9qF5IWCvgFo0HuenHXNejJcyPMS5Xoez2mu+Idet5DYWUtvbSDIuGQrGoPcH+I/SszVF1HSI7o6jc+fOjA7l4X5uQEPfgHP/ANatHxl4i1jRNF1aO9sfK0U2ZitZdOcOfNYkYGBkDByPpWbANSvdEtrDUAgtTaJJbeYfMlaQgklyRxj09/auCpqtWehShyvQ5a41tppmcuefeiuXZjG7ozAFWIPPvRWB06HdeH9T1DSbmO4sNSLbf4J0EikenY/rXb2f
iuS9uxNerHCSAGMBIzj68gfTn3ryO3usDritG1vyMYevfq0Y1FaR4lOrKDuj1+PxjdrMUtNHkmgU/wCs89AW+i5zWtpvjPTLy4FrMXtLpuBDcLsZvoD1/CvIrTVzH8rE9OtaaamkuwsFcodylhnB9RXE8HJfBI6ViYv4onXeLfB8RuF1rQ43EkakzW0GP3ox1UHjPsMZ7Ed/OPD/AIXtL/wnr8vjCzvGiYEQeapjZHUttaInqSNuc/3ee9d5pni+4h2q0isPRhWf4s1P7fp8yXSJJYzqYp1BIkUN/ErA9QcEemKhYSSdyvrEWj5g8Y3E9vqdlBayvEi2/lrBnAVSOfTqQc/SrfhbRmv8Ehrg93DEIPxHB/AVsfEXwVeWraWRvvUnbyLa4VCfOO75VfHRueR717x4M0jR4PhvFp17JEJIEctKkTKI3AxkEgFjkf5B53lJQMYwclY8G1/wpctCJBZadMiKQGklZHHtmsb4caXe614gisl1V9MtVmRXiSRgzqWwVU+o6fjXoHiR47qzb7Ncb4mfZ5qnKlkbt+I/WuW0fXjp9/Iq6QJDbqXS5UbQHQsAGPTkAe9VLbQzje9me06zHeqbiSO7jvdEt5xDZwtHlwyqf4upxjgnmsp7e6uPCsQ0+48u+Zmmk3MQQhBO3/PvWb4H8d6vqsMlv4fhs4rmICaWEqDnnGFJGAfQ96tajpfjEeINP1L7FdT6bPua5ZiiMit0+XIIxz2rhcbS0O5Sbi7s8gvL6eO5dd3eipfGFnLpuv3Vo67GiO3HXPvnvmikoo1uzpzMAOtC3QUfe+nNUiflqB+n417s1qeSzdh1L5gsjY9DV631QRzBS3B6HNcbcE7hyauEnbEc84qBI7eW/wAJuQkkc1K2pG7sJofMIZ0IFc/ETheT0qO3JAbBI+Y0Ad58NNYa+07ZcyJLLa5dYpVGN6nk/XaGwfevR7y6+1eHo5dMQ3OlXStlVdU8oDoV4wy5BGDk9OK8N+FZJ8QopJKtdMCOxBJyK9S0N3t/EGtWtuzRWsdzHshQ7UXLc4UcDNcGIjaR24d3ieKeOJktfE2r6NpsAncvFLBDbIWbc6HcoQcg5AyO2a3PDng68062ubDxEhtH1OMSBd4dolOQdw6Bgece1c54cd2/aG1CQsxkF1L8xPP3fWvSvFBLeONHjYlo206Msp5BJZskislUekPIdSklF1PM8dg0PU/Cni2eKwjn1bTXVhJNZQl1kjboCQDtNfUPhay/tTwzbxGW4t7t7dXeKVd8kQ7DcBkH2JzW34LtoIIJI4IYo41AwqIAB+AqeaWRby5CyOAJIAAGPGSc0laWo/I861H4LWuqXH2xrqRmkUEmaPa34gjIor0++J87r2/xoqfYRL9vI//Z', 'https://----/d/xOX7wcwGDoX13Mmk-7xHZlTvRhcABRAAB0IQjQ/An-vJIOekxRv77ykjdGy6LOTtwk_j-3GcAq1VBeQ-pNt.jpg', 57180, '5dce934c-bdd2-11e4-b641-6c8814f85df8.jpg', '3')
and shows no error at all.
Any idea what is going wrong with peewee? This is getting frustrating.
I use playhouse to create a connection pool; this is my pool instance:
``` python
my_db = PooledMySQLDatabase(dbconfig['database'], host=dbconfig['host'],
                            port=dbconfig['port'], user=dbconfig['user'],
                            passwd=dbconfig['password'], threadlocals=True,
                            max_connections=None, charset='utf8mb4')
```
I also use Flask for the web interface.
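One common mitigation for the "gone away" error in a Flask app is to check out a connection per request and return it to the pool when the request finishes, so a request never reuses a connection the server has already dropped. A sketch only: it assumes `app` is your Flask application, reuses the `my_db` pool defined above, and relies on Flask's standard request hooks:
``` python
@app.before_request
def _connect_db():
    # Check a connection out of the pool for this request.
    my_db.connect()

@app.teardown_request
def _close_db(exc):
    # Hand the connection back to the pool when the request ends.
    if not my_db.is_closed():
        my_db.close()
```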
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/538/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/537 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/537/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/537/comments | https://api.github.com/repos/coleifer/peewee/issues/537/events | https://github.com/coleifer/peewee/issues/537 | 59,054,432 | MDU6SXNzdWU1OTA1NDQzMg== | 537 | upsert fails when applying on insert_many | {
"login": "hitzg",
"id": 688395,
"node_id": "MDQ6VXNlcjY4ODM5NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/688395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hitzg",
"html_url": "https://github.com/hitzg",
"followers_url": "https://api.github.com/users/hitzg/followers",
"following_url": "https://api.github.com/users/hitzg/following{/other_user}",
"gists_url": "https://api.github.com/users/hitzg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hitzg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hitzg/subscriptions",
"organizations_url": "https://api.github.com/users/hitzg/orgs",
"repos_url": "https://api.github.com/users/hitzg/repos",
"events_url": "https://api.github.com/users/hitzg/events{/privacy}",
"received_events_url": "https://api.github.com/users/hitzg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Nice post, I will look into this.\n",
"Interestingly, here is the output when I run your sample script (slightly cleaned up output):\n\n``` python\n query 1: INSERT INTO \"person\" (\"name\") VALUES (?), (?), (?), (?)\n query 2: INSERT OR REPLACE INTO \"person\" (\"name\") VALUES (?), (?), (?), (?)\nPersons in db:\n carl\n maria\n anna\n john\n mark\n lena\n```\n\nThe list of names is correct and no `IntegrityError` is raised. I'm using SQLite 3.8.8 so perhaps that's the difference?\n",
"Yes, this seems to be related to the version of SQLite: Inserting multiple rows is supported since 3.7.11 ([source](http://www.sqlite.org/releaselog/3_7_11.html)). Before it was still feasible using a more complex syntax: ([ref](http://stackoverflow.com/a/1734067/4177384)).\n\nSo indeed, this fails on my system:\n\n``` bash\n sqlite3 :memory: \"create table person (name string);\n> insert into person (name) values ('a'), ('b');\"\n```\n\nwith `Error: near \",\": syntax error` (as expected).\nHowever, shouldn't then also the first insert in the example above fail? (as it seems to be independent of the \"or replace\" clause??\n\nBut I guess we can close this, as it is a _not_ bug of peewee.\n",
"Do you mind trying out ab815a6? I think I may have found the particular bug as it relates to older SQLite versions.\n",
"Nice! I have just tested it and it fixes the issue.\nThanks a lot!\n"
] | 2015-02-26T10:49:57 | 2015-02-27T07:19:43 | 2015-02-27T07:19:43 | NONE | null | I got confused using `insert_many` and `upsert`. I'm not sure whether this is a bug in peewee or just a limitation of SQLite, but either way the result is not what I would expect.
(Also, I was not sure whether to post this as an issue or on Stack Overflow. I'm happy to post it there if you feel that's more appropriate.)
Minimal example:
``` python
import peewee as pw
db = pw.SqliteDatabase(':memory:')
# simple model with unique field
class Person(pw.Model):
name = pw.CharField(unique=True)
class Meta:
database = db
# create two sets of persons with some duplicates
persons1 = [dict(name=n) for n in 'anna john carl maria'.split()]
persons2 = [dict(name=n) for n in 'anna john mark lena'.split()]
# initialize the db
db.connect()
db.create_tables([Person], True)
# insert first set of persons
with db.transaction():
query = Person.insert_many(persons1)
print " query 1:", query
query.execute()
# use upsert to safely add the second set
with db.transaction():
query = Person.insert_many(persons2).upsert(upsert=True)
print " query 2:", query
query.execute()
# output all persons
print "Persons in db:"
for p in Person.select():
print " ", p.name
```
Output:
```
query 1: <class '__main__.Person'> INSERT INTO "person" ("name") VALUES (?), (?), (?), (?) [u'anna', u'john', u'carl', u'maria']
query 2: <class '__main__.Person'> INSERT OR REPLACE INTO "person" ("name") VALUES (?), (?), (?), (?) [u'anna', u'john', u'mark', u'lena']
Traceback (most recent call last):
File "upsert_minimal.py", line 29, in <module>
query.execute()
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2683, in execute
last_id = InsertQuery(self.model_class, row).execute()
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2685, in execute
return self.database.last_insert_id(self._execute(), self.model_class)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2243, in _execute
return self.database.execute_sql(sql, params, self.require_commit)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2877, in execute_sql
self.commit()
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2732, in __exit__
reraise(new_type, new_type(*exc_value.args), traceback)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2869, in execute_sql
cursor.execute(sql, params or ())
peewee.IntegrityError: column name is not unique
```
Doing each upsert separately works though:
``` python
with db.transaction():
for person in persons2:
Person.insert(**person).upsert(upsert=True).execute()
```
Versions:
- Ubuntu 12.04
- peewee: 2.4.7
- sqlite3: 3.7.9
- sqlite3 (python package): 2.6.0
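For anyone hitting the same thing, here is a sketch of a guard that only uses the multi-row form when the installed SQLite release supports it (3.7.11+ per the release log linked above); the version check uses the standard-library `sqlite3` module and the models defined in the example:
``` python
import sqlite3

supports_multi_row_values = sqlite3.sqlite_version_info >= (3, 7, 11)

with db.transaction():
    if supports_multi_row_values:
        Person.insert_many(persons2).upsert(upsert=True).execute()
    else:
        # Older SQLite: fall back to one INSERT OR REPLACE per row.
        for person in persons2:
            Person.insert(**person).upsert(upsert=True).execute()
```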
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/537/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/536 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/536/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/536/comments | https://api.github.com/repos/coleifer/peewee/issues/536/events | https://github.com/coleifer/peewee/issues/536 | 59,042,295 | MDU6SXNzdWU1OTA0MjI5NQ== | 536 | Usage around insert_many | {
"login": "MartynBliss",
"id": 1713902,
"node_id": "MDQ6VXNlcjE3MTM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1713902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MartynBliss",
"html_url": "https://github.com/MartynBliss",
"followers_url": "https://api.github.com/users/MartynBliss/followers",
"following_url": "https://api.github.com/users/MartynBliss/following{/other_user}",
"gists_url": "https://api.github.com/users/MartynBliss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MartynBliss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MartynBliss/subscriptions",
"organizations_url": "https://api.github.com/users/MartynBliss/orgs",
"repos_url": "https://api.github.com/users/MartynBliss/repos",
"events_url": "https://api.github.com/users/MartynBliss/events{/privacy}",
"received_events_url": "https://api.github.com/users/MartynBliss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"In order to keep peewee lightweight, I think I will pass on adding this functionality.\n"
] | 2015-02-26T09:17:32 | 2015-02-26T15:02:39 | 2015-02-26T15:02:39 | NONE | null | Not really an issue...
It would be helpful if, when doing an insert_many, there were a way to get a recommended chunk size for a given model. If the maximum number of SQL parameters is fixed, it could return that value divided by the number of parameters each inserted row generates (one per field)?
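In the meantime this is easy enough to compute by hand. A rough sketch: the 999 figure is SQLite's default SQLITE_MAX_VARIABLE_NUMBER and differs per backend, and the `_meta.get_fields()` lookup is an assumption about peewee's internals:
``` python
MAX_SQL_PARAMS = 999  # SQLite's default limit on bound parameters.

def recommended_chunk_size(model_class, max_params=MAX_SQL_PARAMS):
    # Each inserted row consumes one bound parameter per field.
    fields_per_row = len(model_class._meta.get_fields())
    return max(1, max_params // fields_per_row)

def insert_in_chunks(model_class, rows):
    # Issue one insert_many() per chunk so no statement exceeds the limit.
    size = recommended_chunk_size(model_class)
    for start in range(0, len(rows), size):
        model_class.insert_many(rows[start:start + size]).execute()
```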
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/536/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/535 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/535/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/535/comments | https://api.github.com/repos/coleifer/peewee/issues/535/events | https://github.com/coleifer/peewee/issues/535 | 58,663,054 | MDU6SXNzdWU1ODY2MzA1NA== | 535 | Exception when switching from MySql to Sqlite | {
"login": "hamiltont",
"id": 305380,
"node_id": "MDQ6VXNlcjMwNTM4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/305380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamiltont",
"html_url": "https://github.com/hamiltont",
"followers_url": "https://api.github.com/users/hamiltont/followers",
"following_url": "https://api.github.com/users/hamiltont/following{/other_user}",
"gists_url": "https://api.github.com/users/hamiltont/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamiltont/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamiltont/subscriptions",
"organizations_url": "https://api.github.com/users/hamiltont/orgs",
"repos_url": "https://api.github.com/users/hamiltont/repos",
"events_url": "https://api.github.com/users/hamiltont/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamiltont/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm inclined to actually leave that error as-is, since I wouldn't want you to specify `cascade=True` with SQLite, then have it _not_ cascade. Thank you for reporting, though.\n",
"Ok dokie! Figured it's a preference thing, thanks for the quick reply!\n"
] | 2015-02-23T22:50:16 | 2015-02-23T23:26:40 | 2015-02-23T23:15:33 | NONE | null | With mysql this was valid:
```
database.drop_tables(tables, safe=True, cascade=True)
```
When I switched to SQLite, this line caused the exception below. I'd have expected it to simply ignore the cascade argument when the engine is SQLite.
```
('DROP TABLE IF EXISTS "benchmark" CASCADE', [])
127.0.0.1 - - [23/Feb/2015 17:49:09] "GET / HTTP/1.1" 500 -
Traceback (most recent call last):
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/flask/app.py", line 1836, in __call__
return self.wsgi_app(environ, start_response)
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/flask/app.py", line 1820, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/flask/app.py", line 1403, in handle_exception
reraise(exc_type, exc_value, tb)
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/flask_debugtoolbar/__init__.py", line 124, in dispatch_request
return view_func(**req.view_args)
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/line_profiler.py", line 141, in runcall
return func(*args, **kw)
File "/Users/hamiltont/Documents/FrameworkContinuous/webapp.py", line 35, in dash
models.create_database()
File "/Users/hamiltont/Documents/FrameworkContinuous/src/models.py", line 36, in create_database
database.drop_tables(tables, safe=True, cascade=True)
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/peewee.py", line 2987, in drop_tables
drop_model_tables(models, fail_silently=safe, cascade=cascade)
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/peewee.py", line 4054, in drop_model_tables
m.drop_table(**drop_table_kwargs)
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/peewee.py", line 3843, in drop_table
cls._meta.database.drop_table(cls, fail_silently, cascade)
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/peewee.py", line 2984, in drop_table
model_class, fail_silently, cascade))
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/peewee.py", line 2877, in execute_sql
self.commit()
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/peewee.py", line 2732, in __exit__
reraise(new_type, new_type(*exc_value.args), traceback)
File "/Users/hamiltont/Documents/FrameworkContinuous/env/lib/python2.7/site-packages/peewee.py", line 2869, in execute_sql
cursor.execute(sql, params or ())
OperationalError: near "CASCADE": syntax error
```
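For reference, a minimal workaround sketch given the maintainer's decision in the comments above: only pass `cascade` when the backend is not SQLite (`database` and `tables` are the objects from the snippet at the top).
``` python
from peewee import SqliteDatabase

drop_kwargs = {'safe': True}
if not isinstance(database, SqliteDatabase):
    drop_kwargs['cascade'] = True  # SQLite has no DROP TABLE ... CASCADE
database.drop_tables(tables, **drop_kwargs)
```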
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/535/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/534 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/534/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/534/comments | https://api.github.com/repos/coleifer/peewee/issues/534/events | https://github.com/coleifer/peewee/issues/534 | 58,578,306 | MDU6SXNzdWU1ODU3ODMwNg== | 534 | Using load_csv on files generated with dump_csv and using a database model doesn't work | {
"login": "MartynBliss",
"id": 1713902,
"node_id": "MDQ6VXNlcjE3MTM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1713902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MartynBliss",
"html_url": "https://github.com/MartynBliss",
"followers_url": "https://api.github.com/users/MartynBliss/followers",
"following_url": "https://api.github.com/users/MartynBliss/following{/other_user}",
"gists_url": "https://api.github.com/users/MartynBliss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MartynBliss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MartynBliss/subscriptions",
"organizations_url": "https://api.github.com/users/MartynBliss/orgs",
"repos_url": "https://api.github.com/users/MartynBliss/repos",
"events_url": "https://api.github.com/users/MartynBliss/events{/privacy}",
"received_events_url": "https://api.github.com/users/MartynBliss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Nice catch, thanks for letting me know.\n"
] | 2015-02-23T12:15:53 | 2015-02-24T01:09:13 | 2015-02-24T01:09:13 | NONE | null | When the fields are iterated while creating the in-memory database table, primary keys are stripped from the column names, but the row data is not adjusted to match. As a result, field(1) gets the value of field(0), and so on.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/534/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/533 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/533/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/533/comments | https://api.github.com/repos/coleifer/peewee/issues/533/events | https://github.com/coleifer/peewee/issues/533 | 58,495,375 | MDU6SXNzdWU1ODQ5NTM3NQ== | 533 | ArrayField: contains_any returning 'A||' | {
"login": "havannavar",
"id": 1104650,
"node_id": "MDQ6VXNlcjExMDQ2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1104650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/havannavar",
"html_url": "https://github.com/havannavar",
"followers_url": "https://api.github.com/users/havannavar/followers",
"following_url": "https://api.github.com/users/havannavar/following{/other_user}",
"gists_url": "https://api.github.com/users/havannavar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/havannavar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/havannavar/subscriptions",
"organizations_url": "https://api.github.com/users/havannavar/orgs",
"repos_url": "https://api.github.com/users/havannavar/repos",
"events_url": "https://api.github.com/users/havannavar/events{/privacy}",
"received_events_url": "https://api.github.com/users/havannavar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm not sure... the code is tested and here is a sample script that runs fine on my computer:\n\n``` python\n#!/usr/bin/env python\n\nimport logging\n\nfrom peewee import *\nfrom peewee import create_model_tables\nfrom playhouse.postgres_ext import *\n\n\ndb = PostgresqlExtDatabase('peewee_test')\n\nclass BaseModel(Model):\n class Meta:\n database = db\n\nclass BlogPost(BaseModel):\n title = TextField()\n tags = ArrayField(TextField)\n\n\ndef main():\n db.create_tables([BlogPost], True)\n BlogPost.delete().execute()\n BlogPost.create(\n title='awesome',\n tags=['foo', 'bar', 'baz'])\n BlogPost.create(\n title='bad',\n tags=['foo', 'bax', 'baz'])\n for b in BlogPost.select().where(BlogPost.tags.contains_any('python', 'bar')):\n print b.title\n\nif __name__ == '__main__':\n main()\n```\n\nPrints\n\n```\nawesome\n```\n\nYou're using `playhouse.postgres_ext.PostgresqlExtDatabase`, right?\n",
"yeah thats the reason, i was using \n\nplayhouse.pool.PooledPostgresqlDatabase my bad didn't check the what Postgres class it refers to \nnow using \nplayhouse.pool.PooledPostgresqlExtDatabase\n\nMany thanks..\n",
"Charles,\n\nwhen i enter query like the below,\nBlogPost.select().where(BlogPost.tags.contains_any('bax', 'bar'))\n\nI get only one result, instead it should be 2, i saw in peewee generated sql \n\nWHERE (\"t1\".\"TAGS\" && %s) LIMIT 1 [<playhouse.postgres_ext._Array object at 0x1128dbe10>]\n\nwhat could be the reason?\n",
"Are you calling `.get()`, or why is there a `LIMIT 1` on the query?\n",
"```\nclass BlogPost(BaseModel):\n title = TextField()\n tags = ArrayField(TextField)\n name = TextField()\n\ndef main():\n db.create_tables([BlogPost], True)\n BlogPost.delete().execute()\n BlogPost.create(\n title='awesome',\n name='george',\n tags=['foo', 'bar', 'baz'])\n BlogPost.create(\n title='bad',\n name='george',\n tags=['foo', 'bax', 'baz'])\nBlogPost.select().where(name='george' and BlogPost.tags.contains_any('python', 'bar'))\n```\n\nI m not calling .get(), i don't have an idea why there is LIMIT 1 on the query?\n\nI just tried \n\nBlogPost.select().where(name='george' and BlogPost.tags.contains_any('bar'))\n\nStill the same result\n",
"Bro, take like 2 seconds and see how to format the code in your comments.\n",
"`BlogPost.select().where(name='george' and BlogPost.tags.contains_any('python', 'bar'))`\n\nThat is invalid. You want instead:\n\n``` python\nBlogPost.select().where(\n (BlogPost.name == 'george') &\n (BlogPost.tags.contains_any('python', 'bar'))\n```\n\nCheck the docs for more examples: http://docs.peewee-orm.com/en/latest/peewee/querying.html#query-operators\n",
"I did the format :+1: \n\nAs i said earlier \n\n```\n BlogPost.select().where(BlogPost.tags.contains_any('bar')) \n```\n\nEven for the above query able to retrieve only one result, it is not retrieving multiple\n",
"I'm not sure what's going on. This code works for me:\n\n``` python\n#!/usr/bin/env python\n\nimport logging\n\nfrom peewee import *\nfrom peewee import create_model_tables\nfrom playhouse.postgres_ext import *\n\n\ndb = PostgresqlExtDatabase('peewee_test')\n\nclass BaseModel(Model):\n class Meta:\n database = db\n\nclass BlogPost(BaseModel):\n title = TextField()\n tags = ArrayField(TextField)\n\n\ndef main():\n db.create_tables([BlogPost], True)\n BlogPost.delete().execute()\n BlogPost.create(\n title='awesome',\n tags=['foo', 'bar', 'baz'])\n BlogPost.create(\n title='radical',\n tags=['python', 'foo', 'bla'])\n BlogPost.create(\n title='bad',\n tags=['foo', 'bax', 'baz'])\n q = BlogPost.select().where(BlogPost.tags.contains_any('python', 'bar'))\n print q\n for b in q:\n print b.title\n\nif __name__ == '__main__':\n main()\n```\n\nOutput:\n\n```\n<class '__main__.BlogPost'> SELECT \"t1\".\"id\", \"t1\".\"title\", \"t1\".\"tags\" FROM \"blogpost\" AS t1 WHERE (\"t1\".\"tags\" && %s) [<playhouse.postgres_ext._Array object at 0x7f547afbd310>]\nawesome\nradical\n```\n",
"Wonder !!\nI didn't found out the exact reason for the above issue, but after restarting my IDE and posgresql , everything starts working as expected, could be caching issues??\n\nThanks Charles once again !!\nI may come up with more issues, as i m new to peewee and in hurry to release our product.\n"
] | 2015-02-22T08:32:38 | 2015-02-23T15:39:17 | 2015-02-23T02:07:44 | NONE | null | Here is my Model
``` python
class BlogPost(BaseModel):
    user = TextField()
    content = TextField()
    tags = ArrayField(TextField)
```
The data in the database is `BlogPost(user='abc', content='awesome', tags=['foo', 'bar', 'baz'])`, and when I execute:
``` python
BlogPost.select().where(BlogPost.tags.contains_any('python', 'bar'))
```
I always get the error 'A||'. Am I missing something here?
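For reference, a minimal sketch of the resolution from the comments above: `ArrayField` needs one of the `*ExtDatabase` classes, so with pooling that means `PooledPostgresqlExtDatabase` rather than `PooledPostgresqlDatabase` (connection parameters below are placeholders).
``` python
from playhouse.pool import PooledPostgresqlExtDatabase

db = PooledPostgresqlExtDatabase(
    'testdb',         # placeholder database name
    user='postgres',  # placeholder credentials
    max_connections=32,
    stale_timeout=600)
```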
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/533/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/532 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/532/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/532/comments | https://api.github.com/repos/coleifer/peewee/issues/532/events | https://github.com/coleifer/peewee/issues/532 | 58,453,118 | MDU6SXNzdWU1ODQ1MzExOA== | 532 | MySQL full text search | {
"login": "leebrooks0",
"id": 2501773,
"node_id": "MDQ6VXNlcjI1MDE3NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2501773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leebrooks0",
"html_url": "https://github.com/leebrooks0",
"followers_url": "https://api.github.com/users/leebrooks0/followers",
"following_url": "https://api.github.com/users/leebrooks0/following{/other_user}",
"gists_url": "https://api.github.com/users/leebrooks0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leebrooks0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leebrooks0/subscriptions",
"organizations_url": "https://api.github.com/users/leebrooks0/orgs",
"repos_url": "https://api.github.com/users/leebrooks0/repos",
"events_url": "https://api.github.com/users/leebrooks0/events{/privacy}",
"received_events_url": "https://api.github.com/users/leebrooks0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Check out http://docs.peewee-orm.com/en/latest/peewee/querying.html#adding-user-defined-operators\n",
"@coleifer Is there a way to add the FULLTEXT index on the model?",
"Looks like MySQL syntax is `create fulltext index ...`, which is not supported by peewee's out-of-the-box schema manager. Your best bet is to just run it as a one-off query:\r\n\r\n```\r\ndb = MySQLDatabase(...)\r\n\r\ndef create_schema():\r\n db.create_tables(list_of_models)\r\n db.execute_sql('CREATE FULLTEXT INDEX ...')\r\n```",
"@coleifer the above link is expired. I can't see the way to implement full text search in mysql",
"http://docs.peewee-orm.com/en/latest/peewee/query_operators.html#adding-user-defined-operators"
] | 2015-02-21T09:57:48 | 2019-06-18T19:54:50 | 2015-02-23T02:02:20 | NONE | null | Is there any support for MySQL full-text search? I see you can call fn.any_function(), but I am not sure how to use that with MATCH ... AGAINST.
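For reference, besides the user-defined-operator approach linked in the comments above, one hedged sketch is a raw SQL fragment in the WHERE clause; `Post`, its columns, and a FULLTEXT index covering them are assumptions, and the placeholder handling of `SQL()` should be double-checked against your driver.
``` python
from peewee import SQL

query = (Post
         .select()
         .where(SQL('MATCH(title, body) AGAINST(%s)', 'search terms')))
```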
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/532/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/531 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/531/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/531/comments | https://api.github.com/repos/coleifer/peewee/issues/531/events | https://github.com/coleifer/peewee/issues/531 | 58,289,598 | MDU6SXNzdWU1ODI4OTU5OA== | 531 | [Feature] Ability to explicitly switch db connection per query | {
"login": "alexlatchford",
"id": 628146,
"node_id": "MDQ6VXNlcjYyODE0Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/628146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexlatchford",
"html_url": "https://github.com/alexlatchford",
"followers_url": "https://api.github.com/users/alexlatchford/followers",
"following_url": "https://api.github.com/users/alexlatchford/following{/other_user}",
"gists_url": "https://api.github.com/users/alexlatchford/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexlatchford/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexlatchford/subscriptions",
"organizations_url": "https://api.github.com/users/alexlatchford/orgs",
"repos_url": "https://api.github.com/users/alexlatchford/repos",
"events_url": "https://api.github.com/users/alexlatchford/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexlatchford/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"PS. My current solution is something like:\n\n``` python\nquery = MyModel.select()\nquery.database = process_db_conn\nresult = query.execute()\n```\n\nWorks just about, having some teething problems with connections hanging around too though but think it's psycopg2 rather than peewee :)\n",
"Thanks for the feature request, seems like a good one. There's a little bit of prior art in the `playhouse.test_utils` module for using a separate database in a context manager for testing. \n",
"Added a new `Using` context manager to the `playhouse.read_slave` module. Still needs docs, but you can find examples in the tests.\n",
"Awesome! Cheers Charles, I've put in a ticket into our system to move over to using this I'll let you know once we're using it in LIVE :)\n",
"Yikes, good luck!\n"
] | 2015-02-19T23:47:16 | 2015-03-02T14:41:45 | 2015-02-24T05:44:47 | CONTRIBUTOR | null | Currently models are defined globally and you have to specify the database in the metaclass; I'd like a way to switch the db connection explicitly. I understand that the execution_context decorator exists, but it still only works with a single peewee connection pool (i.e. threads).
For my use case I've created a manager class that creates new connection pools per process, to get around psycopg2's limitations in that respect. This works great, but it means I have to hook into private member variables to override the database attribute on Query objects to specify the correct connection (or it'll use the parent/global one that I specified when defining the module).
I'm thinking of a syntax something like this:
``` python
from peewee import *
from multiprocessing import Process
global_db_conn = PostgresqlDatabase(...)
class MyModel(Model):
name = CharField()
class Meta:
database = global_db_conn
MyModel.create_table()
def run():
process_db_conn = PostgresqlDatabase(...)
my_process_obj = MyModel.db(process_db_conn).get() # Or select/update/delete etc.
my_proc = Process(target=run)
my_proc.start()
my_proc.join()
```
Not sure if I've explained the problem or covered all the bases or whether there is a nicer solution but this is certainly a problem I'd love to upvote! (If I get a vote that is)
Thanks,
Alex
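For reference, a rough sketch of how the `Using` context manager mentioned in the comments above could cover this use case, building on the definitions in the snippet above; the module path comes from that comment and the signature is an assumption, so check the tests/docs.
``` python
from playhouse.read_slave import Using  # module path per the comment above

def run():
    process_db_conn = PostgresqlDatabase('mydb')  # placeholder connection
    with Using(process_db_conn, [MyModel]):  # assumed signature: Using(db, models)
        my_process_obj = MyModel.get()
```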
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/531/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/530 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/530/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/530/comments | https://api.github.com/repos/coleifer/peewee/issues/530/events | https://github.com/coleifer/peewee/issues/530 | 57,900,570 | MDU6SXNzdWU1NzkwMDU3MA== | 530 | Support for pysqlcipher3 | {
"login": "tfeldmann",
"id": 385566,
"node_id": "MDQ6VXNlcjM4NTU2Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/385566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tfeldmann",
"html_url": "https://github.com/tfeldmann",
"followers_url": "https://api.github.com/users/tfeldmann/followers",
"following_url": "https://api.github.com/users/tfeldmann/following{/other_user}",
"gists_url": "https://api.github.com/users/tfeldmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tfeldmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tfeldmann/subscriptions",
"organizations_url": "https://api.github.com/users/tfeldmann/orgs",
"repos_url": "https://api.github.com/users/tfeldmann/repos",
"events_url": "https://api.github.com/users/tfeldmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/tfeldmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2015-02-17T09:20:50 | 2015-02-18T22:15:37 | 2015-02-18T22:15:37 | NONE | null | The sqlcipher_ext in playhouse uses pysqlcipher, which is only compatible with Python 2.7.x.
David Riggleman recently forked this module to work with both Python 2 + 3: https://github.com/rigglemania/pysqlcipher3
It would be great if peewee used this module to be able to work with encrypted databases in Python 3.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/530/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/529 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/529/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/529/comments | https://api.github.com/repos/coleifer/peewee/issues/529/events | https://github.com/coleifer/peewee/issues/529 | 57,900,087 | MDU6SXNzdWU1NzkwMDA4Nw== | 529 | I cannot explicitly create the storage engine of MySQL tables | {
"login": "kevinisaac",
"id": 4241767,
"node_id": "MDQ6VXNlcjQyNDE3Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4241767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kevinisaac",
"html_url": "https://github.com/kevinisaac",
"followers_url": "https://api.github.com/users/kevinisaac/followers",
"following_url": "https://api.github.com/users/kevinisaac/following{/other_user}",
"gists_url": "https://api.github.com/users/kevinisaac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kevinisaac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevinisaac/subscriptions",
"organizations_url": "https://api.github.com/users/kevinisaac/orgs",
"repos_url": "https://api.github.com/users/kevinisaac/repos",
"events_url": "https://api.github.com/users/kevinisaac/events{/privacy}",
"received_events_url": "https://api.github.com/users/kevinisaac/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I don't have any plans to add this functionality, but will accept pull-requests.\n"
] | 2015-02-17T09:15:17 | 2015-02-24T03:32:48 | 2015-02-24T03:32:48 | NONE | null | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/529/timeline | null | completed | null | null |
|
https://api.github.com/repos/coleifer/peewee/issues/528 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/528/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/528/comments | https://api.github.com/repos/coleifer/peewee/issues/528/events | https://github.com/coleifer/peewee/issues/528 | 57,713,212 | MDU6SXNzdWU1NzcxMzIxMg== | 528 | Keyword URLs supported in database connection? | {
"login": "SamuelMarks",
"id": 807580,
"node_id": "MDQ6VXNlcjgwNzU4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/807580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamuelMarks",
"html_url": "https://github.com/SamuelMarks",
"followers_url": "https://api.github.com/users/SamuelMarks/followers",
"following_url": "https://api.github.com/users/SamuelMarks/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelMarks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamuelMarks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelMarks/subscriptions",
"organizations_url": "https://api.github.com/users/SamuelMarks/orgs",
"repos_url": "https://api.github.com/users/SamuelMarks/repos",
"events_url": "https://api.github.com/users/SamuelMarks/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamuelMarks/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#db-url\nOn Feb 14, 2015 5:12 PM, \"Samuel Marks\" [email protected] wrote:\n\n> Looking through your examples, looks like I need seperated values rather\n> than a single URL.\n> \n> So going off that:\n> \n> from os import environ\n> from operator import add\n> \n> environ.setdefault('HEROKU_POSTGRESQL_AMBER', 'postgres://username:password@hostname:PORT/dbname')\n> \n> username, password, hostname, PORT, dbname = reduce(add,\n> reduce(add,\n> map(lambda e: map(lambda i: i.split('/'), e),\n> map(lambda elem: elem.split(':'),\n> environ['HEROKU_POSTGRESQL_AMBER'][\n> environ['HEROKU_POSTGRESQL_AMBER'].find(\n> '//') + 2:].split('@')))))\n> \n> I can then do:\n> \n> from playhouse.postgres_ext import PostgresqlExtDatabase\n> \n> psql_db = PostgresqlExtDatabase(dbname, user=username)\n> \n> Unfortunately PostgresqlExtDatabase.**init** isn't documents and just\n> uses _args and *_kwargs everywhere.\n> \n> _How do I provide all these details to the initialiser?_\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/coleifer/peewee/issues/528.\n",
"You can use the db_url module or you can check the documentation on psycopg2. Besides the name of the database, all the other args are passed directly to psycopg2.\n",
"Thanks, that worked.\n"
] | 2015-02-15T01:12:41 | 2015-02-23T22:02:42 | 2015-02-23T02:08:13 | NONE | null | Looking through your examples, it looks like I need separate values rather than a single URL.
So going off that:
```
from os import environ
from operator import add
environ.setdefault('HEROKU_POSTGRESQL_AMBER', 'postgres://username:password@hostname:PORT/dbname')
username, password, hostname, PORT, dbname = reduce(
add, reduce(add,
map(lambda e: map(lambda i: i.split('/'), e),
map(lambda elem: elem.split(':'),
(lambda e: e[e.find('//') + 2:].split('@'))(
environ['HEROKU_POSTGRESQL_AMBER']
)))))
```
I can then do:
```
from playhouse.postgres_ext import PostgresqlExtDatabase
psql_db = PostgresqlExtDatabase(dbname, user=username)
```
Unfortunately `PostgresqlExtDatabase.__init__` isn't documented and just uses `*args` and `**kwargs` everywhere.
**How do I provide all these details to the initialiser?**
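For reference, the `db_url` helper linked in the comments above makes the manual URL parsing unnecessary; a minimal sketch (if you specifically need `PostgresqlExtDatabase`, `db_url` also has an ext-flavored scheme; check its docs for the exact name):
``` python
from os import environ
from playhouse.db_url import connect

psql_db = connect(environ['HEROKU_POSTGRESQL_AMBER'])
```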
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/528/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/528/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/527 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/527/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/527/comments | https://api.github.com/repos/coleifer/peewee/issues/527/events | https://github.com/coleifer/peewee/issues/527 | 57,655,481 | MDU6SXNzdWU1NzY1NTQ4MQ== | 527 | Cannot create tables with circular foreign key dependencies with SQLite | {
"login": "bfontaine",
"id": 1334295,
"node_id": "MDQ6VXNlcjEzMzQyOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1334295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bfontaine",
"html_url": "https://github.com/bfontaine",
"followers_url": "https://api.github.com/users/bfontaine/followers",
"following_url": "https://api.github.com/users/bfontaine/following{/other_user}",
"gists_url": "https://api.github.com/users/bfontaine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bfontaine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bfontaine/subscriptions",
"organizations_url": "https://api.github.com/users/bfontaine/orgs",
"repos_url": "https://api.github.com/users/bfontaine/repos",
"events_url": "https://api.github.com/users/bfontaine/events{/privacy}",
"received_events_url": "https://api.github.com/users/bfontaine/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> SQLite doesn’t support the ALTER TABLE … ADD CONSTRAINT syntax (source), which is why there’s a syntax error here.\n\nGood point. I think I will leave this out as anyone using SQLite probably should already be aware of the limited support for alter table. And circular FKs are a bad design!\n"
] | 2015-02-13T21:19:24 | 2015-02-17T22:47:52 | 2015-02-17T22:47:52 | CONTRIBUTOR | null | TL;DR: SQLite doesn’t support the `ALTER TABLE … ADD CONSTRAINT` syntax used by `db.create_foreign_key`.
---
Hello,
I’m trying to store genealogical data in an SQLite database with the following simplified models:
``` python
FamilyProxy = Proxy()
class Person(Model):
# the family where this person is a child in
family = ForeignKeyField(FamilyProxy, related_name='children')
class Family(Model):
father = ForeignKeyField(Person, related_name='father_in')
mother = ForeignKeyField(Person, related_name='mother_in')
FamilyProxy.initialize(Family)
```
The [doc about circular dependencies](http://peewee.readthedocs.org/en/latest/peewee/models.html#circular-foreign-key-dependencies) mentions the `Proxy` workaround and at the end shows an example of what we need to do when creating tables, which gives this for my example:
``` py
db.create_tables([Person, Family], safe=True)
db.create_foreign_key(Person, Person.family)
```
And here lies the problem: the `create_foreign_key` method called here gives a syntax error with SQLite:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "mymodule/store.py", line 68, in create_tables
db.create_foreign_key(Person, Person.family)
File "/…/lib/python2.7/site-packages/peewee.py", line 2974, in create_foreign_key
model_class, field, constraint))
File "/…/lib/python2.7/site-packages/peewee.py", line 2877, in execute_sql
self.commit()
File "/…/lib/python2.7/site-packages/peewee.py", line 2732, in __exit__
reraise(new_type, new_type(*exc_value.args), traceback)
File "/…/lib/python2.7/site-packages/peewee.py", line 2869, in execute_sql
cursor.execute(sql, params or ())
peewee.OperationalError: near "CONSTRAINT": syntax error
```
I added a `print` statement in the code to understand the syntax error:
``` sql
ALTER TABLE "person" ADD CONSTRAINT "fk_person_family_id_refs_family" FOREIGN KEY ("family_id") REFERENCES "family" ("id")
```
SQLite doesn’t support the `ALTER TABLE … ADD CONSTRAINT` syntax ([source](http://www.sqlite.org/omitted.html)), which is why there’s a syntax error here.
---
I can rewrite my code to avoid circular dependencies (with more complex queries), but you might want to add a notice in the documentation about this issue with SQLite.
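For reference, one hedged workaround sketch for SQLite is to keep a plain integer column on one side of the cycle (so no `ALTER TABLE` is ever needed) and resolve it in Python, at the cost of losing the `ForeignKeyField` conveniences; this builds on the models above.
``` python
class Person(Model):
    # plain column instead of a ForeignKeyField, so no DB-level constraint is required
    family_id = IntegerField(null=True)

    def family(self):
        if self.family_id is not None:
            return Family.get(Family.id == self.family_id)
```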
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/527/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/526 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/526/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/526/comments | https://api.github.com/repos/coleifer/peewee/issues/526/events | https://github.com/coleifer/peewee/pull/526 | 57,633,275 | MDExOlB1bGxSZXF1ZXN0MjkyODU4ODM= | 526 | Typo fixed in querying.rst | {
"login": "bfontaine",
"id": 1334295,
"node_id": "MDQ6VXNlcjEzMzQyOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1334295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bfontaine",
"html_url": "https://github.com/bfontaine",
"followers_url": "https://api.github.com/users/bfontaine/followers",
"following_url": "https://api.github.com/users/bfontaine/following{/other_user}",
"gists_url": "https://api.github.com/users/bfontaine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bfontaine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bfontaine/subscriptions",
"organizations_url": "https://api.github.com/users/bfontaine/orgs",
"repos_url": "https://api.github.com/users/bfontaine/repos",
"events_url": "https://api.github.com/users/bfontaine/events{/privacy}",
"received_events_url": "https://api.github.com/users/bfontaine/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks!\n"
] | 2015-02-13T18:09:26 | 2015-02-17T22:46:20 | 2015-02-17T22:46:17 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/526/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/526",
"html_url": "https://github.com/coleifer/peewee/pull/526",
"diff_url": "https://github.com/coleifer/peewee/pull/526.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/526.patch",
"merged_at": "2015-02-17T22:46:17"
} |
|
https://api.github.com/repos/coleifer/peewee/issues/525 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/525/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/525/comments | https://api.github.com/repos/coleifer/peewee/issues/525/events | https://github.com/coleifer/peewee/issues/525 | 57,441,369 | MDU6SXNzdWU1NzQ0MTM2OQ== | 525 | Issue: integration with PostgreSQL | {
"login": "havannavar",
"id": 1104650,
"node_id": "MDQ6VXNlcjExMDQ2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1104650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/havannavar",
"html_url": "https://github.com/havannavar",
"followers_url": "https://api.github.com/users/havannavar/followers",
"following_url": "https://api.github.com/users/havannavar/following{/other_user}",
"gists_url": "https://api.github.com/users/havannavar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/havannavar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/havannavar/subscriptions",
"organizations_url": "https://api.github.com/users/havannavar/orgs",
"repos_url": "https://api.github.com/users/havannavar/repos",
"events_url": "https://api.github.com/users/havannavar/events{/privacy}",
"received_events_url": "https://api.github.com/users/havannavar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"For future reference please take the time to format your code.\n",
"This is the issue:\n\n``` python\nclass BaseModel(database.Model):\n class Meta:\n database = database\n```\n\nSetting the `database` to the `FlaskDB` wrapper will not work because peewee models expect the `Meta.database` attribute to be a _peewee_ `Database` subclass. So you should change your code to:\n\n``` python\nclass BaseModel(database.Model):\n pass\n```\n\nAlso, you don't technically need a `BaseModel` because that's effectively what `database.Model` is.\n",
"thanks it works\n"
] | 2015-02-12T10:20:01 | 2015-02-13T05:06:28 | 2015-02-12T13:44:49 | NONE | null | Hi Charles,
I'm trying to integrate playhouse.flask_utils.FlaskDB with PostgreSQL, but I am unable to do so.
Here is my code and the exception:
``` python
DATABASE = {
'name': 'testdb',
'engine': 'playhouse.pool.PooledPostgresqlDatabase',
'user': 'sats',
'max_connections': 32,
'stale_timeout': 600,
}
app = Flask(__name__)
app.config.from_object(__name__)
database = FlaskDB(app)
class BaseModel(database.Model):
class Meta:
database = database
class User(BaseModel):
class Meta:
db_table = 'user_auth'
id = IntegerField(primary_key=True)
username = CharField(null=True)
password = CharField(null=True)
@app.route('/api/newuser', methods=['POST'])
def new_user():
username = request.json.get('username')
password = request.json.get('password')
if username is None or password is None:
abort(400) # missing arguments
return 'Either username or password is null'
try:
if User.get(username=username) is not None:
return 'user exists' # existing user
except Exception:
user = User(username=username,id=101,password=password)
User.save(user)
#user = User.get(username=username)
```
And the exception is AttributeError: 'FlaskDB' object has no attribute 'rows_affected'
Full stacktrace:
```
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/flask/app.py", line 1836, in __call__
return self.wsgi_app(environ, start_response)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1820, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/Library/Python/2.7/site-packages/flask/app.py", line 1403, in handle_exception
reraise(exc_type, exc_value, tb)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "/Library/Python/2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/Library/Python/2.7/site-packages/flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/WorkSpace/python/flask-peewee-rest/src/testflaskutildb.py", line 99, in new_user
User.save(user)
File "/Library/Python/2.7/site-packages/peewee.py", line 3887, in save
rows = self.update(**field_dict).where(self._pk_expr()).execute()
File "/Library/Python/2.7/site-packages/peewee.py", line 2614, in execute
return self.database.rows_affected(self._execute())
AttributeError: 'FlaskDB' object has no attribute 'rows_affected'
```
I have tried different DB configuration options, but it is not working with Postgres.
From the stack trace, I suspect it only supports SQLite.
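For reference, a minimal sketch of the fix described in the comments above applied to this code: don't point `Meta.database` at the `FlaskDB` wrapper; subclass `database.Model` directly (or keep a pass-through base class).
``` python
app = Flask(__name__)
app.config.from_object(__name__)
database = FlaskDB(app)

class User(database.Model):
    class Meta:
        db_table = 'user_auth'
    id = IntegerField(primary_key=True)
    username = CharField(null=True)
    password = CharField(null=True)
```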
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/525/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/524 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/524/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/524/comments | https://api.github.com/repos/coleifer/peewee/issues/524/events | https://github.com/coleifer/peewee/issues/524 | 57,190,493 | MDU6SXNzdWU1NzE5MDQ5Mw== | 524 | Are mysql migrations still not tested? | {
"login": "leebrooks0",
"id": 2501773,
"node_id": "MDQ6VXNlcjI1MDE3NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2501773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leebrooks0",
"html_url": "https://github.com/leebrooks0",
"followers_url": "https://api.github.com/users/leebrooks0/followers",
"following_url": "https://api.github.com/users/leebrooks0/following{/other_user}",
"gists_url": "https://api.github.com/users/leebrooks0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leebrooks0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leebrooks0/subscriptions",
"organizations_url": "https://api.github.com/users/leebrooks0/orgs",
"repos_url": "https://api.github.com/users/leebrooks0/repos",
"events_url": "https://api.github.com/users/leebrooks0/events{/privacy}",
"received_events_url": "https://api.github.com/users/leebrooks0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"No, they are tested now. Thanks!\n",
"Smashing, looking forward to using your ORM!\n\nOn Tue, Feb 10, 2015 at 5:39 PM, Charles Leifer [email protected]\nwrote:\n\n> No, they are tested now. Thanks!\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/coleifer/peewee/issues/524#issuecomment-73720653.\n"
] | 2015-02-10T15:36:23 | 2015-02-10T15:40:43 | 2015-02-10T15:38:46 | NONE | null | The docs say that migrations have not been well tested with MySQL. Is this still correct?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/524/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/523 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/523/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/523/comments | https://api.github.com/repos/coleifer/peewee/issues/523/events | https://github.com/coleifer/peewee/issues/523 | 56,810,865 | MDU6SXNzdWU1NjgxMDg2NQ== | 523 | Cannot store BLOB fields with peewee and PyMySQL | {
"login": "conqp",
"id": 3766192,
"node_id": "MDQ6VXNlcjM3NjYxOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3766192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/conqp",
"html_url": "https://github.com/conqp",
"followers_url": "https://api.github.com/users/conqp/followers",
"following_url": "https://api.github.com/users/conqp/following{/other_user}",
"gists_url": "https://api.github.com/users/conqp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/conqp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conqp/subscriptions",
"organizations_url": "https://api.github.com/users/conqp/orgs",
"repos_url": "https://api.github.com/users/conqp/repos",
"events_url": "https://api.github.com/users/conqp/events{/privacy}",
"received_events_url": "https://api.github.com/users/conqp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> I have the problem, that I cannot store Models having a BlobField with binary content since upgrading to the latest version of peewee 5aea8f6.\n\nWhat was the latest version of peewee that worked for you?\n",
"I am not sure, which one it was. But as I mentioned above this problem occured after simultaneously upgrading peewee and PyMySQL. Since the latest peewee release works with releases of PyMySQL before PyMySQL/PyMySQL@5157c9b I suspect that the changes introduced in that release of PyMySQL regarding surrogates escapes causes this problem. And since this was marked as a fix in PyMySQL, maybe peewee should become compatible with the changes made in PyMySQL.\n",
"What version of Python are you using? Here is some code I wrote to test this out using Python 2.7:\n\n``` python\nfrom peewee import *\n\ndb = MySQLDatabase('peewee_test')\n\nclass TM(Model):\n title = CharField()\n data = BlobField()\n\n class Meta:\n database = db\n\ndef main():\n db.create_tables([TM], True)\n TM.delete().execute()\n data = ''.join(chr(i) for i in range(1, 256))\n tm = TM.create(title='test', data=data)\n tm_db = TM.get(TM.id == tm.id)\n print tm_db.data\n\nif __name__ == '__main__':\n main()\n```\n\nI get the following traceback:\n\n``` python\nTraceback (most recent call last):\n File \"test.py\", line 21, in <module>\n main()\n File \"test.py\", line 16, in main\n tm = TM.create(title='test', data=data)\n File \"/home/media/tmp/mysqlenv/src/peewee/peewee.py\", line 3755, in create\n inst.save(force_insert=True)\n File \"/home/media/tmp/mysqlenv/src/peewee/peewee.py\", line 3890, in save\n pk_from_cursor = self.insert(**field_dict).execute()\n File \"/home/media/tmp/mysqlenv/src/peewee/peewee.py\", line 2685, in execute\n return self.database.last_insert_id(self._execute(), self.model_class)\n File \"/home/media/tmp/mysqlenv/src/peewee/peewee.py\", line 2243, in _execute\n return self.database.execute_sql(sql, params, self.require_commit)\n File \"/home/media/tmp/mysqlenv/src/peewee/peewee.py\", line 2869, in execute_sql\n cursor.execute(sql, params or ())\n File \"/home/media/tmp/mysqlenv/local/lib/python2.7/site-packages/pymysql/cursors.py\", line 133, in execute\n query = query % self._escape_args(args, conn)\n File \"/home/media/tmp/mysqlenv/local/lib/python2.7/site-packages/pymysql/cursors.py\", line 99, in _escape_args\n return tuple(conn.escape(arg) for arg in args)\n File \"/home/media/tmp/mysqlenv/local/lib/python2.7/site-packages/pymysql/cursors.py\", line 99, in <genexpr>\n return tuple(conn.escape(arg) for arg in args)\n File \"/home/media/tmp/mysqlenv/local/lib/python2.7/site-packages/pymysql/connections.py\", line 678, in escape\n return escape_item(obj, self.charset)\n File \"/home/media/tmp/mysqlenv/local/lib/python2.7/site-packages/pymysql/converters.py\", line 24, in escape_item\n encoder = encoders[type(val)]\nKeyError: <type 'buffer'>\n```\n",
"I think that peewee handles this datatype correctly. According to PEP 249, the db-api 2.0 spec:\n\n> The preferred object type for Binary objects are the `buffer` types available in standard Python starting with version 1.5.2. Please see the Python documentation for details. For information about the C interface have a look at Include/bufferobject.h and Objects/bufferobject.c in the Python source distribution.\n\nWhen using a `BlobField`, peewee uses `buffer` in Python2 and `bytes` for Python3.\n",
"I think the best path forward will be to subclass `BlobField` in your app and use custom logic for the `db_value()` function:\n\n``` python\nfrom peewee import *\nfrom pymysql import Binary\n\nclass MyBlobField(BlobField):\n def db_value(self, value):\n if value is not None:\n return Binary(value)\n```\n"
] | 2015-02-06T13:21:27 | 2015-02-11T02:25:40 | 2015-02-11T02:25:40 | CONTRIBUTOR | null | Hi,
I have the problem that I cannot store models that have a BlobField with binary content since upgrading to the latest version of peewee (5aea8f6fa405c896884c71262e0b2ac04d8cdaf9).
I am using a MySQL database with peewee over PyMySQL b2ec8287151f8dc54e71e426b41a1b300fb934bb.
How to reproduce:
1) Install peewee and PyMySQL versions as mentioned above
2) Create a MySQL database with a table containing a BLOB field
3) Create an appropriate peewee.Model
4) Create a new Model instance
5) Set the BlobField to a non-text binary (e.g. a picture file)
6) Try to store the model with model.save()
7) Get UnicodeEncodeError exceptions like:
```
'utf-8' codec can't encode character '\udcff' in position 159: surrogates not allowed in <class 'homie.mods.openimmodb.openimmodb.Anhaenge'>
'utf-8' codec can't encode character '\udcff' in position 160: surrogates not allowed in <class 'homie.mods.openimmodb.openimmodb.Anhaenge'>
'utf-8' codec can't encode character '\udcff' in position 161: surrogates not allowed in <class 'homie.mods.openimmodb.openimmodb.Anhaenge'>
'utf-8' codec can't encode character '\udc89' in position 163: surrogates not allowed in <class 'homie.mods.openimmodb.openimmodb.Anhaenge'>
'utf-8' codec can't encode character '\udcc7' in position 185: surrogates not allowed in <class 'homie.mods.openimmodb.openimmodb.Anhaenge'>
```
Thanks for looking into it.
PS: peewee does not seem compatible with the surrogate escapes introduced in PyMySQL 5157c9b7d058f764695a35684b599e0cbac22d9c:
https://github.com/PyMySQL/PyMySQL/commit/5157c9b7d058f764695a35684b599e0cbac22d9c
Since this was a **fix** in PyMySQL, I assume that peewee needs to adjust for compatibility here, or am I wrong?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/523/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/522 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/522/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/522/comments | https://api.github.com/repos/coleifer/peewee/issues/522/events | https://github.com/coleifer/peewee/issues/522 | 56,523,109 | MDU6SXNzdWU1NjUyMzEwOQ== | 522 | Inserting record into link table throws OperationalError | {
"login": "rotsj",
"id": 866141,
"node_id": "MDQ6VXNlcjg2NjE0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/866141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rotsj",
"html_url": "https://github.com/rotsj",
"followers_url": "https://api.github.com/users/rotsj/followers",
"following_url": "https://api.github.com/users/rotsj/following{/other_user}",
"gists_url": "https://api.github.com/users/rotsj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rotsj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rotsj/subscriptions",
"organizations_url": "https://api.github.com/users/rotsj/orgs",
"repos_url": "https://api.github.com/users/rotsj/repos",
"events_url": "https://api.github.com/users/rotsj/events{/privacy}",
"received_events_url": "https://api.github.com/users/rotsj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The problem is the call to `project_team.save()`. Because this model has a non-auto-incrementing primary key, if you are `INSERT`-ing a row you need to call `save(force_insert=True)`.\n\nhttp://docs.peewee-orm.com/en/latest/peewee/models.html#id3\n",
"Works like a charm, but I guess you'd expect that. Thanks!\n"
] | 2015-02-04T12:57:52 | 2015-02-05T08:03:53 | 2015-02-04T15:57:46 | NONE | null | I am trying to insert a record into a link table, but an OperationalError is thrown. I believe my model definitions are correct, but any feedback is welcome.
Output:
```
Traceback (most recent call last):
File "permDb.py", line 129, in <module>
main()
File "permDb.py", line 22, in main
setup()
File "permDb.py", line 85, in setup
convertConfig()
File "permDb.py", line 118, in convertConfig
project_team.save()
File "/home/xrogder/scripts/peewee.py", line 3887, in save
rows = self.update(**field_dict).where(self._pk_expr()).execute()
File "/home/xrogder/scripts/peewee.py", line 2614, in execute
return self.database.rows_affected(self._execute())
File "/home/xrogder/scripts/peewee.py", line 2243, in _execute
return self.database.execute_sql(sql, params, self.require_commit)
File "/home/xrogder/scripts/peewee.py", line 2877, in execute_sql
self.commit()
File "/home/xrogder/scripts/peewee.py", line 2732, in __exit__
reraise(new_type, new_type(*exc_value.args), traceback)
File "/home/xrogder/scripts/peewee.py", line 2869, in execute_sql
cursor.execute(sql, params or ())
peewee.OperationalError: near "WHERE": syntax error
```
My models are:
``` python
database = peewee.SqliteDatabase("wee.db")
class Project(peewee.Model):
name = peewee.CharField(unique=True)
class Meta:
database = database
class Team(peewee.Model):
name = peewee.CharField(unique=True)
class Meta:
database = database
class Project_Team(peewee.Model):
project = peewee.ForeignKeyField(Project)
team = peewee.ForeignKeyField(Team)
class Meta:
database = database
primary_key = peewee.CompositeKey('project', 'team')
```
The code that's executing the insert:
``` python
for project in Project.select():
new_team = Team()
new_team.name = team
try:
new_team.save()
project_team = Project_Team()
#print "project id ", project.id
#print "team id ", new_team.id
project_team.project = project.id
project_team.team = new_team.id
project_team.save()
except peewee.IntegrityError:
# Team already exists
pass
```
The commented out print statements have shown me that both values exist. In this case, we've just created the first record in the Team table and are now trying to link it to the first project. (1,1)
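For reference, a minimal sketch of the fix from the comments above: because `Project_Team` uses a composite (non-auto-incrementing) primary key, the row has to be inserted explicitly.
``` python
project_team = Project_Team(project=project.id, team=new_team.id)
project_team.save(force_insert=True)  # INSERT instead of UPDATE ... WHERE <pk>
```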
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/522/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/521 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/521/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/521/comments | https://api.github.com/repos/coleifer/peewee/issues/521/events | https://github.com/coleifer/peewee/issues/521 | 56,444,916 | MDU6SXNzdWU1NjQ0NDkxNg== | 521 | [Question] Is there a way to get the display instead of the value for a field with a choices parameter associated with it? | {
"login": "EPadronU",
"id": 829947,
"node_id": "MDQ6VXNlcjgyOTk0Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/829947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EPadronU",
"html_url": "https://github.com/EPadronU",
"followers_url": "https://api.github.com/users/EPadronU/followers",
"following_url": "https://api.github.com/users/EPadronU/following{/other_user}",
"gists_url": "https://api.github.com/users/EPadronU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EPadronU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EPadronU/subscriptions",
"organizations_url": "https://api.github.com/users/EPadronU/orgs",
"repos_url": "https://api.github.com/users/EPadronU/repos",
"events_url": "https://api.github.com/users/EPadronU/events{/privacy}",
"received_events_url": "https://api.github.com/users/EPadronU/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It's just Python, there's no magic there. One easy way to accomplish this is to write something like this:\n\n``` python\n\nclass Tweet(Model):\n STATUS_CHOICES = (\n (0, 'Draft'),\n (1, 'Published'),\n (9, 'Deleted'))\n status = IntegerField(choices=STATUS_CHOICES)\n\n def get_status_label(self):\n return dict(self.STATUS_CHOICES)[self.status]\n```\n",
"Thank you very much.\n",
"Hello everyone,\r\nand how to display labels in select query? (I need display labels later in jinja template)\r\n```python\r\n# some view\r\norders = Order.select(Order.id, Order.date,\r\n Order.status.get_status_label()).order_by(Order.date)\r\n\r\n# some jinja template\r\n{% for order in orders %}\r\n{{ order.id }} {{ order.status }}\r\n{% endfor %}\r\n```\r\nThanks",
"If you want it as part of the SQL you probably want a `CASE` statement mapping integer statuses to the labels. See the `Case()` helper in the docs: http://docs.peewee-orm.com/en/latest/peewee/api.html#Case\r\n\r\nAlternatively, you could avoid \"select\"-ing a label and just calculate it in Python:\r\n\r\n```python\r\n# some view\r\n# note we select the status column\r\norders = Order.select(Order.id, Order.date, Order.status).order_by(Order.date)\r\n\r\n# some jinja template\r\n{% for order in orders %}\r\n {# status column is populated with the status value, so we can use the helper method to get label #}\r\n {{ order.id }} {{ order.get_status_label() }}\r\n{% endfor %}\r\n```",
"Ok, your solution works. Before it I have tried:\r\n\r\n```python\r\n{% for order in orders %}\r\n {# status column is populated with the status value, so we can use the helper method to get label #}\r\n {{ order.id }} {{ order.status.get_status_label() }}\r\n{% endfor %}\r\n```\r\n...but this not work. Your answer works as I expected.\r\n\r\nThank you very much and thanks for your work on peewee.\r\n"
] | 2015-02-03T22:05:17 | 2018-03-14T14:35:35 | 2015-02-03T22:13:57 | NONE | null | Hello, everyone.
In the following code:
```
class Model(db.Model):
    field = pw.CharField(
        choices=[(1, 'foo')]
    )
    def __unicode__(self):
        return self.<api call I don't know about>
```
I would like to print **foo** instead of **1**. Does the ORM provide a way to achieve this?
Thanks in advance.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/521/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/520 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/520/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/520/comments | https://api.github.com/repos/coleifer/peewee/issues/520/events | https://github.com/coleifer/peewee/issues/520 | 56,436,201 | MDU6SXNzdWU1NjQzNjIwMQ== | 520 | Not sure why: OperationalError: near "AS": syntax error | {
"login": "kristofer",
"id": 17994,
"node_id": "MDQ6VXNlcjE3OTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/17994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kristofer",
"html_url": "https://github.com/kristofer",
"followers_url": "https://api.github.com/users/kristofer/followers",
"following_url": "https://api.github.com/users/kristofer/following{/other_user}",
"gists_url": "https://api.github.com/users/kristofer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kristofer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kristofer/subscriptions",
"organizations_url": "https://api.github.com/users/kristofer/orgs",
"repos_url": "https://api.github.com/users/kristofer/repos",
"events_url": "https://api.github.com/users/kristofer/events{/privacy}",
"received_events_url": "https://api.github.com/users/kristofer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It would be really helpful if you could paste the Python code that generated the `File` query.\n",
"The problem appears to be in how you are sorting your files. Maybe try modifying it to be:\n\n``` python\nfiles = File.select().where(File.parent == '/').order_by(File.folder.desc(), File.filename)\n```\n",
"Yes, this was the line that created the files\n\n```\n files = File.filter(parent=session['folder']).order_by((File,'folder','DESC'),(File,'filename'))\n```\n",
"yep, I replace mine with yours and it works. Thanks for your help.\n",
"No problem! I think you were using the old pre-1.0 syntax. You might check http://peewee.readthedocs.org/en/2.0.2/peewee/upgrading.html#upgrading if you run into similar issues.\n",
"but then, real quick, how would I change\n\n```\nfoo = File.filter(filename__icontains=someSubstring).order_by(File.folder.desc(), File.filename)\n```\n\nI keep getting a **AttributeError: type object 'File' has no attribute 'icontains'**\nas you might guess it's a search term.\n",
"``` python\nfoo = File.select().where(\n File.filename.contains(someSubstring)).order_by(File.folder.desc(), File.filename)\n```\n\nThis is all documented: http://docs.peewee-orm.com/en/latest/peewee/querying.html#query-operators\n",
"yep, that's handy. Thanks again.\n"
] | 2015-02-03T20:59:22 | 2015-02-03T23:29:38 | 2015-02-03T22:10:43 | NONE | null | So I am getting a syntax error deep inside of peewee.
I'm trying to run the "encrypted-flask-aws" app, pretty much unmodified.
When I do a GET on the root of the app, I get:
```
...venv/lib/python2.7/site-packages/peewee.py", line 2855, in execute_sql
cursor.execute(sql, params or ())
OperationalError: near "AS": syntax error
```
Just before this error, it runs this...
```
return render_template('index.html', files=files, breads=breads, path=path)
```
and just before that, when I "print files", I see:
```
<class '__main__.File'> SELECT "t1"."id", "t1"."filename", "t1"."created_date", "t1"."encrypted", "t1"."folder", "t1"."parent" FROM "file" AS t1 WHERE ("t1"."parent" = ?) ORDER BY ("file" AS t1, ?, ?), ("file" AS t1, ?) [u'/', 'folder', 'DESC', 'filename']
```
That SQL is being generated for me by Peewee as it tries to query my SQLite database.
Any ideas?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/520/timeline | null | completed | null | null |
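Tying the two fixes from this thread together — a minimal sketch, assuming the `File` model and `session['folder']` from the report; `term` is a hypothetical name standing in for the search substring.
``` python
files = (File
         .select()
         .where((File.parent == session['folder']) &
                (File.filename.contains(term)))
         .order_by(File.folder.desc(), File.filename))
```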
https://api.github.com/repos/coleifer/peewee/issues/519 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/519/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/519/comments | https://api.github.com/repos/coleifer/peewee/issues/519/events | https://github.com/coleifer/peewee/issues/519 | 56,152,177 | MDU6SXNzdWU1NjE1MjE3Nw== | 519 | Aggregate rows does not preserve ordering of nested objects. | {
"login": "coleifer",
"id": 119974,
"node_id": "MDQ6VXNlcjExOTk3NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/119974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coleifer",
"html_url": "https://github.com/coleifer",
"followers_url": "https://api.github.com/users/coleifer/followers",
"following_url": "https://api.github.com/users/coleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/coleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coleifer/subscriptions",
"organizations_url": "https://api.github.com/users/coleifer/orgs",
"repos_url": "https://api.github.com/users/coleifer/repos",
"events_url": "https://api.github.com/users/coleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/coleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2015-02-01T05:00:13 | 2015-02-01T05:04:38 | 2015-02-01T05:04:38 | OWNER | null | http://stackoverflow.com/questions/28255182/order-by-method-not-working-in-peewee
When using `aggregate_rows()` with multiple order-by clauses, the ordering of nested objects is not preserved.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/519/timeline | null | completed | null | null |
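A minimal sketch of the kind of query affected, using hypothetical `User`/`Tweet` models (not from the issue) and the peewee 2.x `aggregate_rows()` API: the outer ordering was respected, but the nested rows did not come back ordered by the second clause.
``` python
from peewee import JOIN_LEFT_OUTER

query = (User
         .select(User, Tweet)
         .join(Tweet, JOIN_LEFT_OUTER)
         .order_by(User.username, Tweet.created_date.desc())
         .aggregate_rows())

for user in query:
    for tweet in user.tweets:  # expected created_date desc; ordering was lost here
        print(user.username, tweet.created_date)
```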
https://api.github.com/repos/coleifer/peewee/issues/518 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/518/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/518/comments | https://api.github.com/repos/coleifer/peewee/issues/518/events | https://github.com/coleifer/peewee/issues/518 | 56,116,658 | MDU6SXNzdWU1NjExNjY1OA== | 518 | JDBC support | {
"login": "SamuelMarks",
"id": 807580,
"node_id": "MDQ6VXNlcjgwNzU4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/807580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamuelMarks",
"html_url": "https://github.com/SamuelMarks",
"followers_url": "https://api.github.com/users/SamuelMarks/followers",
"following_url": "https://api.github.com/users/SamuelMarks/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelMarks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamuelMarks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelMarks/subscriptions",
"organizations_url": "https://api.github.com/users/SamuelMarks/orgs",
"repos_url": "https://api.github.com/users/SamuelMarks/repos",
"events_url": "https://api.github.com/users/SamuelMarks/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamuelMarks/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This type of discussion might be more suited for the discussion group: https://groups.google.com/group/peewee-orm\n"
] | 2015-01-31T07:29:57 | 2015-01-31T16:29:19 | 2015-01-31T16:29:13 | NONE | null | Has anyone tried using peewee with JDBC, e.g.: using [this wrapper](https://pypi.python.org/pypi/JayDeBeApi/) ([code](http://bazaar.launchpad.net/~baztian/jaydebeapi/trunk/view/head:/src/jaydebeapi/dbapi2.py)) which exposes DB-API v2.0 support?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/518/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/517 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/517/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/517/comments | https://api.github.com/repos/coleifer/peewee/issues/517/events | https://github.com/coleifer/peewee/issues/517 | 55,936,249 | MDU6SXNzdWU1NTkzNjI0OQ== | 517 | Pooled Connections, MasterSlave and closing connections | {
"login": "grvhi",
"id": 1818244,
"node_id": "MDQ6VXNlcjE4MTgyNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1818244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/grvhi",
"html_url": "https://github.com/grvhi",
"followers_url": "https://api.github.com/users/grvhi/followers",
"following_url": "https://api.github.com/users/grvhi/following{/other_user}",
"gists_url": "https://api.github.com/users/grvhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/grvhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/grvhi/subscriptions",
"organizations_url": "https://api.github.com/users/grvhi/orgs",
"repos_url": "https://api.github.com/users/grvhi/repos",
"events_url": "https://api.github.com/users/grvhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/grvhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"For a celery task you will probably want to open/close the connections to both databases, if you are going to be using both in the task, e.g.\n\n``` python\nmaster.connect()\nread.connect()\ntry:\n my_code()\nfinally:\n master.close()\n read.close()\n```\n\nThe pool will handle recycling the connections for you.\n",
"Great, thanks @coleifer \n"
] | 2015-01-29T18:18:25 | 2015-01-31T09:36:14 | 2015-01-31T01:00:36 | NONE | null | I'm a little confused as to the process of opening and closing connections when using PooledPostgresqlExtDatabase and ReadSlaveModel... Say I have one master and one read. When opening a connection in my app (celery task), do I need to explicitly open a connection to both databases? Or do I open a connection to the master database only?
In the following example, how should I manage connections?
``` python
from peewee import TextField
from playhouse.pool import PooledPostgresqlExtDatabase
from playhouse.read_slave import ReadSlaveModel

master = PooledPostgresqlExtDatabase()  # connection parameters omitted here
read = PooledPostgresqlExtDatabase()  # connection parameters omitted here


### Models

class SomeModel(ReadSlaveModel):
    something = TextField()

    class Meta(object):
        """
        Set the required Meta values
        """
        database = master
        read_slaves = (read,)


##### App

try:
    SomeModel.get(SomeModel.id == 1)
except SomeModel.DoesNotExist:
    SomeModel.create(**attributes)
```
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/517/timeline | null | completed | null | null |
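One way to package the connect/close pattern from the accepted answer for celery tasks — a sketch only: `with_databases` is a hypothetical helper, and it assumes the `master`/`read` pool objects and `SomeModel` defined in the question above.
``` python
import functools

def with_databases(func):
    """Open both pooled connections around a task and return them to the pool after."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        master.connect()
        read.connect()
        try:
            return func(*args, **kwargs)
        finally:
            # with the pooled database, close() recycles the connection
            master.close()
            read.close()
    return wrapper

@with_databases
def my_task():
    try:
        SomeModel.get(SomeModel.id == 1)
    except SomeModel.DoesNotExist:
        SomeModel.create(something='...')
```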
https://api.github.com/repos/coleifer/peewee/issues/516 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/516/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/516/comments | https://api.github.com/repos/coleifer/peewee/issues/516/events | https://github.com/coleifer/peewee/issues/516 | 55,876,439 | MDU6SXNzdWU1NTg3NjQzOQ== | 516 | "where"-chaining does not work with negated BooleanFields | {
"login": "tfeldmann",
"id": 385566,
"node_id": "MDQ6VXNlcjM4NTU2Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/385566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tfeldmann",
"html_url": "https://github.com/tfeldmann",
"followers_url": "https://api.github.com/users/tfeldmann/followers",
"following_url": "https://api.github.com/users/tfeldmann/following{/other_user}",
"gists_url": "https://api.github.com/users/tfeldmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tfeldmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tfeldmann/subscriptions",
"organizations_url": "https://api.github.com/users/tfeldmann/orgs",
"repos_url": "https://api.github.com/users/tfeldmann/repos",
"events_url": "https://api.github.com/users/tfeldmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/tfeldmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2015-01-29T09:36:31 | 2015-01-31T00:57:50 | 2015-01-31T00:57:50 | NONE | null | Hello,
when chaining ".where()" expressions and using negations the query works only if the negated expression comes last. See the attached example code:
``` python
from peewee import *
db = SqliteDatabase(':memory:')
class Test(Model):
    flag = BooleanField()
    num = IntegerField()

    def __repr__(self):
        return '<Test(flag=%s, num=%s)>' % (self.flag, self.num)

    class Meta:
        database = db
db.create_tables([Test])
Test.create(flag=False, num=10)
Test.create(flag=True, num=-1)
Test.create(flag=False, num=-2)
Test.create(flag=False, num=12)
# this works:
print(list(Test.select().where((~Test.flag) & (Test.num > 0))))
print(list(Test.select().where((Test.num > 0) & (~Test.flag))))
print(list(Test.select().where(Test.num > 0).where(~Test.flag)))
print(list(Test.select().where(Test.flag == False).where(Test.num > 0)))
# this throws "AttributeError: 'NoneType' object has no attribute '_negated'"
print(list(Test.select().where(~Test.flag).where(Test.num > 0)))
```
Tested with Python 3.4.2 and peewee 2.4.6 on Mac OS X 10.10.2
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/516/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/515 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/515/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/515/comments | https://api.github.com/repos/coleifer/peewee/issues/515/events | https://github.com/coleifer/peewee/issues/515 | 55,695,383 | MDU6SXNzdWU1NTY5NTM4Mw== | 515 | Subqueries in simple `delete_instance()` calls | {
"login": "coleifer",
"id": 119974,
"node_id": "MDQ6VXNlcjExOTk3NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/119974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coleifer",
"html_url": "https://github.com/coleifer",
"followers_url": "https://api.github.com/users/coleifer/followers",
"following_url": "https://api.github.com/users/coleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/coleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coleifer/subscriptions",
"organizations_url": "https://api.github.com/users/coleifer/orgs",
"repos_url": "https://api.github.com/users/coleifer/repos",
"events_url": "https://api.github.com/users/coleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/coleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Fixed by 08c595a\n",
"Hi @coleifer,\r\n\r\nHas this issue regressed at some point in version 3.x.x? We're looking to upgrade to the latest version of peewee and we're once again seeing deletion queries of this form when calling recursive deletes:\r\n\r\n```\r\nDELETE FROM \"table2\" WHERE (\"foreign_field\" IN (SELECT \"t1\".\"id\" FROM \"table1\" AS \"t1\" WHERE (\"t1\".\"id\" = ?)))\r\n```\r\n",
"Fixed (again)!"
] | 2015-01-28T00:12:24 | 2018-05-03T16:16:52 | 2018-05-03T16:16:52 | OWNER | null | > We've been using peewee for quite a while now and have noticed from time-to-time that peewee generates delete (and update for nullables) queries of the form:
>
> ``` sql
> UPDATE `table` SET `field` = null
> WHERE (`table`.`field` IN (
> SELECT `t2`.`id` FROM `othertable` AS t2 WHERE (`t2`.`id` = 1234)))
> ```
>
> Earlier today I was debugging a slowdown on a table of approx 11 million rows in MySQL, when I noticed that the above SQL, if run without the subquery, drops from approx 20s to a few ms. Is there a reason that peewee generates foreign key updates/deletions with this form vs a direct 'table.field = 1234'? I was thinking of submitting a PR to change peewee to use the more direct SQL form, but I want to make sure I fully understand the problem space before addressing it this way.
My response:
> The reason they're generated this way is because when you call `delete_instance()` Peewee generates a dependency graph. As the graph is traversed, peewee progressively builds more complex subqueries. This works, but I can see how in the simple case we could get away with a simple equality check.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/515/timeline | null | completed | null | null |
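To make the "simple case" concrete — a sketch with hypothetical `Parent`/`Child` models (not from the issue): the nullable-FK update that `delete_instance(recursive=True, delete_nullable=False)` performs can use a direct equality test instead of the subquery.
``` python
# what the dependency graph currently generates (conceptually):
#   UPDATE child SET parent_id = NULL
#   WHERE parent_id IN (SELECT t2.id FROM parent AS t2 WHERE t2.id = 1234)

# the equivalent direct form for a single instance:
(Child
 .update(parent=None)
 .where(Child.parent == 1234)
 .execute())
```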