[
{
"msg_contents": "> Well, there is a theoretical chance of deadlock --- not against other\n> transactions doing the same thing, since RowShareLock and\n> RowExclusiveLock don't conflict, but you could construct deadlock\n> scenarios involving other transactions that grab ShareLock or\n> ShareRowExclusiveLock. So I don't think it's appropriate for the\n> \"deadlock risk\" check to ignore RowShareLock->RowExclusiveLock\n> upgrades.\n\nThere is theoretical chance of deadlock when two xactions lock\ntables in different order and we can check this only in deadlock\ndetection code.\n\n> But I'm not sure the check should be enabled in production releases\n> anyway. I just put it in as a quick and dirty debug check. Perhaps\n> it should be under an #ifdef that's not enabled by default.\n\nNo reason to learn users during transaction processing. We need in\ngood deadlock detection code and documentation.\n\nVadim\n",
"msg_date": "Tue, 5 Dec 2000 10:49:01 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Wrong FOR UPDATE lock type "
}
]
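A minimal SQL sketch of the lock-ordering deadlock Vadim describes (table names and rows are illustrative, not from the thread):

    -- session 1
    BEGIN;
    SELECT * FROM parent WHERE id = 1 FOR UPDATE;  -- row lock on parent:1

    -- session 2
    BEGIN;
    SELECT * FROM child WHERE id = 1 FOR UPDATE;   -- row lock on child:1

    -- session 1
    UPDATE child SET n = n + 1 WHERE id = 1;       -- waits on session 2's row lock

    -- session 2
    UPDATE parent SET n = n + 1 WHERE id = 1;      -- waits on session 1's row lock:
                                                   -- deadlock, one session is aborted

The table-level locks involved (RowShareLock from FOR UPDATE, RowExclusiveLock from UPDATE) never conflict here, which is the point: only the deadlock detector, not a lock-upgrade check, can catch this case.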
[
{
"msg_contents": "> > I totaly missed your point here. How closing source of \n> > ERserver is related to closing code of PostgreSQL DB server?\n> > Let me clear things:\n> >\n> > 1. ERserver isn't based on WAL. It will work with any version >= 6.5\n> >\n> > 2. WAL was partially sponsored by my employer, Sectorbase.com,\n> > not by PG, Inc.\n> \n> Has somebody thought about putting PG in the GPL licence \n> instead of the BSD? \n> PG inc would still be able to do there money giving support \n> (just like IBM, HP and Compaq are doing there share with Linux),\n> without been able to close the code.\n\nERserver is *external* application that change *nothing* in\nPostgreSQL code. So, no matter under what licence are\nserver code, any company will be able to close code of\nany privately developed *external* application.\nAnd I don't see what's wrong with this, do you?\n\nVadim\n",
"msg_date": "Tue, 5 Dec 2000 11:24:34 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: beta testing version"
},
{
"msg_contents": "> > > I totaly missed your point here. How closing source of \n> > > ERserver is related to closing code of PostgreSQL DB server?\n> > > Let me clear things:\n> > >\n> > > 1. ERserver isn't based on WAL. It will work with any version >= 6.5\n> > >\n> > > 2. WAL was partially sponsored by my employer, Sectorbase.com,\n> > > not by PG, Inc.\n> > \n> > Has somebody thought about putting PG in the GPL licence \n> > instead of the BSD? \n> > PG inc would still be able to do there money giving support \n> > (just like IBM, HP and Compaq are doing there share with Linux),\n> > without been able to close the code.\n\nThis gets brought up every couple of months, I don't see the point\nin denying any of the current Postgresql developers the chance\nto make some money selling a non-freeware version of Postgresql.\n\nWe can also look at it another way, let's say ER server was meant\nto be closed source, if the code it was derived from was GPL'd\nthen that chance was gone before it even happened. Hence no\nreason to develop it.\n\n*poof* no ER server.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Tue, 5 Dec 2000 12:25:52 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version"
}
]
[
{
"msg_contents": "Minor usability/debuggability suggestion...\n\nRI violation error messages in 7.0.0 do not appear to identify the\noffending value.\n\nExample:\n\nERROR: fk_employee_currency referential integrity violation - key\nreferenced from employee not found in currency\n\nEasier to debug would be:\n\nERROR: fk_employee_currency referential integrity violation - key '27'\nreferenced from employee not found in currency\n\nRegards,\nEd Loehr\n",
"msg_date": "Tue, 05 Dec 2000 14:24:24 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "RI violation msg suggestion"
},
{
"msg_contents": "Greetings! 'copy from stdin' on 7.0.2 appears to simply hang if more\nthan ~ 1000 records follow in one shot. I couldn't see this behavior\ndocumented anywhere. Is this a bug?\n\nTake care,\n-- \nCamm Maguire\t\t\t \t\t\[email protected]\n==========================================================================\n\"The earth is but one country, and mankind its citizens.\" -- Baha'u'llah\n",
"msg_date": "05 Dec 2000 15:50:51 -0500",
"msg_from": "Camm Maguire <[email protected]>",
"msg_from_op": false,
"msg_subject": "copy from stdin limits"
},
{
"msg_contents": "Camm Maguire <[email protected]> writes:\n> Greetings! 'copy from stdin' on 7.0.2 appears to simply hang if more\n> than ~ 1000 records follow in one shot. I couldn't see this behavior\n> documented anywhere. Is this a bug?\n\nI've never heard of any such behavior ... and you can be sure that we'd\nhave heard about this, since any moderately large pg_dump file would\ntrigger such a bug. You must have something else going on. Details\nplease?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Dec 2000 16:19:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy from stdin limits "
},
{
"msg_contents": "Greetings, and thank you for your reply.\n\nOK, I have 4 tables, and a view on a merge of the 4. I have a trigger\non insert into table 3, and a trigger on insert into the view, which\nbasically just takes the input data, does a few selects on the tables,\nand inserts the appropriate portions of the data into each table as\nnecessary. \n\nWhen I copy up to ~ 1000 lines of a file into this view, everything\ngoes fine. More than that, and after a while, cpu activity stops,\ndisk activity stops, and the job hangs indefinitely. Control-C gives\nthe later message \"You have to get out of copy state yourself\". I can\nprovide the schema if needed.\n\nThanks for your help!\n\n\nTom Lane <[email protected]> writes:\n\n> Camm Maguire <[email protected]> writes:\n> > Greetings! 'copy from stdin' on 7.0.2 appears to simply hang if more\n> > than ~ 1000 records follow in one shot. I couldn't see this behavior\n> > documented anywhere. Is this a bug?\n> \n> I've never heard of any such behavior ... and you can be sure that we'd\n> have heard about this, since any moderately large pg_dump file would\n> trigger such a bug. You must have something else going on. Details\n> please?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n-- \nCamm Maguire\t\t\t \t\t\[email protected]\n==========================================================================\n\"The earth is but one country, and mankind its citizens.\" -- Baha'u'llah\n",
"msg_date": "07 Dec 2000 11:30:35 -0500",
"msg_from": "Camm Maguire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy from stdin limits"
},
{
"msg_contents": "Camm Maguire <[email protected]> writes:\n> OK, I have 4 tables, and a view on a merge of the 4. I have a trigger\n> on insert into table 3, and a trigger on insert into the view, which\n> basically just takes the input data, does a few selects on the tables,\n> and inserts the appropriate portions of the data into each table as\n> necessary. \n\n> When I copy up to ~ 1000 lines of a file into this view, everything\n> goes fine.\n\nI'm a bit startled that COPY to a view works at all ;-). But it does\nlook like copy honors triggers, so in principle the above ought to work.\n\nI'll need to see the complete details of the trigger and all the\nreferenced table declarations --- probably don't need the data, though,\nunless the trigger needs to have nonempty tables to start with.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2000 12:43:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy from stdin limits "
}
]
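A rough sketch of the setup Camm describes, with all names invented (the original used a 7.0-era trigger attached directly to the view; current releases spell it INSTEAD OF, as below):

    -- two of the underlying tables and the view over them
    CREATE TABLE part_a (key int PRIMARY KEY, a text);
    CREATE TABLE part_b (key int PRIMARY KEY, b text);

    CREATE VIEW combined AS
        SELECT part_a.key, part_a.a, part_b.b
        FROM part_a JOIN part_b USING (key);

    -- split each incoming row across the underlying tables
    CREATE FUNCTION route_insert() RETURNS trigger AS $$
    BEGIN
        INSERT INTO part_a (key, a) VALUES (NEW.key, NEW.a);
        INSERT INTO part_b (key, b) VALUES (NEW.key, NEW.b);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER combined_insert
        INSTEAD OF INSERT ON combined
        FOR EACH ROW EXECUTE FUNCTION route_insert();

    -- COPY then pushes every input line through the trigger
    COPY combined (key, a, b) FROM STDIN;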
[
{
"msg_contents": "Greetings! I've noticed in the documentation that the sql standard\nrequires foreign keys to reference primary key/(or maybe just unique)\ncolumns, but that postgresql does not enforce this. Is this a feature\nthat is intended to persist, or a temporary deviation from the sql\nstandard? The current postgresql behavior seems useful in cases where\none wants to update a foreign key to a value already in the original\ntable. \n\nTake care,\n-- \nCamm Maguire\t\t\t \t\t\[email protected]\n==========================================================================\n\"The earth is but one country, and mankind its citizens.\" -- Baha'u'llah\n",
"msg_date": "05 Dec 2000 15:53:42 -0500",
"msg_from": "Camm Maguire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Foreign key references to non-primary key columns"
},
{
"msg_contents": "\nOn 5 Dec 2000, Camm Maguire wrote:\n\n> Greetings! I've noticed in the documentation that the sql standard\n> requires foreign keys to reference primary key/(or maybe just unique)\n> columns, but that postgresql does not enforce this. Is this a feature\n> that is intended to persist, or a temporary deviation from the sql\n> standard? The current postgresql behavior seems useful in cases where\n> one wants to update a foreign key to a value already in the original\n> table. \n\nIt's intended to be temporary and theoretically is in fact checked in 7.1\n(although you could remove the index afterwards and it doesn't complain\n -- necessary because you might need to drop/create the index for other\n reasons).\n\nThe limitation is on the referenced columns, and the reason for it is that\nif the referenced columns are not unique, parts of the RI spec stop making\nsense as written. If you have match full and update cascade, and two pk\nrows with key 1 and an fk row with key 1, what happens when you modify\nthe key value on just one of those pk rows? We could theoretically extend\nthe spec to make sense in these cases, but we have enough trouble with the\nspec as is (match partial is amazingly awful).\n\n\n",
"msg_date": "Tue, 5 Dec 2000 14:26:50 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key references to non-primary key columns"
},
{
"msg_contents": "Greetings, and thanks so much for your reply! \n\nStephan Szabo <[email protected]> writes:\n\n> On 5 Dec 2000, Camm Maguire wrote:\n> \n> > Greetings! I've noticed in the documentation that the sql standard\n> > requires foreign keys to reference primary key/(or maybe just unique)\n> > columns, but that postgresql does not enforce this. Is this a feature\n> > that is intended to persist, or a temporary deviation from the sql\n> > standard? The current postgresql behavior seems useful in cases where\n> > one wants to update a foreign key to a value already in the original\n> > table. \n> \n> It's intended to be temporary and theoretically is in fact checked in 7.1\n> (although you could remove the index afterwards and it doesn't complain\n> -- necessary because you might need to drop/create the index for other\n> reasons).\n> \n> The limitation is on the referenced columns, and the reason for it is that\n> if the referenced columns are not unique, parts of the RI spec stop making\n> sense as written. If you have match full and update cascade, and two pk\n> rows with key 1 and an fk row with key 1, what happens when you modify\n> the key value on just one of those pk rows? We could theoretically extend\n> the spec to make sense in these cases, but we have enough trouble with the\n> spec as is (match partial is amazingly awful).\n> \n\nThis is clearly a problem. I've played with this a bit, and the\ncurrent behavior is that deleting one of the two pk rows deletes the\nfk row if on delete cascade is set. Haven't yet checked update, but I\nbet it works the same way. And while a little messy, it still seems\nbetter than having a unique constraint on a pk row in the following\ncircumstance, for example.\n\nSay you input a bunch of data with one field denoting the 'identity'\nof the entity referred to. But occasionally at some later date, this\nidentity field will change, while still referring to the same entity.\nA cusip for a stock is a good example -- the cusip uniquely references\na given stock, but a given company can change its cusip at some\npoint. One would like to have a cusip,id table, with cusip as the pk,\nbut id as the fk in all the main data tables. On cusip change, one\nmerely updates the id for the new cusip to be the id of the old\ncusip. On update cascade ensures that this propagates for any tables\nthat may be using id as a fk.\n\nDoing it the other way looks like this: have a cusip,id table, with\nid here now a fk pointing to another table ids with pk id. Have\nids.id the referenced column in all tables using a fk. But now one\ncannot simply update idnew = idold if idold is already in the table,\nso one writes an update trigger which basically updates all the fk\nrows using idnew to idold, deletes idnew from ids, and returns null. \n\nThe only problem with this approach is that one must remember to\ninclude all new tables with an fk id into this trigger when that table\nis added. The trigger properly belongs to the table with the fk, but\nshould be fired on update to ids. Foreign keys seem exactly designed\nto do this.\n\nIn any case, I take it from your recommendation that one should not\ndesign a database around this current postgresql behavior for future\ncompatibility reasons. Any suggestions are most welcome.\n\nTake care,\n\n\n\n\n\n> \n> \n> \n\n-- \nCamm Maguire\t\t\t \t\t\[email protected]\n==========================================================================\n\"The earth is but one country, and mankind its citizens.\" -- Baha'u'llah\n",
"msg_date": "06 Dec 2000 09:43:45 -0500",
"msg_from": "Camm Maguire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Foreign key references to non-primary key columns"
}
]
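The design Camm sketches in prose, as a hedged SQL example (table and column names are invented; ON UPDATE CASCADE does the propagation he wants, and every referenced column is properly unique):

    CREATE TABLE ids (
        id int PRIMARY KEY          -- the stable surrogate key
    );

    -- cusip is the natural key users see; it can change over time
    CREATE TABLE cusips (
        cusip char(9) PRIMARY KEY,
        id    int NOT NULL REFERENCES ids (id) ON UPDATE CASCADE
    );

    -- data tables reference the stable id, never the cusip
    CREATE TABLE prices (
        id    int NOT NULL REFERENCES ids (id) ON UPDATE CASCADE,
        price numeric
    );

    -- a company changes its cusip: map the new cusip to the old id
    INSERT INTO cusips (cusip, id)
        SELECT '987654321', id FROM cusips WHERE cusip = '123456789';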
[
{
"msg_contents": "> Short Description\n> foreign key check makes a big LOCK\n> \n> Long Description\n> in: src/backend/utils/adt/ri_triggers.c\n> \n> RI_FKey_check(), RI_FKey_noaction_upd(), RI_FKey_noaction_del(), etc..\n> checking the referential with SELECT FOR UPDATE.\n> \n> After BEGIN TRANSACTION: the INSERT/DELETE/UPDATE calling \n> foreign-key checks, and the SELECT FOR UPDATE locking ALL \n> matched rows in referential table.\n> \n> I modify ri_triggers.c (remove \"FOR UPDATE\"). This working.. \n> but is correct?\n\nIt's not. If one transaction inserts FK 1 and another one deletes\nPK 1 at the same time both will succeed.\n\nRI triggers should perform dirty reads (and test if returned tuples\nalive/dead/being updated by concurrent transaction) instead of\nSELECT FOR UPDATE but dirty reads are not implemented, yet.\n\nVadim\n",
"msg_date": "Tue, 5 Dec 2000 12:59:50 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: foreign key check makes a big LOCK"
}
]
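To spell out the race Vadim is pointing at (schema names are illustrative), this is what goes wrong once the FOR UPDATE is removed from the RI check:

    -- session 1
    BEGIN;
    INSERT INTO fk_table VALUES (1);    -- RI check SELECTs pk_table for key 1;
                                        -- without FOR UPDATE it locks nothing

    -- session 2, concurrently
    BEGIN;
    DELETE FROM pk_table WHERE key = 1; -- its RI check sees no committed FK row
    COMMIT;

    -- session 1
    COMMIT;    -- both succeed: fk_table now references a deleted key

With the FOR UPDATE in place, session 2's delete blocks on the locked PK row until session 1 commits, and its RI check then sees the new FK row and raises the violation.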
[
{
"msg_contents": "> > Sounds great! We can follow this way: when first after last \n> > checkpoint update to a page being logged, XLOG code can log\n> > not AM specific update record but entire page (creating backup\n> > \"physical log\"). During after crash recovery such pages will\n> > be redone first, ensuring page consistency for further redo ops.\n> > This means bigger log, of course.\n> \n> Be sure to include a CRC of each part of the block that you hope\n> to replay individually.\n\nWhy should we do this? I'm not going to replay parts individually,\nI'm going to write entire pages to OS cache and than apply changes to\nthem. Recovery is considered as succeeded after server is ensured\nthat all applyed changes are on the disk. In the case of crash during\nrecovery we'll replay entire game.\n\nVadim\n",
"msg_date": "Tue, 5 Dec 2000 13:30:37 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: beta testing version"
}
]
[
{
"msg_contents": "I am running 7.0 and for columns that have type 'timestamp' \nthe values end up with the format of year-month-day HH:MM:SS-[0-n]\n\ne.g.\n 2000-12-05 15:58:12-06\n\n the trailing -n (e.g. -06) is killing the JDBC driver.\n\n Is there a work around. No matter what I Insert, a trailing\n -0n gets appended.\n\n\n thanks\n\n mike\n",
"msg_date": "Tue, 5 Dec 2000 16:02:23 -0600",
"msg_from": "Mike Haberman <[email protected]>",
"msg_from_op": true,
"msg_subject": "problem with timestamps ?"
},
{
"msg_contents": "> I am running 7.0 and for columns that have type 'timestamp'\n> the values end up with the format of year-month-day HH:MM:SS-[0-n]\n\nafaik there is a newer JDBC driver which copes with this.\n\n - Thomas\n",
"msg_date": "Wed, 06 Dec 2000 04:55:26 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problem with timestamps ?"
}
]
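If upgrading the driver is not an option, one SQL-side workaround is to render the value yourself instead of relying on the type's default output (table and column names are illustrative; to_char has been available since 7.0):

    SELECT to_char(ts_col, 'YYYY-MM-DD HH24:MI:SS') AS ts_text
    FROM my_table;
    -- => '2000-12-05 15:58:12'  (no trailing UTC offset to trip the driver)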
[
{
"msg_contents": "\nSorry about that email. I was trying to forward your comments to a friend\nand due to a lack of sleep I just typed \"R\" in pine. Doh!\n\nCheers,\n\nRandy Jonasz\nSoftware Engineer\nClick2net Inc.\nWeb: http://www.click2net.com\nPhone: (905) 271-3550\n\n\"You cannot possibly pay a philosopher what he's worth,\nbut try your best\" -- Aristotle\n\n\n",
"msg_date": "Tue, 5 Dec 2000 17:32:08 -0500 (EST)",
"msg_from": "Randy Jonasz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sorry"
},
{
"msg_contents": "* Randy Jonasz <[email protected]> [001205 14:31] wrote:\n> \n> Sorry about that email. I was trying to forward your comments to a friend\n> and due to a lack of sleep I just typed \"R\" in pine. Doh!\n\nThat's ok, you work with Dan Moschuk right?\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Tue, 5 Dec 2000 14:47:14 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sorry"
},
{
"msg_contents": "\n| > Sorry about that email. I was trying to forward your comments to a friend\n| > and due to a lack of sleep I just typed \"R\" in pine. Doh!\n| \n| That's ok, you work with Dan Moschuk right?\n\nHe's my bitch. :-)\n\nAnd as such, I've donated him to do neat things with postgres' C++ interface \n(for whatever reason, he's of the less-enlightened opinion that C++ shouldn't\nbe dragged out into the backyard and shot).\n\n-- \nMan is a rational animal who always loses his temper when he is called\nupon to act in accordance with the dictates of reason.\n -- Oscar Wilde\n",
"msg_date": "Thu, 7 Dec 2000 22:47:10 -0500",
"msg_from": "Dan Moschuk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Sorry"
},
{
"msg_contents": "\n| | That's ok, you work with Dan Moschuk right?\n| \n| He's my bitch. :-)\n| \n| And as such, I've donated him to do neat things with postgres' C++ interface \n| (for whatever reason, he's of the less-enlightened opinion that C++ shouldn't\n| be dragged out into the backyard and shot).\n\n*sigh* No more crack for me.\n\nHow that _should_ have read (well, everything except the C++ thing :-) is\nthat as Click2Net uses postgres exclusively for all our database needs, Randy\nhas been kind enough to volunteer his time (or perhaps look for an excuse\nto stop doing PHP for a while :-) to work on this.\n\nI hope this will be the first of a string of projects that Randy, myself, and\nthe and the rest of our band of merry-men will be undertaking and giving back\nto the postgres community. An area that I'm currently examining for the\nFreeBSD project is server clustering, and you can bet that one of the\nrequirements is to make sure postgres can take full advantage of that. \nWell, assuming trying to get shared memory to work across multiple machines\ndoesn't turn me off of programming for good. :P\n\nCheers!\n-Dan\n-- \nMan is a rational animal who always loses his temper when he is called\nupon to act in accordance with the dictates of reason.\n -- Oscar Wilde\n",
"msg_date": "Thu, 7 Dec 2000 23:40:33 -0500",
"msg_from": "Dan Moschuk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Sorry"
}
]
[
{
"msg_contents": "Hi,\n\nI see now the following message and couldn't start\npostmaster.\n\nFATAL 2: btree_insert_redo: uninitialized page\n\nIs it a bug ?\nAnyway,how do I reset my WAL environment ?\n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Wed, 06 Dec 2000 10:16:47 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to reset WAL enveironment"
}
]
[
{
"msg_contents": "> I see now the following message and couldn't start\n> postmaster.\n> \n> FATAL 2: btree_insert_redo: uninitialized page\n> \n> Is it a bug ?\n\nSeems so. btree_insert_redo shouldn't see uninitialized pages\n(only newroot and split ops add pages to index and they should\nbe redone before insert op).\nCan you post/ftp me tgz of data dir?\nOr start up postmaster with --wal_debug=1 and send me\noutput.\n\n> Anyway,how do I reset my WAL environment ?\n\nOnly one way - remove index file. I didn't add file node\nto elog output yet (will do for beta2), but wal_debug\nwill show it.\n\nVadim\n",
"msg_date": "Tue, 5 Dec 2000 17:29:03 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: How to reset WAL enveironment"
},
{
"msg_contents": "Mikheev, Vadim wrote:\n> \n> > I see now the following message and couldn't start\n> > postmaster.\n> >\n> > FATAL 2: btree_insert_redo: uninitialized page\n> >\n> > Is it a bug ?\n> \n> Seems so. btree_insert_redo shouldn't see uninitialized pages\n> (only newroot and split ops add pages to index and they should\n> be redone before insert op).\n> Can you post/ftp me tgz of data dir?\n> Or start up postmaster with --wal_debug=1 and send me\n> output.\n>\n\nProbably this is caused by my trial (local) change\nand generated an illegal log output.\nHowever it seems to mean that WAL isn't always\nredo-able. In my case the index is probably a\nsystem index unfortunately. Is there a way to\navoid invoking recovery process at startup ?\n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Wed, 06 Dec 2000 11:30:22 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reset WAL enveironment"
}
]
[
{
"msg_contents": "well im trying to get apache + php4 + pgsql 7.0.3 running on sco\ngivin up on the udk on sco openserver 5.0.5 now using sdk on sco open\nserver 5.0.4\ni can compile all the stuff static, but php4 wants to used shared\nlibpq.so i get undefined symbold on php4 module load unresolved symbol\nPQfinish\n\nwhen i compile on linux i get shared libs, on sco with udk, or sdk just\nget static libs\ncan some on point me to the config files to hack to get both static, and\nshared libs\n\nthanks in advance\nps the mail search seems to be working, and plenty of info on sco but\nnothing on shared libs \n-- \nMy opinions are my own and not that of my employer even if I am self\nemployed\n",
"msg_date": "Tue, 05 Dec 2000 19:38:13 -0600",
"msg_from": "\"Arno A. Karner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "shared libs on sco how?"
},
{
"msg_contents": "Arno A. Karner writes:\n\n> when i compile on linux i get shared libs, on sco with udk, or sdk just\n> get static libs\n> can some on point me to the config files to hack to get both static, and\n> shared libs\n\nTry the patch below. I don't actually have SCO, but it's what I\nconstructed from the documentation.\n\ndiff -cr postgresql-7.0.3.orig/src/Makefile.shlib postgresql-7.0.3/src/Makefile.shlib\n*** postgresql-7.0.3.orig/src/Makefile.shlib Tue May 16 22:48:48 2000\n--- postgresql-7.0.3/src/Makefile.shlib Tue Dec 12 17:51:16 2000\n***************\n*** 207,212 ****\n--- 207,220 ----\n shlib := $(NAME)$(DLSUFFIX)\n endif\n\n+ ifeq ($(PORTNAME), sco)\n+ install-shlib-dep := install-shlib\n+ shlib := lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)\n+ LDFLAGS_SL := -G -z text\n+ CFLAGS += $(CFLAGS_SL)\n+ endif\n+\n+\n # Default target definition. Note shlib is empty if not building a shlib.\n\n all: lib$(NAME).a $(shlib)\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 12 Dec 2000 18:14:07 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared libs on sco how?"
},
{
"msg_contents": "Did this work? If so, I'd like to add it to 7.1.\n\nPeter Eisentraut writes:\n\n> Arno A. Karner writes:\n>\n> > when i compile on linux i get shared libs, on sco with udk, or sdk just\n> > get static libs\n> > can some on point me to the config files to hack to get both static, and\n> > shared libs\n>\n> Try the patch below. I don't actually have SCO, but it's what I\n> constructed from the documentation.\n>\n> diff -cr postgresql-7.0.3.orig/src/Makefile.shlib postgresql-7.0.3/src/Makefile.shlib\n> *** postgresql-7.0.3.orig/src/Makefile.shlib Tue May 16 22:48:48 2000\n> --- postgresql-7.0.3/src/Makefile.shlib Tue Dec 12 17:51:16 2000\n> ***************\n> *** 207,212 ****\n> --- 207,220 ----\n> shlib := $(NAME)$(DLSUFFIX)\n> endif\n>\n> + ifeq ($(PORTNAME), sco)\n> + install-shlib-dep := install-shlib\n> + shlib := lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)\n> + LDFLAGS_SL := -G -z text\n> + CFLAGS += $(CFLAGS_SL)\n> + endif\n> +\n> +\n> # Default target definition. Note shlib is empty if not building a shlib.\n>\n> all: lib$(NAME).a $(shlib)\n>\n>\n>\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 16 Dec 2000 19:24:01 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared libs on sco how?"
}
]
[
{
"msg_contents": "> > > FATAL 2: btree_insert_redo: uninitialized page\n> > >\n> > > Is it a bug ?\n> > \n> > Seems so. btree_insert_redo shouldn't see uninitialized pages\n> > (only newroot and split ops add pages to index and they should\n> > be redone before insert op).\n> > Can you post/ftp me tgz of data dir?\n> > Or start up postmaster with --wal_debug=1 and send me\n> > output.\n> >\n> \n> Probably this is caused by my trial (local) change\n> and generated an illegal log output.\n> However it seems to mean that WAL isn't always\n> redo-able.\n\nIllegal log output is like disk crash - only BAR can help.\nI agree that elog(STOP) caused by problems with single\ndata file is quite annoying, it would be great if we could\nmark table/index as corrupted after recovery, but there are\nno means for this now. For the moment we can only notify\nDBA about problems with file node and ignore further\nrecovery of corresponding table/index - I'll do this\nfor beta2/3 if no one else.\n\n> In my case the index is probably a\n> system index unfortunately. Is there a way to\n\nYour REINDEX works very well.\n\n> avoid invoking recovery process at startup ?\n\nYou would get totally corrupted database. Remember -\nWAL avoids not only fsync() but write() too.\n\nVadim\n",
"msg_date": "Tue, 5 Dec 2000 20:24:50 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: How to reset WAL enveironment"
},
{
"msg_contents": "Mikheev, Vadim wrote:\n> \n> > > > FATAL 2: btree_insert_redo: uninitialized page\n> > > >\n> > > > Is it a bug ?\n> > >\n> > > Seems so. btree_insert_redo shouldn't see uninitialized pages\n> > > (only newroot and split ops add pages to index and they should\n> > > be redone before insert op).\n> > > Can you post/ftp me tgz of data dir?\n> > > Or start up postmaster with --wal_debug=1 and send me\n> > > output.\n> > >\n> >\n> > Probably this is caused by my trial (local) change\n> > and generated an illegal log output.\n> > However it seems to mean that WAL isn't always\n> > redo-able.\n> \n> Illegal log output is like disk crash - only BAR can help.\n\n\nBut redo-recovery after restore would also fail.\nThe operation which corresponds to the illegal\nlog output aborted at the execution time and \nrolling back by redo also failed. It seems\npreferable to me that the transaction is rolled\nback by undo. \n \n> I agree that elog(STOP) caused by problems with single\n> data file is quite annoying, it would be great if we could\n> mark table/index as corrupted after recovery, but there are\n> no means for this now. For the moment we can only notify\n> DBA about problems with file node and ignore further\n> recovery of corresponding table/index - I'll do this\n> for beta2/3 if no one else.\n> \n> > In my case the index is probably a\n> > system index unfortunately. Is there a way to\n> \n> Your REINDEX works very well.\n>\n\nOK,the indexes of pg_class were recovered by REINDEX.\n \nRegards.\n\nHiroshi Inoue\n\n",
"msg_date": "6 Dec 2000 12:00:54 +0100",
"msg_from": "[email protected] (Hiroshi Inoue)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to reset WAL enveironment"
}
]
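For readers who hit the same failure, the recovery step Hiroshi used, sketched from the thread (on 7.x, rebuilding system catalog indexes generally meant running a standalone backend started with -O -P before issuing this):

    REINDEX TABLE pg_class FORCE;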
[
{
"msg_contents": "Hello,\n\nwhat this can be?\n\nFATAL: s_lock(40015071) at spin.c:127, stuck spinlock. Aborting.\n\n From other sources I can find out that there was real memory starvation. All \nswap was eated out (that's not PostgreSQL problem).\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Wed, 6 Dec 2000 10:51:09 +0600",
"msg_from": "Denis Perchine <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange messages in log."
}
]
[
{
"msg_contents": "\nFor an extension I am writing, I want to be able to write a function\nthat does this:\n\nselect * from table where field = textsearch(field, 'la bla bla', 10)\norder by score(10);\n\nI can currently do this:\n\ncreate temp table search_results (key int, rank int);\n\nselect plpgsql_textsearch(....);\n\nselect * from table where field = search_results.key order by rank;\n\ndrop table search_results;\n\nThe plpgsql function calls textsearch and populates the table doing\ninserts. As:\n\n for pos in 0 .. count-1 loop\n insert into search_result(key, rank)\n values (search_key(handle,pos),\nsearch_rank(handle,pos));\n end loop;\n\nObviously, if I return a field type I force a table scan, if I return a\nset of tuples postgres should be able to perform a join.\n\n>From what I can see, it looks like I need to create a heap tuple, and\nadd tuples to it. Is this correct? It is also unclear how I should go\nabout this. Does anyone have any code that does this already?\n\nIf this is the wrong forum in which to ask this question, I apologize,\nbut I just can't see any clear way to do this.\n\n\n-- \nhttp://www.mohawksoft.com\n\n",
"msg_date": "6 Dec 2000 12:10:27 +0100",
"msg_from": "[email protected] (mlw)",
"msg_from_op": true,
"msg_subject": "[HACKERS] Function returning tuples."
}
]
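Putting mlw's current workaround together as one sequence (my_table is a stand-in for his data table, and the argument list to plpgsql_textsearch is guessed from his first example):

    CREATE TEMP TABLE search_results (key int, rank int);

    SELECT plpgsql_textsearch('la bla bla', 10);  -- fills search_results

    -- a join the planner can optimize, instead of a function call
    -- in the WHERE clause that forces a full table scan
    SELECT t.*
    FROM my_table t
         JOIN search_results r ON t.key = r.key
    ORDER BY r.rank;

    DROP TABLE search_results;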
[
{
"msg_contents": "\n> > > Sounds great! We can follow this way: when first after last \n> > > checkpoint update to a page being logged, XLOG code can log\n> > > not AM specific update record but entire page (creating backup\n> > > \"physical log\"). During after crash recovery such pages will\n> > > be redone first, ensuring page consistency for further redo ops.\n> > > This means bigger log, of course.\n> > \n> > Be sure to include a CRC of each part of the block that you hope\n> > to replay individually.\n> \n> Why should we do this? I'm not going to replay parts individually,\n> I'm going to write entire pages to OS cache and than apply changes to\n> them. Recovery is considered as succeeded after server is ensured\n> that all applyed changes are on the disk. In the case of crash during\n> recovery we'll replay entire game.\n\nYes, but there would need to be a way to verify the last page or record from txlog when \nrunning on crap hardware. The point was, that crap hardware writes our 8k pages\nin any order (e.g. 512 bytes from the end, then 512 bytes from front ...), and does not\neven notice, that it only wrote part of one such 512 byte block when reading it back\nafter a crash. But, I actually doubt that this is true for all but the most crappy hardware.\n\nAndreas\n\n",
"msg_date": "6 Dec 2000 13:17:49 +0100",
"msg_from": "[email protected] (Zeugswetter Andreas SB)",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] beta testing version"
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> Yes, but there would need to be a way to verify the last page or\n> record from txlog when running on crap hardware.\n\nHow exactly *do* we determine where the end of the valid log data is,\nanyway?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Dec 2000 11:15:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: beta testing version "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Zeugswetter Andreas SB <[email protected]> writes:\n> > Yes, but there would need to be a way to verify the last page or\n> > record from txlog when running on crap hardware.\n> \n> How exactly *do* we determine where the end of the valid log data is,\n> anyway?\n\nCouldn't you use a CRC ?\n\nAnyway... may I suggest adding CRCs to the data ? I just discovered that\nI had a faulty HD controller and I fear that something could have been\nwritten erroneously (this could also help to detect faulty memory,\nthough only in certain cases).\n\nBye!\n\n--\n Daniele Orlandi\n Planet Srl\n",
"msg_date": "Wed, 06 Dec 2000 18:39:46 +0100",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: beta testing version"
},
{
"msg_contents": "On Wed, Dec 06, 2000 at 11:15:26AM -0500, Tom Lane wrote:\n> Zeugswetter Andreas SB <[email protected]> writes:\n> > Yes, but there would need to be a way to verify the last page or\n> > record from txlog when running on crap hardware.\n> How exactly *do* we determine where the end of the valid log data is,\n> anyway?\n\nI don't know how pgsql does it, but the only safe way I know of is to\ninclude an \"end\" marker after each record. When writing to the log,\nappend the records after the last end marker, ending with another end\nmarker, and fdatasync the log. Then overwrite the previous end marker\nto indicate it's not the end of the log any more and fdatasync again.\n\nTo ensure that it is written atomically, the end marker must not cross a\nhardware sector boundary (typically 512 bytes). This can be trivially\nguaranteed by making the marker a single byte.\n\nAny other way I've seen discussed (here and elsewhere) either\n- Requires atomic multi-sector writes, which are possible only if all\n the sectors are sequential on disk, the kernel issues one large write\n for all of them, and you don't powerfail in the middle of the write.\n- Assume that a CRC is a guarantee. A CRC would be a good addition to\n help ensure the data wasn't broken by flakey drive firmware, but\n doesn't guarantee consistency.\n\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Wed, 6 Dec 2000 11:49:10 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: beta testing version"
},
{
"msg_contents": "On Wed, Dec 06, 2000 at 11:49:10AM -0600, Bruce Guenter wrote:\n> On Wed, Dec 06, 2000 at 11:15:26AM -0500, Tom Lane wrote:\n> > Zeugswetter Andreas SB <[email protected]> writes:\n> > > Yes, but there would need to be a way to verify the last page or\n> > > record from txlog when running on crap hardware.\n> >\n> > How exactly *do* we determine where the end of the valid log data is,\n> > anyway?\n> \n> I don't know how pgsql does it, but the only safe way I know of is to\n> include an \"end\" marker after each record. When writing to the log,\n> append the records after the last end marker, ending with another end\n> marker, and fdatasync the log. Then overwrite the previous end marker\n> to indicate it's not the end of the log any more and fdatasync again.\n>\n> To ensure that it is written atomically, the end marker must not cross a\n> hardware sector boundary (typically 512 bytes). This can be trivially\n> guaranteed by making the marker a single byte.\n\nAn \"end\" marker is not sufficient, unless all writes are done in\none-sector units with an fsync between, and the drive buffering \nis turned off. For larger writes the OS will re-order the writes. \nMost drives will re-order them too, even if the OS doesn't.\n\n> Any other way I've seen discussed (here and elsewhere) either\n> - Requires atomic multi-sector writes, which are possible only if all\n> the sectors are sequential on disk, the kernel issues one large write\n> for all of them, and you don't powerfail in the middle of the write.\n> - Assume that a CRC is a guarantee. \n\nWe are already assuming a CRC is a guarantee. \n\nThe drive computes a CRC for each sector, and if the CRC is OK the \ndrive is happy. CRC errors within the drive are quite frequent, and \nthe drive re-reads when a bad CRC comes up. (If it sees errors too \nfrequently on a sector, it rewrites it; if it sees persistent errors \non a sector, it marks that one bad and relocates it.) You can expect \nto experience, in production, about the error rate that the drive \nmanufacturer specifies as \"maximum\".\n\n> ... A CRC would be a good addition to\n> help ensure the data wasn't broken by flakey drive firmware, but\n> doesn't guarantee consistency.\n\nNo, a CRC would be a good addition to compensate for sector write\nreordering, which is done both by the OS and by the drive, even for \n\"atomic\" writes.\n\nIt is not only \"flaky\" or \"cheap\" drives that re-order writes, or\nacknowledge writes as complete that have are not yet on disk. You \ncan generally assume that *any* drive does it unless you have \nspecifically turned that off. The assumption is that if you care,\nyou have a UPS, or at least have configured the hardware yourself\nto meet your needs.\n\nIt is purely wishful thinking to believe otherwise.\n\nNathan Myers\[email protected]\n",
"msg_date": "Wed, 6 Dec 2000 11:08:00 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "CRCs (was: beta testing version)"
},
{
"msg_contents": "On Wed, Dec 06, 2000 at 12:29:00PM +0100, Zeugswetter Andreas SB wrote:\n> \n> > Why should we do this? I'm not going to replay parts individually,\n> > I'm going to write entire pages to OS cache and than apply changes\n> > to them. Recovery is considered as succeeded after server is ensured\n> > that all applyed changes are on the disk. In the case of crash\n> > during recovery we'll replay entire game.\n>\n> Yes, but there would need to be a way to verify the last page or\n> record from txlog when running on crap hardware. The point was, that\n> crap hardware writes our 8k pages in any order (e.g. 512 bytes from\n> the end, then 512 bytes from front ...), and does not even notice,\n> that it only wrote part of one such 512 byte block when reading it\n> back after a crash. But, I actually doubt that this is true for all\n> but the most crappy hardware.\n\nBy this standard all hardware is crap. The behavior Andreas describes \nas \"crappy\" is the normal behavior of almost all drives in production, \nincluding the ones in your machine.\n\nFurthermore, OSes re-order \"atomic\" writes into file systems (i.e. \nnot raw partitions) to match partition block order, which often doesn't \nmatch the file block order. Hence, the OSes are \"crappy\" too.\n\nWishful thinking is a poor substitute for real atomicity. Block\nCRCs can at least verify complete writes to reasonable confidence, \nif not ensure them.\n\nNathan Myers\nncm\n\n",
"msg_date": "Wed, 6 Dec 2000 11:18:56 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "CRCs (was: beta testing version)"
},
{
"msg_contents": "Bruce Guenter wrote:\n> \n> - Assume that a CRC is a guarantee. A CRC would be a good addition to\n> help ensure the data wasn't broken by flakey drive firmware, but\n> doesn't guarantee consistency.\n\nEven a CRC per transaction (it could be a nice END record) ?\n\nBye!\n\n-- \n Daniele\n\n-------------------------------------------------------------------------------\n Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n-------------------------------------------------------------------------------\n",
"msg_date": "Wed, 06 Dec 2000 23:13:33 +0000",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: beta testing version"
},
{
"msg_contents": "On Wed, Dec 06, 2000 at 11:08:00AM -0800, Nathan Myers wrote:\n> On Wed, Dec 06, 2000 at 11:49:10AM -0600, Bruce Guenter wrote:\n> > On Wed, Dec 06, 2000 at 11:15:26AM -0500, Tom Lane wrote:\n> > > How exactly *do* we determine where the end of the valid log data is,\n> > > anyway?\n> > \n> > I don't know how pgsql does it, but the only safe way I know of is to\n> > include an \"end\" marker after each record. When writing to the log,\n> > append the records after the last end marker, ending with another end\n> > marker, and fdatasync the log. Then overwrite the previous end marker\n> > to indicate it's not the end of the log any more and fdatasync again.\n> >\n> > To ensure that it is written atomically, the end marker must not cross a\n> > hardware sector boundary (typically 512 bytes). This can be trivially\n> > guaranteed by making the marker a single byte.\n> \n> An \"end\" marker is not sufficient, unless all writes are done in\n> one-sector units with an fsync between, and the drive buffering \n> is turned off.\n\nThat's why an end marker must follow all valid records. When you write\nrecords, you don't touch the marker, and add an end marker to the end of\nthe records you've written. After writing and syncing the records, you\nrewrite the end marker to indicate that the data following it is valid,\nand sync again. There is no state in that sequence in which partially-\nwritten data could be confused as real data, assuming either your drives\naren't doing write-back caching or you have a UPS, and fsync doesn't\nreturn until the drives return success.\n\n> For larger writes the OS will re-order the writes. \n> Most drives will re-order them too, even if the OS doesn't.\n\nI'm well aware of that.\n\n> > Any other way I've seen discussed (here and elsewhere) either\n> > - Assume that a CRC is a guarantee. \n> \n> We are already assuming a CRC is a guarantee. \n>\n> The drive computes a CRC for each sector, and if the CRC is OK the \n> drive is happy. CRC errors within the drive are quite frequent, and \n> the drive re-reads when a bad CRC comes up.\n\nThe kind of data failures that a CRC is guaranteed to catch (N-bit\nerrors) are almost precisely those that a mis-read on a hardware sector\nwould cause.\n\n> > ... A CRC would be a good addition to\n> > help ensure the data wasn't broken by flakey drive firmware, but\n> > doesn't guarantee consistency.\n> No, a CRC would be a good addition to compensate for sector write\n> reordering, which is done both by the OS and by the drive, even for \n> \"atomic\" writes.\n\nBut it doesn't guarantee consistency, even in that case. There is a\npossibility (however small) that the random data that was located in the\nsectors before the write will match the CRC.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Wed, 6 Dec 2000 18:53:37 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs (was: beta testing version)"
},
{
"msg_contents": "On Wed, Dec 06, 2000 at 11:13:33PM +0000, Daniele Orlandi wrote:\n> Bruce Guenter wrote:\n> > - Assume that a CRC is a guarantee. A CRC would be a good addition to\n> > help ensure the data wasn't broken by flakey drive firmware, but\n> > doesn't guarantee consistency.\n> Even a CRC per transaction (it could be a nice END record) ?\n\nCRCs are designed to catch N-bit errors (ie N bits in a row with their\nvalues flipped). N is (IIRC) the number of bits in the CRC minus one.\nSo, a 32-bit CRC can catch all 31-bit errors. That's the only guarantee\na CRC gives. Everything else has a 1 in 2^32-1 chance of producing the\nsame CRC as the original data. That's pretty good odds, but not a\nguarantee.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Wed, 6 Dec 2000 18:56:04 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: beta testing version"
},
{
"msg_contents": "\n> CRCs are designed to catch N-bit errors (ie N bits in a row with their\n> values flipped). N is (IIRC) the number of bits in the CRC minus one.\n> So, a 32-bit CRC can catch all 31-bit errors. That's the only guarantee\n> a CRC gives. Everything else has a 1 in 2^32-1 chance of producing the\n> same CRC as the original data. That's pretty good odds, but not a\n> guarantee.\n\nYou've got a higher chance of undetected hard drive errors, memory errors,\nsolar flares, etc. than a CRC of that quality failing...\n\nChris\n\n",
"msg_date": "Thu, 7 Dec 2000 10:52:30 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: beta testing version"
},
{
"msg_contents": "On Wed, Dec 06, 2000 at 06:53:37PM -0600, Bruce Guenter wrote:\n> On Wed, Dec 06, 2000 at 11:08:00AM -0800, Nathan Myers wrote:\n> > On Wed, Dec 06, 2000 at 11:49:10AM -0600, Bruce Guenter wrote:\n> > > \n> > > I don't know how pgsql does it, but the only safe way I know of\n> > > is to include an \"end\" marker after each record.\n> > \n> > An \"end\" marker is not sufficient, unless all writes are done in\n> > one-sector units with an fsync between, and the drive buffering \n> > is turned off.\n> \n> That's why an end marker must follow all valid records. When you write\n> records, you don't touch the marker, and add an end marker to the end of\n> the records you've written. After writing and syncing the records, you\n> rewrite the end marker to indicate that the data following it is valid,\n> and sync again. There is no state in that sequence in which partially-\n> written data could be confused as real data, assuming either your drives\n> aren't doing write-back caching or you have a UPS, and fsync doesn't\n> return until the drives return success.\n\nThat requires an extra out-of-sequence write. \n\n> > > Any other way I've seen discussed (here and elsewhere) either\n> > > - Assume that a CRC is a guarantee. \n> > \n> > We are already assuming a CRC is a guarantee. \n> >\n> > The drive computes a CRC for each sector, and if the CRC is OK the \n> > drive is happy. CRC errors within the drive are quite frequent, and \n> > the drive re-reads when a bad CRC comes up.\n> \n> The kind of data failures that a CRC is guaranteed to catch (N-bit\n> errors) are almost precisely those that a mis-read on a hardware sector\n> would cause.\n\nThey catch a single mis-read, but not necessarily the quite likely\ndouble mis-read.\n\n> > > ... A CRC would be a good addition to\n> > > help ensure the data wasn't broken by flakey drive firmware, but\n> > > doesn't guarantee consistency.\n> > No, a CRC would be a good addition to compensate for sector write\n> > reordering, which is done both by the OS and by the drive, even for \n> > \"atomic\" writes.\n> \n> But it doesn't guarantee consistency, even in that case. There is a\n> possibility (however small) that the random data that was located in \n> the sectors before the write will match the CRC.\n\nGenerally, there are no guarantees, only reasonable expectations. A \n64-bit CRC would give sufficient confidence without the out-of-sequence\nwrite, and also detect corruption from any source including power outage.\n\n(I'd also like to see CRCs on all the table blocks as well; is there\na place to put them?)\n\nNathan Myers\[email protected]\n\n",
"msg_date": "Thu, 7 Dec 2000 12:25:41 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRCs (was: beta testing version)"
},
{
"msg_contents": "On Thu, Dec 07, 2000 at 12:25:41PM -0800, Nathan Myers wrote:\n> That requires an extra out-of-sequence write. \n\nAyup!\n\n> Generally, there are no guarantees, only reasonable expectations.\n\nI would differ, but that's irrelevant.\n\n> A 64-bit CRC would give sufficient confidence...\n\nThis is part of what I was getting at, in a roundabout way. If you use\na CRC, hash, or any other kind of non-trivial check code, you have a\ncertain level of confidence in the data, but not a guarantee. If you\ndecide, based on your expert opinions, that a 32 or 64 bit CRC or hash\ngives you an adequate level of confidence in the event of a crash, then\nI'll be satisfied, but don't call it a guarantee.\n\nThem's small nits we're picking at, though.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Thu, 7 Dec 2000 15:58:03 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs (was: beta testing version)"
},
{
"msg_contents": "Bruce Guenter wrote:\n> \n> CRCs are designed to catch N-bit errors (ie N bits in a row with their\n> values flipped). N is (IIRC) the number of bits in the CRC minus one.\n> So, a 32-bit CRC can catch all 31-bit errors. That's the only guarantee\n> a CRC gives. Everything else has a 1 in 2^32-1 chance of producing the\n> same CRC as the original data. That's pretty good odds, but not a\n> guarantee.\n\nNothing is a guarante. Everywhere you have a non-null probability of\nfailure. Memories of any kind doesn't give you a *guarantee* that the\ndata you read is exactly the one you wrote. CPUs and transmsision lines\nare subject to errors too.\n\nYou only may be guaranteed that the overall proabability of your system\nis under a specified level. When the level is low enought you usually\nsuppose the absence of errors guaranteed.\n\nWith CRC32 you considerably reduce p, and given the frequency when CRC\nwould need to reveal an error, I would consider it enought.\n\nBye!\n\n-- \n Daniele\n\n-------------------------------------------------------------------------------\n Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n-------------------------------------------------------------------------------\n",
"msg_date": "Fri, 08 Dec 2000 19:37:02 +0000",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: beta testing version"
}
]
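A quantitative footnote to this exchange (standard CRC arithmetic, not from the thread): beyond its guaranteed burst-error coverage, an n-bit CRC misses a random corruption with probability roughly

    P(\text{undetected}) \approx 2^{-n}, \qquad 2^{-32} \approx 2.3 \times 10^{-10}, \qquad 2^{-64} \approx 5.4 \times 10^{-20}

which is the precise sense in which a 64-bit CRC buys "sufficient confidence" while, as Bruce Guenter insists, never a guarantee.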
[
{
"msg_contents": "I trying to implement a custom\ndatatype and after I have read\nthe docs I understand that I should\nuse large objects is the data type\nis to bigger than 8K.\n\nI have one question about using\nBLOBs in my intended work:\n-is there a way to automate the\nthe creation of new object when\na new row is inserted? (not by\nimpoting and exetnal file)\n\nThanks.\n",
"msg_date": "Wed, 06 Dec 2000 15:06:31 +0200",
"msg_from": "Felix Lungu <[email protected]>",
"msg_from_op": true,
"msg_subject": "BLOB Help!"
}
]
[
{
"msg_contents": "Hi,\n\nI create a new type that work well, but when\ni use the cast function on my new type like this i have an error:\n(other fucntion like substring don't work too )\n\nselect bare_code::text from ean_article;\nERROR: Cannot cast type 'ean13' to 'text'\n\nWhat can i do in my new type in order to use cast ???\n\nBest regards,\n\nPEJAC Pascal\n",
"msg_date": "Wed, 6 Dec 2000 16:02:52 +0100 (CET)",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "CAST ON NEW TYPE"
},
{
"msg_contents": "<[email protected]> writes:\n> select bare_code::text from ean_article;\n> ERROR: Cannot cast type 'ean13' to 'text'\n\n> What can i do in my new type in order to use cast ???\n\nYou have to provide a conversion function, declared like\n\n\ttext(ean13) returns text\n\nThe cast syntax is just an alias for invoking this function.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Dec 2000 10:34:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CAST ON NEW TYPE "
},
{
"msg_contents": "> I create a new type that work well, but when\n> i use the cast function on my new type like this i have an error:\n> (other fucntion like substring don't work too )\n> select bare_code::text from ean_article;\n> ERROR: Cannot cast type 'ean13' to 'text'\n> What can i do in my new type in order to use cast ???\n\nYou need to define one more function as text(ean13), which PostgreSQL\nwill assume is intended for casting. The actual implementation will be\ncalled something different, say ean13_text(), so that the entry point is\nunique, and then you will define it as text(ean13) in the pg_proc\ncatalog or the CREATE FUNCTION statement. There are examples of how to\ndo the actual code in the utils/adt directory for other types; I was\nlooking at the code for timestamp->text the other day.\n\nGood luck.\n\n - Thomas\n",
"msg_date": "Wed, 06 Dec 2000 15:55:03 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CAST ON NEW TYPE"
}
]
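A sketch of the conversion function both replies call for, in 7.0-era CREATE FUNCTION syntax (the shared-library path and the C entry point ean13_text are assumptions about the reporter's extension, not known code):

    CREATE FUNCTION text(ean13) RETURNS text
        AS '/usr/local/pgsql/lib/ean13.so', 'ean13_text'
        LANGUAGE 'c';

    -- after which the cast is just sugar for the function call:
    SELECT bare_code::text FROM ean_article;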
[
{
"msg_contents": "In current CVS \nutils/elog.h requires miscadmin.h which is doesn't installed.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 6 Dec 2000 19:17:31 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "CVS: miscadmin.h is missing"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> utils/elog.h requires miscadmin.h which is doesn't installed.\n\nWe've never installed miscadmin.h. I think it's a mistake to have\nelog.h include it ... will change.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Dec 2000 12:20:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CVS: miscadmin.h is missing "
},
{
"msg_contents": "On Wed, 6 Dec 2000, Tom Lane wrote:\n\n> Date: Wed, 06 Dec 2000 12:20:28 -0500\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] CVS: miscadmin.h is missing \n> \n> Oleg Bartunov <[email protected]> writes:\n> > utils/elog.h requires miscadmin.h which is doesn't installed.\n> \n> We've never installed miscadmin.h. I think it's a mistake to have\n> elog.h include it ... will change.\n> \n\nNever mind about it. I just tried to install pgbench and got\nerror message. Interesting, that I had no problem compiling with\n7.X releases.\n\n\tRegards,\n\t\tOleg\n\n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 6 Dec 2000 21:11:08 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CVS: miscadmin.h is missing "
}
]
[
{
"msg_contents": "\nHello,\n\n I was just experimenting, trying to see if I could find a function that\nwould format a numeric value like 'money' with Postgres 7.0.2. Here's\nwhat happened:\n\n######\ncascade=> select cash_out(2);\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n######\n\nThe same thing happened with Postgres 6.5.3. Here's my full version:\nPostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.96 \n\nI'm sure if what I tried is even valid input, but I'm guessing this is\nnot a desired result in any case. :) \n\nThanks for the great software and good luck with this!\n\nA frequent Postgres user,\n\n -mark\n\npersonal website } Summersault Website Development\nhttp://mark.stosberg.com/ { http://www.summersault.com/\n",
"msg_date": "Wed, 06 Dec 2000 15:06:07 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "select cash_out('2'); crashes backend on 7.0.2"
},
{
"msg_contents": "I can confirm this is crashes in 7.1 too.\n\n> \n> Hello,\n> \n> I was just experimenting, trying to see if I could find a function that\n> would format a numeric value like 'money' with Postgres 7.0.2. Here's\n> what happened:\n> \n> ######\n> cascade=> select cash_out(2);\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n> ######\n> \n> The same thing happened with Postgres 6.5.3. Here's my full version:\n> PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.96 \n> \n> I'm sure if what I tried is even valid input, but I'm guessing this is\n> not a desired result in any case. :) \n> \n> Thanks for the great software and good luck with this!\n> \n> A frequent Postgres user,\n> \n> -mark\n> \n> personal website } Summersault Website Development\n> http://mark.stosberg.com/ { http://www.summersault.com/\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Dec 2000 11:22:29 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select cash_out('2'); crashes backend on 7.0.2"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> cascade=> select cash_out(2);\n>> pqReadData() -- backend closed the channel unexpectedly.\n\n> I can confirm this is crashes in 7.1 too.\n\nYou can get this sort of result with almost any input or output function\n:-(. The problem is that they're mostly misdeclared to take type\n\"opaque\", which for no good reason is also considered to mean \"accepts\nany input type whatever\", which means you can pass a value of any type\nat all to an input or output function.\n\nThere have been some past discussions about introducing a little more\nrigor into the type system's handling of I/O functions, but it ain't\ngonna happen for 7.1 ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Dec 2000 15:25:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select cash_out('2'); crashes backend on 7.0.2 "
},
{
"msg_contents": "Added to TODO:\n\n\t* SELECT cash_out(2) crashes because of opaque \n\n> Bruce Momjian <[email protected]> writes:\n> >> cascade=> select cash_out(2);\n> >> pqReadData() -- backend closed the channel unexpectedly.\n> \n> > I can confirm this is crashes in 7.1 too.\n> \n> You can get this sort of result with almost any input or output function\n> :-(. The problem is that they're mostly misdeclared to take type\n> \"opaque\", which for no good reason is also considered to mean \"accepts\n> any input type whatever\", which means you can pass a value of any type\n> at all to an input or output function.\n> \n> There have been some past discussions about introducing a little more\n> rigor into the type system's handling of I/O functions, but it ain't\n> gonna happen for 7.1 ...\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Dec 2000 15:28:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select cash_out('2'); crashes backend on 7.0.2"
},
{
"msg_contents": "\nSeems this still fails. Hackers, does the crash cause other backends to\nabort?\n\n> \n> Hello,\n> \n> I was just experimenting, trying to see if I could find a function that\n> would format a numeric value like 'money' with Postgres 7.0.2. Here's\n> what happened:\n> \n> ######\n> cascade=> select cash_out(2);\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n> ######\n> \n> The same thing happened with Postgres 6.5.3. Here's my full version:\n> PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.96 \n> \n> I'm sure if what I tried is even valid input, but I'm guessing this is\n> not a desired result in any case. :) \n> \n> Thanks for the great software and good luck with this!\n> \n> A frequent Postgres user,\n> \n> -mark\n> \n> personal website } Summersault Website Development\n> http://mark.stosberg.com/ { http://www.summersault.com/\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Jan 2001 22:49:16 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select cash_out('2'); crashes backend on 7.0.2"
},
{
"msg_contents": "Folks, I see we have many problems here:\n\t\n\ttest=> select textout(2);\n\tpqReadData() -- backend closed the channel unexpectedly.\n\t This probably means the backend terminated abnormally\n\t before or while processing the request.\n\tThe connection to the server was lost. Attempting reset: NOTICE: \n\tMessage from PostgreSQL backend:\n\t The Postmaster has informed me that some other backend died\n\t abnormally and possibly corrupted shared memory.\n\t I have rolled back the current transaction and am going to\n\t terminate your database system connection and exit.\n\t Please reconnect to the database system and repeat your query.\n\tFailed.\n\nand the server log shows:\n\n\tReinitializing shared memory and semaphores\n\nSeems like a pretty serious denial of service attack to me. It restarts\nall running backends.\n\nI have aligned the error messages, at least.\n\n\n> \n> Hello,\n> \n> I was just experimenting, trying to see if I could find a function that\n> would format a numeric value like 'money' with Postgres 7.0.2. Here's\n> what happened:\n> \n> ######\n> cascade=> select cash_out(2);\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n> ######\n> \n> The same thing happened with Postgres 6.5.3. Here's my full version:\n> PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.96 \n> \n> I'm sure if what I tried is even valid input, but I'm guessing this is\n> not a desired result in any case. :) \n> \n> Thanks for the great software and good luck with this!\n> \n> A frequent Postgres user,\n> \n> -mark\n> \n> personal website } Summersault Website Development\n> http://mark.stosberg.com/ { http://www.summersault.com/\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 09:34:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] select cash_out('2'); crashes backend on 7.0.2"
}
]
|
[
{
"msg_contents": "Well, no one seemed very unhappy at the idea of changing the file format\nfor binary COPY, so here is a proposal.\n\nThe objectives of this change are:\n\n1. Get rid of the tuple count at the front of the file. This requires\nan extra pass over the relation, which is a lot more trouble than the\ncount is worth. Use an explicit EOF marker instead.\n2. Send fields of a tuple individually, instead of dumping out raw tuples\n(complete with alignment padding and so forth) as is currently done.\nThis is mainly to simplify TOAST-related processing.\n3. Make the format somewhat self-identifying, so that the reader has at\nleast some chance of detecting it when the data doesn't match the table\nit's supposed to be loaded into.\n\nThe proposed format consists of a file header, zero or more tuples, and a\nfile trailer.\n\nThe file header will just be a 32-bit magic number; it's present so that a\nreader can reject non-COPY-binary input data, as well as detect problems\nlike incompatible endianness. (We could also use changes in the magic\nnumber as a flag for future format changes.)\n\nEach tuple begins with an int16 count of the number of fields in the\ntuple. (Presently, all tuples in a table will have the same count, but\nthat might not always be true.) Then, repeated for each field in the\ntuple, there is an int16 typlen word possibly followed by field data.\nThe typlen field is interpreted thus:\n\n\tZero\t\tField is NULL. No data follows.\n\n\t> 0\t\tField is a fixed-length datatype. Exactly N\n\t\t\tbytes of data follow the typlen word.\n\n\t-1\t\tField is a varlena datatype. The next four\n\t\t\tbytes are the varlena header, which contains\n\t\t\tthe total value length including itself.\n\n\t< -1\t\tReserved for future use.\n\nFor non-NULL fields, the reader can check that the typlen matches the\nexpected typlen for the destination column. This provides a simple\nbut very useful check that the data is as expected.\n\nThere is no alignment padding or any other extra data between fields.\nNote also that the format does not distinguish whether a datatype is\npass-by-reference or pass-by-value. Both of these provisions are\ndeliberate: they might help improve portability of the files (although\nof course endianness and floating-point-format issues can still keep\nyou from moving a binary file across machines).\n\nThe file trailer consists of an int16 word containing -1. This is\neasily distinguished from a tuple's field-count word.\n\nA reader should report an error if a field-count word is neither -1\nnor the expected number of columns. This provides a pretty strong\ncheck against somehow getting out of sync with the data.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Dec 2000 15:26:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "COPY BINARY file format proposal"
},
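Since the per-tuple layout above is easiest to verify against working code, here is a minimal C++ sketch of a reader for it (int16 field count, int16 typlen per field, -1 trailer). Only the stream layout comes from the proposal; the function names, the error handling, and the assumption that the file's byte order matches the reader's are illustrative, and the OID handling added in the follow-up message is omitted.

#include <cstdint>
#include <cstdio>
#include <stdexcept>
#include <vector>

// Read one integer in the file's native byte order (assumed here to
// match the reader's, per the proposal's endianness caveat).
template <typename T>
void read_raw(FILE *f, T &out) {
    if (fread(&out, sizeof(T), 1, f) != 1)
        throw std::runtime_error("unexpected end of COPY data");
}

// Consume one tuple; returns false at the -1 file trailer.
bool read_tuple(FILE *f, int16_t expected_cols) {
    int16_t nfields;
    read_raw(f, nfields);
    if (nfields == -1)
        return false;                        // file trailer
    if (nfields != expected_cols)
        throw std::runtime_error("field count does not match table");

    for (int16_t i = 0; i < nfields; ++i) {
        int16_t typlen;
        read_raw(f, typlen);
        if (typlen == 0)
            continue;                        // NULL field, no data follows
        std::vector<char> value;
        if (typlen > 0) {                    // fixed-length datatype
            value.resize(typlen);
        } else if (typlen == -1) {           // varlena: int32 length includes itself
            int32_t vl_len;
            read_raw(f, vl_len);
            if (vl_len < (int32_t) sizeof(int32_t))
                throw std::runtime_error("bad varlena header");
            value.resize(vl_len - sizeof(int32_t));
        } else {
            throw std::runtime_error("reserved typlen value");
        }
        if (!value.empty() &&
            fread(value.data(), 1, value.size(), f) != value.size())
            throw std::runtime_error("unexpected end of COPY data");
        // ... check typlen against the destination column, then store value ...
    }
    return true;
}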
{
"msg_contents": "Grumble, I forgot about COPY WITH OIDS. Amend that proposal as follows:\n\n... We should use two different\nmagic numbers depending on whether OIDs are included in the dump or not.\n\nIf OIDs are included in the dump, the OID field immediately follows the\nfield-count word. It is a normal field except that it's not included\nin the field-count. In particular it has a typlen --- this will allow\nhandling of 4-byte vs 8-byte OIDs without too much pain, and will allow\nOIDs to be shown as NULL if we someday allow OIDs to be optional.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Dec 2000 15:36:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: COPY BINARY file format proposal "
},
{
"msg_contents": "At 15:36 6/12/00 -0500, Tom Lane wrote:\n>Grumble, I forgot about COPY WITH OIDS. Amend that proposal as follows:\n>\n>... We should use two different\n>magic numbers depending on whether OIDs are included in the dump or not.\n\nI'd prefer to see a single magic number for all binary COPY output, then a\nfew bytes of header including a version number, and flags to indicate\nendianness, OIDs etc. It seems a lot cleaner than overloading the magic\nnumber.\n\nAlso, IIRC part of the problem with text-based COPY is that we can't\nspecify field order (I think this affectes dumping the regression DB).\nWould it be possible to add the ability to (a) specify field order, and (b)\ndump a subset of fields?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 07 Dec 2000 12:33:40 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> I'd prefer to see a single magic number for all binary COPY output, then a\n> few bytes of header including a version number, and flags to indicate\n> endianness, OIDs etc. It seems a lot cleaner than overloading the magic\n> number.\n\nOK, we can do it that way. I'm still going to pick a magic number that\nlooks different depending on endianness, however ;-).\n\nWhat might we need in the header besides a version indicator and a\nhas-OIDs flag?\n\n> Also, IIRC part of the problem with text-based COPY is that we can't\n> specify field order (I think this affectes dumping the regression DB).\n> Would it be possible to add the ability to (a) specify field order, and (b)\n> dump a subset of fields?\n\nThis is not an issue for the file format, but for the COPY command itself.\nAnd considering we're in beta now (or as soon as Marc gets the tarball\nmade, anyway) I'm going to call that a new feature and say it should\nwait for 7.2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Dec 2000 20:40:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
{
"msg_contents": "At 20:40 6/12/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> I'd prefer to see a single magic number for all binary COPY output, then a\n>> few bytes of header including a version number, and flags to indicate\n>> endianness, OIDs etc. It seems a lot cleaner than overloading the magic\n>> number.\n>\n>OK, we can do it that way. I'm still going to pick a magic number that\n>looks different depending on endianness, however ;-).\n\nWhat does the smiley mean in this context? I hope you're not serious...or\nif you are, I'd be interested to know why.\n\n\n>What might we need in the header besides a version indicator and a\n>has-OIDs flag?\n\nJust of the top of my head, some things that could be there in the future: \n\n- floating point representation (for portability)\n\n- flag for compressed or uncompressed toast fields (I assume you dump them\nuncompressed?)\n\n- version number may be important if we dump a subset of fields (ie. we'll\nneed to store the field names somewhere).\n\nI really have no idea what might be there, but it seems prudent to do it\nthis way.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 07 Dec 2000 13:03:33 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n>> OK, we can do it that way. I'm still going to pick a magic number that\n>> looks different depending on endianness, however ;-).\n\n> What does the smiley mean in this context?\n\nJust thinking that the only way an endianness flag inside the header\nwould be useful is if we pick a magic number that's a bytewise\npalindrome.\n\n> - floating point representation (for portability)\n\nSpecified how? (For that matter, determined how?)\n\n> - flag for compressed or uncompressed toast fields (I assume you dump them\n> uncompressed?)\n\nYes, I want COPY to force 'em to uncompressed so as to avoid problems\nwith cross-version changes of compression algorithm. (Right at the\nmoment it gets that wrong.)\n\n> - version number may be important if we dump a subset of fields (ie. we'll\n> need to store the field names somewhere).\n\nNo we don't. ASCII COPY format doesn't store field names either ... at\nleast not as part of the data stream ... and should not IMHO. Don't you\nwant to be able to reload into a table that you've changed the column\nnames of?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Dec 2000 21:12:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
{
"msg_contents": "At 21:12 6/12/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>>> OK, we can do it that way. I'm still going to pick a magic number that\n>>> looks different depending on endianness, however ;-).\n>\n>> What does the smiley mean in this context?\n>\n>Just thinking that the only way an endianness flag inside the header\n>would be useful is if we pick a magic number that's a bytewise\n>palindrome.\n\nYou could just read the 1st, 2nd, 3rd, etc bytes and require that they be\n'P', 'G', 'C', 'P', 'Y' or some such. I *think* reading five bytes and\ndoing a strcmp works...ie. don't rely on the integer value, use a string.\n\n\n>> - floating point representation (for portability)\n>\n>Specified how? (For that matter, determined how?)\n\nI'd recommend a crystal ball. You did ask a question about the future ;-}.\n\n\n>> - flag for compressed or uncompressed toast fields (I assume you dump them\n>> uncompressed?)\n>\n>Yes, I want COPY to force 'em to uncompressed so as to avoid problems\n>with cross-version changes of compression algorithm. (Right at the\n>moment it gets that wrong.)\n\nSounds reasonable, but there could be an advantage in allowing a binary\ncompressed dump for short-term work.\n\n\n>> - version number may be important if we dump a subset of fields (ie. we'll\n>> need to store the field names somewhere).\n>\n>No we don't. ASCII COPY format doesn't store field names either ... at\n>least not as part of the data stream ... and should not IMHO. Don't you\n>want to be able to reload into a table that you've changed the column\n>names of?\n\nThis is essential if we ever allow subsets of columns - even if it is only\nfor displaying information to the user. If I dump 5 out of 7 columns then\nrename half of them, I'd say I'm asking for trouble. At least with the\nnames available, you have a chance of working out what goes where. But\nagain, without copy-a-subset-of-columns, this also requires a crystal ball.\n\n\nIt all gets back to whether it's a good idea to overload a magic number. \n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 07 Dec 2000 13:34:03 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n>> Just thinking that the only way an endianness flag inside the header\n>> would be useful is if we pick a magic number that's a bytewise\n>> palindrome.\n\n> You could just read the 1st, 2nd, 3rd, etc bytes and require that they be\n> 'P', 'G', 'C', 'P', 'Y' or some such. I *think* reading five bytes and\n> doing a strcmp works...ie. don't rely on the integer value, use a string.\n\nOh. We could use a string instead of an integer, I suppose, although\nI'm not sure I see the point for what's basically a binary format.\n\nGiven all that, here is a proposed spec for the header:\n\nFirst 8 bytes: signature, ASCII \"PGBCOPY\\0\" --- note that the null is a\nrequired part of the signature. (This is to catch files that have been\nmunged by a non-8-bit-clean transfer.)\n\nNext 4 bytes: integer layout field. This consists of the int32 constant\n0x0A820D0A expressed in the source machine's endianness. (Again, value\nchosen with malice aforethought, to catch files munged by things like\nDOS/Unix newline conversion or high-bit-stripping.) Potentially, a\nreader could engage in byte-flipping of subsequent fields if the wrong\nbyte order is detected here.\n\nNext 4 bytes: version number, currently 1 (expressed in source machine's\nendianness, as are all subsequent integer fields). A reader should\nabort if it does not recognize the version number.\n\nNext 4 bytes: length of remainder of header, not including self. In\nthe initial version this will be zero, and the first tuple follows\nimmediately. Future changes to the format might allow additional data\nto be present in the header. A reader should silently ignore any header\nextension data it does not know what to do with.\n\nThis allows for both backwards-compatible header additions (extend the\nheader without changing the version number) and non-backwards-compatible\nchanges (bump the version number).\n\nSince we don't yet know what we might do about the issue of\nfloating-point format, I left that out of the spec. It can be added to\nthe header extension area when and if we figure out how to do it.\n\nLikewise, addons such as column names are also punted until later.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2000 14:28:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
{
"msg_contents": "At 14:28 7/12/00 -0500, Tom Lane wrote:\n>\n>Next 4 bytes: version number, currently 1 (expressed in source machine's\n>endianness\n\nI don't want to continue being picky, but you could just use 4 bytes for a\nmaj-min-rev-patch version number (in that order), and avoid the endian\nissues by reading and writing each byte. No big deal, though.\n\n\n>This allows for both backwards-compatible header additions (extend the\n>header without changing the version number) and non-backwards-compatible\n>changes (bump the version number).\n\nThat's where the rev & patch levels help if you adopt the above version\nnumbering - 1.0-** should should all be compatibile, 1.1 should be able to\nread <= 1.1-**, 1.0-** should not be expected to read 1.1-** etc.\n\n\n>\n>Comments?\n>\n\nSounds reasonable even without the above suggestions.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 08 Dec 2000 16:46:59 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> I don't want to continue being picky, but you could just use 4 bytes for a\n> maj-min-rev-patch version number (in that order), and avoid the endian\n> issues by reading and writing each byte. No big deal, though.\n\nWell, the thing is that we need to protect the contents of\ndatatype-specific structures. If it were just a matter of byte-flipping\nthe counts and lengths defined by the (proposed) file format, I'd have\nspecified that we write 'em all in network byte order and be done with\nit. But knowing the internal structure of every datatype in the system\nis a very different game, and I don't want to try to play that game ...\nat least not yet. So the proposal is just to identify the endianness\nthat the file is being written with. Recovering the data on a machine\nof different endianness is a project for future data archeologists.\n\n>> This allows for both backwards-compatible header additions (extend the\n>> header without changing the version number) and non-backwards-compatible\n>> changes (bump the version number).\n\n> That's where the rev & patch levels help if you adopt the above version\n> numbering - 1.0-** should should all be compatibile, 1.1 should be able to\n> read <= 1.1-**, 1.0-** should not be expected to read 1.1-** etc.\n\nTell you the truth, I don't believe in file-format version numbers at\nall. My experience with such things is that they defeat portability\nrather than promote it, because readers tend to reject files that they\ncould have actually have read as a result of insignificant version number\nissues. You can read all about my view of this issue in the PNG spec\n(RFC 2083, esp section 12.13) --- the versioning philosophy described\nthere is largely yours truly's.\n\nI will not complain about sticking a \"version 1.0\" field into a format\nwhen there is no real intention of changing it in the future ... but\nassigning deep significance to major/minor numbers, or something like\nthat, is wrongheaded. You need a much finer-grained view of\ncompatibility issues than that if you want to achieve anything much\nin cross-version compatibility. Feature-based versioning, like PNG's\nnotion of critical vs. ancillary chunks, is the thing you need for\nthat. I didn't bring up the issue in this morning's proposal --- but\nif we ever do add stuff to the proposed extensible header, I will hold\nout for self-identifying feature-related items much like PNG chunks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 01:27:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
{
"msg_contents": "At 01:27 8/12/00 -0500, Tom Lane wrote:\n>Recovering the data on a machine\n>of different endianness is a project for future data archeologists.\n\nIt's frightening to think that in 1000 years time people will be deducing\nthings about our society from the way we stored data.\n\n\n>\n>Tell you the truth, I don't believe in file-format version numbers at\n>all...\n>(RFC 2083, esp section 12.13) --- the versioning philosophy described\n>there is largely yours truly's.\n\nSeems to be a much better approach; (non)critical chunks & chunk types are\nmuch more portable.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 08 Dec 2000 18:15:30 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
{
"msg_contents": "I wrote:\n> Next 4 bytes: integer layout field. This consists of the int32 constant\n> 0x0A820D0A expressed in the source machine's endianness. (Again, value\n> chosen with malice aforethought, to catch files munged by things like\n> DOS/Unix newline conversion or high-bit-stripping.)\n\nActually, that won't do. A little-endian machine would write 0A 0D 82\n0A which would fail to trigger newline converters that are looking for\n\\r followed by \\n (0D 0A). If we're going to take seriously the idea of\ndetecting newline transforms, then we need to incorporate the test\npattern into the fixed-byte-order signature.\n\nHow about:\n\nSignature: 12-byte sequence \"PGBCOPY\\n\\377\\r\\n\\0\" (detects newline\nreplacements, dropped nulls, dropped high bits, parity changes);\n\nInteger layout field: int32 constant 0x01020304 in source's byte order.\n\nThe rest as before.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 02:31:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
{
"msg_contents": "At 02:31 8/12/00 -0500, Tom Lane wrote:\n>\n>How about:\n>\n>Signature: 12-byte sequence \"PGBCOPY\\n\\377\\r\\n\\0\" (detects newline\n>replacements, dropped nulls, dropped high bits, parity changes);\n>\n>Integer layout field: int32 constant 0x01020304 in source's byte order.\n>\n\nHow about a CRC? ;-P\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 08 Dec 2000 22:15:52 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> How about a CRC? ;-P\n\nI take it from the smiley that you're not serious, but actually it seems\nlike it might not be a bad idea. I could see appending a CRC to each\ntuple record. Comments anyone?\n\nYou seemed to like the PNG philosophy of using feature flags rather than\na version number. Accordingly, I propose dropping the version number\nfield in favor of a flags word. (Which was needed anyway, because I had\n*again* forgotten about COPY WITH OIDS :-(.)\n\nAttached is the current state of the proposal. I haven't added a CRC\nfield but am willing to do so if that's the consensus.\n\n\t\t\tregards, tom lane\n\n\nCOPY BINARY file format proposal\n\nThe objectives of this change are:\n\n1. Get rid of the tuple count at the front of the file. This requires\nan extra pass over the relation, which is a lot more trouble than the\ncount is worth. Use an explicit EOF marker instead.\n2. Send fields of a tuple individually, instead of dumping out raw tuples\n(complete with alignment padding and so forth) as is currently done.\nThis is mainly to simplify TOAST-related processing.\n3. Make the format somewhat self-identifying, so that the reader has at\nleast some chance of detecting it when the data doesn't match the table\nit's supposed to be loaded into.\n\nThe proposed format consists of a file header, zero or more tuples, and a\nfile trailer.\n\n\nFile Header\n-----------\n\nThe proposed file header consists of 24 bytes of fixed fields, followed\nby a variable-length header extension area.\n\nSignature: 12-byte sequence \"PGBCOPY\\n\\377\\r\\n\\0\" --- note that the null\nis a required part of the signature. (The signature is designed to allow\neasy identification of files that have been munged by a non-8-bit-clean\ntransfer. The proposed signature will be changed by newline-translation\nfilters, dropped nulls, dropped high bits, or parity changes.)\n\nInteger layout field: int32 constant 0x01020304 in source's byte order.\nPotentially, a reader could engage in byte-flipping of subsequent fields\nif the wrong byte order is detected here.\n\nFlags field: a 4-byte bit mask to denote important aspects of the file\nformat. Bits are numbered from 0 (LSB) to 31 (MSB) --- note that this\nfield is stored with source's endianness, as are all subsequent integer\nfields. Bits 16-31 are reserved to denote critical file format issues;\na reader should abort if it finds an unexpected bit set in this range.\nBits 0-15 are reserved to signal backwards-compatible format issues;\na reader should simply ignore any unexpected bits set in this range.\nCurrently only one flag bit is defined, and the rest must be zero:\n\tBit 16:\tif 1, OIDs are included in the dump; if 0, not\n\nNext 4 bytes: length of remainder of header, not including self. In\nthe initial version this will be zero, and the first tuple follows\nimmediately. Future changes to the format might allow additional data\nto be present in the header. A reader should silently ignore any header\nextension data it does not know what to do with.\n\nNote that I envision the content of the header extension area as being a\nsequence of self-identifying chunks (but the specific design of same is\npostponed until we need 'em). 
The flags field is not intended to tell\nreaders what is in the extension area.\n\nThis design allows for both backwards-compatible header additions (add\nheader extension chunks, or set low-order flag bits) and non-backwards-\ncompatible changes (set high-order flag bits to signal such changes,\nand add supporting data to the extension area if needed).\n\n\nTuples\n------\n\nEach tuple begins with an int16 count of the number of fields in the\ntuple. (Presently, all tuples in a table will have the same count, but\nthat might not always be true.) Then, repeated for each field in the\ntuple, there is an int16 typlen word possibly followed by field data.\nThe typlen field is interpreted thus:\n\n\tZero\t\tField is NULL. No data follows.\n\n\t> 0\t\tField is a fixed-length datatype. Exactly N\n\t\t\tbytes of data follow the typlen word.\n\n\t-1\t\tField is a varlena datatype. The next four\n\t\t\tbytes are the varlena header, which contains\n\t\t\tthe total value length including itself.\n\n\t< -1\t\tReserved for future use.\n\nFor non-NULL fields, the reader can check that the typlen matches the\nexpected typlen for the destination column. This provides a simple\nbut very useful check that the data is as expected.\n\nThere is no alignment padding or any other extra data between fields.\nNote also that the format does not distinguish whether a datatype is\npass-by-reference or pass-by-value. Both of these provisions are\ndeliberate: they might help improve portability of the files (although\nof course endianness and floating-point-format issues can still keep\nyou from moving a binary file across machines).\n\nIf OIDs are included in the dump, the OID field immediately follows the\nfield-count word. It is a normal field except that it's not included\nin the field-count. In particular it has a typlen --- this will allow\nhandling of 4-byte vs 8-byte OIDs without too much pain, and will allow\nOIDs to be shown as NULL if we someday allow OIDs to be optional.\n\n\nFile Trailer\n------------\n\nThe file trailer consists of an int16 word containing -1. This is\neasily distinguished from a tuple's field-count word.\n\nA reader should report an error if a field-count word is neither -1\nnor the expected number of columns. This provides a pretty strong\ncheck against somehow getting out of sync with the data.\n",
"msg_date": "Fri, 08 Dec 2000 19:55:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
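A minimal C++ sketch of validating the fixed header described above, assuming the file was written with the reader's byte order. The signature bytes, the 0x01020304 layout constant, the bit-16 OID flag, and the skip-the-extension rule come from the proposal text; the function name and error strategy are mine.

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <stdexcept>

// 12-byte signature from the proposal; the string literal's implicit
// terminating null supplies the required trailing \0 byte.
static const char kSignature[12] = "PGBCOPY\n\377\r\n";

void read_copy_header(FILE *f, bool &has_oids) {
    char sig[12];
    if (fread(sig, 1, 12, f) != 12 || std::memcmp(sig, kSignature, 12) != 0)
        throw std::runtime_error("not a binary COPY file (or munged in transfer)");

    int32_t layout;
    if (fread(&layout, 4, 1, f) != 1 || layout != 0x01020304)
        throw std::runtime_error("wrong byte order (or corrupted header)");

    uint32_t flags;
    if (fread(&flags, 4, 1, f) != 1)
        throw std::runtime_error("truncated header");
    has_oids = (flags >> 16) & 1u;           // bit 16: OIDs included
    if (flags & 0xFFFE0000u)                 // any other critical bit set?
        throw std::runtime_error("file uses an unsupported critical feature");
    // bits 0-15 are backwards-compatible: silently ignored

    int32_t ext_len;                         // header extension length
    if (fread(&ext_len, 4, 1, f) != 1)
        throw std::runtime_error("truncated header");
    fseek(f, ext_len, SEEK_CUR);             // extension data is all inessential
}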
{
"msg_contents": "At 19:55 8/12/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> How about a CRC? ;-P\n>\n>I take it from the smiley that you're not serious, but actually it seems\n>like it might not be a bad idea. I could see appending a CRC to each\n>tuple record. Comments anyone?\n\nMore a matter of not thinking it was important enough to worry about, and\nnot really wanting to drag the MD5/MD4/CRC64/etc debate into this one.\nHaving said that, I think it would be a nice-to-have, like CRCs on db pages\n- in the latter case I'd really like VACCUM (or another utility) to be able\nto report 'invalid pages' on a nightly basis (or, better still, not report\nthem). \n\n\n>Attached is the current state of the proposal. I haven't added a CRC\n>field but am willing to do so if that's the consensus.\n\nSounds good to me. I'm not sure you need it on a per-tuple basis - but it\ncan't hurt, assuming it's cheap to generate. Does the backend send tuples\nor blocks of tuples? If the latter, and if CRC is expensive, then maybe 1\nCRC for each group of tuples.\n\nAlso having a CRC on a per-tupple basis will prevent getting out of sync\nwith the data, and make partial data recovery \n\n\n>Next 4 bytes: length of remainder of header, not including self. In\n>the initial version this will be zero, and the first tuple follows\n>immediately. Future changes to the format might allow additional data\n>to be present in the header. A reader should silently ignore any header\n>extension data it does not know what to do with.\n\nDon't you need to at least define how to specify non-essential chunks,\nsince the flags are not to be used to describe the header extensions. Or\nare we going to make the initial version barf when it encounters any header\nextension?\n\n\n>Tuples\n>------\n>\n>Each tuple begins with an int16 count of the number of fields in the\n>tuple. (Presently, all tuples in a table will have the same count, but\n>that might not always be true.)\n\nAnother option would be to:\n\n- dump the field sizes in the header somewhere (they will all be the same), \n- for each row output a bitmap of non-null fields, followed by the data.\n- varlena would have a -1 length in the header, an an int32 length in the row.\n\nThis is harder to read and to write, but saves space, if that is desirable.\n\n>\n>For non-NULL fields, the reader can check that the typlen matches the\n>expected typlen for the destination column. This provides a simple\n>but very useful check that the data is as expected.\n\nCRC seems like the go here...\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 09 Dec 2000 14:40:19 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> More a matter of not thinking it was important enough to worry about, and\n> not really wanting to drag the MD5/MD4/CRC64/etc debate into this one.\n\nI'd just as soon not drag that debate in here either ;-) ... but once we\nsettle on an appropriate CRC method for WAL it's easy enough to call the\nsame routine for this code.\n\n> Sounds good to me. I'm not sure you need it on a per-tuple basis - but it\n> can't hurt, assuming it's cheap to generate. Does the backend send tuples\n> or blocks of tuples? If the latter, and if CRC is expensive, then maybe 1\n> CRC for each group of tuples.\n\nExtending the CRC over multiple tuples would just complicate life,\nI think. The per-byte cost is the biggest factor, so you don't really\nsave all that much.\n\n>> Next 4 bytes: length of remainder of header, not including self. In\n>> the initial version this will be zero, and the first tuple follows\n>> immediately. Future changes to the format might allow additional data\n>> to be present in the header. A reader should silently ignore any header\n>> extension data it does not know what to do with.\n\n> Don't you need to at least define how to specify non-essential chunks,\n> since the flags are not to be used to describe the header extensions. Or\n> are we going to make the initial version barf when it encounters any header\n> extension?\n\nNo, the initial version will just silently skip the whole header\nextension; it's defined so that that's a legal behavior (everything\nin the header extension is inessential). We can come back and define\na format for the entries in the header extension area when we need some.\n\n> Another option would be to:\n> - dump the field sizes in the header somewhere (they will all be the same), \n> - for each row output a bitmap of non-null fields, followed by the data.\n> - varlena would have a -1 length in the header, an an int32 length in the row.\n\nThat would work if you are willing to assume that all the tuples indeed\nalways have the same set of fields --- you're not, for example, doing an\ninheritance-tree-walk \"COPY FROM foo*\". But Chris Bitmead still has a\ngleam in his eye about that sort of thing, so we might want it someday.\nI think it's worth a small amount of extra space to avoid that\nassumption, especially since it simplifies the code too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 22:58:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
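For concreteness, here is a C++ sketch of what the per-tuple check could look like if the WAL work were to settle on the common reflected CRC-32 (polynomial 0xEDB88320). That algorithm choice is purely a placeholder -- the thread deliberately leaves it open -- and appending the value as an extra word after the tuple's last field is likewise only one possible placement. The writer would run this over the tuple's serialized bytes; the reader recomputes and compares.

#include <cstdint>
#include <cstddef>

// Bitwise CRC-32 (reflected 0xEDB88320), kept short for clarity; a
// table-driven version would be used in practice to keep the
// per-byte cost down.
uint32_t copy_crc32(const unsigned char *buf, std::size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (std::size_t i = 0; i < len; ++i) {
        crc ^= buf[i];
        for (int k = 0; k < 8; ++k)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}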
{
"msg_contents": "> I will not complain about sticking a \"version 1.0\" field into a format\n> when there is no real intention of changing it in the future ... but\n> assigning deep significance to major/minor numbers, or something like\n\nI assume the version would be the COPY format version, not the\nPostgreSQL version.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Dec 2000 18:48:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY BINARY file format proposal"
},
{
"msg_contents": "> Also, IIRC part of the problem with text-based COPY is that we can't\n> specify field order (I think this affectes dumping the regression DB).\n> Would it be possible to add the ability to (a) specify field order, and (b)\n> dump a subset of fields?\n\nInformix does this nicely:\n\n\tUNLOAD TO \"file\"\n\tSELECT *\n\tFROM tab\n\nMerging COPY and SELECT has some real advantages. You can specify\ncolumns, parts of a table using WHERE, and even joins. Very flexible.\n\nPerhaps, if the table name is missing from COPY, we can allow a SELECT:\n\n\tCOPY TO 'file'\n\tSELECT *\n\tFROM tab\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Dec 2000 18:50:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY BINARY file format proposal"
},
{
"msg_contents": "On Thu, Dec 07, 2000 at 02:28:28PM -0500, Tom Lane wrote:\n> Given all that, here is a proposed spec for the header:\n> ...\n> Comments?\n\nI've been thinking about this. \n\nI'd like to see a timestamp for when the image was created, and a \n128-byte comment field to allow annotations, even after the fact.\n(I don't think we're pressed for space, right?) The more chances\nthat you don't have to actually load the file to find out what's\nin it, the better.\n\n(I have also suggested, in private mail, that the \"header length\" \nfield should be the length of the whole header, not just whatever \nwas added on in versions 2..n. Tom didn't agree.)\n\nNathan Myers\[email protected]\n",
"msg_date": "Sun, 10 Dec 2000 16:08:58 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY BINARY file format proposal"
},
{
"msg_contents": "On Sun, Dec 10, 2000 at 04:08:58PM -0800, Nathan Myers wrote:\n> On Thu, Dec 07, 2000 at 02:28:28PM -0500, Tom Lane wrote:\n> > Given all that, here is a proposed spec for the header:\n> > ...\n> > Comments?\n> \n> (I have also suggested, in private mail, that the \"header length\" \n> field should be the length of the whole header, not just whatever \n> was added on in versions 2..n. Tom didn't agree.)\n\nI had the same thought, but didn't get around to posting it.\n\nRoss\n",
"msg_date": "Sun, 10 Dec 2000 19:08:29 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY BINARY file format proposal"
},
{
"msg_contents": "[email protected] (Nathan Myers) writes:\n> I'd like to see a timestamp for when the image was created, and a \n> 128-byte comment field to allow annotations, even after the fact.\n\nBoth seem like reasonable options. If you don't mind, however,\nI'd suggest that they be left for inclusion as chunks in the header\nextension area, rather than nailing them down in the fixed header.\n\nThe advantage of handling a comment that way is obvious: it needn't\nbe fixed-length. As for the timestamp, handling it as an optional\nchunk would allow graceful substitution of a different timestamp\nformat, which we'll need when 2038 begins to loom.\n\nBasically what I want to do at the moment is get a minimal format\nspec nailed down for 7.1. There'll be time for neat extras later\nas long as we get it right now --- but there's not a lot of time\nfor extras before 7.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Dec 2000 20:51:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
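To make the chunk idea concrete, a hypothetical PNG-style extension chunk could be as simple as the C++ layout below: a 4-byte type tag plus a 4-byte payload length, so readers skip tags they do not recognize. None of this is settled; the message above deliberately postpones the real design, and the tag names here are invented.

#include <cstdint>

// Hypothetical header-extension chunk, PNG-style: a reader skips any
// type tag it does not recognize, using 'length' to find the next
// chunk.  Whether a tag is critical could be encoded in the tag
// itself, as PNG does with letter case.
struct HeaderChunk {
    char    type[4];    // e.g. "tIME" for a timestamp, "cMNT" for a comment
    int32_t length;     // payload bytes following this fixed part
    // payload follows
};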
{
"msg_contents": "> [email protected] (Nathan Myers) writes:\n> > I'd like to see a timestamp for when the image was created, and a \n> > 128-byte comment field to allow annotations, even after the fact.\n> \n> Both seem like reasonable options. If you don't mind, however,\n> I'd suggest that they be left for inclusion as chunks in the header\n> extension area, rather than nailing them down in the fixed header.\n> \n> The advantage of handling a comment that way is obvious: it needn't\n> be fixed-length. As for the timestamp, handling it as an optional\n> chunk would allow graceful substitution of a different timestamp\n> format, which we'll need when 2038 begins to loom.\n> \n> Basically what I want to do at the moment is get a minimal format\n> spec nailed down for 7.1. There'll be time for neat extras later\n> as long as we get it right now --- but there's not a lot of time\n> for extras before 7.1.\n\nThe have the look of creeping-featurism to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Dec 2000 21:06:46 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY BINARY file format proposal"
},
{
"msg_contents": "On Sun, Dec 10, 2000 at 08:51:52PM -0500, Tom Lane wrote:\n> [email protected] (Nathan Myers) writes:\n> > I'd like to see a timestamp for when the image was created, and a \n> > 128-byte comment field to allow annotations, even after the fact.\n> \n> Both seem like reasonable options. If you don't mind, however,\n> I'd suggest that they be left for inclusion as chunks in the header\n> extension area, rather than nailing them down in the fixed header.\n> \n> The advantage of handling a comment that way is obvious: it needn't\n> be fixed-length. As for the timestamp, handling it as an optional\n> chunk would allow graceful substitution of a different timestamp\n> format, which we'll need when 2038 begins to loom.\n\nI don't know if you get the point of the fixed-size comment field. \nThe idea was that a comment could be poked into an existing COPY \nimage, after it was written. A variable-size comment field in an\nalready-written image might leave no space to poke in anything. A \nvariable-size comment field with a required minimum size would \nsatisfy both needs, at some cost in complexity. \n \n> Basically what I want to do at the moment is get a minimal format\n> spec nailed down for 7.1. There'll be time for neat extras later\n> as long as we get it right now --- but there's not a lot of time\n> for extras before 7.1.\n\nI understand.\n\nNathan Myers\[email protected]\n",
"msg_date": "Tue, 12 Dec 2000 13:36:53 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY BINARY file format proposal"
},
{
"msg_contents": "[email protected] (Nathan Myers) writes:\n> I don't know if you get the point of the fixed-size comment field. \n> The idea was that a comment could be poked into an existing COPY \n> image, after it was written.\n\nYes, I did get the point ...\n\n> A variable-size comment field in an\n> already-written image might leave no space to poke in anything. A \n> variable-size comment field with a required minimum size would \n> satisfy both needs, at some cost in complexity. \n\nThis strikes me as a perfect argument for a variable-size field.\nIf you want to leave N bytes for a future poked-in comment, you do that.\nIf you don't, then not. Leaving 128 bytes (or any other frozen-by-the-\nfile-format number) is guaranteed to satisfy nobody.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Dec 2000 23:56:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
},
{
"msg_contents": "Tom Lane writes:\n\n> I take it from the smiley that you're not serious, but actually it seems\n> like it might not be a bad idea. I could see appending a CRC to each\n> tuple record. Comments anyone?\n\nI think I missed the point here. With CRC you typically want to detect\ndata corruption. Where's the possible source of corruption here?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 13 Dec 2000 19:15:47 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY BINARY file format proposal "
}
]
|
[
{
"msg_contents": "> Vadim, Philip changed that part of pg_dump on my advice. The idea was\n> to try to do the right thing for sequences when loading schema only or\n> data only. Analogously to loading data into a pre-existing table, we\n> felt that a data dump ought to be able to restore the current state of\n> an already-existing sequence object. Hence it should use setval().\n\nTables have many records but sequences single one.\nSo, I don't see what would be wrong if we would drop/recreate sequences\nin data-only mode - result would be the same as with setval: required\nstate of sequence. Ok, ok - sequence' OID would be different.\n\n...\n\n> My inclination is to leave pg_dump as it stands, and change \n> do_setval's error check. We could rip out the check entirely, or we\n> could modify the code so that a setval() is allowed for a sequence\n> with cache > 1 only if it's the new three-parameter form of setval().\n> That would allow pg_dump to do its thing without changing the behavior\n> for existing applications. Also, we can certainly make setval() flush\n> any cached nextval assignments that the current backend is holding, even\n> though we have no easy way to clean out cached values in other backends.\n> \n> Comments?\n\nI don't object any approach.\n\nVadim\n",
"msg_date": "Wed, 6 Dec 2000 13:18:25 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Logging for sequences "
}
]
|
[
{
"msg_contents": "\nI know it's been a while since we last discussed a possible rewrite of\nthe C++ API but I now have some time to devote to it.\n\nThe following are my ideas for implementing the C++ API:\n\nI need suggestions, additions, comments etc!\n\nAll classes will be defined in postgres namespace.\n\nThe desired usage would be as follows\n\n//create a database object\npostgres::db = database(\"host = somewhere.com user=someone\npassword=something dbname=mydb\");\n\n//synchronous connection returns TRUE upon success FALSE upon error\nif( db.connect() ) {\n string sql(\"SELECT * FROM foo\");\n postgres::result res = db.exec(sql);\n\n if( !res.error() ) {\n int numrows = res.numrows(); //used to obtain number of rows returned\n //by previous sql statement\n int numcols = res.numcols(); //used to obtain number of columns\n //returned by previous sql statement\n int field1;\n char field2[size],field3[size];\n long field4;\n\n\n//results can be obtained within a for loop using numrows, numcols or as\n//below\n while( res.getrow() ) { //increment row\n\n//result object has insert operator and array operator overloaded\n res >> field1 >> field2; //result object will return datatypes not\n //just chars\n field3 = res[0];\n field4 = res[\"fieldname\"];\n // .. do something with values ..\n }\n\n }\n else {\n cerr << res.display_error();\n }\n\n}\nelse {\n cerr << db.display_error();\n}\n\n\nAlternatively one could access db asynchronously\n\n//create a database object\npostgres::db = database(\"host = somewhere.com user=someone\npassword=something dbname=mydb\");\n\ndb.setasync(); //set asyncrhonous conection with back-end\n\t\t//setsync does the opposite\nwhile( !db.connect() && !db.error() ) {\n\n //..do something\n\n}\nif( db.error() ) {\n cerr << db.display_error();\n exit(1);\n}\n\nstring sql(\"SELECT * FROM foo\");\npostgres::result res = db.exec(sql);\n\nwhile( !res.ready() && !res.error() ) {\n\n //..do something\n\n}\n\n\nOne could also set exceptions with\n\n//create a database object\npostgres::db = database(\"host = somewhere.com user=someone\npassword=something dbname=mydb\");\n\ndb.setexception();\n\ntry {\n\n db.connect();\n string sql(\"SELECT * FROM foo\");\n postgres::result res = db.exec(sql);\n\n}\ncatch( postgres::error& err ) {\n\n //..do something\n cerr << err.display_error();\n}\n\nThe above examples make use of embedded sql being passed to the db object\nvia a string object. 
( exec will be overloaded to accept a const char * as well).\nI also envision a higher level which might prove usefull.\n\n//create a database object\npostgres::db = database(\"host = somewhere.com user=someone\npassword=something dbname=mydb\");\n\npostgres::table mytable = db.gettable(\"tablename\");\n//table can now be queried about characteristics of table\nuint64_t numcols = mytable.numcols(); //need to find the max values and return an appropriate type\nuint64_t numrows = mytable.numrows();\nsize_t colsize = mytable.colsize(\"column\");\n\n//obtain an inserter\n\npostgres::inserter myinsert mytable.getinsert();\ninserter.setcolumn(\"colname\");\n\nifstream infile;\ninfile.open(\"myfile\");\nchar data[32];\n\nwhile (infile.getline(line,sizeof(data),'\\t')) {\n inserter << data;\n}\n\nthe above can be extended to include update and delete functions as well\n\npostgres::updater myupdate mytable.getupdate();\nmyupdate.setcolumn(\"colname\");\nmyupdate.setcond(\"WHERE something = something\");\n\nifstream infile;\ninfile.open(\"myfile\");\nchar data[32];\n\nwhile (infile.getline(line,sizeof(data),'\\t')) {\n myupdate << data;\n}\n\n\n\nRandy Jonasz\nSoftware Engineer\nClick2net Inc.\nWeb: http://www.click2net.com\nPhone: (905) 271-3550\n\n\"You cannot possibly pay a philosopher what he's worth,\nbut try your best\" -- Aristotle\n\n\n",
"msg_date": "Wed, 6 Dec 2000 17:09:31 -0500 (EST)",
"msg_from": "Randy Jonasz <[email protected]>",
"msg_from_op": true,
"msg_subject": "RFC C++ Interface"
},
{
"msg_contents": "On Wed, Dec 06, 2000 at 05:09:31PM -0500, Randy Jonasz wrote:\n> \n> I know it's been a while since we last discussed a possible rewrite of\n> the C++ API but I now have some time to devote to it.\n> \n> The following are my ideas for implementing the C++ API:\n> \n> I need suggestions, additions, comments etc!\n\nIt would be helpful if the interface elements were to satisfy the STL \nrequirements on iterators and collections. Those specify a minimum \ninterface, which may be extended as needed.\n\nThe one caveat is, don't try to \"shoehorn\" any semantics into the \ninterface; anything that doesn't fit precisely should be treated \nas an extension instead, and the corresponding standard interface \nleft unimplemented.\n\nNathan Myers\[email protected]\n\n",
"msg_date": "Wed, 6 Dec 2000 17:11:41 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: RFC C++ Interface"
},
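As an illustration of the STL suggestion above, here is a rough C++ shape for a result class whose iterator meets the standard input-iterator requirements. The row and result names are hypothetical and several members are only declared; it is a sketch of the interface shape, not a working implementation.

#include <cstddef>
#include <iterator>
#include <string>

struct row {                        // hypothetical row type
    std::string operator[](int col) const;
};

class result {
public:
    class iterator {
    public:
        // standard input-iterator typedefs, so <algorithm> works
        typedef std::input_iterator_tag iterator_category;
        typedef row                     value_type;
        typedef std::ptrdiff_t          difference_type;
        typedef const row*              pointer;
        typedef const row&              reference;

        reference operator*() const { return current; }
        iterator& operator++();     // fetch the next row
        bool operator==(const iterator &o) const { return pos == o.pos; }
        bool operator!=(const iterator &o) const { return !(*this == o); }
    private:
        row current;
        int pos;
    };
    iterator begin();               // first row of the result set
    iterator end();                 // one past the last row
};

// With this in place, standard idioms apply directly:
//   for (result::iterator it = res.begin(); it != res.end(); ++it)
//       process(*it);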
{
"msg_contents": "Randy Jonasz writes:\n\n> The following are my ideas for implementing the C++ API:\n\nMy feeling is that if you want to create a new API, don't. Instead\nimmitate an existing one. For you ambitions you could either take the C++\nAPI of another RDBMS product, try to shoehorn the SQL CLI onto C++, write\na C++ version of JDBC, or something of that nature. Designing a complete\nand good API from scratch is a really painful job when done well, and\ngiven that the market for C++ combined with PostgreSQL traditionally\nhasn't exactly been huge you need all the head starts you can get.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 7 Dec 2000 17:15:07 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC C++ Interface"
},
{
"msg_contents": "I appreciate your comments and would like to respond to your concerns.\nThe API I sketched in my earlier e-mail is borrowed heavily from\nRogue Wave's dbtools.h++ library. I think it can be a very clean and\nelegant way of accessing a database.\n\nI realize the job is not a small one nor will it be easy to implement\nefficiently but I am excited about taking on the responsibility and I have\nthe support of my employer to proceed with the project.\n\nIn comparison with the current C++ API, I think a more object oriented\napproach might encourage the use of C++ with postgreSQL for software\nsolutions.\n\nHaving said all of this, can I count on your support to proceed?\n\nCheers,\n\nRandy\n\n\nHaving said all of this, can I count on support\n\nOn Thu, 7 Dec 2000, Peter Eisentraut wrote:\n\n> Randy Jonasz writes:\n>\n> > The following are my ideas for implementing the C++ API:\n>\n> My feeling is that if you want to create a new API, don't. Instead\n> immitate an existing one. For you ambitions you could either take the C++\n> API of another RDBMS product, try to shoehorn the SQL CLI onto C++, write\n> a C++ version of JDBC, or something of that nature. Designing a complete\n> and good API from scratch is a really painful job when done well, and\n> given that the market for C++ combined with PostgreSQL traditionally\n> hasn't exactly been huge you need all the head starts you can get.\n>\n> --\n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n>\n>\n>\n\nRandy Jonasz\nSoftware Engineer\nClick2net Inc.\nWeb: http://www.click2net.com\nPhone: (905) 271-3550\n\n\"You cannot possibly pay a philosopher what he's worth,\nbut try your best\" -- Aristotle\n\n",
"msg_date": "Thu, 7 Dec 2000 12:44:33 -0500 (EST)",
"msg_from": "Randy Jonasz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RFC C++ Interface"
},
{
"msg_contents": "Thanks for responding. I will definitely kepp your comments in mind.\n\nCheers,\n\nRandy\n\n\nOn Wed, 6 Dec 2000, Nathan Myers wrote:\n\n> On Wed, Dec 06, 2000 at 05:09:31PM -0500, Randy Jonasz wrote:\n> >\n> > I know it's been a while since we last discussed a possible rewrite of\n> > the C++ API but I now have some time to devote to it.\n> >\n> > The following are my ideas for implementing the C++ API:\n> >\n> > I need suggestions, additions, comments etc!\n>\n> It would be helpful if the interface elements were to satisfy the STL\n> requirements on iterators and collections. Those specify a minimum\n> interface, which may be extended as needed.\n>\n> The one caveat is, don't try to \"shoehorn\" any semantics into the\n> interface; anything that doesn't fit precisely should be treated\n> as an extension instead, and the corresponding standard interface\n> left unimplemented.\n>\n> Nathan Myers\n> [email protected]\n>\n>\n>\n\nRandy Jonasz\nSoftware Engineer\nClick2net Inc.\nWeb: http://www.click2net.com\nPhone: (905) 271-3550\n\n\"You cannot possibly pay a philosopher what he's worth,\nbut try your best\" -- Aristotle\n\n",
"msg_date": "Thu, 7 Dec 2000 12:46:11 -0500 (EST)",
"msg_from": "Randy Jonasz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RFC C++ Interface"
},
{
"msg_contents": "> I appreciate your comments and would like to respond to your concerns.\n> The API I sketched in my earlier e-mail is borrowed heavily from\n> Rogue Wave's dbtools.h++ library. I think it can be a very clean and\n> elegant way of accessing a database.\n\nRogue Wave's API is quite interesting. It would be a challenge to\nimplement. If you think you can do it, I think it would be a real win,\nand a real object-oriented API to PostgreSQL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Dec 2000 18:53:11 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC C++ Interface"
},
{
"msg_contents": "Randy Jonasz wrote:\n> \n> I appreciate your comments and would like to respond to your concerns.\n> The API I sketched in my earlier e-mail is borrowed heavily from\n> Rogue Wave's dbtools.h++ library. I think it can be a very clean and\n> elegant way of accessing a database.\n\nYes, this looks neat. At least it is an API design that has been\nproperly tested. We've been thinking along the same lines, and were\nthinking of faking up a roguewave type API for postgres.\n\nOne thing I would like to see, which we have built into our own,\nprimitive, C++ interface, is support for binary data retrieval. For some\napplications the savings are huge. \n\nI haven't thought very hard about how to do this: we do it by having a\nperl script generate structures from the table definitions at compile\ntime, which works well in our case, but is not necessarily suitable for\na library. Code to copy the data into these structures is similarly\ngenerated. Not sure whether roguewave have a better solution.\n\nGood luck with it.\n\nAdriaan\n",
"msg_date": "Mon, 11 Dec 2000 20:01:31 +0200",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC C++ Interface"
},
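For readers unfamiliar with the generated-struct approach Adriaan describes, a rough illustration of what such a generator might emit follows. Every name here is hypothetical, and a real binary-retrieval path would also have to deal with byte order and alignment.

    #include <cstring>

    // What a generator might emit for:
    //   CREATE TABLE account (id int4, balance float8)
    // The layout is fixed at compile time, so binary result data can be
    // copied straight into the struct.
    struct account_row {
        int    id;        // int4
        double balance;   // float8
    };

    // Generated copy-in helper: one memcpy per column.
    void fill_account_row(account_row &out,
                          const void *id_datum, const void *balance_datum)
    {
        std::memcpy(&out.id,      id_datum,      sizeof out.id);
        std::memcpy(&out.balance, balance_datum, sizeof out.balance);
    }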
{
"msg_contents": "> One thing I would like to see, which we have built into our own,\n> primitive, C++ interface, is support for binary data retrieval. For some\n> applications the savings are huge.\n> I haven't thought very hard about how to do this: we do it by having a\n> perl script generate structures from the table definitions at compile\n> time, which works well in our case, but is not necessarily suitable for\n> a library. Code to copy the data into these structures is similarly\n> generated. Not sure whether roguewave have a better solution.\n\nThis is what CORBA is designed to do. No point in reinventing the wheel\nfor an on-the-wire protocol.\n\nI'm not sure what the integration path would be for a CORBA-based\ninterface onto the server.\n\n - Thomas\n",
"msg_date": "Tue, 12 Dec 2000 03:30:48 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC C++ Interface"
},
{
"msg_contents": "\nThanks for the vote of confidence. I'm looking forward to doing it. I've\ngot most of the classes needed laid out. Once I'm done this step, I'll\npost what I have for more comments, crticism, suggestions.\n\nCheers,\n\nRandy\n\nOn Sun, 10 Dec 2000, Bruce Momjian wrote:\n\n> > I appreciate your comments and would like to respond to your concerns.\n> > The API I sketched in my earlier e-mail is borrowed heavily from\n> > Rogue Wave's dbtools.h++ library. I think it can be a very clean and\n> > elegant way of accessing a database.\n>\n> Rogue Wave's API is quite interesting. It would be a challenge to\n> implement. If you think you can do it, I think it would be a real win,\n> and a real object-oriented API to PostgreSQL.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n>\n\nRandy Jonasz\nSoftware Engineer\nClick2net Inc.\nWeb: http://www.click2net.com\nPhone: (905) 271-3550\n\n\"You cannot possibly pay a philosopher what he's worth,\nbut try your best\" -- Aristotle\n\n",
"msg_date": "Tue, 12 Dec 2000 13:53:09 -0500 (EST)",
"msg_from": "Randy Jonasz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RFC C++ Interface"
},
{
"msg_contents": "On Sun, Dec 10, 2000 at 06:53:11PM -0500, Bruce Momjian wrote:\n> > I appreciate your comments and would like to respond to your concerns.\n> > The API I sketched in my earlier e-mail is borrowed heavily from\n> > Rogue Wave's dbtools.h++ library. I think it can be a very clean and\n> > elegant way of accessing a database.\n> \n> Rogue Wave's API is quite interesting. It would be a challenge to\n> implement. If you think you can do it, I think it would be a real win,\n> and a real object-oriented API to PostgreSQL.\n\nI was co-architect of the Rogue Wave Dbtools.h++ interface design (along \nwith somebody who actually knew something about databases, Stan Sulsky) in \nthe early 90's. We really tried to make the \"Datum\" type unnecessary in \nnormal programs. To my disgrace, I didn't participate in implementation;\nit was implemented mainly by Lars Lohn, who went on to a stellar career \nas a consultant to users of the library.\n\nAt the time, ODBC was just beginning to be used. Oracle was already\na bully, actually moreso than today; we had to buy a full production \nlicense just to develop on it. Ingres was much better; they sent two\nengineers to do the port themselves. \n\nThe design is really showing its age. SQL92 and SQL3 didn't exist then,\nand neither did the STL or the ISO 14882 C++ Language standard.\n\nNathan Myers\[email protected]\n",
"msg_date": "Tue, 12 Dec 2000 13:25:03 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: RFC C++ Interface"
},
{
"msg_contents": "> On Sun, Dec 10, 2000 at 06:53:11PM -0500, Bruce Momjian wrote:\n> > > I appreciate your comments and would like to respond to your concerns.\n> > > The API I sketched in my earlier e-mail is borrowed heavily from\n> > > Rogue Wave's dbtools.h++ library. I think it can be a very clean and\n> > > elegant way of accessing a database.\n> > \n> > Rogue Wave's API is quite interesting. It would be a challenge to\n> > implement. If you think you can do it, I think it would be a real win,\n> > and a real object-oriented API to PostgreSQL.\n> \n> I was co-architect of the Rogue Wave Dbtools.h++ interface design (along \n> with somebody who actually knew something about databases, Stan Sulsky) in \n> the early 90's. We really tried to make the \"Datum\" type unnecessary in \n> normal programs. To my disgrace, I didn't participate in implementation;\n> it was implemented mainly by Lars Lohn, who went on to a stellar career \n> as a consultant to users of the library.\n> \n> At the time, ODBC was just beginning to be used. Oracle was already\n> a bully, actually moreso than today; we had to buy a full production \n> license just to develop on it. Ingres was much better; they sent two\n> engineers to do the port themselves. \n> \n> The design is really showing its age. SQL92 and SQL3 didn't exist then,\n> and neither did the STL or the ISO 14882 C++ Language standard.\n\nCan you suggest areas that should be changed?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Dec 2000 17:28:46 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC C++ Interface"
},
{
"msg_contents": "On Tue, Dec 12, 2000 at 05:28:46PM -0500, Bruce Momjian wrote:\n> > On Sun, Dec 10, 2000 at 06:53:11PM -0500, Bruce Momjian wrote:\n> > > > I appreciate your comments and would like to respond to your\n> > > > concerns. The API I sketched in my earlier e-mail is borrowed\n> > > > heavily from Rogue Wave's dbtools.h++ library. I think it can be\n> > > > a very clean and elegant way of accessing a database.\n> > >\n> > > Rogue Wave's API is quite interesting. It would be a challenge to\n> > > implement. If you think you can do it, I think it would be a real\n> > > win, and a real object-oriented API to PostgreSQL.\n> >\n> > I was co-architect of the Rogue Wave Dbtools.h++ interface design\n> > ... The design is really showing its age. SQL92 and SQL3 didn't\n> > exist then, and neither did the STL or the ISO 14882 C++ Language\n> > standard.\n>\n> Can you suggest areas that should be changed?\n\nAs I recall, we were much more fond of operator overloading then than is\nconsidered tasteful or wise today. Also, there was no standard for how\niterators ought to work, then, whereas today one needs unusually good\nreasons to depart from the STL style.\n\nNathan Myers \[email protected]\n",
"msg_date": "Tue, 12 Dec 2000 15:50:31 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: RFC C++ Interface"
},
{
"msg_contents": "Interesting comments. I can see using the STL framework for iterating\nthrough result sets being one way to go. Would something like:\n\nvector<pgrows>table = pgdb.exec(\"Select * from foo\");\nvector<pgrows>::iterator row;\nvector<pgrows>::iterator end = table.end();\n\nfor( row = table.begin(); row != end; ++row ) {\n\n *row >> field1 >> field2;\n\n //do something with fields\n\n}\n\nbe along the lines you were thinking?\n\nCheers,\n\nRandy\n\nOn Tue, 12 Dec 2000, Nathan Myers wrote:\n\n> On Tue, Dec 12, 2000 at 05:28:46PM -0500, Bruce Momjian wrote:\n> > > On Sun, Dec 10, 2000 at 06:53:11PM -0500, Bruce Momjian wrote:\n> > > > > I appreciate your comments and would like to respond to your\n> > > > > concerns. The API I sketched in my earlier e-mail is borrowed\n> > > > > heavily from Rogue Wave's dbtools.h++ library. I think it can be\n> > > > > a very clean and elegant way of accessing a database.\n> > > >\n> > > > Rogue Wave's API is quite interesting. It would be a challenge to\n> > > > implement. If you think you can do it, I think it would be a real\n> > > > win, and a real object-oriented API to PostgreSQL.\n> > >\n> > > I was co-architect of the Rogue Wave Dbtools.h++ interface design\n> > > ... The design is really showing its age. SQL92 and SQL3 didn't\n> > > exist then, and neither did the STL or the ISO 14882 C++ Language\n> > > standard.\n> >\n> > Can you suggest areas that should be changed?\n>\n> As I recall, we were much more fond of operator overloading then than is\n> considered tasteful or wise today. Also, there was no standard for how\n> iterators ought to work, then, whereas today one needs unusually good\n> reasons to depart from the STL style.\n>\n> Nathan Myers\n> [email protected]\n>\n>\n\nRandy Jonasz\nSoftware Engineer\nClick2net Inc.\nWeb: http://www.click2net.com\nPhone: (905) 271-3550\n\n\"You cannot possibly pay a philosopher what he's worth,\nbut try your best\" -- Aristotle\n\n",
"msg_date": "Wed, 13 Dec 2000 15:16:28 -0500 (EST)",
"msg_from": "Randy Jonasz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RFC C++ Interface"
},
{
"msg_contents": "On Wed, Dec 13, 2000 at 03:16:28PM -0500, Randy Jonasz wrote:\n> On Tue, 12 Dec 2000, Nathan Myers wrote:\n> > On Tue, Dec 12, 2000 at 05:28:46PM -0500, Bruce Momjian wrote:\n> > > > I was co-architect of the Rogue Wave Dbtools.h++ interface design\n> > > > ... The design is really showing its age.\n> > > Can you suggest areas that should be changed?\n> > As I recall, we were much more fond of operator overloading then than is\n> > considered tasteful or wise today. Also, there was no standard for how\n> > iterators ought to work, then, whereas today one needs unusually good\n> > reasons to depart from the STL style.\n> \n> Interesting comments. I can see using the STL framework for iterating\n> through result sets being one way to go. Would something like:\n> \n> vector<pgrows>table = pgdb.exec(\"Select * from foo\");\n> vector<pgrows>::iterator row;\n> vector<pgrows>::iterator end = table.end();\n> \n> for( row = table.begin(); row != end; ++row ) {\n> *row >> field1 >> field2;\n> //do something with fields\n> }\n> \n> be along the lines you were thinking?\n\nNo. The essence of STL is its iterator interface framework. \n(The containers and algorithms that come with STL should be seen\nmerely as examples of how to use the iterators.) A better\nexample would be\n\n Postgres::Result_iterator end;\n for (Postgres::Result_iterator it = pgdb.exec(\"Select * from foo\");\n it != end; ++it) {\n int field1;\n string field2;\n *it >> field1 >> field2;\n // do something with fields\n }\n \n(although this still involves overloading \">>\"). \nThe points illustrated above are:\n\n1. Shoehorning the results of a query into an actual STL container is\n probably a Bad Thing. Users who want that can do it themselves\n with std::copy().\n\n2. Lazy extraction of query results is almost always better than\n aggressive extraction. Often you don't actually care about\n the later results, and you may not have room for them anyhow.\n\nRather than the generic result iterator type illustrated above, with \nconversion to C++ types on extraction via \">>\", I would prefer to use \na templated iterator type so that the body of the loop would look more \nlike\n\n // do something with it->field1 and it->field2\n\nor\n\n // do something with it->field1() and it->field2()\n\nHowever, it can be tricky to reconcile compile-time type-safety with \nthe entirely runtime-determined result of a \"select\". Probably you \ncould put in conformance checking where the result of exec() gets \nconverted to the iterator, and throw an exception if the types don't \nmatch. \n\nNathan Myers\[email protected]\n\n",
"msg_date": "Wed, 13 Dec 2000 14:21:22 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: RFC C++ Interface"
},
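A sketch of the conformance-checking idea in Nathan's last paragraph: check once, at the point where the raw exec() result becomes a typed iterator, and throw on mismatch. RawResult, typed_iterator and the Record interface are all names invented for this illustration.

    #include <stdexcept>
    #include <string>

    // Hypothetical raw result handle: knows its column count and type names.
    struct RawResult {
        int         columns() const;
        std::string column_type(int i) const;   // e.g. "int4", "text"
    };

    // Record states, at compile time, what the query is expected to return.
    template <class Record>
    class typed_iterator
    {
    public:
        explicit typed_iterator(const RawResult &r)
        {
            // the runtime check happens once, here, not on every extraction
            if (r.columns() != Record::column_count)
                throw std::runtime_error("select returned wrong number of columns");
            for (int i = 0; i < Record::column_count; ++i)
                if (r.column_type(i) != Record::expected_type(i))
                    throw std::runtime_error("column type mismatch");
        }
        // operator*, operator++ etc. would follow the usual input-iterator
        // pattern, yielding Record values
    };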
{
"msg_contents": "Thanks for responding. I've made some comments below.\n\nOn Wed, 13 Dec 2000, Nathan Myers wrote:\n\n> On Wed, Dec 13, 2000 at 03:16:28PM -0500, Randy Jonasz wrote:\n> > On Tue, 12 Dec 2000, Nathan Myers wrote:\n> > > On Tue, Dec 12, 2000 at 05:28:46PM -0500, Bruce Momjian wrote:\n> > > > > I was co-architect of the Rogue Wave Dbtools.h++ interface design\n> > > > > ... The design is really showing its age.\n> > > > Can you suggest areas that should be changed?\n> > > As I recall, we were much more fond of operator overloading then than is\n> > > considered tasteful or wise today. Also, there was no standard for how\n> > > iterators ought to work, then, whereas today one needs unusually good\n> > > reasons to depart from the STL style.\n> >\n> > Interesting comments. I can see using the STL framework for iterating\n> > through result sets being one way to go. Would something like:\n> >\n> > vector<pgrows>table = pgdb.exec(\"Select * from foo\");\n> > vector<pgrows>::iterator row;\n> > vector<pgrows>::iterator end = table.end();\n> >\n> > for( row = table.begin(); row != end; ++row ) {\n> > *row >> field1 >> field2;\n> > //do something with fields\n> > }\n> >\n> > be along the lines you were thinking?\n>\n> No. The essence of STL is its iterator interface framework.\n> (The containers and algorithms that come with STL should be seen\n> merely as examples of how to use the iterators.) A better\n> example would be\n>\n> Postgres::Result_iterator end;\n> for (Postgres::Result_iterator it = pgdb.exec(\"Select * from foo\");\n> it != end; ++it) {\n> int field1;\n> string field2;\n> *it >> field1 >> field2;\n> // do something with fields\n> }\n>\n> (although this still involves overloading \">>\").\n> The points illustrated above are:\n>\nThe above I like very much although it implies a synchronous connection to\nthe back end. This can be worked around though by providing both a\nsynchronous and an asynchronous interface rather than using just one. I\ndon't see any problems with overloading \">>\" or \"[]\" to obtain the value\nof a column.\n\n> 1. Shoehorning the results of a query into an actual STL container is\n> probably a Bad Thing. Users who want that can do it themselves\n> with std::copy().\n\nOn further thought I agree with you here. Returning an iterator to a\nresult container would be much more efficient than what I originally\nproposed.\n>\n> 2. Lazy extraction of query results is almost always better than\n> aggressive extraction. Often you don't actually care about\n> the later results, and you may not have room for them anyhow.\n>\n> Rather than the generic result iterator type illustrated above, with\n> conversion to C++ types on extraction via \">>\", I would prefer to use\n> a templated iterator type so that the body of the loop would look more\n> like\n>\n> // do something with it->field1 and it->field2\n>\n> or\n>\n> // do something with it->field1() and it->field2()\n>\nThis creates the problem of having public member variables and/or having a\nmechanism to clone enough variables as there were columns returned in an\nSQL statement. I much prefer the earlier methods of accessing these\nvalues.\n\n> However, it can be tricky to reconcile compile-time type-safety with\n> the entirely runtime-determined result of a \"select\". 
Probably you\n> could put in conformance checking where the result of exec() gets\n> converted to the iterator, and throw an exception if the types don't\n> match.\n>\n> Nathan Myers\n> [email protected]\n>\n>\n>\n\nCheers,\n\nRandy Jonasz\nSoftware Engineer\nClick2net Inc.\nWeb: http://www.click2net.com\nPhone: (905) 271-3550\n\n\"You cannot possibly pay a philosopher what he's worth,\nbut try your best\" -- Aristotle\n\n",
"msg_date": "Wed, 13 Dec 2000 20:35:32 -0500 (EST)",
"msg_from": "Randy Jonasz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RFC C++ Interface"
}
]
|
[
{
"msg_contents": "> > CRCs are designed to catch N-bit errors (ie N bits in a row \n> with their\n> > values flipped). N is (IIRC) the number of bits in the CRC \n> minus one.\n> > So, a 32-bit CRC can catch all 31-bit errors. That's the \n> only guarantee\n> > a CRC gives. Everything else has a 1 in 2^32-1 chance of \n> producing the\n> > same CRC as the original data. That's pretty good odds, but not a\n> > guarantee.\n> \n> You've got a higher chance of undetected hard drive errors, \n> memory errors,\n> solar flares, etc. than a CRC of that quality failing...\n\nAlso, how long is CRC in TCP/IP packages? => there is always\nrisk that backend will commit not what you sended to it.\n\nVadim\n",
"msg_date": "Wed, 6 Dec 2000 19:03:07 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: AW: beta testing version"
}
]
|
[
{
"msg_contents": "Could someone explain this to me?\n\n(This is my second attempt at asking this)\n\ncreate function bunch_o_tuples (varchar) returns opaque\n as '/local/projects/phoenix/pg/pgtpl.so', 'tuple'\n language 'c';\n\n\nI should be able to do either:\n\ncreate table fubar as select bunch_o_tuples('bla bla');\n\nor \n\nselect * from table where field = bunch_o_tuples('bla bla');\n\n\nI call CreateTemplateTupleDesc(2), this creates a heaptuple with two\nempty entries.\n\nI can then use the TupleDesc returned as a parameter for heap_formtuple.\n\nI can't quite get where I can get valid type oids or how to initialize\nthe types, and type names, and values.\n\nAlso, can someone explain this to me....\n\nDo I have a heap tuple with one entry that acts like a container, and\ncreate [n] heaptuples with the values, or are heaptuples somehow\ninherently containers. \n\nAny help anyone could give me, on any of these issues would be helpful.\nI'm sure this is a common problem, and if someone helps me I promise I\nwill submit a FAQ and some sample code. ;-)\n \n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Wed, 06 Dec 2000 22:09:50 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "HeapTuple?"
},
{
"msg_contents": "mlw <[email protected]> writes:\n> I should be able to do either:\n> create table fubar as select bunch_o_tuples('bla bla');\n\nYou might think that, but you can't.\n\nThe support for functions returning tuple sets that you are looking for\njust plain ain't there. Maybe in 7.2...\n\nIt is possible to return a tuple (stuff it into a TupleTableSlot, and\nthen return a pointer to the slot casted to Datum). But the only thing\nthat can usefully be done with such a function result is to extract one\ndatum (column) from it; and the syntax needed to invoke such a function\nand then get a column out of its result is so bizarre and limited that\nyou might as well not bother.\n\nFunctions returning sets are possible in 7.1, but they have to return\none value per invocation together with an \"I'm not done yet\" indication.\nSee src/backend/utils/fmgr/README in current sources. Again, the\ncontexts in which such a function can actually be used are limited and\nnot remarkably helpful.\n\nI'm hoping that we can rethink, redesign, and reimplement all of this\nstuff from the ground up in 7.2 or 7.3. Where I'd like to get to is\nbeing able to write\n\n\tSELECT ... FROM function_returning_tuple_set(params);\n\nbut I'm not at all clear yet on what the internal interface to such\na function ought to look like. In particular, should such a function\nexpect to be called multiple times, or should it be called just once\nand stuff multiple result rows into some temporary data structure?\nThe latter would be easier to program but would be likely to run out\nof resources on big result sets.\n\n\n> Do I have a heap tuple with one entry that acts like a container, and\n> create [n] heaptuples with the values, or are heaptuples somehow\n> inherently containers. \n\nA heap tuple is just a tuple, ie, one or more Datum values smushed\ntogether. Tuple descriptors are auxiliary data structures that are\nused by routines like heap_getattr to extract datums from tuples.\nI guess you could call a tuple a container if you wanted, but it's\na pretty simplistic sort of container...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Dec 2000 23:47:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HeapTuple? "
}
]
|
[
{
"msg_contents": "> > > > Sounds great! We can follow this way: when first after last \n> > > > checkpoint update to a page being logged, XLOG code can log\n> > > > not AM specific update record but entire page (creating backup\n> > > > \"physical log\"). During after crash recovery such pages will\n> > > > be redone first, ensuring page consistency for further redo ops.\n> > > > This means bigger log, of course.\n> > > \n> > > Be sure to include a CRC of each part of the block that you hope\n> > > to replay individually.\n> > \n> > Why should we do this? I'm not going to replay parts individually,\n> > I'm going to write entire pages to OS cache and than apply \n> > changes to them. Recovery is considered as succeeded after server\n> > is ensured that all applyed changes are on the disk. In the case of\n> > crash during recovery we'll replay entire game.\n> \n> Yes, but there would need to be a way to verify the last page \n> or record from txlog when running on crap hardware. The point was,\n> that crap hardware writes our 8k pages in any order (e.g. 512 bytes\n> from the end, then 512 bytes from front ...), and does not\n> even notice, that it only wrote part of one such 512 byte block when\n> reading it back after a crash. But, I actually doubt that this is\n> true for all but the most crappy hardware.\n\nOh, I didn't consider log consistency that time. Anyway we need in CRC\nfor entire log record not for its 512-bytes parts.\n\nWell, I didn't care about not atomic 8K-block writes in current WAL\nimplementation - we never were protected from this: backend inserts\ntuple, but only line pointers go to disk => new lp points on some\ngarbade inside unupdated page content. Yes, transaction was not\ncommitted but who knows content of this garbade and what we'll get\nfrom scan trying to read it. Same for index pages.\n\nCan we come to agreement about CRC in log records? Probably it's\nnot too late to add it (initdb).\n\nSeeing bad CRC recovery procedure will assume that current record\n(and all others after it, if any) is garbade - ie comes from\ninterrupted disk write - and may be ignored (backend writes data\npages only after changes are logged - if changes weren't\nsuccessfully logged then on-disk image of data pages was not\nupdated and we are not interested in log records).\n\nThis may be implemented very fast (if someone points me where\nI can find CRC func). And I could implement \"physical log\"\ntill next monday.\n\nComments?\n\nVadim\n",
"msg_date": "Wed, 6 Dec 2000 19:50:42 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: beta testing version"
},
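For illustration only, a recovery-side check in the spirit of Vadim's proposal might look like the sketch below. The record layout is hypothetical (the real WAL format differs), and crc() stands for whatever 32-bit routine is chosen - Tom Lane posts one from RFC 2083 in the next message. Note the sketch trusts the length field before the CRC has been verified; a real implementation would sanity-check it first.

    // Hypothetical on-disk log record: CRC first, covering everything after it.
    struct LogRecord {
        unsigned long crc32;    // CRC over length + payload
        int           length;   // total record length, header included
        /* payload follows */
    };

    extern unsigned long crc(unsigned char *buf, int len);  // RFC 2083 sample

    // A bad CRC means this record (and anything after it) came from an
    // interrupted write, so replay stops at the previous record.
    bool record_is_valid(LogRecord *rec)
    {
        unsigned char *body = (unsigned char *) rec + sizeof(rec->crc32);
        int body_len = rec->length - (int) sizeof(rec->crc32);
        if (body_len <= 0)
            return false;       // length itself is garbage
        return crc(body, body_len) == rec->crc32;
    }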
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> This may be implemented very fast (if someone points me where\n> I can find CRC func).\n\nLifted from the PNG spec (RFC 2083):\n\n\n15. Appendix: Sample CRC Code\n\n The following sample code represents a practical implementation of\n the CRC (Cyclic Redundancy Check) employed in PNG chunks. (See also\n ISO 3309 [ISO-3309] or ITU-T V.42 [ITU-V42] for a formal\n specification.)\n\n\n /* Make the table for a fast CRC. */\n void make_crc_table(void)\n {\n unsigned long c;\n int n, k;\n for (n = 0; n < 256; n++) {\n c = (unsigned long) n;\n for (k = 0; k < 8; k++) {\n if (c & 1)\n c = 0xedb88320L ^ (c >> 1);\n else\n c = c >> 1;\n }\n crc_table[n] = c;\n }\n crc_table_computed = 1;\n }\n\n /* Update a running CRC with the bytes buf[0..len-1]--the CRC\n should be initialized to all 1's, and the transmitted value\n is the 1's complement of the final running CRC (see the\n crc() routine below)). */\n\n unsigned long update_crc(unsigned long crc, unsigned char *buf,\n int len)\n {\n unsigned long c = crc;\n int n;\n\n if (!crc_table_computed)\n make_crc_table();\n for (n = 0; n < len; n++) {\n c = crc_table[(c ^ buf[n]) & 0xff] ^ (c >> 8);\n }\n return c;\n }\n\n /* Return the CRC of the bytes buf[0..len-1]. */\n unsigned long crc(unsigned char *buf, int len)\n {\n return update_crc(0xffffffffL, buf, len) ^ 0xffffffffL;\n }\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Dec 2000 23:26:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version "
},
{
"msg_contents": "> Lifted from the PNG spec (RFC 2083):\n\nDrat, I dropped the table declarations:\n\n /* Table of CRCs of all 8-bit messages. */\n unsigned long crc_table[256];\n\n /* Flag: has the table been computed? Initially false. */\n int crc_table_computed = 0;\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Dec 2000 23:48:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version "
},
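Putting Tom's two fragments together, usage reduces to a single call - the initialize-to-all-ones and final complement are already folded into crc(). A trivial check, assuming the sample code from both messages is compiled into the same program:

    #include <cstdio>

    extern unsigned long crc(unsigned char *buf, int len);  // from above

    int main()
    {
        unsigned char rec[] = "sample xlog record payload";
        unsigned long sum = crc(rec, (int) (sizeof rec - 1));  // skip the NUL
        std::printf("crc32 = %08lx\n", sum);
        return 0;
    }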
{
"msg_contents": "> This may be implemented very fast (if someone points me where\n> I can find CRC func). And I could implement \"physical log\"\n> till next monday.\n\nI have been experimenting with CRCs for the past 6 month in our database for\ninternal logging purposes. Downloaded a lot of hash libraries, tried\ndifferent algorithms, and implemented a few myself. Which algorithm do you\nwant? Have a look at the openssl libraries (www.openssl.org) for a start -if\nyou don't find what you want let me know.\n\nAs the logging might include large data blocks, especially now that we can\nTOAST our data, I would strongly suggest to use strong hashes like RIPEMD or\nMD5 instead of CRC-32 and the like. Sure, it takes more time tocalculate and\nmore place on the hard disk, but then: a database without data integrity\n(and means of _proofing_ integrity) is pretty worthless.\n\nHorst\n\n",
"msg_date": "Thu, 7 Dec 2000 18:40:49 +1100",
"msg_from": "\"Horst Herb\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "CRC was: Re: beta testing version"
},
{
"msg_contents": "recently I have downloaded a pre-beta postgresql, I found insert and update speed is slower then 7.0.3,\neven I turn of sync flag, it is still slow than 7.0, why? how can I make it faster?\n\nRegards,\nXuYifeng\n\n",
"msg_date": "Thu, 7 Dec 2000 16:46:28 +0800",
"msg_from": "\"xuyifeng\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "pre-beta is slow"
},
{
"msg_contents": "Horst Herb wrote:\n> \n> > This may be implemented very fast (if someone points me where\n> > I can find CRC func). And I could implement \"physical log\"\n> > till next monday.\n> \n> I have been experimenting with CRCs for the past 6 month in our database for\n> internal logging purposes. Downloaded a lot of hash libraries, tried\n> different algorithms, and implemented a few myself. Which algorithm do you\n> want? Have a look at the openssl libraries (www.openssl.org) for a start -if\n> you don't find what you want let me know.\n> \n> As the logging might include large data blocks, especially now that we can\n> TOAST our data, I would strongly suggest to use strong hashes like RIPEMD or\n> MD5 instead of CRC-32 and the like. Sure, it takes more time tocalculate and\n> more place on the hard disk, but then: a database without data integrity\n> (and means of _proofing_ integrity) is pretty worthless.\n\nThe choice of hash algoritm could be made a compile-time switch quite\neasyly I guess.\n\n---------\nHannu\n",
"msg_date": "Thu, 07 Dec 2000 12:17:23 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRC was: Re: beta testing version"
},
{
"msg_contents": "On Thu, Dec 07, 2000 at 06:40:49PM +1100, Horst Herb wrote:\n> > This may be implemented very fast (if someone points me where\n> > I can find CRC func). And I could implement \"physical log\"\n> > till next monday.\n> \n> As the logging might include large data blocks, especially now that\n> we can TOAST our data, I would strongly suggest to use strong hashes\n> like RIPEMD or MD5 instead of CRC-32 and the like. \n\nCryptographically-secure hashes are unnecessarily expensive to compute.\nA simple 64-bit CRC would be of equal value, at much less expense.\n\nNathan Myers\[email protected]\n\n",
"msg_date": "Thu, 7 Dec 2000 11:42:53 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRC was: Re: beta testing version"
}
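A 64-bit CRC in the same bitwise style as the RFC 2083 sample earlier in the thread would look like the sketch below. The polynomial is ECMA-182's, in reflected form; the code assumes a 64-bit integer type is available (a compiler extension at the time, not guaranteed by the language). A production version would use the same table-driven speedup as the 32-bit code.

    typedef unsigned long long u64;   // assumes a 64-bit type exists

    u64 crc64(const unsigned char *buf, int len)
    {
        u64 c = 0xFFFFFFFFFFFFFFFFULL;            // init to all ones
        for (int n = 0; n < len; n++) {
            c ^= buf[n];
            for (int k = 0; k < 8; k++)
                c = (c & 1) ? (c >> 1) ^ 0xC96C5795D7870F42ULL  // ECMA-182, reflected
                            : (c >> 1);
        }
        return c ^ 0xFFFFFFFFFFFFFFFFULL;         // final complement
    }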
]
|
[
{
"msg_contents": "Chih-Chang,\n\n> Do you remember the mb problem about Big5?\n\nSure.\n\n> Now I have tested all Big5 chars (with ETen extension -- some chars in\n> CNS 11643-1992 Plane 3) by the program in the attachment on\n> PostgreSQL 7.0.3 with patches from you show me.\n> The execution result is also in the attachment.\n> The first two insertion fails are normal, because these two chars are\n> duplicated in Big5.\n> But the 3rd Big5 char (0xF9DC <- > CNS Plane3 0x4B5C) insertion\n> is failed. I have no idea about why it is a \"Unterminated quoted\n> string\".\n> Could you see where the problem is?\n\nThanks for the testing effort!\n\nPlease apply following one-line-patch and test it again. If it's ok, I\nwill commit it to both current and stable trees.\n--\nTatsuo Ishii\n\n*** postgresql-7.0.3/src/backend/utils/mb/big5.c~\tThu May 27 00:19:54 1999\n--- postgresql-7.0.3/src/backend/utils/mb/big5.c\tThu Dec 7 13:39:01 2000\n***************\n*** 322,328 ****\n \t\t\tif (b2c3[i][0] == big5)\n \t\t\t{\n \t\t\t\t*lc = LC_CNS11643_3;\n! \t\t\t\treturn (b2c3[i][1]);\n \t\t\t}\n \t\t}\n \n--- 322,328 ----\n \t\t\tif (b2c3[i][0] == big5)\n \t\t\t{\n \t\t\t\t*lc = LC_CNS11643_3;\n! \t\t\t\treturn (b2c3[i][1] | 0x8080U);\n \t\t\t}\n \t\t}\n \n",
"msg_date": "Thu, 07 Dec 2000 13:47:10 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A mb problem in PostgreSQL"
},
{
"msg_contents": "Tatsuo,\n\nTatsuo Ishii 嚙篇嚙瘩嚙瘦\n\n> Please apply following one-line-patch and test it again. If it's ok, I\n> will commit it to both current and stable trees.\n>\n> ! return (b2c3[i][1] | 0x8080U);\n\nYes, it's OK. Thank you!\nBut I wonder why we need to \"| 0x8080U\"?\nb2c3[][] and BIG5toCNS()'s return value are both unsigned short, aren't they?\n\n--\nChih-Chang Hsieh\n\n\n\n",
"msg_date": "Sat, 09 Dec 2000 09:15:20 +0800",
"msg_from": "Chih-Chang Hsieh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A mb problem in PostgreSQL"
},
{
"msg_contents": "> > Please apply following one-line-patch and test it again. If it's ok, I\n> > will commit it to both current and stable trees.\n> >\n> > ! return (b2c3[i][1] | 0x8080U);\n> \n> Yes, it's OK. Thank you!\n> But I wonder why we need to \"| 0x8080U\"?\n> b2c3[][] and BIG5toCNS()'s return value are both unsigned short, aren't they?\n\nb2c3 has CNS 11643-1992 value. That is, we need to add 0x8080 to\nconvert to EUC_TW.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 09 Dec 2000 13:05:21 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: A mb problem in PostgreSQL"
},
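The arithmetic behind Tatsuo's answer, spelled out: EUC-TW encodes a CNS 11643 code point by setting the high bit of each of its two bytes, i.e. adding 0x80 to each byte, which for a two-byte value is exactly an OR with 0x8080. Using the code point from the original report:

    unsigned short cns = 0x4B5C;          // CNS 11643-1992 plane-3 value
    unsigned short euc = cns | 0x8080U;   // 0xCBDC, the EUC-TW form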
{
"msg_contents": "> > Please apply following one-line-patch and test it again. If it's ok, I\n> > will commit it to both current and stable trees.\n> >\n> > ! return (b2c3[i][1] | 0x8080U);\n> \n> Yes, it's OK. Thank you!\n\nThanks for the testings. I will commit soon.\n\n> But I wonder why we need to \"| 0x8080U\"?\n> b2c3[][] and BIG5toCNS()'s return value are both unsigned short, aren't they?\n\nSince the function returns EUC_TW. In b2c3[] we have CNS 11643-1992\nvalue, and we need to add 0x8080 to convert from CNS 11643-1992 to\nEUC.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 09 Dec 2000 13:21:39 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: A mb problem in PostgreSQL"
}
]
|
[
{
"msg_contents": "\nhi, what kind of organization file does postgresql use?\n\n-- \n/earth is 98% full ... please delete anyone you can. \n",
"msg_date": "07 Dec 2000 01:46:55 -0600",
"msg_from": "\"Ivan =?iso-8859-1?q?Hern=E1ndez?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "organization file"
},
{
"msg_contents": "> hi, what kind of organization file does postgresql use?\n\nHi. I'm not sure what you mean by \"file organization\". Are you asking\nabout the file format of tables, or about the directory layout? Both are\ndiscussed in the documentation afaik.\n\n - Thomas\n",
"msg_date": "Thu, 07 Dec 2000 15:25:09 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: organization file"
},
{
"msg_contents": "> hi, yes a was talking about the first: the file format of tables. I was\n> reading about diferent file organizations (structures): sequential, heal,\n> ring, multi ring, etc...\n\nafaik most of the files are sequential in nature, with some record\nupdates happening in the middle to mark records as \"obsolete\". So data\nis added on to the end, which is why running VACUUM is so important.\n\n> I look for some info in the documentation but i\n> didn't find nothing, also i'm interested about the recovery system of\n> postgresql.... i hope that you can give me some hints about where i can\n> look for it....\n\nIn previous releases, since all files are written sequentially the\nrecovery system is very simple. For the upcoming 7.1 release with WAL,\nthere is likely more done, but I'm not familiar with the details.\n\nSomebody want to write a (short) description? I'll include it in the\ndocs...\n\n - Thomas\n",
"msg_date": "Thu, 07 Dec 2000 15:59:29 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: organization file"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n\n> > hi, what kind of organization file does postgresql use?\n> \n> Hi. I'm not sure what you mean by \"file organization\". Are you asking\n> about the file format of tables, or about the directory layout? Both are\n> discussed in the documentation afaik.\n\nhi, yes a was talking about the first: the file format of tables. I was\nreading about diferent file organizations (structures): sequential, heal,\nring, multi ring, etc... I look for some info in the documentation but i\ndidn't find nothing, also i'm interested about the recovery system of\npostgresql.... i hope that you can give me some hints about where i can\nlook for it....\n\nivan ...\n\n> - Thomas\n\n-- \nYou are not alone. You are accepted by some non-deterministic automaton\n",
"msg_date": "07 Dec 2000 10:17:14 -0600",
"msg_from": "\"Ivan =?iso-8859-1?q?Hern=E1ndez?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: organization file"
}
]
|
[
{
"msg_contents": "Hi,\n\nMy pgsql an postgres ( version V 7.0.2 ) stop with an error when i write \nthe following query under pgsql :\n\n create function test(text) returns text AS '' LANGUAGE 'sql';\n\nERROR under pgsql :\n\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\n\nError under postmaster :\n\nERROR: ProcedureCreate: procedure toto already exists with same arguments\nServer process (pid 21532) exited with status 11 at Thu Dec 7 10:55:08 2000\nTerminating any active server processes...\nServer processes were terminated at Thu Dec 7 10:55:08 2000\nReinitializing shared memory and semaphores\nThe Data Base System is starting up\nDEBUG: Data Base System is starting up at Thu Dec 7 10:55:08 2000\nDEBUG: Data Base System was interrupted being in production at Thu Dec 7 10:38:10 2000\nDEBUG: Data Base System is in production state at Thu Dec 7 10:55:08 2000\n\n\nRegards\n\nPEJAC Pascal\n\n",
"msg_date": "Thu, 7 Dec 2000 11:06:16 +0100 (CET)",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG WITH CREATE FUNCTION......."
},
{
"msg_contents": "<[email protected]> writes:\n> create function test(text) returns text AS '' LANGUAGE 'sql';\n> [crashes]\n\nOK, now it says:\n\nregression=# create function test(text) returns text AS '' LANGUAGE 'sql';\nERROR: function declared to return text, but no SELECT provided\n\nThanks for the report!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2000 14:42:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG WITH CREATE FUNCTION....... "
}
]
|
[
{
"msg_contents": "Could anyone please add the following to our todo list:\n\n- make sure ECPG and postmaster use the same grammar\n- make sure ECPG accepts ip numbers as host name\n\nAlso there are some more bugs in ecpg that I didn't find time to fix yet.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 7 Dec 2000 14:05:54 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "ECPG todo"
}
]
|
[
{
"msg_contents": "Now that the postmaster takes a noticeable amount of time to shut down,\nI'm wondering if pg_ctl's default about whether or not to wait ought\nto be reversed. That is, \"-w\" would become the norm, and some new\nswitch (\"-n\" maybe) would be needed if you didn't want it to wait.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2000 14:05:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Switch pg_ctl's default about waiting?"
},
{
"msg_contents": "Tom Lane writes:\n\n> Now that the postmaster takes a noticeable amount of time to shut down,\n> I'm wondering if pg_ctl's default about whether or not to wait ought\n> to be reversed. That is, \"-w\" would become the norm, and some new\n> switch (\"-n\" maybe) would be needed if you didn't want it to wait.\n\nTwo concerns:\n\n1. The waiting isn't very reliable as we recently found out. (If you\nwait on shutdown, then wait on startup would be default as well, no?)\n\n2. Why would you necessarily care to wait for shutdown? Startup I can\nsee, but shutdown doesn't seem so important.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 7 Dec 2000 21:53:27 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Switch pg_ctl's default about waiting?"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> 2. Why would you necessarily care to wait for shutdown? Startup I can\n> see, but shutdown doesn't seem so important.\n\nWell, maybe I'm the only one who has a script like\n\tpg_ctl -w stop\n\tcd ~/.../backend; make installbin\n\tpg_ctl start\nbut I got burnt regularly until I put -w in there ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2000 16:08:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Switch pg_ctl's default about waiting? "
}
]
|
[
{
"msg_contents": "\nOkay, since I haven't gotten word back on where to find the docs for v7.1,\nit still contains those for v7.0, but I just put up beta1 tarballs in the\n/pub/dev directory ... can someone take a look at these before we announce\nthem to make sure they look okay?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 7 Dec 2000 15:48:25 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "v7.1 beta 1 ...packaged, finally ..."
},
{
"msg_contents": "On Thursday 07 December 2000 16:48, The Hermit Hacker wrote:\n> Okay, since I haven't gotten word back on where to find the docs for v7.1,\n> it still contains those for v7.0, but I just put up beta1 tarballs in the\n> /pub/dev directory ... can someone take a look at these before we announce\n> them to make sure they look okay?\n\nI'm in the process of downloading. What would be the diff between the beta1 \nand the snapshot?\n\nSaludos... :-)\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMart���n Marqu���s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Thu, 7 Dec 2000 17:34:54 -0300",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v7.1 beta 1 ...packaged, finally ..."
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> it still contains those for v7.0, but I just put up beta1 tarballs in the\n> /pub/dev directory ... can someone take a look at these before we announce\n> them to make sure they look okay?\n\nThe tarballs match what I have locally ... ship 'em ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2000 15:51:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v7.1 beta 1 ...packaged, finally ... "
},
{
"msg_contents": "The Hermit Hacker writes:\n\n> Okay, since I haven't gotten word back on where to find the docs for v7.1,\n\n/home/projects/pgsql/ftp/www/html/devel-corner/docs\n\nIdeally (IMHO) we'd build the documentation right in place when making the\ndistribution tarball, i.e., broken docs, no release. I'm not sure how to\nusefully extrapolate that to the snapshot builds, though.\n\nAnother thing we should think about is to not tar.gz the documentation\nfiles. That way we could create useful incremental diffs between releases\nlater on. Any comments here?\n\n> it still contains those for v7.0, but I just put up beta1 tarballs in the\n> /pub/dev directory ... can someone take a look at these before we announce\n> them to make sure they look okay?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 7 Dec 2000 22:35:11 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v7.1 beta 1 ...packaged, finally ..."
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Another thing we should think about is to not tar.gz the documentation\n> files. That way we could create useful incremental diffs between releases\n> later on. Any comments here?\n\nI've never figured out why we do that. Since the thing is going to be\ninside a tarball anyway, there's no possible savings from distributing\nthe built doco that way, rather than as ordinary files.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2000 16:43:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v7.1 beta 1 ...packaged, finally ... "
},
{
"msg_contents": "On Thursday 07 December 2000 18:35, Peter Eisentraut wrote:\n>\n> Ideally (IMHO) we'd build the documentation right in place when making the\n> distribution tarball, i.e., broken docs, no release. I'm not sure how to\n> usefully extrapolate that to the snapshot builds, though.\n>\n> Another thing we should think about is to not tar.gz the documentation\n> files. That way we could create useful incremental diffs between releases\n> later on. Any comments here?\n\nIf you dont't tar.gz the docs, what should the downladable format be? CVS?\nI think CVS would be great.\n\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMart���n Marqu���s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Thu, 7 Dec 2000 19:22:55 -0300",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v7.1 beta 1 ...packaged, finally ..."
},
{
"msg_contents": "On Thu, 7 Dec 2000, Martin A. Marques wrote:\n\n> On Thursday 07 December 2000 16:48, The Hermit Hacker wrote:\n> > Okay, since I haven't gotten word back on where to find the docs for v7.1,\n> > it still contains those for v7.0, but I just put up beta1 tarballs in the\n> > /pub/dev directory ... can someone take a look at these before we announce\n> > them to make sure they look okay?\n> \n> I'm in the process of downloading. What would be the diff between the beta1 \n> and the snapshot?\n\nNone for today ... snapshot's are build daily, beta1 right now is \"a\nrelease candidate, if nobody reports any problems, we release what is\npackaged\" ... we usually wait for a two week period or so after each beta\nis released for bug reporst before saying \"its clean\" ... if nobody\nchanges the code in CVS in two weeks, beta1 goes out as v7.1 ... if we\nrelease a beta2, its two weeks from that, and so on ...\n\n\n",
"msg_date": "Thu, 7 Dec 2000 19:17:20 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: v7.1 beta 1 ...packaged, finally ..."
},
{
"msg_contents": "\nI just recently heard from Julia Case, who is willing to work on\nmaintaining the ODBC drivers after Xmas ... for those that don't know the\nname, she was pretty much the original ODBC developer/maintainer, until\nwork overwhelmed her ...\n\nOn Thu, 7 Dec 2000, Joel Burton wrote:\n\n> The official ODBC driver from pg7.0.x doesn't work w/7.1 (b/c of the \n> changes in the system catalogs, IIRC).\n> \n> The CVS 7.1devel code works and builds easily, but I suspect 99% \n> of the beta testers won't have Visual C++ or won't be able to \n> compile the driver. Is there an official driver-compiler-person that \n> could package this up for 7.1beta?\n> \n> (I know that a binary driver isn't part of the beta per se, and that \n> it's not *unreleasable* to think that everyone could compile their \n> own, but I bought VC++ just to compile this driver, and would hate \n> to see M$ get richer for even more people. Also, I doubt we'd want \n> to impugn the perceived quality of 7.1beta b/c people don't \n> understand that its just the ODBC drivers that out-of-date.)\n> \n> If there's no one official tasked w/this, I'd be happy to submit my \n> compiled version, at http://www.scw.org/pgaccess.\n> \n> \n> --\n> Joel Burton, Director of Information Systems -*- [email protected]\n> Support Center of Washington (www.scw.org)\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n",
"msg_date": "Thu, 7 Dec 2000 20:40:05 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] v7.1 beta 1 (ODBC driver?)"
},
{
"msg_contents": "The official ODBC driver from pg7.0.x doesn't work w/7.1 (b/c of the \nchanges in the system catalogs, IIRC).\n\nThe CVS 7.1devel code works and builds easily, but I suspect 99% \nof the beta testers won't have Visual C++ or won't be able to \ncompile the driver. Is there an official driver-compiler-person that \ncould package this up for 7.1beta?\n\n(I know that a binary driver isn't part of the beta per se, and that \nit's not *unreleasable* to think that everyone could compile their \nown, but I bought VC++ just to compile this driver, and would hate \nto see M$ get richer for even more people. Also, I doubt we'd want \nto impugn the perceived quality of 7.1beta b/c people don't \nunderstand that its just the ODBC drivers that out-of-date.)\n \nIf there's no one official tasked w/this, I'd be happy to submit my \ncompiled version, at http://www.scw.org/pgaccess.\n\n\n--\nJoel Burton, Director of Information Systems -*- [email protected]\nSupport Center of Washington (www.scw.org)\n",
"msg_date": "Thu, 7 Dec 2000 18:15:21 -0800",
"msg_from": "\"Joel Burton\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v7.1 beta 1 (ODBC driver?)"
},
{
"msg_contents": "> > Another thing we should think about is to not tar.gz the documentation\n> > files. That way we could create useful incremental diffs between releases\n> > later on. Any comments here?\n> I've never figured out why we do that.\n\nWell...\n\n> Since the thing is going to be\n> inside a tarball anyway, there's no possible savings from distributing\n> the built doco that way, rather than as ordinary files.\n\nA couple of reasons, historically:\n\n1) I was building docs locally, and moving them across to postgresql.org\nover a modem. It wasn't for another year (?) before postgresql.org could\nbuild them locally.\n\n2) The first html docs were available before a release, and they needed\nto be distributed.\n\n3) We put the docs into cvs, but the jade/docbook output did not have\npredictable file names. So each release would require wiping the output\ndocs and somehow guessing which files were obsolete and which were new.\n\n4) We would have to install these individual files, and we didn't have a\ntechnique for installing docs. Untarring seemed compatible with (2) and\n(3).\n\nAnyway, since we no longer put the docs tarball into cvs, then we could\nrethink the techniques. Peter, you seem to have done enough work on this\nto have an opinion, so what exactly would you prefer for packaging? I\nrecall that an unpacked tree was the suggestion??\n\nI think that *requiring* that the html docs be built in place to effect\na release is an extra toolset burden that we should not accept. The fact\nthat the docs tools work well on postgresql.org as well as on other\nmachines is something to be enjoyed, not put into the critical path ;)\n\n - Thomas\n",
"msg_date": "Fri, 08 Dec 2000 04:36:26 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v7.1 beta 1 ...packaged, finally ..."
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> [ various good reasons ]\n\n> 3) We put the docs into cvs, but the jade/docbook output did not have\n> predictable file names. So each release would require wiping the output\n> docs and somehow guessing which files were obsolete and which were new.\n\nThat's something that's annoyed me for a good while in a different\ncontext, namely that URLs for particular pages of the docs on\npostgresql.org aren't stable. (Well, maybe they are? but foo58342.htm\ndoesn't give one a warm feeling about it. chap3sec7.htm would look\na lot better.)\n\nIs there any prospect of making the output filenames more predictable?\nWho should I annoy about it?\n\n> I think that *requiring* that the html docs be built in place to effect\n> a release is an extra toolset burden that we should not accept.\n\nAgreed on that one...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 00:55:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v7.1 beta 1 ...packaged, finally ... "
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> ... and find the *every* .htm file has a \"good\" name. Hmm. Is it the\n> fact that someone went through and added an \"id field\" to every chapter\n> and section header? Whoever it was, good job! It wasn't me, but whoever\n> it was: good job :)\n\n> Ah, a perusal of the cvs log shows that Peter E. is the culprit. Looks\n> like it is a non-issue from here on.\n\nOh, is *that* what that was for? Thanks, Peter!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 01:29:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v7.1 beta 1 ...packaged, finally ... "
},
{
"msg_contents": "> Is there any prospect of making the output filenames more predictable?\n> Who should I annoy about it?\n\nWell, you could annoy me about it...\n\n... and I would go to my local installation of the source tree...\n\n... and build the docs to confirm that the *chapters* have good\npredictable names...\n\n... and find the *every* .htm file has a \"good\" name. Hmm. Is it the\nfact that someone went through and added an \"id field\" to every chapter\nand section header? Whoever it was, good job! It wasn't me, but whoever\nit was: good job :)\n\nAh, a perusal of the cvs log shows that Peter E. is the culprit. Looks\nlike it is a non-issue from here on.\n\n - Thomas\n",
"msg_date": "Fri, 08 Dec 2000 06:43:03 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v7.1 beta 1 ...packaged, finally ..."
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> [valid reasons why docs are shipped in tar.gz format]\n>\n> Anyway, since we no longer put the docs tarball into cvs, then we could\n> rethink the techniques. Peter, you seem to have done enough work on this\n> to have an opinion, so what exactly would you prefer for packaging? I\n> recall that an unpacked tree was the suggestion??\n\nUnpacked would be okay.\n\nIf somehow a single-file approach is preferred, another option would be to\nput the html files in an 'ar' archive. That might seem a bit unusual, but\nsince the metadata in 'ar' archives is printable we can make diffs of the\narchive files (the ability to make diffs being to goal here).\n\n> I think that *requiring* that the html docs be built in place to effect\n> a release is an extra toolset burden that we should not accept. The fact\n> that the docs tools work well on postgresql.org as well as on other\n> machines is something to be enjoyed, not put into the critical path ;)\n\nThe problem is that there's a delay of several hours (at best) between\nupdates to the source code and the availability of the built docs. If\nMarc makes the release he'll go into configure.in and change the version\nnumber, if he doesn't forget he'll also update version.sgml, but then\nhe'll have to twiddle his thumbs until the documentation gets rebuild to\nreflect the version change. If he doesn't the release will ship\ndocumentation that says \"PostgreSQL 7.1devel\".\n\nThe alternative is that the change to version.sgml be made earlier (like\nnow), but then he may run the mk-release script and the documentation that\nsits on the ftp server might be several days old because no one bothered\nto check the docbuild.log for several days.\n\nAll in all it's a synchronization and communication problem, but it's a\nreal problem, as history shows.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 10 Dec 2000 19:41:21 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v7.1 beta 1 ...packaged, finally ..."
},
{
"msg_contents": "> All in all it's a synchronization and communication problem, but it's a\n> real problem, as history shows.\n\nThere is nothing stopping Marc from running the docs generation\nexplicitly just before release. The group permissions in my docs build\narea are set to allow this.\n\nAlso, since the hardcopy is usually frozen ~2 weeks before release, I\ndon't see a reason why the tags etc couldn't be set for the release at\nthat time.\n\n - Thomas\n",
"msg_date": "Mon, 11 Dec 2000 07:44:54 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v7.1 beta 1 ...packaged, finally ..."
},
{
"msg_contents": "On Mon, 11 Dec 2000, Thomas Lockhart wrote:\n\n> > All in all it's a synchronization and communication problem, but it's a\n> > real problem, as history shows.\n> \n> There is nothing stopping Marc from running the docs generation\n> explicitly just before release. The group permissions in my docs build\n> area are set to allow this.\n\nI have no probs with that ... what command should I run to do this, and do\nI need to be cd'd into a specific directory before I run it? \n\n\n",
"msg_date": "Mon, 11 Dec 2000 10:00:38 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: v7.1 beta 1 ...packaged, finally ..."
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> There is nothing stopping Marc from running the docs generation\n> explicitly just before release. The group permissions in my docs build\n> area are set to allow this.\n\nThat's kind of what I meant, only instead of \"Marc running the docs\ngeneration explicitly just before release\" I imagined \"the docs generation\nbeing run automatically when Marc generates the release\". It's just one\ndetour less that way.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 11 Dec 2000 18:08:33 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v7.1 beta 1 ...packaged, finally ..."
}
]
|
[
{
"msg_contents": "> > This may be implemented very fast (if someone points me where\n> > I can find CRC func).\n> \n> Lifted from the PNG spec (RFC 2083):\n\nThanks! What about Copyrights/licence?\n\nVadim\n",
"msg_date": "Thu, 7 Dec 2000 11:52:13 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: beta testing version "
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>>>> This may be implemented very fast (if someone points me where\n>>>> I can find CRC func).\n>> \n>> Lifted from the PNG spec (RFC 2083):\n\n> Thanks! What about Copyrights/licence?\n\nShould fit fine under our regular BSD license. CRC as such is long\nsince in the public domain...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2000 15:28:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: beta testing version "
}
]
|
[
{
"msg_contents": "> > This may be implemented very fast (if someone points me where\n> > I can find CRC func). And I could implement \"physical log\"\n> > till next monday.\n> \n> I have been experimenting with CRCs for the past 6 month in \n> our database for internal logging purposes. Downloaded a lot of\n> hash libraries, tried different algorithms, and implemented a few\n> myself. Which algorithm do you want? Have a look at the openssl\n> libraries (www.openssl.org) for a start -if you don't find what\n> you want let me know.\n\nThanks.\n\n> As the logging might include large data blocks, especially \n> now that we can TOAST our data, \n\nTOAST breaks data into a few 2K (or so) tuples to be inserted\nseparately. But first after checkpoint btree split will require\nlogging of 2x8K record -:(\n\n> I would strongly suggest to use strong hashes like RIPEMD or\n> MD5 instead of CRC-32 and the like. Sure, it takes more time \n> tocalculate and more place on the hard disk, but then: a database\n> without data integrity (and means of _proofing_ integrity) is\n> pretty worthless.\n\nOther opinions? Also, we shouldn't forget licence issues.\n\nVadim\n",
"msg_date": "Thu, 7 Dec 2000 12:01:25 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: CRC was: Re: beta testing version"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>> I would strongly suggest to use strong hashes like RIPEMD or\n>> MD5 instead of CRC-32 and the like.\n\n> Other opinions? Also, we shouldn't forget licence issues.\n\nI agree with whoever commented that crypto hashes are silly for this\napplication. A 64-bit CRC *might* be enough stronger than a 32-bit\nCRC to be worth the extra calculation, but frankly I doubt that too.\n\nRemember that we are already sitting atop hardware that's really pretty\nreliable, despite the carping that's been going on in this thread. All\nthat we have to do is detect the infrequent case where a block of data\ndidn't get written due to system failure. It's wildly pessimistic to\nthink that we might get called on to do so as much as once a day (if\nyou are trying to run a reliable database, and are suffering power\nfailures once a day, and haven't bought a UPS, you're a lost cause).\nA 32-bit CRC will fail to detect such an error with a probability of\nabout 1 in 2^32. So, a 32-bit CRC will have an MBTF of 2^32 days, or\n11 million years, on the wildly pessimistic side --- real installations\nprobably 100 times better. That's plenty for me, and improving the odds\nto 2^64 or 2^128 is not worth any slowdown IMHO.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2000 16:35:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRC was: Re: beta testing version "
},
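The arithmetic behind that estimate, spelled out (a sketch assuming, as above, one pessimistic failure opportunity per day, each missed with probability 2^-32):

\[
\mathrm{MTBF} \approx 2^{32}\ \mathrm{days} = \frac{4\,294\,967\,296}{365.25}\ \mathrm{years} \approx 1.18 \times 10^{7}\ \mathrm{years}
\]

which is the "11 million years" figure, before crediting real hardware with failing far less often than daily.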
{
"msg_contents": "On Thu, Dec 07, 2000 at 04:35:00PM -0500, Tom Lane wrote:\n> Remember that we are already sitting atop hardware that's really\n> pretty reliable, despite the carping that's been going on in this\n> thread. All that we have to do is detect the infrequent case where a\n> block of data didn't get written due to system failure. It's wildly\n> pessimistic to think that we might get called on to do so as much as\n> once a day (if you are trying to run a reliable database, and are\n> suffering power failures once a day, and haven't bought a UPS, you're\n> a lost cause). A 32-bit CRC will fail to detect such an error with a\n> probability of about 1 in 2^32. So, a 32-bit CRC will have an MBTF of\n> 2^32 days, or 11 million years, on the wildly pessimistic side ---\n> real installations probably 100 times better. That's plenty for me,\n> and improving the odds to 2^64 or 2^128 is not worth any slowdown\n> IMHO.\n\n1. Computing a CRC-64 takes only about twice as long as a CRC-32, for\n 2^32 times the confidence. That's pretty cheap confidence.\n\n2. I disagree with way the above statistics were computed. That eleven \n million-year figure gets whittled down pretty quickly when you \n factor in all the sources of corruption, even without crashes. \n (Power failures are only one of many sources of corruption.) They \n grow with the size and activity of the database. Databases are \n getting very large and busy indeed.\n\n3. Many users clearly hope to be able to pull the plug on their hardware \n and get back up confidently. While we can't promise they won't have \n to go to their backups, we should at least be equipped to promise,\n with confidence, that they will know whether they need to.\n\n4. For a way to mark the \"current final\" log entry, you want a lot more\n confidence, because you read a lot more of them, and reading beyond \n the end may cause you to corrupt a currently-valid database, which \n seems a lot worse than just using a corrupted database.\n\nStill, I agree that a 32-bit CRC is better than none at all. \n\nNathan Myers\[email protected]\n",
"msg_date": "Thu, 7 Dec 2000 16:01:23 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRC was: Re: beta testing version"
},
{
"msg_contents": "[email protected] (Nathan Myers) writes:\n> 2. I disagree with way the above statistics were computed. That eleven \n> million-year figure gets whittled down pretty quickly when you \n> factor in all the sources of corruption, even without crashes. \n> (Power failures are only one of many sources of corruption.) They \n> grow with the size and activity of the database. Databases are \n> getting very large and busy indeed.\n\nSure, but the argument still holds. If the net MTBF of your underlying\nsystem is less than a day, it's too unreliable to run a database that\nyou want to trust. Doesn't matter what the contributing failure\nmechanisms are. In practice, I'd demand an MTBF of a lot more than a\nday before I'd accept a hardware system as satisfactory...\n\n> 3. Many users clearly hope to be able to pull the plug on their hardware \n> and get back up confidently. While we can't promise they won't have \n> to go to their backups, we should at least be equipped to promise,\n> with confidence, that they will know whether they need to.\n\nAnd the difference in odds between 2^32 and 2^64 matters here? I made\na numerical case that it doesn't, and you haven't refuted it. By your\nlogic, we might as well say that we should be using a 128-bit CRC, or\n256-bit, or heck, a few kilobytes. It only takes a little longer to go\nup each step, right, so where should you stop? I say MTBF measured in\nmegayears ought to be plenty. Show me the numerical argument that 64\nbits is the right place on the curve.\n\n> 4. For a way to mark the \"current final\" log entry, you want a lot more\n> confidence, because you read a lot more of them,\n\nYou only need to make the distinction during a restart, so I don't\nthink that argument is correct.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2000 19:36:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRC was: Re: beta testing version "
},
{
"msg_contents": "I believe that there are many good points to have CRC facilities \"built\nint\", and I failed to detect any arguments against it. In my domain (the\nmedical domain) we simply can't use data without \"proof\" of integrity\n(\"proof\" as in highest possible level of confidence within reasonable\neffort)\n\nTherefore, I propose defining new data types like \"CRC32\", \"CRC64\",\n\"RIPEMD\", whatever (rather than pluggable arbitrary CRCs).\n\nSimilar as with the SERIAL data type, the CRC data type would generate\nautomatically a trigger function that calculates a CRC across a tuple\n(omitting the CRC property of course, and maybe the OID as well) before each\nupdate and store it in itself.\n\nIs there anything wrong with this idea?\n\nCan somebody help me by pointing me into the right direction to implement\nit? (The person who implemeted the SERIAL data type maybe ?)\n\nRegards,\nHorst\n\n",
"msg_date": "Fri, 8 Dec 2000 19:06:25 +1100",
"msg_from": "\"Horst Herb\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RFC: CRC datatype"
},
{
"msg_contents": "> Therefore, I propose defining new data types like \"CRC32\", \"CRC64\",\n> \"RIPEMD\", whatever (rather than pluggable arbitrary CRCs).\n\nI suspect that you are really looking at the problem from the wrong end.\nCRC checking should not need to be done by the database user, with a fancy\ntype. The postgres server itself should guarantee data integrity - you\nshouldn't have to worry about it in userland.\n\nThis is, in fact, what the recent discussion on this list has been\nproposing...\n\nChris\n\n",
"msg_date": "Fri, 8 Dec 2000 16:08:22 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: RFC: CRC datatype"
},
{
"msg_contents": "> I suspect that you are really looking at the problem from the wrong end.\n> CRC checking should not need to be done by the database user, with a fancy\n> type. The postgres server itself should guarantee data integrity - you\n> shouldn't have to worry about it in userland.\n\nI agree in principle. However, performance sometimes is more important than\nintegrity. Think of a data logger of uncritical data. A online forum. There\na plenty of occasions where you don't have to worry for a single bit on or\noff, but a lot to worry about performance. Look at all those people using M$\nAccess or MySQL who don't give a damn about data integrity. As opposed to\nthem, there will always be other \"typical\" database applications where 100%\nintegrity is paramount. Then it is nice to have a choice of CRCs, where the\ndatabase designer can choose according to his/her specific\nperformance/integrity balanced needs. This is why I would prefer the\n\"datatype\" solution.\n\n> This is, in fact, what the recent discussion on this list has been\n> proposing...\n\nAFAIK the thread for \"built in\" crcs referred only to CRCs in the\ntransaction log. This here is a different thing. CRCs in the transaction log\nare crucial to proof integrity of the log, CRCs as datatype are neccessary\nto proof integrity of database entries at row level. Always remember that a\npsotgres data base on the harddisk can be manipulated accidentally /\nmaliciously without postgres even running. These are the cases where you\nneed row level CRCs.\n\nHorst\n\n",
"msg_date": "Sat, 9 Dec 2000 00:03:52 +1100",
"msg_from": "\"Horst Herb\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC: CRC datatype"
},
{
"msg_contents": "\"Horst Herb\" <[email protected]> writes:\n> AFAIK the thread for \"built in\" crcs referred only to CRCs in the\n> transaction log. This here is a different thing. CRCs in the transaction log\n> are crucial to proof integrity of the log, CRCs as datatype are neccessary\n> to proof integrity of database entries at row level.\n\nI think a row-level CRC is rather pointless. Perhaps it'd be a good\nidea to have a disk-page-level CRC, though. That would check the rows\non the page *and* allow catching errors in the page and tuple overhead\nstructures, which row-level CRCs would not cover.\n\nI suspect TOAST breaks your notion of computing a CRC at trigger time\nanyway --- some of the fields may be toasted already, some not.\n\nIf you're sufficiently paranoid that you insist you need a row-level\nCRC, it seems to me that you'd want to generate it and later check it\nin your application, not in the database. That's the only way you get\nend-to-end coverage. Surely you don't trust your TCP connection to the\nserver, either?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 11:49:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC: CRC datatype "
},
{
"msg_contents": "> I think a row-level CRC is rather pointless. Perhaps it'd be a good\n> idea to have a disk-page-level CRC, though. That would check the rows\n> on the page *and* allow catching errors in the page and tuple overhead\n> structures, which row-level CRCs would not cover.\n\nrow level is neccessary to be able tocheck integrity at application level.\n\n> I suspect TOAST breaks your notion of computing a CRC at trigger time\n> anyway --- some of the fields may be toasted already, some not.\n\nThe workaround is a loggingtable where you store the crcs as well. Lateron,\nan \"integrity daemon\" can compare whether match or not.\n\n> If you're sufficiently paranoid that you insist you need a row-level\n> CRC, it seems to me that you'd want to generate it and later check it\n> in your application, not in the database. That's the only way you get\n\nOh, sure, that is the way we do it now. And no, nothing to do with paranoia.\nBurnt previously badly by assumption that a decent SQL server is a\n\"guarantee\" for data integrity. Shit simply happens.\n\n> end-to-end coverage. Surely you don't trust your TCP connection to the\n> server, either?\n\nTCP _IS_ heavily checksummed. But yes, we _do_ calculate checksums at the\nclient, recalculate at the server, and compare after the transaction is\ncompleted. As we have only few writes between heavy read access, the\nperformance penalty doing this (for our purposes) is minimal.\n\nHorst\n\n",
"msg_date": "Sat, 9 Dec 2000 04:11:11 +1100",
"msg_from": "\"Horst Herb\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC: CRC datatype "
},
{
"msg_contents": "On Thu, Dec 07, 2000 at 04:01:23PM -0800, Nathan Myers wrote:\n> 1. Computing a CRC-64 takes only about twice as long as a CRC-32, for\n> 2^32 times the confidence. That's pretty cheap confidence.\n\nIncidentally, I benchmarked the previously mentioned 64-bit fingerprint,\nthe standard 32-bit CRC, MD5 and SHA, and the fastest algorithm on my\nCeleron and on a PIII was MD5. The 64-bit fingerprint was only a hair\nslower, the CRC was (quite surprisingly) about 40% slower, and the\nimplementation of SHA that I had available was a real dog. Taking an\narbitrary 32 bits of a MD5 would likely be less collision prone than\nusing a 32-bit CRC, and it appears faster as well.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Fri, 8 Dec 2000 12:19:39 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRC was: Re: beta testing version"
},
{
"msg_contents": "\"Horst Herb\" <[email protected]> writes:\n>> Surely you don't trust your TCP connection to the\n>> server, either?\n\n> TCP _IS_ heavily checksummed.\n\nYes, and so are the disk drives that you are asserting you don't trust.\n\nMy point is that in both cases, there are lots and lots of failure\nmechanisms that won't be caught by the transport or storage CRC.\nThe same applies to anything other than an end-to-end check.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 13:31:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC: CRC datatype "
},
{
"msg_contents": " Date: Fri, 8 Dec 2000 12:19:39 -0600\n From: Bruce Guenter <[email protected]>\n\n Incidentally, I benchmarked the previously mentioned 64-bit fingerprint,\n the standard 32-bit CRC, MD5 and SHA, and the fastest algorithm on my\n Celeron and on a PIII was MD5. The 64-bit fingerprint was only a hair\n slower, the CRC was (quite surprisingly) about 40% slower, and the\n implementation of SHA that I had available was a real dog. Taking an\n arbitrary 32 bits of a MD5 would likely be less collision prone than\n using a 32-bit CRC, and it appears faster as well.\n\nI just want to confirm that you used something like the fast 32-bit\nCRC algorithm, appended. The one posted earlier was accurate but\nslow.\n\nIan\n\n/*\n * Copyright (C) 1986 Gary S. Brown. You may use this program, or\n * code or tables extracted from it, as desired without restriction.\n */\n\n/* Modified slightly by Ian Lance Taylor, [email protected], for use with\n Taylor UUCP. */\n\n#include \"uucp.h\"\n#include \"prot.h\"\n\n/* First, the polynomial itself and its table of feedback terms. The */\n/* polynomial is */\n/* X^32+X^26+X^23+X^22+X^16+X^12+X^11+X^10+X^8+X^7+X^5+X^4+X^2+X^1+X^0 */\n/* Note that we take it \"backwards\" and put the highest-order term in */\n/* the lowest-order bit. The X^32 term is \"implied\"; the LSB is the */\n/* X^31 term, etc. The X^0 term (usually shown as \"+1\") results in */\n/* the MSB being 1. */\n\n/* Note that the usual hardware shift register implementation, which */\n/* is what we're using (we're merely optimizing it by doing eight-bit */\n/* chunks at a time) shifts bits into the lowest-order term. In our */\n/* implementation, that means shifting towards the right. Why do we */\n/* do it this way? Because the calculated CRC must be transmitted in */\n/* order from highest-order term to lowest-order term. UARTs transmit */\n/* characters in order from LSB to MSB. By storing the CRC this way, */\n/* we hand it to the UART in the order low-byte to high-byte; the UART */\n/* sends each low-bit to hight-bit; and the result is transmission bit */\n/* by bit from highest- to lowest-order term without requiring any bit */\n/* shuffling on our part. Reception works similarly. */\n\n/* The feedback terms table consists of 256, 32-bit entries. Notes: */\n/* */\n/* The table can be generated at runtime if desired; code to do so */\n/* is shown later. It might not be obvious, but the feedback */\n/* terms simply represent the results of eight shift/xor opera- */\n/* tions for all combinations of data and CRC register values. */\n/* [this code is no longer present--ian]\t\t\t */\n/* */\n/* The values must be right-shifted by eight bits by the \"updcrc\" */\n/* logic; the shift must be unsigned (bring in zeroes). On some */\n/* hardware you could probably optimize the shift in assembler by */\n/* using byte-swap instructions. 
*/\n\nstatic const unsigned long aicrc32tab[] = { /* CRC polynomial 0xedb88320 */\n0x00000000L, 0x77073096L, 0xee0e612cL, 0x990951baL, 0x076dc419L, 0x706af48fL, 0xe963a535L, 0x9e6495a3L,\n0x0edb8832L, 0x79dcb8a4L, 0xe0d5e91eL, 0x97d2d988L, 0x09b64c2bL, 0x7eb17cbdL, 0xe7b82d07L, 0x90bf1d91L,\n0x1db71064L, 0x6ab020f2L, 0xf3b97148L, 0x84be41deL, 0x1adad47dL, 0x6ddde4ebL, 0xf4d4b551L, 0x83d385c7L,\n0x136c9856L, 0x646ba8c0L, 0xfd62f97aL, 0x8a65c9ecL, 0x14015c4fL, 0x63066cd9L, 0xfa0f3d63L, 0x8d080df5L,\n0x3b6e20c8L, 0x4c69105eL, 0xd56041e4L, 0xa2677172L, 0x3c03e4d1L, 0x4b04d447L, 0xd20d85fdL, 0xa50ab56bL,\n0x35b5a8faL, 0x42b2986cL, 0xdbbbc9d6L, 0xacbcf940L, 0x32d86ce3L, 0x45df5c75L, 0xdcd60dcfL, 0xabd13d59L,\n0x26d930acL, 0x51de003aL, 0xc8d75180L, 0xbfd06116L, 0x21b4f4b5L, 0x56b3c423L, 0xcfba9599L, 0xb8bda50fL,\n0x2802b89eL, 0x5f058808L, 0xc60cd9b2L, 0xb10be924L, 0x2f6f7c87L, 0x58684c11L, 0xc1611dabL, 0xb6662d3dL,\n0x76dc4190L, 0x01db7106L, 0x98d220bcL, 0xefd5102aL, 0x71b18589L, 0x06b6b51fL, 0x9fbfe4a5L, 0xe8b8d433L,\n0x7807c9a2L, 0x0f00f934L, 0x9609a88eL, 0xe10e9818L, 0x7f6a0dbbL, 0x086d3d2dL, 0x91646c97L, 0xe6635c01L,\n0x6b6b51f4L, 0x1c6c6162L, 0x856530d8L, 0xf262004eL, 0x6c0695edL, 0x1b01a57bL, 0x8208f4c1L, 0xf50fc457L,\n0x65b0d9c6L, 0x12b7e950L, 0x8bbeb8eaL, 0xfcb9887cL, 0x62dd1ddfL, 0x15da2d49L, 0x8cd37cf3L, 0xfbd44c65L,\n0x4db26158L, 0x3ab551ceL, 0xa3bc0074L, 0xd4bb30e2L, 0x4adfa541L, 0x3dd895d7L, 0xa4d1c46dL, 0xd3d6f4fbL,\n0x4369e96aL, 0x346ed9fcL, 0xad678846L, 0xda60b8d0L, 0x44042d73L, 0x33031de5L, 0xaa0a4c5fL, 0xdd0d7cc9L,\n0x5005713cL, 0x270241aaL, 0xbe0b1010L, 0xc90c2086L, 0x5768b525L, 0x206f85b3L, 0xb966d409L, 0xce61e49fL,\n0x5edef90eL, 0x29d9c998L, 0xb0d09822L, 0xc7d7a8b4L, 0x59b33d17L, 0x2eb40d81L, 0xb7bd5c3bL, 0xc0ba6cadL,\n0xedb88320L, 0x9abfb3b6L, 0x03b6e20cL, 0x74b1d29aL, 0xead54739L, 0x9dd277afL, 0x04db2615L, 0x73dc1683L,\n0xe3630b12L, 0x94643b84L, 0x0d6d6a3eL, 0x7a6a5aa8L, 0xe40ecf0bL, 0x9309ff9dL, 0x0a00ae27L, 0x7d079eb1L,\n0xf00f9344L, 0x8708a3d2L, 0x1e01f268L, 0x6906c2feL, 0xf762575dL, 0x806567cbL, 0x196c3671L, 0x6e6b06e7L,\n0xfed41b76L, 0x89d32be0L, 0x10da7a5aL, 0x67dd4accL, 0xf9b9df6fL, 0x8ebeeff9L, 0x17b7be43L, 0x60b08ed5L,\n0xd6d6a3e8L, 0xa1d1937eL, 0x38d8c2c4L, 0x4fdff252L, 0xd1bb67f1L, 0xa6bc5767L, 0x3fb506ddL, 0x48b2364bL,\n0xd80d2bdaL, 0xaf0a1b4cL, 0x36034af6L, 0x41047a60L, 0xdf60efc3L, 0xa867df55L, 0x316e8eefL, 0x4669be79L,\n0xcb61b38cL, 0xbc66831aL, 0x256fd2a0L, 0x5268e236L, 0xcc0c7795L, 0xbb0b4703L, 0x220216b9L, 0x5505262fL,\n0xc5ba3bbeL, 0xb2bd0b28L, 0x2bb45a92L, 0x5cb36a04L, 0xc2d7ffa7L, 0xb5d0cf31L, 0x2cd99e8bL, 0x5bdeae1dL,\n0x9b64c2b0L, 0xec63f226L, 0x756aa39cL, 0x026d930aL, 0x9c0906a9L, 0xeb0e363fL, 0x72076785L, 0x05005713L,\n0x95bf4a82L, 0xe2b87a14L, 0x7bb12baeL, 0x0cb61b38L, 0x92d28e9bL, 0xe5d5be0dL, 0x7cdcefb7L, 0x0bdbdf21L,\n0x86d3d2d4L, 0xf1d4e242L, 0x68ddb3f8L, 0x1fda836eL, 0x81be16cdL, 0xf6b9265bL, 0x6fb077e1L, 0x18b74777L,\n0x88085ae6L, 0xff0f6a70L, 0x66063bcaL, 0x11010b5cL, 0x8f659effL, 0xf862ae69L, 0x616bffd3L, 0x166ccf45L,\n0xa00ae278L, 0xd70dd2eeL, 0x4e048354L, 0x3903b3c2L, 0xa7672661L, 0xd06016f7L, 0x4969474dL, 0x3e6e77dbL,\n0xaed16a4aL, 0xd9d65adcL, 0x40df0b66L, 0x37d83bf0L, 0xa9bcae53L, 0xdebb9ec5L, 0x47b2cf7fL, 0x30b5ffe9L,\n0xbdbdf21cL, 0xcabac28aL, 0x53b39330L, 0x24b4a3a6L, 0xbad03605L, 0xcdd70693L, 0x54de5729L, 0x23d967bfL,\n0xb3667a2eL, 0xc4614ab8L, 0x5d681b02L, 0x2a6f2b94L, 0xb40bbe37L, 0xc30c8ea1L, 0x5a05df1bL, 0x2d02ef8dL\n};\n\n/*\n * IUPDC32 macro derived from article Copyright (C) 1986 Stephen Satchell. 
\n * NOTE: First argument must be in range 0 to 255.\n * Second argument is referenced twice.\n * \n * Programmers may incorporate any or all code into their programs, \n * giving proper credit within the source. Publication of the \n * source routines is permitted so long as proper credit is given \n * to Stephen Satchell, Satchell Evaluations and Chuck Forsberg, \n * Omen Technology.\n */\n\n#define IUPDC32(b, ick) \\\n (aicrc32tab[((int) (ick) ^ (b)) & 0xff] ^ (((ick) >> 8) & 0x00ffffffL))\n\nunsigned long\nicrc (z, c, ick)\n const char *z;\n size_t c;\n unsigned long ick;\n{\n while (c > 4)\n {\n ick = IUPDC32 (*z++, ick);\n ick = IUPDC32 (*z++, ick);\n ick = IUPDC32 (*z++, ick);\n ick = IUPDC32 (*z++, ick);\n c -= 4;\n }\n while (c-- != 0)\n ick = IUPDC32 (*z++, ick);\n return ick;\n}\n",
"msg_date": "8 Dec 2000 10:36:39 -0800",
"msg_from": "Ian Lance Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRC was: Re: beta testing version"
},
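A minimal usage sketch for the icrc() routine above. It assumes the common CRC-32 convention of priming the register with all one bits and complementing the result; that convention is an assumption here, since the posted code leaves initialization and finalization to the caller.

#include <stdio.h>
#include <string.h>

/* The table-driven routine posted above. */
unsigned long icrc (const char *z, size_t c, unsigned long ick);

int
main (void)
{
  const char *msg = "123456789";
  /* Prime with all ones and complement at the end (assumed convention).
     With the 0xedb88320 table this prints cbf43926, the standard
     CRC-32 check value for "123456789". */
  unsigned long crc = icrc (msg, strlen (msg), 0xFFFFFFFFL) ^ 0xFFFFFFFFL;
  printf ("crc32 = %08lx\n", crc & 0xFFFFFFFFL);
  return 0;
}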
{
"msg_contents": "Bruce Guenter <[email protected]> writes:\n> ... Taking an\n> arbitrary 32 bits of a MD5 would likely be less collision prone than\n> using a 32-bit CRC, and it appears faster as well.\n\n... but that would be an algorithm that you know NOTHING about the\nproperties of. What is your basis for asserting it's better than CRC?\n\nCRC is pretty well studied and its error-detection behavior is known\n(and good). MD5 has been studied less thoroughly AFAIK, and in any\ncase what's known about its behavior is that the *entire* MD5 output\nprovides a good signature for a datastream. If you pick some ad-hoc\nmethod like taking a randomly chosen subset of MD5's output bits,\nyou really don't know anything at all about what the error-detection\nproperties of the method are.\n\nI am reminded of Knuth's famous advice about random number generators:\n\"Random numbers should not be generated with a method chosen at random.\nSome theory should be used.\" Error-detection codes, like random-number\ngenerators, have decades of theory behind them. Seat-of-the-pants\ntinkering, even if it starts with a known-good method, is not likely to\nproduce an improvement.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 13:58:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRC was: Re: beta testing version "
},
{
"msg_contents": "On Fri, Dec 08, 2000 at 10:36:39AM -0800, Ian Lance Taylor wrote:\n> Incidentally, I benchmarked the previously mentioned 64-bit fingerprint,\n> the standard 32-bit CRC, MD5 and SHA, and the fastest algorithm on my\n> Celeron and on a PIII was MD5. The 64-bit fingerprint was only a hair\n> slower, the CRC was (quite surprisingly) about 40% slower, and the\n> implementation of SHA that I had available was a real dog. Taking an\n> arbitrary 32 bits of a MD5 would likely be less collision prone than\n> using a 32-bit CRC, and it appears faster as well.\n> \n> I just want to confirm that you used something like the fast 32-bit\n> CRC algorithm, appended. The one posted earlier was accurate but\n> slow.\n\nYes. I just rebuilt the framework using this exact code, and it\nperformed identically to the previous CRC code (which didn't have an\nunrolled inner loop). These were compiled with -O6 with egcs 1.1.2.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Fri, 8 Dec 2000 13:01:27 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRC was: Re: beta testing version"
},
{
"msg_contents": "On Fri, Dec 08, 2000 at 12:19:39PM -0600, Bruce Guenter wrote:\n> On Thu, Dec 07, 2000 at 04:01:23PM -0800, Nathan Myers wrote:\n> > 1. Computing a CRC-64 takes only about twice as long as a CRC-32, for\n> > 2^32 times the confidence. That's pretty cheap confidence.\n> \n> Incidentally, I benchmarked the previously mentioned 64-bit fingerprint,\n> the standard 32-bit CRC, MD5 and SHA, and the fastest algorithm on my\n> Celeron and on a PIII was MD5. The 64-bit fingerprint was only a hair\n> slower, the CRC was (quite surprisingly) about 40% slower, and the\n> implementation of SHA that I had available was a real dog. Taking an\n> arbitrary 32 bits of a MD5 would likely be less collision prone than\n> using a 32-bit CRC, and it appears faster as well.\n\nThis is very interesting. MD4 is faster than MD5. (MD5, described as \n\"MD4 with suspenders on\", does some extra stuff to protect against more-\nobscure attacks, of no interest to us.) Which 64-bit CRC code did you \nuse, Mark Mitchell's? Are you really saying MD5 was faster than CRC-32?\n\nI don't know of any reason to think that 32 bits of an MD5 would be \nbetter distributed than a CRC-32, or that having computed the 64 bits\nthere would be any point in throwing away half.\n\nCurrent evidence suggests that MD4 would be a good choice for a hash\nalgorithm.\n\nNathan Myers\[email protected]\n\n",
"msg_date": "Fri, 8 Dec 2000 11:10:19 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRC"
},
{
"msg_contents": "On Fri, Dec 08, 2000 at 01:58:12PM -0500, Tom Lane wrote:\n> Bruce Guenter <[email protected]> writes:\n> > ... Taking an\n> > arbitrary 32 bits of a MD5 would likely be less collision prone than\n> > using a 32-bit CRC, and it appears faster as well.\n> \n> ... but that would be an algorithm that you know NOTHING about the\n> properties of. What is your basis for asserting it's better than CRC?\n\nMD5 is a cryptographic hash, which means (AFAIK) that ideally it is\nimpossible to produce a collision using any other method than brute\nforce attempts. In other words, any stream of input to the hash that is\nlonger than the hash length (8 bytes for MD5) is equally probable to\nproduce a given hash code.\n\n> CRC is pretty well studied and its error-detection behavior is known\n> (and good). MD5 has been studied less thoroughly AFAIK, and in any\n> case what's known about its behavior is that the *entire* MD5 output\n> provides a good signature for a datastream. If you pick some ad-hoc\n> method like taking a randomly chosen subset of MD5's output bits,\n> you really don't know anything at all about what the error-detection\n> properties of the method are.\n\nActually, in my reading reagarding the properties of MD5, I read an\narticle that stated that if a smaller number of bits was desired, one\ncould either (and here's where my memory fails me) just select the\nmiddle N bits from the resulting hash, or fold the hash using XOR until\nthe desired number of bits was reached. I'll see if I can find a\nreference...\n\nRFC2289 (http://www.ietf.org/rfc/rfc2289.txt) includes an algorithm for\nfolding MD5 digests down to 64 bits by XORing the top half with the\nbottom half. See appendix A.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Fri, 8 Dec 2000 13:18:34 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRC was: Re: beta testing version"
},
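For concreteness, a sketch of the RFC 2289-style folding described above, assuming a 16-byte MD5 digest is already in hand. The md5_digest() prototype is a hypothetical stand-in for whatever MD5 implementation is linked in, and the further fold from 64 down to 32 bits is an extrapolation of the same XOR trick, not something the RFC itself specifies.

#include <string.h>

/* Hypothetical helper: fills out[16] with the MD5 of (buf, len).
   Stands in for any concrete MD5 implementation. */
void md5_digest (const void *buf, size_t len, unsigned char out[16]);

/* RFC 2289 appendix A style: XOR the top half of the digest with
   the bottom half to get 64 bits. */
static unsigned long long
md5_fold64 (const unsigned char d[16])
{
  unsigned long long hi, lo;
  memcpy (&hi, d, 8);
  memcpy (&lo, d + 8, 8);
  return hi ^ lo;
}

/* Folding once more gives a 32-bit check value, comparable in size
   (though not in burst-error guarantees) to a CRC-32. */
static unsigned long
md5_fold32 (const unsigned char d[16])
{
  unsigned long long f = md5_fold64 (d);
  return (unsigned long) ((f ^ (f >> 32)) & 0xFFFFFFFFUL);
}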
{
"msg_contents": "Bruce Guenter <[email protected]> writes:\n> MD5 is a cryptographic hash, which means (AFAIK) that ideally it is\n> impossible to produce a collision using any other method than brute\n> force attempts.\n\nTrue but irrelevant. What we need to worry about is the probability\nthat a random error will be detected, not the computational effort that\na malicious attacker would need in order to insert an undetectable\nerror.\n\nMD5 is designed for a purpose that really doesn't have much to do with\nerror detection, when you think about it. It says \"you will have a hard\ntime computing a different string that produces the same hash as some\nprespecified string\". This is not the same as promising\nbetter-than-random odds against a damaged copy of some string having the\nsame hash as the original. CRC, on the other hand, is specifically\ndesigned for error detection, and for localized errors (such as a\ncorrupted byte or two) it does a provably better-than-random job.\nFor nonlocalized errors you don't get a guarantee, but you do get\nsame-as-random odds of detection (ie, 1 in 2^N for an N-bit CRC).\nI really doubt that MD5 can beat a CRC with the same number of output\nbits for the purpose of error detection; given the lack of guarantee\nabout short burst errors, I doubt it's even as good. (Wild-pointer\nstomps on disk buffers are an example of the sort of thing that may\nlook like a burst error.)\n\nNow, if you are worried about crypto-capable gremlins living in your\nfile system, maybe what you want is MD5. But I'm inclined to think that\nCRC is more appropriate for the job at hand.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 15:38:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRC was: Re: beta testing version "
},
{
"msg_contents": "On Fri, Dec 08, 2000 at 11:10:19AM -0800, Nathan Myers wrote:\n> This is very interesting. MD4 is faster than MD5. (MD5, described as \n> \"MD4 with suspenders on\", does some extra stuff to protect against more-\n> obscure attacks, of no interest to us.) Which 64-bit CRC code did you \n> use, Mark Mitchell's?\n\nYes.\n\n> Are you really saying MD5 was faster than CRC-32?\n\nYes. I expect it's because the operations used in MD5 are easily\nparallelized, and operate on blocks of 64-bytes at a time, while the CRC\nis mostly non-parallelizable, uses a table lookup, and operates on\nsingle bytes.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Fri, 8 Dec 2000 14:54:30 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
},
{
"msg_contents": "On Fri, Dec 08, 2000 at 11:10:19AM -0800, I wrote:\n> Current evidence suggests that MD4 would be a good choice for a hash\n> algorithm.\n\nThinking about it, I suspect that any CRC implementation that can't outrun \nMD5 by a wide margin is seriously sub-optimal. Can you post any more\ndetails about how the tests were run? I'd like to try it.\n\nNathan Myers\[email protected]\n",
"msg_date": "Fri, 8 Dec 2000 13:00:27 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
},
{
"msg_contents": "[email protected] (Nathan Myers) writes:\n> Thinking about it, I suspect that any CRC implementation that can't outrun \n> MD5 by a wide margin is seriously sub-optimal.\n\nI was finding that hard to believe, too, at least for CRC-32 (CRC-64\nwould take more code, so I'm not so sure about it).\n\nIs that 64-bit code you pointed us to before actually a CRC, or\nsomething else? It doesn't call itself a CRC, and I was having a hard\ntime extracting anything definite (like the polynomial) from all the\nbit-pushing underbrush :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 16:21:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC "
},
{
"msg_contents": "On Fri, Dec 08, 2000 at 03:38:09PM -0500, Tom Lane wrote:\n> Bruce Guenter <[email protected]> writes:\n> > MD5 is a cryptographic hash, which means (AFAIK) that ideally it is\n> > impossible to produce a collision using any other method than brute\n> > force attempts.\n> True but irrelevant. What we need to worry about is the probability\n> that a random error will be detected,\n\nWhich I indicated immediately after the sentence you quoted. The\nprobability that a random error will be detected is the same as the\nprobability of a collision in the hash given two different inputs. The\nbrute force note means that the probability of a collision is as good as\nrandom.\n\n> MD5 is designed for a purpose that really doesn't have much to do with\n> error detection, when you think about it. It says \"you will have a hard\n> time computing a different string that produces the same hash as some\n> prespecified string\". This is not the same as promising\n> better-than-random odds against a damaged copy of some string having the\n> same hash as the original.\n\nIt does provide as-good-as-random odds against a damaged copy of some\nstring having the same hash as the original -- nobody has been able to\nexhibit any collisions in MD5 (see http://cr.yp.to/papers/hash127.ps,\npage 18 for notes on this).\n\n> CRC, on the other hand, is specifically\n> designed for error detection, and for localized errors (such as a\n> corrupted byte or two) it does a provably better-than-random job.\n> For nonlocalized errors you don't get a guarantee, but you do get\n> same-as-random odds of detection (ie, 1 in 2^N for an N-bit CRC).\n\nFor the log, the CRC's primary function (as far as I understand it)\nwould be to guard against inconsistent transaction being treated as\nconsistent data. Such inconsistent transactions would be partially\nwritten, resulting in errors much larger than a small burst.\n\nFor guarding the actual record data, I agree with you 100% -- what we're\nlikely to see is a few localized bytes with flipped bits due to hardware\nfailure of one kind or another. However, if the data is really\ncritical, an ECC may be more appropriate, but that would make the data\nsignificantly larger (9/8 for the algorithms I've seen).\n\n> I really doubt that MD5 can beat a CRC with the same number of output\n> bits for the purpose of error detection;\n\nAgreed. However, MD5 provides four times as many bits as the standard\n32-bit CRC.\n\n(I think I initially suggested you could take an arbitrary 32 bits out\nof MD5 to provide a check code \"as good as CRC-32\". I now take that\nback. Due to the burst error nature of CRCs, nothing else could be as\ngood as it, unless the alternate algorithm also made some guarantees,\nwhich MD5 definitely doesn't.)\n\n> (Wild-pointer\n> stomps on disk buffers are an example of the sort of thing that may\n> look like a burst error.)\n\nActually, wild-pointer incidents involving disk buffers at the kernel\nlevel, from my experience, are characterized by content from one file\nappearing in another, which is distinctly different than a burst error,\nand more like what would be seen if a log record were partially written.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Fri, 8 Dec 2000 15:28:25 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRC was: Re: beta testing version"
},
{
"msg_contents": "Bruce Guenter <[email protected]> writes:\n>> Are you really saying MD5 was faster than CRC-32?\n\n> Yes. I expect it's because the operations used in MD5 are easily\n> parallelized, and operate on blocks of 64-bytes at a time, while the CRC\n> is mostly non-parallelizable, uses a table lookup, and operates on\n> single bytes.\n\nWhat MD5 implementation did you use? The one I have handy (the original\nRSA reference version) sure looks like it's more computation per byte\nthan a CRC.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 16:30:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC "
},
{
"msg_contents": "On Fri, Dec 08, 2000 at 04:21:21PM -0500, Tom Lane wrote:\n> [email protected] (Nathan Myers) writes:\n> > Thinking about it, I suspect that any CRC implementation that can't outrun \n> > MD5 by a wide margin is seriously sub-optimal.\n> I was finding that hard to believe, too, at least for CRC-32 (CRC-64\n> would take more code, so I'm not so sure about it).\n\nWould you like to see the simple benchmarking setup I used? The amount\nof code involved (once all the hashes are factored in) is fairly large,\nso I'm somewhat hesitant to just send it to the mailing list.\n\n> Is that 64-bit code you pointed us to before actually a CRC, or\n> something else? It doesn't call itself a CRC, and I was having a hard\n> time extracting anything definite (like the polynomial) from all the\n> bit-pushing underbrush :-(\n\nIt isn't a CRC. It's a fingerprint. As you've mentioned, it doesn't\nhave the guarantees against burst errors that a CRC would have, but it\ndoes have as good as random collision avoidance over any random data\ncorruption. At least, that's what the author claims. My math isn't\nnearly good enough to verify such claims.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Fri, 8 Dec 2000 15:32:56 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
},
{
"msg_contents": "On Fri, Dec 08, 2000 at 04:30:58PM -0500, Tom Lane wrote:\n> Bruce Guenter <[email protected]> writes:\n> >> Are you really saying MD5 was faster than CRC-32?\n> > Yes. I expect it's because the operations used in MD5 are easily\n> > parallelized, and operate on blocks of 64-bytes at a time, while the CRC\n> > is mostly non-parallelizable, uses a table lookup, and operates on\n> > single bytes.\n> What MD5 implementation did you use?\n\nI used the GPL'ed implementation written by Ulrich Drepper in 1995. The\ncode from OpenSSL looks identical in terms of the operations performed.\n\n> The one I have handy (the original\n> RSA reference version) sure looks like it's more computation per byte\n> than a CRC.\n\nThe algorithm itself does use more computation per byte. However, the\nalgorithm works on blocks of 64 bytes at a time. As well, the\noperations should be easily pipelined. On the other hand, the CRC code\nis largely serial, and highly dependant on a table lookup operation.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Fri, 8 Dec 2000 16:02:54 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
},
{
"msg_contents": "Bruce Guenter <[email protected]> writes:\n> Would you like to see the simple benchmarking setup I used? The amount\n> of code involved (once all the hashes are factored in) is fairly large,\n> so I'm somewhat hesitant to just send it to the mailing list.\n\nI agree, don't send it to the whole list. But I'd like a copy.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 17:25:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC "
},
{
"msg_contents": "Bruce Guenter <[email protected]> writes:\n>> I agree, don't send it to the whole list. But I'd like a copy.\n\n> Here you go.\n\nAs near as I could tell, the test as you have it (one CRC computation per\nfread) is purely I/O bound. I changed the main loop to this:\n\nint main() {\n static char buf[8192];\n size_t rd;\n hash_t hash;\n\n while (rd = fread(buf, 1, sizeof buf, stdin)) {\n\t int i;\n\t for (i = 0; i < 1000; i++) {\n\t\t init(&hash);\n\t\t update(&hash, buf, rd);\n\t }\n }\n return 0;\n}\n\nso as to get a reasonable amount of computation per fread. On an\notherwise idle HP 9000 C180 machine, I get the following numbers on a\n1MB input file:\n\ntime benchcrc <random32\n\nreal 35.3\nuser 35.0\nsys 0.0\n\ntime benchmd5 <random32\n\nreal 37.6\nuser 37.3\nsys 0.0\n\nThis is a lot closer than I'd have expected, but it sure ain't\n\"MD5 40% faster\" as you reported. I wonder why the difference\nin results between your platform and mine?\n\nBTW, I used gcc 2.95.2 to compile, -O6, no other switches.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 21:28:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC "
},
{
"msg_contents": "A couple further observations while playing with this benchmark ---\n\n1. This MD5 implementation is not too robust. On my machine it dumps\ncore if given a non-word-aligned data buffer. We could probably work\naround that, but it bespeaks a little *too* much hand optimization...\n\n2. It's a bad idea to ignore the startup/termination costs of the\nalgorithms. Of course startup/termination is trivial for CRC, but\nit's not so trivial for MD5. I changed the code so that the md5\nupdate() routine also calls md5_finish_ctx(), so that each inner\nloop represents a complete MD5 calculation for a message of the\nsize of the main routine's fread buffer. I then experimented with\ndifferent buffer sizes. At a buffer size of 1K:\n\ntime benchcrc <random32\n\nreal 35.4\nuser 35.1\nsys 0.0\ntime benchmd5 <random32\n\nreal 41.4\nuser 41.1\nsys 0.0\n\nAt a buffer size of 100 bytes:\n\ntime benchcrc <random32\n\nreal 36.3\nuser 36.0\nsys 0.0\ntime benchmd5 <random32\n\nreal 1:09.7\nuser 1:09.2\nsys 0.0\n\n(The total amount of data processed is 1000 MB in either case, but\nit's divided into more messages in the second case.)\n\nI'm not sure exactly what Vadim has in mind for computing CRCs on the\nWAL log. If he's thinking of a CRC for each log message, the MD5 stuff\nwould be at a definite disadvantage. For disk-page checksums (8K or\nmore) this isn't too much of an issue, however.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 22:17:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC "
},
{
"msg_contents": "On Fri, Dec 08, 2000 at 09:28:38PM -0500, Tom Lane wrote:\n> Bruce Guenter <[email protected]> writes:\n> >> I agree, don't send it to the whole list. But I'd like a copy.\n> > Here you go.\n> As near as I could tell, the test as you have it (one CRC computation per\n> fread) is purely I/O bound.\n\nNope. They got 99-100% CPU time with the original version.\n\n> I changed the main loop to this:\n> [...hash each block repeatedly...]\n\nGood idea. Might have been even better to just read the block once and\nhash it even more times.\n\n> On an\n> otherwise idle HP 9000 C180 machine, I get the following numbers on a\n> 1MB input file:\n> \n> time benchcrc <random32\n> real 35.3 > user 35.0 > sys 0.0\n> \n> time benchmd5 <random32\n> real 37.6 > user 37.3 > sys 0.0\n> \n> This is a lot closer than I'd have expected, but it sure ain't\n> \"MD5 40% faster\" as you reported. I wonder why the difference\n> in results between your platform and mine?\n\nThe difference is likely because PA-RISC (like most other RISC\narchitectures) lack a \"roll\" opcode that is very prevalent in the MD5\nalgorithm. Intel CPUs have it. With a new version modified to repeat\nthe inner loop 100,000 times, I got the following:\n\ntime benchcrc <random\n21.35user 0.01system 0:21.39elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k\n0inputs+0outputs (79major+11minor)pagefaults 0swaps\ntime benchmd5 <random\n12.79user 0.01system 0:12.79elapsed 100%CPU (0avgtext+0avgdata 0maxresident)k\n0inputs+0outputs (80major+11minor)pagefaults 0swaps\ntime benchcrc <random\n21.32user 0.06system 0:21.52elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k\n0inputs+0outputs (79major+11minor)pagefaults 0swaps\ntime benchmd5 <random\n12.79user 0.01system 0:12.80elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k\n0inputs+0outputs (80major+11minor)pagefaults 0swaps\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Fri, 8 Dec 2000 23:46:26 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
},
{
"msg_contents": "Bruce Guenter <[email protected]> writes:\n>> This is a lot closer than I'd have expected, but it sure ain't\n>> \"MD5 40% faster\" as you reported. I wonder why the difference\n>> in results between your platform and mine?\n\n> The difference is likely because PA-RISC (like most other RISC\n> architectures) lack a \"roll\" opcode that is very prevalent in the MD5\n> algorithm.\n\nA good theory, but unfortunately not a correct theory. PA-RISC can do a\ncircular shift in one cycle using the \"shift right double\" instruction,\nwith the same register specified as both high and low halves of the\n64-bit operand. And gcc does know about that.\n\nAfter some groveling through assembly code, it seems that the CRC-32\nimplementation is about as tight as it could get: two loads, two XORs,\nand two EXTRU's per byte (one used to implement the right shift, the\nother to implement masking with 0xFF). And the wall clock timing does\nindeed come out to just about six cycles per byte. The MD5 code also\nlooks pretty tight. Each basic OP requires either two or three logical\noperations (and/or/xor/not) depending on which round you're looking at,\nplus four additions and a circular shift. PA-RISC needs two cycles to\nload an arbitrary 32-bit constant, but other than that I see no wasted\ncycles here:\n\n\tldil L'-1444681467,%r20\n\txor %r3,%r14,%r19\n\tldo R'-1444681467(%r20),%r20\n\tand %r1,%r19,%r19\n\taddl %r15,%r20,%r20\n\txor %r14,%r19,%r19\n\taddl %r19,%r26,%r19\n\taddl %r20,%r19,%r15\n\tshd %r15,%r15,27,%r15\n\taddl %r15,%r3,%r15\n\nNote gcc has been smart enough to assign all the correct_words[] array\nelements to registers, else we'd lose another cycle to a load operation\n--- fortunately PA-RISC has lots of registers.\n\nThere are 64 of these basic OPs needed in each round, and each round\nprocesses 64 input bytes, so basically you can figure one OP per byte.\nIgnoring loop overhead and so forth, it's nine or ten cycles per byte\nfor MD5 versus six for CRC.\n\nI'm at a loss to see how a Pentium would arrive at a better result for\nMD5 than for CRC. For one thing, it's going to be at a disadvantage\nbecause it hasn't got enough registers. I'd be interested to see the\nassembly code...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Dec 2000 18:46:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC "
},
{
"msg_contents": "On Fri, Dec 08, 2000 at 10:17:00PM -0500, Tom Lane wrote:\n> A couple further observations while playing with this benchmark ---\n> \n> 1. This MD5 implementation is not too robust. On my machine it dumps\n> core if given a non-word-aligned data buffer. We could probably work\n> around that, but it bespeaks a little *too* much hand optimization...\n\nThe operations in the MD5 core are based on word-sized chunks.\nObviously, the implentation only does word-sized loads and stores for\nthat data, and you got a bus error.\n\n> 2. It's a bad idea to ignore the startup/termination costs of the\n> algorithms.\n\nYes. I had included the startup costs in my benchmark, but not the\ntermination costs, which are large for MD5 as you point out.\n\n> Of course startup/termination is trivial for CRC, but\n> it's not so trivial for MD5. I changed the code so that the md5\n> update() routine also calls md5_finish_ctx(), so that each inner\n> loop represents a complete MD5 calculation for a message of the\n> size of the main routine's fread buffer. I then experimented with\n> different buffer sizes. At a buffer size of 1K:\n\nOn my Celeron, at 1K blocks MD5 is still significantly faster than CRC,\nbut is slightly slower at 100 byte blocks. For comparison, I added\nRIPEMD-160, but it's far slower than any of them (twice as long as CRC).\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Sat, 9 Dec 2000 23:37:42 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
},
{
"msg_contents": "On Sat, Dec 09, 2000 at 06:46:23PM -0500, Tom Lane wrote:\n> Bruce Guenter <[email protected]> writes:\n> > The difference is likely because PA-RISC (like most other RISC\n> > architectures) lack a \"roll\" opcode that is very prevalent in the MD5\n> > algorithm.\n> \n> A good theory, but unfortunately not a correct theory. PA-RISC can do a\n> circular shift in one cycle using the \"shift right double\" instruction,\n> with the same register specified as both high and low halves of the\n> 64-bit operand. And gcc does know about that.\n\nInteresting. I was under the impression that virtually no RISC CPU had\na rotate instruction. Do any others?\n\n> After some groveling through assembly code, it seems that the CRC-32\n> implementation is about as tight as it could get: two loads, two XORs,\n> and two EXTRU's per byte (one used to implement the right shift, the\n> other to implement masking with 0xFF).\n\nSame with the x86 core:\n movb %dl,%al\n xorb (%ecx),%al\n andl $255,%eax\n shrl $8,%edx\n incl %ecx\n xorl (%esi,%eax,4),%edx\n\n> And the wall clock timing does\n> indeed come out to just about six cycles per byte.\n\nOn my Celeron, the timing for those six opcodes is almost whopping 13\ncycles per byte. Obviously there's some major performance hit to do the\nmemory instructions, because there's no more than 4 cycles worth of\ndependant instructions in that snippet.\n\nBTW, for reference, P3 timings are almost identical to those of the\nCeleron, so it's not causing problems outside the built-in caches common\nto the two chips.\n\n> The MD5 code also\n> looks pretty tight. Each basic OP requires either two or three logical\n> operations (and/or/xor/not) depending on which round you're looking at,\n> plus four additions and a circular shift. PA-RISC needs two cycles to\n> load an arbitrary 32-bit constant, but other than that I see no wasted\n> cycles here:\n> \n> \tldil L'-1444681467,%r20\n> \txor %r3,%r14,%r19\n> \tldo R'-1444681467(%r20),%r20\n> \tand %r1,%r19,%r19\n> \taddl %r15,%r20,%r20\n> \txor %r14,%r19,%r19\n> \taddl %r19,%r26,%r19\n> \taddl %r20,%r19,%r15\n> \tshd %r15,%r15,27,%r15\n> \taddl %r15,%r3,%r15\n\nHere's the x86 assembly code for what appears to be the same basic OP:\n movl %edx,%eax\n xorl %esi,%eax\n andl %edi,%eax\n xorl %esi,%eax\n movl -84(%ebp),%ecx\n leal -1444681467(%ecx,%eax),%eax\n addl %eax,%ebx\n roll $5,%ebx\n addl %edx,%ebx\nThis is a couple fewer instructions, mainly saving on doing any loads to\nuse the constant value. This takes almost exactly 9 cycles per byte.\n\n> There are 64 of these basic OPs needed in each round, and each round\n> processes 64 input bytes, so basically you can figure one OP per byte.\n> Ignoring loop overhead and so forth, it's nine or ten cycles per byte\n> for MD5 versus six for CRC.\n\nOn Celeron/P3, CRC scores almost 13 cycles per byte, MD4 is about 6\ncycles per byte, and MD5 is about 9 cycles per byte. On Pentium MMX,\nCRC is 7.25, MD4 is 7.5 and MD5 is 10.25. So, the newer CPUs actually\ndo worse on CRC than the older ones do. Weirder and weirder.\n\n> I'm at a loss to see how a Pentium would arrive at a better result for\n> MD5 than for CRC. For one thing, it's going to be at a disadvantage\n> because it hasn't got enough registers.\n\nI agree. It would appear that the table lookup is causing a major\nbubble in the pipelines on the newer Celeron/P2/P3 CPUs.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Sun, 10 Dec 2000 00:24:17 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
},
{
"msg_contents": "On Sat, Dec 09, 2000 at 06:46:23PM -0500, Tom Lane wrote:\n> I'm at a loss to see how a Pentium would arrive at a better result for\n> MD5 than for CRC. For one thing, it's going to be at a disadvantage\n> because it hasn't got enough registers. I'd be interested to see the\n> assembly code...\n\nMinutiae aside, it's clear that the MD5 and CRC are \"comparable\",\nregardless of CPU.\n\nFor a 32-bit hash, the proven characteristics of CRCs are critical in \nsome applications. With a good 64-bit hash, the probability of any \ncollision whether from a burst error or otherwise becomes much lower \nthan every other systematic source of error -- the details just don't\nmatter any more. If you miss the confidence that CRCs gave you about \nburst errors, consider how easy it would be to construct a collision \nif you could just try changing a couple of adjacent bytes -- an \nexhaustive search would be easy. \n\nMD4 would be a better choice than MD5, despite that a theoretical attack\non MD4 has been described (albeit never executed). We don't even care \nabout real attacks, never mind theoretical ones. What matters is that \nMD4 is entirely good enough, and faster to compute than MD5.\n\nI find these results very encouraging. BSD-licensed MD4 code is readily\navailable, e.g. from any of the BSDs themselves.\n\nNathan Myers\[email protected]\n\n",
"msg_date": "Sat, 9 Dec 2000 23:07:24 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
},
{
"msg_contents": "[email protected] (Nathan Myers) writes:\n> Minutiae aside, it's clear that the MD5 and CRC are \"comparable\",\n> regardless of CPU.\n\nWe've established that the inner loops are pretty comparable. I'm\nstill concerned about the startup/termination costs of MD5 on short\nrecords. The numbers Bruce and I were trading were mostly for records\nlong enough to make the startup costs negligible, but they're not\nnegligible for sub-100-byte records.\n\n> For a 32-bit hash, the proven characteristics of CRCs are critical in \n> some applications. With a good 64-bit hash, the probability of any \n> collision whether from a burst error or otherwise becomes much lower \n> than every other systematic source of error -- the details just don't\n> matter any more.\n\nThat's a good point. Of course the critical phrase there is *good*\nhash, ie, one without any systematic weaknesses, but as long as we\ndon't use a \"method chosen at random\" you're right, it hardly matters.\n\nHowever, this just begs the question: can't the same be said of a 32-bit\nchecksum? My argument the other day essentially was that 32 bits is\nplenty for what we need to do, and I have not heard a refutation.\n\nOne thing we should look at before going with a 64-bit method is the\nextra storage space for the larger checksum. We can clearly afford\nan extra 32 bits for a checksum on an 8K disk page, but if Vadim is\nenvisioning checksumming each individual XLOG record then the extra\nspace is more annoying.\n\nAlso, there's the KISS issue. When it takes less than a dozen lines\nto do a CRC, versus pages to do MD5, you have to ask yourself what the\nextra code space is buying you... also whether you want to get into\nlicensing issues by borrowing someone else's code. The MD5 code that\nBruce was using is GPL, not BSD, and so couldn't go into the Postgres\ncore anyway. \n\n> MD4 would be a better choice than MD5, despite that a theoretical attack\n> on MD4 has been described (albeit never executed). We don't even care \n> about real attacks, never mind theoretical ones. What matters is that \n> MD4 is entirely good enough, and faster to compute than MD5.\n\n> I find these results very encouraging. BSD-licensed MD4 code is readily\n> available, e.g. from any of the BSDs themselves.\n\nMD4 would be worth looking at, especially if it has less\nstartup/shutdown overhead than MD5. I think a 64-bit true CRC might\nalso be worth looking at, just for completeness. But I don't know\nwhere to find code for one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Dec 2000 14:36:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC "
},
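For reference, the "less than a dozen lines" above is no exaggeration. A table-driven CRC-32 looks roughly like this (a minimal sketch using the standard reflected polynomial 0xEDB88320, as in zlib; not the exact routine benchmarked in this thread):

    #include <stddef.h>
    #include <stdint.h>

    static uint32_t crc_table[256];

    /* Build the 256-entry lookup table for the reflected polynomial. */
    static void crc32_init(void)
    {
        for (uint32_t i = 0; i < 256; i++)
        {
            uint32_t c = i;
            for (int k = 0; k < 8; k++)
                c = (c & 1) ? (c >> 1) ^ 0xEDB88320u : (c >> 1);
            crc_table[i] = c;
        }
    }

    /* Incremental update: one table fetch, one shift, two XORs per byte. */
    static uint32_t crc32_update(uint32_t crc, const unsigned char *buf, size_t len)
    {
        crc = ~crc;                     /* conventional pre-inversion */
        while (len--)
            crc = crc_table[(crc ^ *buf++) & 0xff] ^ (crc >> 8);
        return ~crc;                    /* and post-inversion */
    }

The entire per-byte cost is the single line inside the while loop, which is why the code-size and startup-cost comparison with MD5 is so lopsided for short records.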
{
"msg_contents": "Bruce Guenter <[email protected]> writes:\n>> A good theory, but unfortunately not a correct theory. PA-RISC can do a\n>> circular shift in one cycle using the \"shift right double\" instruction,\n\n> Interesting. I was under the impression that virtually no RISC CPU had\n> a rotate instruction. Do any others?\n\nDarn if I know. A RISC purist would probably say that PA-RISC isn't all\nthat reduced ... for example, the reason it needs six cycles not seven\nfor the CRC inner loop is that the LOAD instruction has an option to\npostincrement the pointer register (like a C \"*ptr++\").\n\n> Same with the x86 core:\n> movb %dl,%al\n> xorb (%ecx),%al\n> andl $255,%eax\n> shrl $8,%edx\n> incl %ecx\n> xorl (%esi,%eax,4),%edx\n\n> On my Celeron, the timing for those six opcodes is almost whopping 13\n> cycles per byte. Obviously there's some major performance hit to do the\n> memory instructions, because there's no more than 4 cycles worth of\n> dependant instructions in that snippet.\n\nYes. It looks like we're looking at pipeline stalls for the memory\nreads. I expect PA-RISC would have the same problem if it were not that\nthe CRC table and data buffer are almost certainly loaded into level-2\ncache memory. Curious that you don't get the same result --- what is\nthe memory cache architecture on your box?\n\nAs Nathan remarks nearby, this is just minutiae, but I'm interested\nanyway...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Dec 2000 14:53:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC "
},
{
"msg_contents": "* Tom Lane <[email protected]> [001210 12:00] wrote:\n> Bruce Guenter <[email protected]> writes:\n> >> A good theory, but unfortunately not a correct theory. PA-RISC can do a\n> >> circular shift in one cycle using the \"shift right double\" instruction,\n> \n> > Interesting. I was under the impression that virtually no RISC CPU had\n> > a rotate instruction. Do any others?\n> \n> Darn if I know. A RISC purist would probably say that PA-RISC isn't all\n> that reduced ... for example, the reason it needs six cycles not seven\n> for the CRC inner loop is that the LOAD instruction has an option to\n> postincrement the pointer register (like a C \"*ptr++\").\n> \n> > Same with the x86 core:\n> > movb %dl,%al\n> > xorb (%ecx),%al\n> > andl $255,%eax\n> > shrl $8,%edx\n> > incl %ecx\n> > xorl (%esi,%eax,4),%edx\n> \n> > On my Celeron, the timing for those six opcodes is almost whopping 13\n> > cycles per byte. Obviously there's some major performance hit to do the\n> > memory instructions, because there's no more than 4 cycles worth of\n> > dependant instructions in that snippet.\n> \n> Yes. It looks like we're looking at pipeline stalls for the memory\n> reads. I expect PA-RISC would have the same problem if it were not that\n> the CRC table and data buffer are almost certainly loaded into level-2\n> cache memory. Curious that you don't get the same result --- what is\n> the memory cache architecture on your box?\n> \n> As Nathan remarks nearby, this is just minutiae, but I'm interested\n> anyway...\n\nI would try unrolling the loop some (if possible) and retesting.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Sun, 10 Dec 2000 12:24:59 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
},
{
"msg_contents": "On Sun, Dec 10, 2000 at 02:53:43PM -0500, Tom Lane wrote:\n> > On my Celeron, the timing for those six opcodes is almost whopping 13\n> > cycles per byte. Obviously there's some major performance hit to do the\n> > memory instructions, because there's no more than 4 cycles worth of\n> > dependant instructions in that snippet.\n> Yes. It looks like we're looking at pipeline stalls for the memory\n> reads.\n\nIn particular, for the single-byte memory read. By loading in 32-bit\nwords at a time, the cost drops to about 7 cycles per byte. I\nimagine on a 64-bit CPU, loading 64-bit words at a time would drop the\ncost even further.\n word1 = *(unsigned long*)z;\n while (c > 4)\n {\n z += 4;\n ick = IUPDC32 (word1, ick); word1 >>= 8;\n c -= 4;\n ick = IUPDC32 (word1, ick); word1 >>= 8;\n word1 = *(unsigned long*)z;\n ick = IUPDC32 (word1, ick); word1 >>= 8;\n ick = IUPDC32 (word1, ick);\n }\nI tried loading two words at a time, starting to load the second word\nwell before it's used, but that didn't actually reduce the time taken.\n\n> As Nathan remarks nearby, this is just minutiae, but I'm interested\n> anyway...\n\nYup.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Sun, 10 Dec 2000 15:37:32 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
},
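A self-contained rendering of that word-at-a-time idea, with the pieces the post leaves implicit spelled out (assumptions flagged here: IUPDC32 is the usual one-byte table step, the buffer is 4-byte aligned, the machine is little-endian like the Celeron tested, and leftover tail bytes are finished one at a time):

    #include <stddef.h>
    #include <stdint.h>

    /* 256-entry table for the reflected CRC-32 polynomial, built elsewhere. */
    extern const uint32_t crc_table[256];

    /* One-byte table step; stands in for the IUPDC32 macro quoted above. */
    #define IUPDC32(w, crc) (crc_table[((crc) ^ (w)) & 0xff] ^ ((crc) >> 8))

    uint32_t crc32_words(uint32_t crc, const unsigned char *z, size_t c)
    {
        /* Consume aligned 32-bit words to avoid a dependent byte load per step. */
        while (c >= 4)
        {
            uint32_t word = *(const uint32_t *) z;   /* little-endian assumed */
            z += 4;
            c -= 4;
            crc = IUPDC32(word, crc); word >>= 8;
            crc = IUPDC32(word, crc); word >>= 8;
            crc = IUPDC32(word, crc); word >>= 8;
            crc = IUPDC32(word, crc);
        }
        while (c--)                                  /* leftover tail bytes */
            crc = IUPDC32(*z++, crc);
        return crc;
    }

Each of the four unrolled steps still does the same one-byte table lookup; the win comes purely from replacing four byte loads with one word load.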
{
"msg_contents": "On Sun, Dec 10, 2000 at 12:24:59PM -0800, Alfred Perlstein wrote:\n> I would try unrolling the loop some (if possible) and retesting.\n\nThe inner loop was already unrolled, but was only processing single\nbytes at a time. By loading in 32-bit words at once, it reduced the\ncost to only 7 cycles per byte (from 13).\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Sun, 10 Dec 2000 15:38:29 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
},
{
"msg_contents": "\nPG should include support for SHA1 anyway. MD5 is not being used in\nnew stuff anymore, I think. I actually have an SHA1 implementation\nthat links into PG if anyone is interested (especially if it could get\nincluded in a future release).\n\ne\n",
"msg_date": "Sun, 10 Dec 2000 23:45:10 -0800 (PST)",
"msg_from": "Erich <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
},
{
"msg_contents": "> Interesting. I was under the impression that virtually no RISC CPU had\n> a rotate instruction. Do any others?\n\n(fyi; doesn't really contribute to the thread :/\n\nMost or all do. There are no \"pure RISC\" chips in production; all have\nhad some optimized complex operations added for performance and for code\ncompactness.\n\n - Thomas\n",
"msg_date": "Mon, 11 Dec 2000 07:48:15 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
},
{
"msg_contents": "On Sun, Dec 10, 2000 at 02:36:38PM -0500, Tom Lane wrote:\n> > MD4 would be a better choice than MD5, despite that a theoretical attack\n> > on MD4 has been described (albeit never executed). We don't even care \n> > about real attacks, never mind theoretical ones. What matters is that \n> > MD4 is entirely good enough, and faster to compute than MD5.\n> \n> > I find these results very encouraging. BSD-licensed MD4 code is readily\n> > available, e.g. from any of the BSDs themselves.\n> \n> MD4 would be worth looking at, especially if it has less\n> startup/shutdown overhead than MD5. I think a 64-bit true CRC might\n> also be worth looking at, just for completeness. But I don't know\n> where to find code for one.\n\nThe startup/shutdown for MD4 is identical to that of MD5, however the\ninner loop is much smaller (a total of 48 operations instead of 64, with\nfewer constants). The inner MD4 loop is about 1.5 times the speed of\nMD5.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Mon, 11 Dec 2000 11:54:18 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
},
{
"msg_contents": "On Thu, Dec 07, 2000 at 07:36:33PM -0500, Tom Lane wrote:\n> [email protected] (Nathan Myers) writes:\n> > 2. I disagree with way the above statistics were computed. That eleven \n> > million-year figure gets whittled down pretty quickly when you \n> > factor in all the sources of corruption, even without crashes. \n> > (Power failures are only one of many sources of corruption.) They \n> > grow with the size and activity of the database. Databases are \n> > getting very large and busy indeed.\n> \n> Sure, but the argument still holds. If the net MTBF of your underlying\n> system is less than a day, it's too unreliable to run a database that\n> you want to trust. Doesn't matter what the contributing failure\n> mechanisms are. In practice, I'd demand an MTBF of a lot more than a\n> day before I'd accept a hardware system as satisfactory...\n\nIn many intended uses (such as Landmark's original plan?) it is not just \none box, but hundreds or thousands. With thousands of databases deployed, \nthe MTBF (including power outages) for commodity hardware is well under a \nday, and there's not much you can do about that.\n\nIn a large database (e.g. 64GB) you have 8M blocks. Each hash covers\none block. With a 32-bit checksum, when you check one block, you have \na 2^(-32) likelihood of missing an error, assuming there is one. With \n8M blocks, you can only claim a 2^(-9) chance.\n\nThis is what I meant by \"whittling\". A factor of ten or a thousand\nhere, another there, and pretty soon the possibility of undetected\ncorruption is something that can't reasonably be ruled out.\n\n\n> > 3. Many users clearly hope to be able to pull the plug on their hardware \n> > and get back up confidently. While we can't promise they won't have \n> > to go to their backups, we should at least be equipped to promise,\n> > with confidence, that they will know whether they need to.\n> \n> And the difference in odds between 2^32 and 2^64 matters here? I made\n> a numerical case that it doesn't, and you haven't refuted it. By your\n> logic, we might as well say that we should be using a 128-bit CRC, or\n> 256-bit, or heck, a few kilobytes. It only takes a little longer to go\n> up each step, right, so where should you stop? I say MTBF measured in\n> megayears ought to be plenty. Show me the numerical argument that 64\n> bits is the right place on the curve.\n\nI agree that this is a reasonable question. However, the magic of \nexponential growth makes any dissatisfaction with a 64-bit checksum\nfar less likely than with a 32-bit checksum.\n\nIt would forestall any such problems to arrange a configure-time\nflag such as \"--with-checksum crc-32\" or \"--with-checksum md4\",\nand make it clear where to plug in the checksum of one's choice.\nThen, ship 7.2 with just crc-32 and let somebody else produce \npatches for alternatives if they want them.\n\nBTW, I have been looking for Free 64-bit CRC codes/polynomials and \nthe closest thing I have found so far was Mark Mitchell's hash, \ntranslated from the Modula-3 system. All the tape drive makers\nadvertise (but don't publish (AFAIK)) a 64-bit CRC.\n\nA reasonable approach would be to deliver CRC-32 in 7.2, and then\nreconsider the default later if anybody contributes good alternatives.\n\nNathan Myers\[email protected]\n",
"msg_date": "Tue, 12 Dec 2000 12:31:12 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRC was: Re: beta testing version"
},
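On the "where to find code" question: structurally, a 64-bit CRC is the same dozen lines as a 32-bit one, just with a wider table. A sketch follows; the caveat is that the polynomial choice here is an assumption on my part -- 0x42F0E1EBA9EA3693 is the 64-bit polynomial published in ECMA-182 (the DLT tape standard), which may or may not be the one the tape makers' marketing refers to:

    #include <stddef.h>
    #include <stdint.h>

    #define CRC64_POLY UINT64_C(0x42F0E1EBA9EA3693)   /* ECMA-182 (assumed) */

    static uint64_t crc64_table[256];

    /* Build the 256-entry table, MSB-first (non-reflected) convention. */
    static void crc64_init(void)
    {
        for (int i = 0; i < 256; i++)
        {
            uint64_t c = (uint64_t) i << 56;
            for (int k = 0; k < 8; k++)
                c = (c & UINT64_C(0x8000000000000000))
                    ? (c << 1) ^ CRC64_POLY : (c << 1);
            crc64_table[i] = c;
        }
    }

    /* MSB-first update; same shape as the 32-bit inner loop. */
    static uint64_t crc64_update(uint64_t crc, const unsigned char *buf, size_t len)
    {
        while (len--)
            crc = crc64_table[((crc >> 56) ^ *buf++) & 0xff] ^ (crc << 8);
        return crc;
    }

The extra cost is a 2KB table instead of 1KB, four more bytes per checksummed unit, and, on a 32-bit CPU, double-word shifts and XORs in the inner loop.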
{
"msg_contents": "On Sat, Dec 09, 2000 at 12:03:52AM +1100, Horst Herb wrote:\n> AFAIK the thread for \"built in\" crcs referred only to CRCs in\n> the transaction log. \n\nWe have been discussing checksums for both the table blocks and for\nthe transaction log.\n\n> Always remember that a psotgres data base on the harddisk can be\n> manipulated accidentally / maliciously without postgres even running.\n> These are the cases where you need row level CRCs.\n\n\"There is no security without physical security.\" If somebody can\nchange the row contents, they can also change the row and/or block \nchecksum to match.\n\nNathan Myers\[email protected]\n",
"msg_date": "Tue, 12 Dec 2000 12:34:19 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: RFC: CRC datatype"
},
{
"msg_contents": "O\n> > Always remember that a psotgres data base on the harddisk can be\n> > manipulated accidentally / maliciously without postgres even running.\n> > These are the cases where you need row level CRCs.\n>\n> \"There is no security without physical security.\" If somebody can\n> change the row contents, they can also change the row and/or block\n> checksum to match.\n\nThey may, but in a proper setup they won't be able to access the CRC log \nfiles. That way, you can still detect alterations. I presume anyway that most \nalterations would be rather accidental than malicious, and in that case the \nCRC is extremely helpful\n\nHorst\n",
"msg_date": "Wed, 13 Dec 2000 15:06:24 +1100",
"msg_from": "Horst Herb <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC: CRC datatype"
}
]
|
[
{
"msg_contents": "> Now that the postmaster takes a noticeable amount of time to \n> shut down, I'm wondering if pg_ctl's default about whether or not\n> to wait ought to be reversed. That is, \"-w\" would become the norm,\n> and some new switch (\"-n\" maybe) would be needed if you didn't want\n> it to wait.\n> \n> Comments?\n\nAgreed.\n\nActually, without -m f|i flag to pg_ctl and with active sessions 7.0.X\npostmaster shuts down long time too.\n\nVadim\n",
"msg_date": "Thu, 7 Dec 2000 12:05:14 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Switch pg_ctl's default about waiting?"
}
]
|
[
{
"msg_contents": "> recently I have downloaded a pre-beta postgresql, I found \n> insert and update speed is slower then 7.0.3,\n> even I turn of sync flag, it is still slow than 7.0, why? \n> how can I make it faster?\n\nTry to compare 7.0.3 & 7.1beta in multi-user environment.\n\nVadim\n",
"msg_date": "Thu, 7 Dec 2000 12:12:31 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: pre-beta is slow"
},
{
"msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > recently I have downloaded a pre-beta postgresql, I found\n> > insert and update speed is slower then 7.0.3,\n> > even I turn of sync flag, it is still slow than 7.0, why?\n\nHow much slower do you see it to be ?\n\n> > how can I make it faster?\n>\n> Try to compare 7.0.3 & 7.1beta in multi-user environment.\n\nAs I understand it you claim it to be faster in multi-user environment ?\n\nCould you give some brief technical background why is it so \nand why must it make single-user slower ?\n\n---------------\nHannu\n",
"msg_date": "Fri, 08 Dec 2000 16:07:07 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pre-beta is slow"
}
]
|
[
{
"msg_contents": "> > That's why an end marker must follow all valid records. \n...\n> \n> That requires an extra out-of-sequence write. \n\nYes, and also increase probability to corrupt already committed\nto log data.\n\n> (I'd also like to see CRCs on all the table blocks as well; is there\n> a place to put them?)\n\nDo we need it? \"physical log\" feature suggested by Andreas will protect\nus from non atomic data block writes.\n\nVadim\n",
"msg_date": "Thu, 7 Dec 2000 12:22:12 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: CRCs (was: beta testing version)"
},
{
"msg_contents": "> > (I'd also like to see CRCs on all the table blocks as well; is there\n> > a place to put them?)\n>\n> Do we need it? \"physical log\" feature suggested by Andreas will protect\n> us from non atomic data block writes.\n\nCRCs are neccessary because of glitches, hardware failures, operating system\nbugs, viruses, etc - a lot of factors which can alter data stored on the\nharddisk independend of postgresql. I learned this lesson the hard way when\nI wrote a database application for a hospital, where data integrity is\nvital.\n\nLogging CRCs with each record gave us proof that data had been corrupted by\n\"external\" factors (we never found out what it was). It was only a few bytes\nin a data base with several 100k of records, but still intolerable. Medicine\nis heading a way where decisions will be backed up by computerized\nalgorithms which in turn depend on exact data. A one bit glitch in a\nTerabyte database can make the difference between life and death. These\nglitches will happen, no doubt. Doesn't matter - as long as you have some\nmeans of proofing your data integrity and some mechanism of alerting you\nwhen shit has happend.\n\nAt present I am coordinating another medical project, we have chosen\nPostgreSQL as our backend, and the main problem we have is creating\nefficient CRC triggers (I'd wish postgres would support generic triggers\nthat are valid system wide or at least valid for all tables inheriting the\nsame table) for own homegrown integrity logging.\n\nHorst\n\n\n",
"msg_date": "Fri, 8 Dec 2000 08:26:31 +1100",
"msg_from": "\"Horst Herb\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs (was: beta testing version)"
},
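The per-row checksum such a trigger would compute can be sketched as follows (a hypothetical helper, not code from Horst's project; it assumes a crc32_update() routine like the ones shown earlier in these notes, and mixes in separator and NULL markers so that ('ab','c'), ('a','bc'), and ('abc',NULL) all checksum differently):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Incremental CRC-32 step; any table-driven implementation will do. */
    extern uint32_t crc32_update(uint32_t crc, const unsigned char *buf, size_t len);

    /* Checksum one row, given its columns rendered as C strings. */
    uint32_t row_checksum(const char *const *fields, int nfields)
    {
        static const unsigned char sep = 0x00;  /* end-of-field marker */
        static const unsigned char nul = 0x01;  /* marker for SQL NULL */
        uint32_t crc = 0;
        int i;

        for (i = 0; i < nfields; i++)
        {
            if (fields[i] == NULL)
                crc = crc32_update(crc, &nul, 1);
            else
                crc = crc32_update(crc, (const unsigned char *) fields[i],
                                   strlen(fields[i]));
            crc = crc32_update(crc, &sep, 1);
        }
        return crc;
    }

Storing that value per row, or logging it externally as Horst describes, catches accidental alteration; as noted above, it cannot catch a malicious editor who also fixes up the checksum.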
{
"msg_contents": "P.S.: I would volunteer to integrate CRC routines into postgres if somebody\npoints me in the right direction in the source code.\n\nHorst\n\n",
"msg_date": "Fri, 8 Dec 2000 08:38:57 +1100",
"msg_from": "\"Horst Herb\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs (was: beta testing version)"
},
{
"msg_contents": "On Thu, Dec 07, 2000 at 12:22:12PM -0800, Mikheev, Vadim wrote:\n> > > That's why an end marker must follow all valid records. \n> > That requires an extra out-of-sequence write. \n> Yes, and also increase probability to corrupt already committed\n> to log data.\n\nAre you referring to the case where the drive loses power in mid-write?\nThat is solved by either arranging for the markers to always be placed\nat the start of a block, or by plugging in a UPS.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Thu, 7 Dec 2000 15:47:07 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs (was: beta testing version)"
},
{
"msg_contents": "On Thu, Dec 07, 2000 at 12:22:12PM -0800, Mikheev, Vadim wrote:\n> > > That's why an end marker must follow all valid records. \n> ...\n> > \n> > That requires an extra out-of-sequence write. \n> \n> Yes, and also increase probability to corrupt already committed\n> to log data.\n> \n> > (I'd also like to see CRCs on all the table blocks as well; is there\n> > a place to put them?)\n> \n> Do we need it? \"physical log\" feature suggested by Andreas will protect\n> us from non atomic data block writes.\n\nThere are myriad sources of corruption, including RAM bit rot and\nsoftware bugs. The earlier and more reliably it's caught, the better.\nThe goal is to be able to say that a power outage won't invisibly\ncorrupt your database.\n\nHere is are sources to a 64-bit CRC computation, under BSD license:\n\n http://gcc.gnu.org/ml/gcc/1999-11n/msg00592.html\n\nNathan Myers\[email protected]\n",
"msg_date": "Thu, 7 Dec 2000 14:45:49 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRCs (was: beta testing version)"
}
]
|
[
{
"msg_contents": "> > > Probably this is caused by my trial (local) change\n> > > and generated an illegal log output.\n> > > However it seems to mean that WAL isn't always\n> > > redo-able.\n> > \n> > Illegal log output is like disk crash - only BAR can help.\n> \n> But redo-recovery after restore would also fail.\n> The operation which corresponds to the illegal\n> log output aborted at the execution time and \n> rolling back by redo also failed. It seems\n> preferable to me that the transaction is rolled\n> back by undo. \n\nWhat exactly did you change in code?\nWhat kind of illegal log output?\nWas something breaking btree/WAL logic written to log?\n\nVadim\n",
"msg_date": "Thu, 7 Dec 2000 12:34:17 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: How to reset WAL enveironment"
},
{
"msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > > > Probably this is caused by my trial (local) change\n> > > > and generated an illegal log output.\n> > > > However it seems to mean that WAL isn't always\n> > > > redo-able.\n> > >\n> > > Illegal log output is like disk crash - only BAR can help.\n> >\n> > But redo-recovery after restore would also fail.\n> > The operation which corresponds to the illegal\n> > log output aborted at the execution time and\n> > rolling back by redo also failed. It seems\n> > preferable to me that the transaction is rolled\n> > back by undo.\n> \n> What exactly did you change in code?\n\nI'm changing REINDEX under postmaster to be safe under WAL.\n(When I met Tatsuo last week he asked me if REINDEX under\npostmaster is possible and I replied yes. However I'm\nnot sure REINDEX under postmaster is sufficiently safe\nespecially under WAL and I started to change REINDEX to\nbe rollbackable using relfilenode.)\n\n> What kind of illegal log output?\n\nProbably a new block was about to be inserted into new\nrelfilenode suddenly. \nI've been anxious about rolling back by redo.\nThere's no guarantee that retrying redo-log would never\nfail.\n\nI see a vacuum failure now.\nI probably fixed a bug(see pgsql-committers) but\nthere seems to remain other bugs.\n\nRegards.\n\nHirohi Inoue\n",
"msg_date": "Fri, 08 Dec 2000 09:06:25 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to reset WAL enveironment"
}
]
|
[
{
"msg_contents": "Tom,\n\nHope this helps\n\n From the Oracle manual:\nPurpose\nReturns char1, left-padded to length n with the sequence of characters\nin char2; char2 defaults to a single blank. If char1 is longer than n,\nthis function returns the portion of char1 that fits in n.\n\nThe argument n is the total length of the return value as it is\ndisplayed on your terminal screen. In most character sets, this is also\nthe number of characters in the return value. However, in some multibyte\ncharacter sets, the display length of a character string can differ from\nthe number of characters in the string.\n\n\nand some examples (8.1.5 on NT):\n\nSQL> select lpad('abcdef',3,'x') from dual;\n\nLPA\n---\nabc\n\nSQL> select lpad ('abcdef',8,'x') from dual;\n\nLPAD('AB\n--------\nxxabcdef\n\nSQL>\nSQL> select lpad('abcdef',0,'x') from dual;\n\nL\n-\n\n\nZeugswetter Andreas SB <[email protected]> writes:\n>> lpad and rpad never truncate, they only pad.\n>>\n>> Perhaps they *should* truncate if the specified length is less than\n>> the original string length. Does Oracle do that?\n\n> Yes, it truncates, same as Informix.\n\nI went to fix this and then realized I still don't have an adequate spec\n\nof how Oracle defines these functions. It would seem logical, for\nexample, that lpad might truncate on the left instead of the right,\nie lpad('abcd', 3, 'whatever') might yield 'bcd' not 'abc'. Would\nsomeone check?\n\nAlso, what happens if the specified length is less than zero? Error,\nor is it treated as zero?\n\nregards, tom lane\n\n",
"msg_date": "Thu, 07 Dec 2000 12:46:21 -0800",
"msg_from": "Paul <[email protected]>",
"msg_from_op": true,
"msg_subject": "[HACKERS] Oracle-compatible lpad/rpad behavior"
}
]
|
[
{
"msg_contents": "I have an abstract solution for a problem in postgresql's\nhandling of what should be constant data.\n\nWe had problem with a query taking way too long, basically\nwe had this:\n\nselect\n date_part('hour',t_date) as hour,\n transval as val\nfrom st\nwhere\n id = 500 \n AND hit_date >= '2000-12-07 14:27:24-08'::timestamp - '24 hours'::timespan\n AND hit_date <= '2000-12-07 14:27:24-08'::timestamp\n;\n\nturning it into:\n\nselect\n date_part('hour',t_date) as hour,\n transval as val\nfrom st\nwhere\n id = 500 \n AND hit_date >= '2000-12-07 14:27:24-08'::timestamp\n AND hit_date <= '2000-12-07 14:27:24-08'::timestamp\n;\n\n(doing the -24 hours seperately)\n\nThe values of cost went from:\n(cost=0.00..127.24 rows=11 width=12)\nto:\n(cost=0.00..4.94 rows=1 width=12)\n\nBy simply assigning each sql \"function\" a taint value for constness\none could easily reduce:\n '2000-12-07 14:27:24-08'::timestamp - '24 hours'::timespan\nto:\n '2000-12-07 14:27:24-08'::timestamp\nby applying the expression and rewriting the query.\n\nEach function should have a marker that explains whether when given\na const input if the output might vary, that way subexpressions can\nbe collapsed until an input becomes non-const.\n\nHere, let's break up:\n '2000-12-07 14:27:24-08'::timestamp - '24 hours'::timespan\n\nWhat we have is:\n timestamp(const) - timespan(const)\n\nwe have timestamp defined like so:\nconst timestamp(const string)\nnon-const timestamp(non-const)\n\nand timespan like so:\nconst timespan(const string)\nnon-const timespan(non-const)\n\nSo now we have:\n const timestamp((const string)'2000-12-07 14:27:24-08')\n - const timespan((const string)'24 hours')\n-----------------------------------------------------------\n const\n - const\n----------------\n const\n\nthen eval the query.\n\nYou may want to allow a function to have a hook where it can\neval a const because depending on the const it may or may not\nbe able to return a const, for instance if some string\nyou passed to timestamp() caused it to return non-const data.\n\nOr maybe this is fixed in 7.1?\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Thu, 7 Dec 2000 14:42:28 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "abstract: fix poor constant folding in 7.0.x, fixed in 7.1?"
},
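The taint propagation Alfred sketches is a bottom-up fold over the expression tree: evaluate any operator whose inputs have already collapsed to constants, and stop wherever a non-const input or a non-cachable operator appears. A toy version in C (illustrative only -- the real planner works on PostgreSQL's Node trees and the pg_proc proiscachable flag, not this struct):

    #include <stdbool.h>
    #include <stddef.h>

    /* A toy expression node: a constant leaf, or an operator over two children. */
    typedef struct Expr
    {
        bool is_const;             /* leaf whose value is already known */
        long value;                /* valid only when is_const */
        bool op_is_cachable;       /* const inputs guarantee a const output */
        long (*eval) (long, long); /* operator implementation */
        struct Expr *left, *right;
    } Expr;

    /* Fold children first, then collapse this node if everything below is const. */
    static Expr *fold(Expr *e)
    {
        if (e == NULL || e->is_const)
            return e;
        e->left = fold(e->left);
        e->right = fold(e->right);
        if (e->op_is_cachable &&
            e->left != NULL && e->left->is_const &&
            e->right != NULL && e->right->is_const)
        {
            e->value = e->eval(e->left->value, e->right->value);
            e->is_const = true;    /* the subtree is now a constant leaf */
        }
        return e;
    }

Applied to the example above, timestamp('2000-12-07 14:27:24-08') and timespan('24 hours') both fold to constants, so the '-' node folds too, and the comparison against hit_date reaches the planner as a simple constant bound that an index scan can use.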
{
"msg_contents": "* Joel Burton <[email protected]> [001207 15:52] wrote:\n> > We had problem with a query taking way too long, basically\n> > we had this:\n> > \n> > select\n> > date_part('hour',t_date) as hour,\n> > transval as val\n> > from st\n> > where\n> > id = 500 \n> > AND hit_date >= '2000-12-07 14:27:24-08'::timestamp - '24\n> > hours'::timespan AND hit_date <= '2000-12-07 14:27:24-08'::timestamp\n> > ;\n> > \n> > turning it into:\n> > \n> > select\n> > date_part('hour',t_date) as hour,\n> > transval as val\n> > from st\n> > where\n> > id = 500 \n> > AND hit_date >= '2000-12-07 14:27:24-08'::timestamp\n> > AND hit_date <= '2000-12-07 14:27:24-08'::timestamp\n> > ;\n> \n> Perhaps I'm being daft, but why should hit_date be both >= and <= \n> the exact same time and date? (or did you mean to subtract 24 \n> hours from your example and forgot?)\n\nYes, typo.\n\n> > (doing the -24 hours seperately)\n> > \n> > The values of cost went from:\n> > (cost=0.00..127.24 rows=11 width=12)\n> > to:\n> > (cost=0.00..4.94 rows=1 width=12)\n> > \n> > By simply assigning each sql \"function\" a taint value for constness\n> > one could easily reduce:\n> > '2000-12-07 14:27:24-08'::timestamp - '24 hours'::timespan\n> > to:\n> > '2000-12-07 14:27:24-08'::timestamp\n> \n> You mean '2000-12-06', don't you?\n\nYes, typo. :)\n\n> \n> > Each function should have a marker that explains whether when given a\n> > const input if the output might vary, that way subexpressions can be\n> > collapsed until an input becomes non-const.\n> \n> There is \"with (iscachable)\".\n> \n> Does\n> \n> CREATE FUNCTION YESTERDAY(timestamp) RETURNS timestamp AS\n> 'SELECT $1-''24 hours''::interval' WITH (iscachable)\n> \n> work faster?\n\nIt could be, but it could be done in the sql compiler/planner\nexplicitly to save me from myself, no?\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Thu, 7 Dec 2000 16:20:09 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: abstract: fix poor constant folding in 7.0.x, fixed in 7.1?"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> Each function should have a marker that explains whether when given\n> a const input if the output might vary, that way subexpressions can\n> be collapsed until an input becomes non-const.\n\nWe already have that and do that.\n\nThe reason the datetime-related routines are generally not marked\n'proiscachable' is that there's this weird notion of a CURRENT time\nvalue, which means that the result of a datetime calculation may\nvary depending on when you do it, even though the inputs don't.\n\nNote that CURRENT here does not mean translating 'now' to current\ntime during input conversion, it's a special-case data value inside\nthe system.\n\nI proposed awhile back (see pghackers thread \"Constant propagation and\nsimilar issues\" from mid-September) that we should eliminate the CURRENT\nconcept, so that datetime calculations can be constant-folded safely.\nThat, um, didn't meet with universal approval... but I still think it\nwould be a good idea.\n\nIn the meantime you can cheat by defining functions that you choose\nto mark ISCACHABLE, as has been discussed several times in the archives.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2000 19:24:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abstract: fix poor constant folding in 7.0.x, fixed in 7.1? "
},
{
"msg_contents": "* Tom Lane <[email protected]> [001207 16:45] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > Each function should have a marker that explains whether when given\n> > a const input if the output might vary, that way subexpressions can\n> > be collapsed until an input becomes non-const.\n> \n> We already have that and do that.\n> \n> The reason the datetime-related routines are generally not marked\n> 'proiscachable' is that there's this weird notion of a CURRENT time\n> value, which means that the result of a datetime calculation may\n> vary depending on when you do it, even though the inputs don't.\n> \n> Note that CURRENT here does not mean translating 'now' to current\n> time during input conversion, it's a special-case data value inside\n> the system.\n> \n> I proposed awhile back (see pghackers thread \"Constant propagation and\n> similar issues\" from mid-September) that we should eliminate the CURRENT\n> concept, so that datetime calculations can be constant-folded safely.\n> That, um, didn't meet with universal approval... but I still think it\n> would be a good idea.\n\nI agree with you that doing anything to be able to fold these would\nbe nice. However there's a hook mentioned in my abstract that\nexplains that if a constant makes it into a function, you can\nprovide a hook so that the function can return whether or not that\nconstant is cacheable.\n\nIf the date functions used that hook to get a glimpse of the constant\ndata passed in, they could return 'cachable' if it doesn't contain\nthe 'CURRENT' stuff you're talking about.\n\nsomething like this could be called on input to \"maybe-cachable\"\nfunctions:\n\nint\ndate_cachable_hook(const char *datestr)\n{\n\n\tif (strcasecmp(\"current\", datestr) == 0)\n\t\treturn (UNCACHEABLE);\n\treturn (CACHEABLE);\n}\n\nOr maybe I'm missunderstanding what CURRENT implies?\n\nI do see that on:\n http://www.postgresql.org/mhonarc/pgsql-hackers/2000-09/msg00408.html\n\nboth you and Thomas Lockhart agree that CURRENT is a broken concept\nbecause it can cause btree inconsistancies and should probably be\nremoved anyway.\n\nNo one seems to dispute that, and then the thread leads off into\ndiscussions about optimizer hints.\n\n> In the meantime you can cheat by defining functions that you choose\n> to mark ISCACHABLE, as has been discussed several times in the archives.\n\nYes, but it doesn't help the niave user (me :) ) much. :(\n\nSomehow I doubt that if 'CURRENT' was ifdef'd people would complain.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Thu, 7 Dec 2000 17:13:18 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: abstract: fix poor constant folding in 7.0.x, fixed in 7.1?"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> ... However there's a hook mentioned in my abstract that\n> explains that if a constant makes it into a function, you can\n> provide a hook so that the function can return whether or not that\n> constant is cacheable.\n\nOh, I see --- you're right, I missed that part of your proposal.\n\nI dunno ... if we had more than one example of a case where this was\nneeded (and if that example weren't demonstrably broken for other\nreasons), maybe that'd be worth doing. But it seems like a lot of\nmechanism to add to solve a problem we shouldn't have anyway.\n\n> I do see that on:\n> http://www.postgresql.org/mhonarc/pgsql-hackers/2000-09/msg00408.html\n> both you and Thomas Lockhart agree that CURRENT is a broken concept\n> because it can cause btree inconsistancies and should probably be\n> removed anyway.\n\nI had forgotten the btree argument, actually ... thanks for reminding\nme!\n\nI think it's too late to do anything about this for 7.1, in any case,\nbut I'll put removing CURRENT back on the front burner for 7.2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2000 20:58:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abstract: fix poor constant folding in 7.0.x, fixed in 7.1? "
},
{
"msg_contents": "> We had problem with a query taking way too long, basically\n> we had this:\n> \n> select\n> date_part('hour',t_date) as hour,\n> transval as val\n> from st\n> where\n> id = 500 \n> AND hit_date >= '2000-12-07 14:27:24-08'::timestamp - '24\n> hours'::timespan AND hit_date <= '2000-12-07 14:27:24-08'::timestamp\n> ;\n> \n> turning it into:\n> \n> select\n> date_part('hour',t_date) as hour,\n> transval as val\n> from st\n> where\n> id = 500 \n> AND hit_date >= '2000-12-07 14:27:24-08'::timestamp\n> AND hit_date <= '2000-12-07 14:27:24-08'::timestamp\n> ;\n\nPerhaps I'm being daft, but why should hit_date be both >= and <= \nthe exact same time and date? (or did you mean to subtract 24 \nhours from your example and forgot?)\n\n> (doing the -24 hours seperately)\n> \n> The values of cost went from:\n> (cost=0.00..127.24 rows=11 width=12)\n> to:\n> (cost=0.00..4.94 rows=1 width=12)\n> \n> By simply assigning each sql \"function\" a taint value for constness\n> one could easily reduce:\n> '2000-12-07 14:27:24-08'::timestamp - '24 hours'::timespan\n> to:\n> '2000-12-07 14:27:24-08'::timestamp\n\nYou mean '2000-12-06', don't you?\n\n> Each function should have a marker that explains whether when given a\n> const input if the output might vary, that way subexpressions can be\n> collapsed until an input becomes non-const.\n\nThere is \"with (iscachable)\".\n\nDoes\n\nCREATE FUNCTION YESTERDAY(timestamp) RETURNS timestamp AS\n'SELECT $1-''24 hours''::interval' WITH (iscachable)\n\nwork faster?\n\n--\nJoel Burton, Director of Information Systems -*- [email protected]\nSupport Center of Washington (www.scw.org)\n",
"msg_date": "Thu, 7 Dec 2000 18:52:04 -0800",
"msg_from": "\"Joel Burton\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abstract: fix poor constant folding in 7.0.x, fixed in 7.1?"
}
]
|
[
{
"msg_contents": "We recently had a very satisfactory contract completed by\nVadim.\n\nBasically Vadim has been able to reduce the amount of time\ntaken by a vacuum from 10-15 minutes down to under 10 seconds.\n\nWe've been running with these patches under heavy load for\nabout a week now without any problems except one:\n don't 'lazy' (new option for vacuum) a table which has just\n had an index created on it, or at least don't expect it to\n take any less time than a normal vacuum would.\n\nThere's three patchsets and they are available at:\n\nhttp://people.freebsd.org/~alfred/vacfix/\n\ncomplete diff:\nhttp://people.freebsd.org/~alfred/vacfix/v.diff\n\nonly lazy vacuum option to speed up index vacuums:\nhttp://people.freebsd.org/~alfred/vacfix/vlazy.tgz\n\nonly lazy vacuum option to only scan from start of modified\ndata:\nhttp://people.freebsd.org/~alfred/vacfix/mnmb.tgz\n\nAlthough the patches are for 7.0.x I'm hoping that they\ncan be forward ported (if Vadim hasn't done it already)\nto 7.1.\n\nenjoy!\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Thu, 7 Dec 2000 14:57:32 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Patches with vacuum fixes available for 7.0.x"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> Basically Vadim has been able to reduce the amount of time\n> taken by a vacuum from 10-15 minutes down to under 10 seconds.\n\nCool. What's it do, exactly?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2000 19:42:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patches with vacuum fixes available for 7.0.x "
},
{
"msg_contents": "* Tom Lane <[email protected]> [001207 17:10] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > Basically Vadim has been able to reduce the amount of time\n> > taken by a vacuum from 10-15 minutes down to under 10 seconds.\n> \n> Cool. What's it do, exactly?\n\n================================================================\n\nThe first is a bonus that Vadim gave us to speed up index\nvacuums, I'm not sure I understand it completely, but it \nwork really well. :)\n\nhere's the README he gave us:\n\n Vacuum LAZY index cleanup option\n\nLAZY vacuum option introduces new way of indices cleanup.\nInstead of reading entire index file to remove index tuples\npointing to deleted table records, with LAZY option vacuum\nperformes index scans using keys fetched from table record\nto be deleted. Vacuum checks each result returned by index\nscan if it points to target heap record and removes\ncorresponding index tuple.\nThis can greatly speed up indices cleaning if not so many\ntable records were deleted/modified between vacuum runs.\nVacuum uses new option on user' demand.\n\nNew vacuum syntax is:\n\nvacuum [verbose] [analyze] [lazy] [table [(columns)]]\n\n================================================================\n\nThe second is one of the suggestions I gave on the lists a while\nback, keeping track of the \"last dirtied\" block in the data files\nto only scan the tail end of the file for deleted rows, I think\nwhat he instead did was keep a table that holds all the modified\nblocks and vacuum only scans those:\n\n Minimal Number Modified Block (MNMB)\n\nThis feature is to track MNMB of required tables with triggers\nto avoid reading unmodified table pages by vacuum. Triggers\nstore MNMB in per-table files in specified directory\n($LIBDIR/contrib/mnmb by default) and create these files if not\nexisted.\n\nVacuum first looks up functions\n\nmnmb_getblock(Oid databaseId, Oid tableId)\nmnmb_setblock(Oid databaseId, Oid tableId, Oid block)\n\nin catalog. If *both* functions were found *and* there was no\nANALYZE option specified then vacuum calls mnmb_getblock to obtain\nMNMB for table being vacuumed and starts reading this table from\nblock number returned. After table was processed vacuum calls\nmnmb_setblock to update data in file to last table block number.\nNeither mnmb_getblock nor mnmb_setblock try to create file.\nIf there was no file for table being vacuumed then mnmb_getblock\nreturns 0 and mnmb_setblock does nothing.\nmnmb_setblock() may be used to set in file MNMB to 0 and force\nvacuum to read entire table if required.\n\nTo compile MNMB you have to add -DMNMB to CUSTOM_COPT\nin src/Makefile.custom.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Thu, 7 Dec 2000 17:19:58 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Patches with vacuum fixes available for 7.0.x"
},
{
"msg_contents": "\nOn Thu, 7 Dec 2000, Alfred Perlstein wrote:\n\n> We recently had a very satisfactory contract completed by\n> Vadim.\n> \n> Basically Vadim has been able to reduce the amount of time\n> taken by a vacuum from 10-15 minutes down to under 10 seconds.\n...\n\n What size database was that on?\n\n I looking at moving a 2GB database from MySQL to Postgres. Most of that\ndata is one table with 12 million records, to which we post about 1.5\nmillion records a month. MySQL's table locking sucks, but as long as are\ncareful about what reports we run and when, we can avoid the problem. \nHowever, Postgres' vacuum also sucks. I have no idea how long our\nparticular database would take to vacuum, but I don't think it would be\nvery nice.\n\n That also leads to the erserver thing. erserver sounds nice, but I sure\nwish it was possible to get more details on it. It seems rather\nintangible right now. If erserver is payware, where do I buy it?\n\n This is getting a bit off-topic now...\n\n\nTom\n\n",
"msg_date": "Thu, 7 Dec 2000 18:11:29 -0800 (PST)",
"msg_from": "Tom Samplonius <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patches with vacuum fixes available for 7.0.x"
},
{
"msg_contents": "* Tom Samplonius <[email protected]> [001207 18:55] wrote:\n> \n> On Thu, 7 Dec 2000, Alfred Perlstein wrote:\n> \n> > We recently had a very satisfactory contract completed by\n> > Vadim.\n> > \n> > Basically Vadim has been able to reduce the amount of time\n> > taken by a vacuum from 10-15 minutes down to under 10 seconds.\n> ...\n> \n> What size database was that on?\n\nTables were around 300 megabytes.\n\n> I looking at moving a 2GB database from MySQL to Postgres. Most of that\n> data is one table with 12 million records, to which we post about 1.5\n> million records a month. MySQL's table locking sucks, but as long as are\n> careful about what reports we run and when, we can avoid the problem. \n> However, Postgres' vacuum also sucks. I have no idea how long our\n> particular database would take to vacuum, but I don't think it would be\n> very nice.\n\nWe only do about 54,000,000 updates to a single table per-month.\n\n> That also leads to the erserver thing. erserver sounds nice, but I sure\n> wish it was possible to get more details on it. It seems rather\n> intangible right now. If erserver is payware, where do I buy it?\n\nContact Pgsql Inc. I think it's free, but you have to discuss terms\nwith them.\n\n> This is getting a bit off-topic now...\n\nScalabilty is hardly ever off-topic. :)\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Thu, 7 Dec 2000 19:07:17 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Patches with vacuum fixes available for 7.0.x"
},
{
"msg_contents": "* Peter Schmidt <[email protected]> [010102 12:53] wrote:\n> Will these patchsets be available to the public?\n> I get:\n> \"You don't have permission to access /~alfred/vacfix/vlazy.tgz on this\n> server\"\n> \n> Thanks.\n> Peter\n> \n> \n> There's three patchsets and they are available at:\n> \n> http://people.freebsd.org/~alfred/vacfix/\n> \n> complete diff:\n> http://people.freebsd.org/~alfred/vacfix/v.diff\n> \n> only lazy vacuum option to speed up index vacuums:\n> http://people.freebsd.org/~alfred/vacfix/vlazy.tgz\n> \n> only lazy vacuum option to only scan from start of modified\n> data:\n> http://people.freebsd.org/~alfred/vacfix/mnmb.tgz\n\nOops! The permissions should be fixed now, if anyone wants to\ngrab these feel free.\n\nPeter, thanks for pointing it out.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Tue, 2 Jan 2001 15:58:19 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Patches with vacuum fixes available for 7.0.x"
},
{
"msg_contents": "\nVadim, did these patches ever make it into 7.1?\n\n> We recently had a very satisfactory contract completed by\n> Vadim.\n> \n> Basically Vadim has been able to reduce the amount of time\n> taken by a vacuum from 10-15 minutes down to under 10 seconds.\n> \n> We've been running with these patches under heavy load for\n> about a week now without any problems except one:\n> don't 'lazy' (new option for vacuum) a table which has just\n> had an index created on it, or at least don't expect it to\n> take any less time than a normal vacuum would.\n> \n> There's three patchsets and they are available at:\n> \n> http://people.freebsd.org/~alfred/vacfix/\n> \n> complete diff:\n> http://people.freebsd.org/~alfred/vacfix/v.diff\n> \n> only lazy vacuum option to speed up index vacuums:\n> http://people.freebsd.org/~alfred/vacfix/vlazy.tgz\n> \n> only lazy vacuum option to only scan from start of modified\n> data:\n> http://people.freebsd.org/~alfred/vacfix/mnmb.tgz\n> \n> Although the patches are for 7.0.x I'm hoping that they\n> can be forward ported (if Vadim hasn't done it already)\n> to 7.1.\n> \n> enjoy!\n> \n> -- \n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \"I have the heart of a child; I keep it in a jar on my desk.\"\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Jan 2001 22:54:48 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patches with vacuum fixes available for 7.0.x"
},
{
"msg_contents": "* Bruce Momjian <[email protected]> [010122 19:55] wrote:\n> \n> Vadim, did these patches ever make it into 7.1?\n\nAccording to:\nhttp://www.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/parser/gram.y?rev=2.217&content-type=text/x-cvsweb-markup\n\nnope. :(\n\n> \n> > We recently had a very satisfactory contract completed by\n> > Vadim.\n> > \n> > Basically Vadim has been able to reduce the amount of time\n> > taken by a vacuum from 10-15 minutes down to under 10 seconds.\n> > \n> > We've been running with these patches under heavy load for\n> > about a week now without any problems except one:\n> > don't 'lazy' (new option for vacuum) a table which has just\n> > had an index created on it, or at least don't expect it to\n> > take any less time than a normal vacuum would.\n> > \n> > There's three patchsets and they are available at:\n> > \n> > http://people.freebsd.org/~alfred/vacfix/\n> > \n> > complete diff:\n> > http://people.freebsd.org/~alfred/vacfix/v.diff\n> > \n> > only lazy vacuum option to speed up index vacuums:\n> > http://people.freebsd.org/~alfred/vacfix/vlazy.tgz\n> > \n> > only lazy vacuum option to only scan from start of modified\n> > data:\n> > http://people.freebsd.org/~alfred/vacfix/mnmb.tgz\n> > \n> > Although the patches are for 7.0.x I'm hoping that they\n> > can be forward ported (if Vadim hasn't done it already)\n> > to 7.1.\n> > \n> > enjoy!\n> > \n> > -- \n> > -Alfred Perlstein - [[email protected]|[email protected]]\n> > \"I have the heart of a child; I keep it in a jar on my desk.\"\n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Tue, 23 Jan 2001 11:15:25 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Patches with vacuum fixes available for 7.0.x"
},
{
"msg_contents": "\nHere is another open item. What are we doing with LAZY vacuum? \n\n> We recently had a very satisfactory contract completed by\n> Vadim.\n> \n> Basically Vadim has been able to reduce the amount of time\n> taken by a vacuum from 10-15 minutes down to under 10 seconds.\n> \n> We've been running with these patches under heavy load for\n> about a week now without any problems except one:\n> don't 'lazy' (new option for vacuum) a table which has just\n> had an index created on it, or at least don't expect it to\n> take any less time than a normal vacuum would.\n> \n> There's three patchsets and they are available at:\n> \n> http://people.freebsd.org/~alfred/vacfix/\n> \n> complete diff:\n> http://people.freebsd.org/~alfred/vacfix/v.diff\n> \n> only lazy vacuum option to speed up index vacuums:\n> http://people.freebsd.org/~alfred/vacfix/vlazy.tgz\n> \n> only lazy vacuum option to only scan from start of modified\n> data:\n> http://people.freebsd.org/~alfred/vacfix/mnmb.tgz\n> \n> Although the patches are for 7.0.x I'm hoping that they\n> can be forward ported (if Vadim hasn't done it already)\n> to 7.1.\n> \n> enjoy!\n> \n> -- \n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \"I have the heart of a child; I keep it in a jar on my desk.\"\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 09:37:50 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patches with vacuum fixes available for 7.0.x"
},
{
"msg_contents": "On Wednesday 24 January 2001 20:37, Bruce Momjian wrote:\n> Here is another open item. What are we doing with LAZY vacuum?\n\nSorry for inserting in the middle. I would like to say that when I tried LAZY \nvacuum on 7.0.3, I had a lockup on one of the table which disappeared after I \ndid usual vacuum. I have sent an original version of a table from the backup \nto Vadim, but did not get any response. Just for your info.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Wed, 24 Jan 2001 22:53:54 +0600",
"msg_from": "Denis Perchine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patches with vacuum fixes available for 7.0.x"
}
]
|
[
{
"msg_contents": "I have a working version of a text search engine. I want to make it work\nfor Postgres (I will be releasing it GPL). It can literally find the\noccurrence of a string of words within 5 million records in a few\nmilliseconds. It is very fast, it works similarly to many web search\nengines.\n\nI have tried many approaches to integrate the search system with\nPostgres, but I can't find any method that isn't too slow or too\ncumbersome.\n\nThe best I have been able to come up with is this:\n\ncreate function textsearch(varchar) returns integer as\n '\n DECLARE\n handle integer;\n count integer;\n pos integer;\n BEGIN\n handle = search_exec( \\'localhost\\', $1);\n \n count = search_count(handle);\n \n for pos in 0 .. count-1 loop\n insert into search_result(key, rank)\n values (search_key(handle,pos),\nsearch_rank(handle,pos));\n end loop;\n \n return search_done(handle);\n \n END;\n' language 'plpgsql'; \n\nAnd this is used as:\n\ncreate temp table search_result (key integer, rank integer);\n\nselect textsearch('bla bla');\n\nselect field from table where field_key = search_result.key order by\nsearch_result.rank ;\n\ndrop table search_result ; \n\n\nThe problems with this are, I can't seem to be able to create a table in\nplpgsql. (I read about a patch, but have to find out what version it is\nin), so I have to create a table outside the function.\n\nI can only execute one text search, because I can't seem to use the name\nof a table that has been passed in to the plpgsql environment, that\nwould allow multiple searches to be joined. As:\n\nselect textsearch(temp_tbl1, 'bla bla');\nselect textsearch(temp_tbl2, 'foo bar');\n\nselect field from table1, table2 where table1.field_key = temp_tbl1.key\nand table2.field_key = temp_tbl2.key;\n\n\nThis could be so sweet, but, right now, it is just a disaster and I am\npulling my hair out. Does anyone have any suggestions or tricks that\ncould make this easier/faster, or is Postgres just unable to do this\nsort of thing.\n\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Fri, 08 Dec 2000 11:01:11 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "OK, does anyone have any better ideas?"
},
{
"msg_contents": "mlw <[email protected]> writes:\n> I have a working version of a text search engine. I want to make it work\n> for Postgres (I will be releasing it GPL). It can literally find the\n> occurrence of a string of words within 5 million records in a few\n> milliseconds.\n\nWhere are the records coming from? Are they inside the database?\n(If not, why do you care about integrating this with Postgres?)\n\nIt seems like the right way to integrate this sort of functionality\nis to turn it into a kind of index, so that you can do\n\n\tSELECT * FROM mytable WHERE keyfield ~~~ 'search string';\n\nwhere ~~~ is the name of some operator that is associated with the\nindex. The temporary-table approach you are taking seems inherently\nklugy, and would still be awkward even if we had functions returning\nrecordsets...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 14:21:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, does anyone have any better ideas? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <[email protected]> writes:\n> > I have a working version of a text search engine. I want to make it work\n> > for Postgres (I will be releasing it GPL). It can literally find the\n> > occurrence of a string of words within 5 million records in a few\n> > milliseconds.\n> \n> Where are the records coming from? Are they inside the database?\n> (If not, why do you care about integrating this with Postgres?)\n> \n> It seems like the right way to integrate this sort of functionality\n> is to turn it into a kind of index, so that you can do\n> \n> SELECT * FROM mytable WHERE keyfield ~~~ 'search string';\n> \n> where ~~~ is the name of some operator that is associated with the\n> index. The temporary-table approach you are taking seems inherently\n> klugy, and would still be awkward even if we had functions returning\n> recordsets...\n\nOK, I get the misunderstanding, you are absolutely right it is VERY\nkludgy.\n\nIt is sort of like a bitmap index, but it is more of a search engine. I\nactually have it working on a commercial website. You run a program\nperiodically (cron job?) that executes a query, the query is then parsed\nand an index of words, keys, ranks and phrase meta-data is created. You\nalso specify which fields in the query should be indexed and which field\nwill be the \"key.\" (It is not ACID if I understand what they term\nmeans.) The data for the text search need not even be in the database,\nas long as the \"key\" being indexed is.\n\nThen you call search with a string, such as \"the long and winding road\"\nor \"software OR hardware AND engineer NOT sales.\" A few milliseconds\nlater, a list of key/rank pairs are produced. This is FAR faster than\nthe '~~~' operator because it never does a full table scan. It is\nassumed that the \"key\" field specified is properly indexed.\n\nIf I had a way of getting the key/rank result pair deeper into Postgres,\nit would be an amazing platform to make some serious high speed search\napplications. Think about a million resumes' online and searchable with\nan arbitrary text string to get a list of candidates, powered by\nPostgres, handling 100 queries a second. \n\nRight now, the way I have it working is PHP makes the search call and\nthen executes a query with the first result (highest rank) and returns\nthe data. If I could get the key/rank pair into postgres as a table or\nmultiple searches into postgres as a set of tables, then you could do\nsome amazing queries really really fast.\n\nStill, you said that \"select foo from bar where key = textsearch('bla\nbla',..)\" could not be done, and my previous example was the only other\nway I have been able to even prototype my idea.\n\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Fri, 08 Dec 2000 19:48:03 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, does anyone have any better ideas?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <[email protected]> writes:\n> > I have a working version of a text search engine. I want to make it work\n> > for Postgres (I will be releasing it GPL). It can literally find the\n> > occurrence of a string of words within 5 million records in a few\n> > milliseconds.\n> \n> Where are the records coming from? Are they inside the database?\n> (If not, why do you care about integrating this with Postgres?)\n> \n> It seems like the right way to integrate this sort of functionality\n> is to turn it into a kind of index, so that you can do\n> \n> SELECT * FROM mytable WHERE keyfield ~~~ 'search string';\n> \n> where ~~~ is the name of some operator that is associated with the\n> index. The temporary-table approach you are taking seems inherently\n> klugy, and would still be awkward even if we had functions returning\n> recordsets...\n\nOh! Another method I tried and just could not get working was returning\nan array of integers. I as thinking about \"select * from table where\nkey_field in ( textsearch('bla bla') ), but I haven't been able to get\nthat to work, and as per a previous post and belatedly reading a FAQ,\nthis would probably still force a full table scan.\n\nAnother method I thought about was having a table with some maximum\nnumber of zero initialized records, and trying something like: \n\ncreate table temp_table as select * from ts_template limit\ntextsearch('bla bla', 10);\n\nselect filltable(temp_table, 10);\n\nselect * from table where key_field = temp_table.key;\n\nAs you can see, all of these ideas are heinous hacks, there has to be a\nbetter way. Surely someone has a better idea.\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Fri, 08 Dec 2000 20:17:34 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, does anyone have any better ideas?"
},
{
"msg_contents": "mlw <[email protected]> writes:\n> Then you call search with a string, such as \"the long and winding road\"\n> or \"software OR hardware AND engineer NOT sales.\" A few milliseconds\n> later, a list of key/rank pairs are produced. This is FAR faster than\n> the '~~~' operator because it never does a full table scan.\n\nAn index-associated operator doesn't imply a full table scan either.\nThe whole purpose of an index is to pull out the rows matched by the\nWHERE expression without doing a full scan.\n\nThe thing that bothers me about the way you're doing it is that the\nsearch result as such doesn't give you access to anything but the keys\nthemselves. Typically what you want to do is get the whole record(s)\nin which the matching keys are located --- and that's why the notion\nof SELECT ... WHERE textfield-matches-search-string looks so attractive.\nYou get the records immediately, in one step. Without that, your next\nstep after the search engine call is to do a join of the search result\ntable against your data table, and poof there goes much of your speed\ngain. (At best, you can make the join reasonably quick by having an\nindex on the unique key field ... but that just means another index to\nmaintain.)\n\nAnother advantage of handling it as an index is that you don't have to\nrely on a periodic recomputation of the index; you can do on-the-fly\nupdates each time the table is altered. (Of course, if your indexing\ntechnology can't handle incremental updates efficiently, that might not\nbe of any value to you. But there's nothing in the system design that\nprecludes making an index type that's only updated by REINDEX.)\n\nI realize this is probably not what you wanted to hear, since building a\nnew index type is a lot more work than I suppose you were looking for.\nBut if you want a full-text index that's integrated naturally into\nPostgres, that's the path to travel. The way you're doing it is\nswimming against the tide. Even when the function-returning-recordset\nlimitation is gone (maybe a version or two away), it's still going to\nbe an awkward and inefficient way to work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 20:27:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, does anyone have any better ideas? "
},
{
"msg_contents": "\nCould you perhaps post the code you have for splitting a text field up into\nkeys, then I could work on turning into a new type of index with a new\noperator, as Tom suggested?\n\n(Or is this already what the text search code in contrib already does??)\n\n\n- Andrew\n\n",
"msg_date": "Sat, 9 Dec 2000 12:58:41 +1100 (EST)",
"msg_from": "Andrew Snow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, does anyone have any better ideas?"
},
{
"msg_contents": "Andrew Snow wrote:\n> \n> Could you perhaps post the code you have for splitting a text field up into\n> keys, then I could work on turning into a new type of index with a new\n> operator, as Tom suggested?\n> \n> (Or is this already what the text search code in contrib already does??)\n\nGo to a search engine like lycos, alltheweb, or altavista. This is the\ntype of search capability I want to use in Postgres. I have it working\nas a stand-alone daemon, it is fast and produces very relevant results.\nI just thought that this sort of functionality would be a big plus if I\ncould tie it down deep in Postgres.\n\nThe big advantage to the code is high relevance, boolean operations, and\nvery very high speed operation. If I could get easy Postgres record\nselection out of it, it would rock.\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Fri, 08 Dec 2000 21:07:22 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, does anyone have any better ideas?"
},
{
"msg_contents": "Andrew Snow wrote:\n> \n> Could you perhaps post the code you have for splitting a text field up into\n> keys, then I could work on turning into a new type of index with a new\n> operator, as Tom suggested?\n> \n> (Or is this already what the text search code in contrib already does??)\n> \n> - Andrew\n\nOK, I guess I am not getting everything across. Let me give the basics:\n\nThere are two programs: sqlindex, and sqlfts.\n\nsqlindex, is the SQL indexer.\nsqlfts, is the SQL full text server.\n\nThey currently take a config file, which will be replaced by columns in\nthe database. (This technology is aimed at multiple databases and\nnon-SQL uses) The config file currently looks like the example at the\nbottom of this post.\n\nThe current incarnation of this server sits outside of Postgres and\nexecute joins based the results of the query.\n\nThe indexing query returns a number of fields, one must be designated as\nthe \"key\" field. In websearch lingo, think of it as \"document name.\"\nDuring index time, I separate the individual fields and create bitmap\nfiles which relate word numbers to document bits.\n\nWords are parsed and a dictionary is created. Phrase meta-data is also\nstored along with the document reference (key field) associated with a\ndocument number.\n\nWhen a query is executed, each word is picked out of the dictionary. At\nvarious points, phrases are evaluated, the bitmap indexes are ANDed,\nORed, or NOTed together, rank is applied. The results are then sorted by\nrank, and the document numbers are merged in with document \"references\"\n(key field value) and return with the rank.\n\nThis technology works quite well as a search engine sort of thing if I\nstore a URL or file name and a teaser as the document reference. I\nthought it would be cool (and easy) if I just stored a SQL key field as\nthe URL, and connected this stuff to a SQL database. I chose Postgres\nbecause I had used it in a number of projects, and thought since it was\nopen source I would have fewer problems.\n\nIt has not been easy to do what I thought would be a fairly trivial\ntask. I am starting to get Windows programming flashbacks of the \"so\nclose, but yet so far\" feeling one gets when one tries to do\nconceptually simple things on Windows.\n\nI'm sorry I am getting discouraged and beginning to think that this\nproject is not going to work.\n\n\n>>>>>>>>>>> configuration file <<<<<<<<<<<<<<<<<<<<<<\n# The computer host name used for the database\nsqlindex=localhost\nsqlfts=localhost\n \n# The name of the database\nsqldb=cdinfo\n \n# Base name of the index files.\nbasename=cdinfo/index\n \n# The key field used to index and find records.\nsqlkey=trackid\nsqlkeyindex=off\nmetaphone=1\n \n# A SQL query that produces a single result, which is the count of\n# records to be indexed.\nsqlcount=select count(trackid) from zsong\n \n# The SQL query used to produce data to be indexed.\nsqlretrieve=select * from songview;\nsqlfields = all,performer2,title,song\n\n# A SQL query that will be used to display a list records found\nsqldisplay=select zsong.muzenbr, performer, performer2, title, song,\ntrackid from ztitles, zsong where zsong.muzenbr = ztitles.muzenbr and\nzsong.trackid = %s\n \n# The tcport is the TCP/IP port for the server\ntcpport = 8090\nftsproc = 5\nftsqueue = 32 \n<<<<<<<<<<<<\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Fri, 08 Dec 2000 21:40:14 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, does anyone have any better ideas?"
},
{
"msg_contents": "We need multi-key B-tree like index for such problem.\nOur full text search engine is based on arrays and we need to find quickly\nis some number exists in array - some kind of index over int array.\nWe're currently testing GiST approach and seems will have some conclusions\nsoon. I think multi-key B-tree like index would be better in my\nopinion, but this requires to much work and knowledge of postgres's internals.\nYesterday I read about UBTree, seems like it's good for index and query\nsets. Currently postgres has no set specific methods.\n\n\tRegards,\n\n\t\tOleg\nOn Fri, 8 Dec 2000, mlw wrote:\n\n> Date: Fri, 08 Dec 2000 20:17:34 -0500\n> From: mlw <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Hackers List <[email protected]>\n> Subject: Re: [HACKERS] OK, does anyone have any better ideas?\n> \n> Tom Lane wrote:\n> > \n> > mlw <[email protected]> writes:\n> > > I have a working version of a text search engine. I want to make it work\n> > > for Postgres (I will be releasing it GPL). It can literally find the\n> > > occurrence of a string of words within 5 million records in a few\n> > > milliseconds.\n> > \n> > Where are the records coming from? Are they inside the database?\n> > (If not, why do you care about integrating this with Postgres?)\n> > \n> > It seems like the right way to integrate this sort of functionality\n> > is to turn it into a kind of index, so that you can do\n> > \n> > SELECT * FROM mytable WHERE keyfield ~~~ 'search string';\n> > \n> > where ~~~ is the name of some operator that is associated with the\n> > index. The temporary-table approach you are taking seems inherently\n> > klugy, and would still be awkward even if we had functions returning\n> > recordsets...\n> \n> Oh! Another method I tried and just could not get working was returning\n> an array of integers. I as thinking about \"select * from table where\n> key_field in ( textsearch('bla bla') ), but I haven't been able to get\n> that to work, and as per a previous post and belatedly reading a FAQ,\n> this would probably still force a full table scan.\n> \n> Another method I thought about was having a table with some maximum\n> number of zero initialized records, and trying something like: \n> \n> create table temp_table as select * from ts_template limit\n> textsearch('bla bla', 10);\n> \n> select filltable(temp_table, 10);\n> \n> select * from table where key_field = temp_table.key;\n> \n> As you can see, all of these ideas are heinous hacks, there has to be a\n> better way. Surely someone has a better idea.\n> \n> -- \n> http://www.mohawksoft.com\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 9 Dec 2000 11:50:17 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, does anyone have any better ideas?"
},
{
"msg_contents": "Here is a link to paper\n\"A new method to index and Query Sets\" by J.Hoffman and J. Koehler\nhttp://www.informatik.uni-freiburg.de/~hoffmann/publications.html\nbtw, is there a restriction to length of key in GiST in 7.1 ?\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Sat, 9 Dec 2000 15:40:53 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "None"
},
{
"msg_contents": "Oleg Bartunov wrote:\n> \n> We need multi-key B-tree like index for such problem.\n> Our full text search engine is based on arrays and we need to find quickly\n> is some number exists in array - some kind of index over int array.\n> We're currently testing GiST approach and seems will have some conclusions\n> soon. I think multi-key B-tree like index would be better in my\n> opinion, but this requires to much work and knowledge of postgres's internals.\n> Yesterday I read about UBTree, seems like it's good for index and query\n> sets. Currently postgres has no set specific methods.\n\nThe way I do my search indexing is with bitmap objects and a word\ndictionary. One creates a searchable dictionary of words by scanning the\nselected records. So, in one query that results in 2 million records,\nthe total aggregate number of words is about 60,000 depending on how you\nparse. For each word, you create a \"bitmap object\" (in one of a few\nforms) where bit '0' represents the first record, bit '1' represents the\nsecond, and so on, until you have 2 million bits.\n\nSet the correct bit in the bitmap for each document that contains that\nword. In the end you will have the equivalent 60,000 bitmaps or 2\nmillion bits.\n\nDuring search time, one creates an empty bitmap of 2 million bits as a\nwork space. One parses the search term, and performs boolean operation\non the workspace from the bitmap retrieved for each word.\n\nWhen you are done parsing, you have a bitmap with a bit set for each\ndocument that fits the search criteria. You then enumerate the bits by\nbit position, and you now have a list of document numbers.\n\nIf only I could get the list of document numbers back into\npostgres....... It would be great.\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Sat, 09 Dec 2000 08:18:10 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, does anyone have any better ideas?"
},
{
"msg_contents": "On Sat, 9 Dec 2000, mlw wrote:\n\n> Date: Sat, 09 Dec 2000 08:18:10 -0500\n> From: mlw <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Tom Lane <[email protected]>,\n> Hackers List <[email protected]>\n> Subject: Re: [HACKERS] OK, does anyone have any better ideas?\n> \n> Oleg Bartunov wrote:\n> > \n> > We need multi-key B-tree like index for such problem.\n> > Our full text search engine is based on arrays and we need to find quickly\n> > is some number exists in array - some kind of index over int array.\n> > We're currently testing GiST approach and seems will have some conclusions\n> > soon. I think multi-key B-tree like index would be better in my\n> > opinion, but this requires to much work and knowledge of postgres's internals.\n> > Yesterday I read about UBTree, seems like it's good for index and query\n> > sets. Currently postgres has no set specific methods.\n> \n> The way I do my search indexing is with bitmap objects and a word\n> dictionary. One creates a searchable dictionary of words by scanning the\n> selected records. So, in one query that results in 2 million records,\n> the total aggregate number of words is about 60,000 depending on how you\n> parse. For each word, you create a \"bitmap object\" (in one of a few\n> forms) where bit '0' represents the first record, bit '1' represents the\n> second, and so on, until you have 2 million bits.\n> \n> Set the correct bit in the bitmap for each document that contains that\n> word. In the end you will have the equivalent 60,000 bitmaps or 2\n> million bits.\n> \n> During search time, one creates an empty bitmap of 2 million bits as a\n> work space. One parses the search term, and performs boolean operation\n> on the workspace from the bitmap retrieved for each word.\n> \n> When you are done parsing, you have a bitmap with a bit set for each\n> document that fits the search criteria. You then enumerate the bits by\n> bit position, and you now have a list of document numbers.\n> \n> If only I could get the list of document numbers back into\n> postgres....... It would be great.\n\nGotcha. It's impossible to return a set from a function, so the only\nway to use perl to parse your bitmap. We did (in one project) external\nsearch using suffix arrays which incredibly fast and use postgres to \nreturn results to perl for processing.\n\nRegards,\n\n\tOleg\n\n> -- \n> http://www.mohawksoft.com\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 9 Dec 2000 16:41:37 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, does anyone have any better ideas?"
},
{
"msg_contents": "Oleg Bartunov wrote:\n> postgres....... It would be great.\n> \n> Gotcha. It's impossible to return a set from a function, so the only\n> way to use perl to parse your bitmap. We did (in one project) external\n> search using suffix arrays which incredibly fast and use postgres to\n> return results to perl for processing.\n\nHere's a question, and I simply do not know enough about the internals\nof postgres to know, I had a brainstorm last night and though of a\nmethod.\n\nCreate a table:\nIs it possible to call \"SPI_exec\" in a C function which does this:\n\n\"create temp table fubar as select ts_key(10) as 'key', ts_rank(10) as\n'rank' from textsearch_template where ts_placeholder(10) limit\nts_count(10)\"\n\nIn the above example, which call would be called first? I assume the\ncount would be called first, but I'm probably wrong. Which ever function\nwould be called first would execute the query. textsearch_template would\nbe a bogus table with 1000 or so zeros.\n\nSo, in a query one does this:\n\nselect ts_search('fubar', 'bla bla');\n\nselect * from table, fubar where table.field_key = fubar.key;\n\nHow about this: Is there a construct in Postgres that represents a row\nID, so a row can be found quickly without using an index? I tried oid\nbut that didn't seem fast at all.\n\nP.S. If you want to see the system working, I have a test fixture\nrunning on \"http://gateway.mohawksoft.com/music.php3\" It calls the text\nsearch daemon from PHP and the text search daemon executes a sql query\nper result (PQExec). Look for a popular song and press \"search.\" \n\nA good example is look for \"pink floyd pigs,\" then try \"pink floyd pigs\n-box.\" (It is running slow because it has debugging code, but it is\nstill pretty fast.) This index has been metaphoned so something like\n\"penk floid\" will work too. \n\nThe \"+\" operator is \"requires\" this is the default. The \"-\" operator is\n\"must not have\" and the \"?\" operator is \"may have\" (the \"?\" operator is\na big hit because it increases the selection size.)\n\nI think if you try it, you'll see why I want to be able to get it deep\ninto postgres, and what the possibilities are.\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Sat, 09 Dec 2000 09:28:37 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, does anyone have any better ideas?"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> btw, is there a restriction to length of key in GiST in 7.1 ?\n\nIndex tuples still have to fit in a block --- no TOAST table for\nan index (yet).\n\nI think we can compress index datums in-line though; at least that\nshould work in principle.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Dec 2000 10:45:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "On Sat, 9 Dec 2000, Tom Lane wrote:\n\n> Date: Sat, 09 Dec 2000 10:45:33 -0500\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected], Hackers List <[email protected]>\n> Subject: Re: [HACKERS] \n> \n> Oleg Bartunov <[email protected]> writes:\n> > btw, is there a restriction to length of key in GiST in 7.1 ?\n> \n> Index tuples still have to fit in a block --- no TOAST table for\n> an index (yet).\n\nnot a good news. indexing and quering sets requires much more length.\nis there plan to do this ?\n\n> \n> I think we can compress index datums in-line though; at least that\n> should work in principle.\n\nwe could use loosy compression of GiST indices.\n\n\tregards,\n\n\t\tOleg\n\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 9 Dec 2000 19:03:17 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Personally, I'm not too afraid of having an FTS engine outside the database.\nOracle's Intermedia, which has a very powerful/fast FTS engine, uses that\napproach.\n\nI could look into how they do the SQL integration, maybe it would yeld some\nideas.\n\nMark, about that row identifier: OID's are no good for fast find. Maybe you\ncould use tuple id (TID). But please note that tuple ID's change after a\nsimple update. It's like the offset of the row inside the table file (and\ntherefore blazing fast to get to that row again).\n\nOne possible idea for SQL integration: can one use index access-method\nfunctions to query the FTS outside the database? Yes, it would require some\nwork, but the results I guess it would be wonderful. Ok, Tom, not so fast as\nan index sitting inside Postgres, but AFAICS that's not going to happen\nanytime soon.\n\nYours sincerely,\n\nEdmar Wiggers\nBRASMAP Information Systems\n+55 48 9960 2752\n\n\n",
"msg_date": "Sat, 9 Dec 2000 14:20:17 -0200",
"msg_from": "\"Edmar Wiggers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: OK, does anyone have any better ideas?"
},
{
"msg_contents": "On Sat, 9 Dec 2000, Edmar Wiggers wrote:\n\n> Date: Sat, 9 Dec 2000 14:20:17 -0200\n> From: Edmar Wiggers <[email protected]>\n> To: mlw <[email protected]>, Oleg Bartunov <[email protected]>\n> Cc: Tom Lane <[email protected]>,\n> Hackers List <[email protected]>\n> Subject: RE: [HACKERS] OK, does anyone have any better ideas?\n> \n> \n> One possible idea for SQL integration: can one use index access-method\n> functions to query the FTS outside the database? Yes, it would require some\n> work, but the results I guess it would be wonderful. Ok, Tom, not so fast as\n> an index sitting inside Postgres, but AFAICS that's not going to happen\n> anytime soon.\n\nWe did external indexing using suffix arrays ( http://sary.namazu.org )\nfor one project because of limiting time :-) But we had to do a lot\nof work like ACID (well, we already have some technology) and\neverything we get usually from SQL.\nNow we're trying to implement fast indexing using GiST.\n\n\tRegards,\n\n\t\tOleg\n\n> \n> Yours sincerely,\n> \n> Edmar Wiggers\n> BRASMAP Information Systems\n> +55 48 9960 2752\n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 9 Dec 2000 19:34:39 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: OK, does anyone have any better ideas?"
},
{
"msg_contents": "\"Edmar Wiggers\" <[email protected]> writes:\n> One possible idea for SQL integration: can one use index access-method\n> functions to query the FTS outside the database?\n\nHm. In principle an index access method can do whatever it darn\npleases. In practice, though, I think the main problem with making an\nindex type for FTS is simply learning *how* to make a new index access\nmethod. (No one currently associated with the project has ever done it\n--- the four existing types all date back to Berkeley, I believe.)\nSeems to me that that learning curve is not going to be made any easier\nby pushing the guts of the information outside of Postgres ... if\nanything, it'd probably be harder because the existing examples of index\naccess methods would become less relevant.\n\n\t\t\tregards, tom lane\n\nPS: by the way, do any of the rest of you find that Mark's email address\ndoesn't work? Everytime I send him something, it sits in my mail queue\nfor five days and then bounces with Name server: mohawksoft.com.: host\nname lookup failure. The DNS server's syslog entries look like\nDec 9 11:34:56 sss2 named[10258]: ns_resp: query(mohawksoft.com) All possible A RR's lame\nSorry to use up list bandwidth on this, but I have no other way to reach\nhim --- apparently hub.org's nameserver is less picky than bind\n8.2.2.p7?\n",
"msg_date": "Sat, 09 Dec 2000 11:39:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, does anyone have any better ideas? "
},
{
"msg_contents": "Oleg Bartunov wrote:\n> \n> On Sat, 9 Dec 2000, Edmar Wiggers wrote:\n> \n> > Date: Sat, 9 Dec 2000 14:20:17 -0200\n> > From: Edmar Wiggers <[email protected]>\n> > To: mlw <[email protected]>, Oleg Bartunov <[email protected]>\n> > Cc: Tom Lane <[email protected]>,\n> > Hackers List <[email protected]>\n> > Subject: RE: [HACKERS] OK, does anyone have any better ideas?\n> >\n> >\n> > One possible idea for SQL integration: can one use index access-method\n> > functions to query the FTS outside the database? Yes, it would require some\n> > work, but the results I guess it would be wonderful. Ok, Tom, not so fast as\n> > an index sitting inside Postgres, but AFAICS that's not going to happen\n> > anytime soon.\n> \n> We did external indexing using suffix arrays ( http://sary.namazu.org )\n> for one project because of limiting time :-) But we had to do a lot\n> of work like ACID (well, we already have some technology) and\n> everything we get usually from SQL.\n> Now we're trying to implement fast indexing using GiST.\n\nI think I have the answer, or at least as good as I think I can get in\nthe near term.\n\nThe syntax is as follows:\n\ncreate temp table search_results as select ts_key(10) as \"key\",\nts_rank(10) as \"rank\" from ts_template where ts_search('bla bla bla',\n10)>0;\n\nselect * from table where search_results.key = table.field; \n\nIt is not ideal, obviously, but it solves a couple problems, and should\nnot be too big a performance hit. (If ANYONE can come up with a better\nway, I'd really really love to hear it.)\n\nThe first call to ts_search(...) does the search and saves the results.\nEach successive call simply advances the result number. ts_key() and\nts_rank() work on the current result number and return the respective\nvalue. ts_template is a table of some maximum number of records plus 1,\nsay 1001. \n\nWhen the total number of results have been returned (or maximum has been\nreached), ts_search frees the search results (because they should be\nsaved in the table) and returns 0, stopping the select call.\n\nAnyone see any problems with this approach? It is not ideal, but it is\nas efficient as I can come up with, without spending a year or two\ncreating a new Postgres index type.\n\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Sat, 09 Dec 2000 17:30:46 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OK, does anyone have any better ideas?"
},
{
"msg_contents": "Andrew Snow wrote:\n> \n> Could you perhaps post the code you have for splitting a text field up into\n> keys, then I could work on turning into a new type of index with a new\n> operator, as Tom suggested?\n> \n> (Or is this already what the text search code in contrib already does??)\n\nI think that splitting the text is the least of the problems.\n\nI've been contemplating integrating my own (currently BSDDB based) \nfull-text-index with postgresql and I've come to the conclusion that \nthe index proper should work on already split list-of-words (so it could\nbe \nmade to work with any array types easily)\n\nSo my suggestion is to start working on an inverted-index on a field of\nany \narray type.\n\nOr even better - start digging in the indexing interface of PostgreSQL\nfirst \ndocumenting it (the current docs just state that it is too complicated\nfor \nanyone to comprehend ;) and then cleaning it up and making it thus\nmaking \nit easyer for any kind of new index to be written.\n\n-------------\nHannu\n",
"msg_date": "Sun, 10 Dec 2000 14:25:46 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK, does anyone have any better ideas?"
},
{
"msg_contents": "Can I use table as a variable in a function ?\neg the following does what I need, (I have a trigger using it after an\ninsert on another table) but there will be a number of threads_whatever\ntables and I thought it might be neater just using one function.\nIf it can be done - can I use a table as a variable in a sql function ?\nAlso bsides the docs and the pdf book - is there any other info around\non plpgsql ?\n\n/* function to update threads table */\nCREATE FUNCTION add_post_php() RETURNS OPAQUE AS '\n\nBEGIN\n\nIF (SELECT count(*) FROM threads_php WHERE thread_id = NEW.thread_id) <\n1 THEN\n\nINSERT INTO threads_php\n(thread_id,date,user_name,last_post,last_user_name,posts,thread)\nVALUES\n(NEW.thread_id,NEW.date,NEW.user_name,NEW.date,NEW.user_name,count_posts_thread_php(NEW.thread_id),NEW.thread);\n\nELSE\nUPDATE threads_php set last_post = New.date, last_user_name =\nNEW.user_name, posts = count_posts_thread_php(NEW.thread_id) WHERE\nthread_id = NEW.thread_id;\nEND IF;\nRETURN NEW;\nEND;\n' LANGUAGE 'plpgsql';\n",
"msg_date": "Mon, 11 Dec 2000 01:48:35 +1100",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "plpgsql question"
}
]
|
[
{
"msg_contents": "\nHi all\nSmall intrusion in the Threads discussion\n\n1) don' t forget other OS .. the linux is not the only one (for now :-)\n)\n For example check the performance under Solaris\n http://www2.linuxjournal.com/lj-issues/issue70/3184.html\n\n2) unfortunatly some platforms that had no threads (but is a few) or an\nincompatible threads library (the major .. )\n\nOn the informix doc i read :\nTo most effectively utilize system resources, a configurable pool of\n database server processes called virtual\nprocessors schedule and manage\n user requests across multiple CPUs and disks.\nUser requests are\n represented by lightweight mechanisms called\nthreads. Each thread\n consists of a single sequential flow of control\n\nthat represents part of a\n discrete task within a database server process.\n\nFor example, a\n processing-intensive request such as a\nmulti-table join can be divided into\n multiple database threads (subtasks) and spread\n\nacross all the available\n virtual processors in the system.\n\nCould be intresting:\n One process for connection client ,but composed with many threads\n (similar to apache 2.0)\n\n - No crash all the system\n - Better performance in sql execution (i hope)\n\nI think will be the way for the future (Postgresql 8.0 ? ) , i know is\n\nnot simple\n\n\n\n",
"msg_date": "Fri, 08 Dec 2000 17:49:03 +0100",
"msg_from": "Fabrizio Manfredi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Threads"
}
]
|
[
{
"msg_contents": "I have just returned from a seven-day trip to Japan. I spoke for seven\nhours to three separate groups, totalling 200 people. I spoke to a\nLinux Conference, a PostgreSQL user's group, and to SRA, a PostgreSQL\nsupport company. You can get more information on my home page under\n\"Writings\". Here are some pictures from Japan:\n\n\thttp://www.sra.co.jp/people/t-ishii/Bruce/\n http://ebony.rieb.kobe-u.ac.jp/~yasuda/BRUCE/\n\nPostgreSQL is more popular in Japan than in the United States. Japan\nhas user's groups in many cities and has had a commercial support\ncompany (SRA) for two years. MySQL is not popular in Japan.\n\nIt was great to meet so many nice people.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 8 Dec 2000 12:07:56 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trip to Japan"
},
{
"msg_contents": "Bruce,\n\nwhat was the camera ?\n\n\tRegards,\n\t\tOleg\nOn Fri, 8 Dec 2000, Bruce Momjian wrote:\n\n> Date: Fri, 8 Dec 2000 12:07:56 -0500 (EST)\n> From: Bruce Momjian <[email protected]>\n> To: PostgreSQL-general <[email protected]>\n> Cc: PostgreSQL-development <[email protected]>\n> Subject: [HACKERS] Trip to Japan\n> \n> I have just returned from a seven-day trip to Japan. I spoke for seven\n> hours to three separate groups, totalling 200 people. I spoke to a\n> Linux Conference, a PostgreSQL user's group, and to SRA, a PostgreSQL\n> support company. You can get more information on my home page under\n> \"Writings\". Here are some pictures from Japan:\n> \n> \thttp://www.sra.co.jp/people/t-ishii/Bruce/\n> http://ebony.rieb.kobe-u.ac.jp/~yasuda/BRUCE/\n> \n> PostgreSQL is more popular in Japan than in the United States. Japan\n> has user's groups in many cities and has had a commercial support\n> company (SRA) for two years. MySQL is not popular in Japan.\n> \n> It was great to meet so many nice people.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 8 Dec 2000 20:47:52 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Trip to Japan"
},
{
"msg_contents": "> Bruce,\n> \n> what was the camera ?\n\nNo idea. It was not mine. I brought a video camera, and made 30\nminutes of video for my family and company. I don't know how to make an\nMP3 of that.\n\nMy wife wants a digital camera now, so it looks like I will have one\nsoon. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 8 Dec 2000 12:50:17 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Trip to Japan"
},
{
"msg_contents": "> > Bruce,\n> > \n> > what was the camera ?\n> \n> No idea. It was not mine. I brought a video camera, and made 30\n> minutes of video for my family and company. I don't know how to make an\n> MP3 of that.\n> \n> My wife wants a digital camera now, so it looks like I will have one\n> soon. :-)\n\nMine is a Nikon \"CoolPix990\". Originally those pictures had 2048x1536\npixcels that is too much for web pages (~700KB per picture). I\nshrinked them to 1024x768 using ImageMagick.\n\nMore pictures will be uploaded to the web pages...\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 09 Dec 2000 10:11:39 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Trip to Japan"
},
{
"msg_contents": "Does the regular expression parser have anything equivalent to Perl's \\w\nword boundary metacharacter?\n\nI want to select tuples where a text field contains a certail whole word.\nUsing fieldname ~* 'searchword' wont work because it picks up the\nsearchword emdedded in other words. Using ~*' searchword ' wont find it at\nthe beginning or end of the string.\nSo far we have:\n field ~*' searchword ' OR field ~*'^searchword ' OR field ~*' searchword$' \nbut I would like something more elegant.\n\nSteve\n\n-- \nthorNET - Internet Consultancy, Services & Training\nPhone: 01454 854413\nFax: 01454 854412\nhttp://www.thornet.co.uk \n",
"msg_date": "Mon, 11 Dec 2000 11:09:56 +0000",
"msg_from": "Steve Heaven <[email protected]>",
"msg_from_op": false,
"msg_subject": "Regular expression question"
},
{
"msg_contents": "Steve Heaven <[email protected]> writes:\n> Does the regular expression parser have anything equivalent to Perl's \\w\n> word boundary metacharacter?\n\nsrc/backend/regex/re_format.7 contains the whole scoop (for some reason\nthis page doesn't seem to get installed with the rest of the\ndocumentation). In particular:\n\n\tThere are two special cases of bracket expressions:\n\tthe bracket expressions `[[:<:]]' and `[[:>:]]' match the null\n\tstring at the beginning and end of a word respectively.\n\tA word is defined as a sequence of word characters\n\twhich is neither preceded nor followed by word characters.\n\tA word character is an alnum character (as defined by ctype(3))\n\tor an underscore. This is an extension, compatible with but not\n\tspecified by POSIX 1003.2, and should be used with caution in\n\tsoftware intended to be portable to other systems.\n\t\n\t...\n\t\n\tBUGS\n\t\n\tThe syntax for word boundaries is incredibly ugly.\n\nPOSIX bracket expressions are pretty ugly anyway, and this is no worse\nthan the rest. However, if you prefer Perl or Tcl, I'd recommend that\nyou just *use* Perl or Tcl ;-). plperl and pltcl make great\nimplementation languages for text-mashing functions...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2000 10:30:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regular expression question "
}
]
|
[
{
"msg_contents": "Here is a late report on the OSDN database summit.\n\nIt was great to meet so many PostgreSQL users, and to meet the major\ndevelopers of MySQL, Interbase, and Sleepycat. We clearly have many of\nthe same hopes and concerns for open-source databases.\n\nPostgreSQL had half of all attendees. Seems we must have done a good\njob publicising it. There is hope to do this again next year.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 8 Dec 2000 12:11:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "OSDN database summit"
}
]
|
[
{
"msg_contents": "> > Try to compare 7.0.3 & 7.1beta in multi-user environment.\n> \n> As I understand it you claim it to be faster in multi-user \n> environment ?\n> \n> Could you give some brief technical background why is it so \n> and why must it make single-user slower ?\n\nBecause of commit in 7.1 does fsync, with ot without -F\n(we can discuss and change this), but in multi-user env\na number of commits can be made with single fsync.\nSeems I've described this before?\n\nVadim\n",
"msg_date": "Fri, 8 Dec 2000 10:24:32 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: pre-beta is slow"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> Because of commit in 7.1 does fsync, with ot without -F\n> (we can discuss and change this), but in multi-user env\n> a number of commits can be made with single fsync.\n\nI was planning to ask why you disabled the -F switch. Seems to me that\npeople who trusted their OS+hardware before would still want to do so\nin 7.1, and so it still makes sense to be able to suppress the fsync\ncalls.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 14:05:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pre-beta is slow "
}
]
|
[
{
"msg_contents": "> > > Try to compare 7.0.3 & 7.1beta in multi-user environment.\n> > \n> > As I understand it you claim it to be faster in multi-user \n> > environment ?\n> > \n> > Could you give some brief technical background why is it so \n> > and why must it make single-user slower ?\n> \n> Because of commit in 7.1 does fsync, with ot without -F\n> (we can discuss and change this), but in multi-user env\n> a number of commits can be made with single fsync.\n> Seems I've described this before?\n\nOps, I forgot to answer question \"why in single-user env 7.1 is\nslower than 7.0.3?\". I assumed that 7.1 was compared with 7.0.3\n*with -F*, which probably is not correct, I don't know.\nWell, the next test shows that 7.1 is faster in single-user env\nthan 7.0.3 *without -F*:\n\ntable (i int, t text); 1000 INSERTs (in separate transactions),\nsizeof(t) 1 .. 256:\n\n7.0.3: 42 sec -> 24 tps\n7.1 : 24 sec -> 42 tps\n\nVadim\n",
"msg_date": "Fri, 8 Dec 2000 11:11:02 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: pre-beta is slow"
}
]
|
[
{
"msg_contents": "> > Because of commit in 7.1 does fsync, with ot without -F\n> > (we can discuss and change this), but in multi-user env\n> > a number of commits can be made with single fsync.\n> \n> I was planning to ask why you disabled the -F switch. Seems \n> to me that people who trusted their OS+hardware before would\n> still want to do so in 7.1, and so it still makes sense to be\n> able to suppress the fsync calls.\n\nI just didn't care about -F functionality, sorry.\nI agreed that we should resurrect it.\n\nVadim\n",
"msg_date": "Fri, 8 Dec 2000 11:15:05 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: pre-beta is slow "
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> I just didn't care about -F functionality, sorry.\n> I agreed that we should resurrect it.\n\nOK. Do you want to work on that, or shall I?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 15:39:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pre-beta is slow "
}
]
|
[
{
"msg_contents": "> > I just didn't care about -F functionality, sorry.\n> > I agreed that we should resurrect it.\n> \n> OK. Do you want to work on that, or shall I?\n\nIn near future I'll be busy doing CRC + \"physical log\"\nthings...\n\nVadim\n",
"msg_date": "Fri, 8 Dec 2000 12:33:34 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: pre-beta is slow "
}
]
|
[
{
"msg_contents": "It was just pointed out on pggeneral that hash indexes on macaddr\ncolumns don't work. Looking into it, I find that someone (me :-()\nmade a booboo: pg_amproc claims that hashvarlena is the appropriate\nhash function for macaddr --- but macaddr isn't a varlena type,\nit's a fixed-length pass-by-reference type.\n\nWe could fix this either by adding a new hash function to support\nmacaddr, or by removing the pg_amXXX entries that claim macaddr is\nhashable. Either change will not take effect without an initdb,\nhowever, and I'm loath to force one now that we've started beta.\n\nWhat I'm inclined to do is add the hash function but not force an\ninitdb (ie, not increment catversion). That would mean that people\nrunning 7.1beta1 would still have the bug in 7.1 final if they don't\nchoose to do an initdb when they update. But hashing macaddr isn't\nvery common (else we'd have noticed sooner!) so this seems OK, and\nbetter than forcing an initdb on our beta testers.\n\nComments, objections?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 16:07:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hash index on macaddr -> crash"
},
{
"msg_contents": "> We could fix this either by adding a new hash function to support\n> macaddr, or by removing the pg_amXXX entries that claim macaddr is\n> hashable. Either change will not take effect without an initdb,\n> however, and I'm loath to force one now that we've started beta.\n\nHow about creating an SQL statement that will make the change and\nputting a blurb about it it in the README, INSTALL and/or FAQ?\n\nThis wouldn't require an initdb and would let people have the fix.\n\nFor things like this that update exising fields (vs adding/deleting\nfields hard-wired for use in the backend), it should work, no?\n\nDarren\n\n",
"msg_date": "Fri, 8 Dec 2000 16:56:19 -0500",
"msg_from": "\"Darren King\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Hash index on macaddr -> crash"
},
{
"msg_contents": "\"Darren King\" <[email protected]> writes:\n> How about creating an SQL statement that will make the change and\n> putting a blurb about it it in the README, INSTALL and/or FAQ?\n\nIn theory we could do that, but I doubt it's worth the trouble.\nHash on macaddr has never worked (until my upcoming commit anyway ;-))\nand the lack of complaints seems to be adequate evidence that no one\nin the beta-test community has any use for it... so who's going to\ngo to the trouble of manually updating each of their databases?\n\nMy bet is that there'll be an initdb forced for some other reason\n(like adding CRCs, or some much-more-serious bug than this one)\nbefore 7.1 final anyway...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 18:52:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash index on macaddr -> crash "
}
]
|
[
{
"msg_contents": "> We could fix this either by adding a new hash function to support\n> macaddr, or by removing the pg_amXXX entries that claim macaddr is\n> hashable. Either change will not take effect without an initdb,\n> however, and I'm loath to force one now that we've started beta.\n\nIf we're going to add CRC to log then we need in beta anyway...\nAre we going?\n\nVadim\n",
"msg_date": "Fri, 8 Dec 2000 13:38:48 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Hash index on macaddr -> crash"
}
]
|
[
{
"msg_contents": "> > We could fix this either by adding a new hash function to support\n> > macaddr, or by removing the pg_amXXX entries that claim macaddr is\n> > hashable. Either change will not take effect without an initdb,\n> > however, and I'm loath to force one now that we've started beta.\n> \n> If we're going to add CRC to log then we need \n> in beta anyway...\n^^^^^^^^^\nOps - we need in INITDB...\n\n> Are we going?\n\nVadim\n",
"msg_date": "Fri, 8 Dec 2000 13:53:09 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Hash index on macaddr -> crash"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>>>> hashable. Either change will not take effect without an initdb,\n>>>> however, and I'm loath to force one now that we've started beta.\n>> \n>> If we're going to add CRC to log then we need \n>> in beta anyway...\n> Ops - we need in INITDB...\n\nNot to mention adding a CRC to page headers, which was the other part of\nthe thread.\n\n>> Are we going?\n\nI dunno. For now, I'll put in the hash function but not force an\ninitdb. If we do the CRC thing then we'll have the initdb at that\npoint.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 17:17:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash index on macaddr -> crash "
}
]
|
[
{
"msg_contents": "\nHello,\n\nWhy is this happening ?\n\nctonet=# show datestyle;\nNOTICE: DateStyle is ISO with European conventions\nSHOW VARIABLE\n\nctonet=# select creation_date from users limit 1;\n creation_date \n------------------------\n 2000-12-07 04:40:23+01\n ^^^^^^^^^^\n\nDatestyle has been set either with -e and with \"set datestyle\" with no\nchange.\n\nContext: Postgresql 7.0.3 on RedHat Linux 7.0 - Kernel 2.4.0-test10 -\nGlibc 2.1.94 and 2.2\n\nThanks!\nBye!\n\n-- \n Daniele Orlandi\n",
"msg_date": "Fri, 08 Dec 2000 22:05:30 +0000",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": true,
"msg_subject": "European Datestyle"
},
{
"msg_contents": "Daniele Orlandi <[email protected]> writes:\n\n> Hello,\n> \n> Why is this happening ?\n> \n> ctonet=# show datestyle;\n> NOTICE: DateStyle is ISO with European conventions\n> SHOW VARIABLE\n> \n> ctonet=# select creation_date from users limit 1;\n> creation_date \n> ------------------------\n> 2000-12-07 04:40:23+01\n> ^^^^^^^^^^\n\nThat is the ISO-style, isn't it?\n\nThere are two ways of making dates make sense, none of them American\n(but hey, they're still using Fahrenheit, feet, lb, fl.oz. acres and\nother nonsensical units... )\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "08 Dec 2000 17:37:51 -0500",
"msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=d8d?=)",
"msg_from_op": false,
"msg_subject": "Re: European Datestyle"
},
{
"msg_contents": "Trond Eivind Glomsr�d wrote:\n> \n> > 2000-12-07 04:40:23+01\n> > ^^^^^^^^^^\n> \n> That is the ISO-style, isn't it?\n\nYes, it is; but according to the documentation (and how it used to be on\nother machines running PG 6.x) it should be ordered in european format,\nI don't know\nif I'm missing something obviuous or what...\n\n> There are two ways of making dates make sense, none of them American\n> (but hey, they're still using Fahrenheit, feet, lb, fl.oz. acres and\n> other nonsensical units... )\n\nI do not mean to cricticize british units, after all, I would have\npreferred base16 units instead of base10 :)\n\nBye!\n\n-- \n Daniele\n\n-------------------------------------------------------------------------------\n Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n-------------------------------------------------------------------------------\n",
"msg_date": "Sat, 09 Dec 2000 02:31:27 +0000",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: European Datestyle"
},
{
"msg_contents": "> > That is the ISO-style, isn't it?\n> Yes, it is; but according to the documentation (and how it used to be on\n> other machines running PG 6.x) it should be ordered in european format,\n\nThe documentation should be clear that this is the correct output,\ngiving the current \"datestyle\" settings. Please let me know what part\nwas confusing and I can fix it for the next release.\n\nThe default date style in 7.0 was changed from \"Postgres\" to \"ISO\". The\neuro vs US setting determines how *input* dates are interpreted, since\nthey are not restricted to being only the format of the default output\nstyle.\n\nUse \"set datestyle = 'Postgres,European'\" to change to what you expect.\nYou can set an environment variable or change the defaults when building\nthe backend to get this always.\n\nThere is an appendix in the docs discussing the parsing strategy, though\nit is all detail.\n\n - Thomas\n",
"msg_date": "Sat, 09 Dec 2000 07:13:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: European Datestyle"
}
]
|
[
{
"msg_contents": "I've run tests (with 50 .. 250 simult users) for some PG project\nof my company. 7.1 was 3 times faster than 7.0.3 (fsync) but near\n3 times slower than 7.0.3 (nofsync). It was not the best day in\nmy life - WAL looked like big bottleneck -:(\n\nBut finally I've realized that this test makes ~3 FK insertions\n... and FK insert means SELECT FOR UPDATE ... and this could\nreduce # of commits per fsync.\n\nSo, I've run simple test (below) to check this. Seems that 7.1\nis faster than 7.0.3 (nofsync), and that SELECT FOR UPDATE in RI\ntriggers is quite bad for performance.\n\nPlease take this into account when comparing 7.1 with 7.0.3.\nAlso, we should add new TODO item: implement dirty reads\nand use them in RI triggers.\n\nVadim\n\n==================================================================\n\nTables:\nPK (p int primary key), p in 1 .. 1000.\nFK (f int, foreign key(f) references pk(p)).\n\nFirst column below - # of users; second - # of FK inserts\nper user; next - what values were used in each insert:\neither unique (ie there was no users inserting same value -\nno waiters on SELECT FOR UPDATE on PK) or some random value\nfrom range.\n\n7.1\n\n 50 1000 unique: 250 tps\n100 1000 unique: 243 tps\n 50 1000 rand(1 .. 10): 178 tps\n 50 1000 rand(1 .. 5): 108 tps\n\n7.0.3 (nofsync)\n\n 50 1000 unique: 157 tps\n 50 1000 rand(1 .. 10): 154 tps\n 50 1000 rand(1 .. 5): 154 tps\n\n",
"msg_date": "Fri, 8 Dec 2000 15:02:53 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.0.3(nofsync) vs 7.1"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> So, I've run simple test (below) to check this. Seems that 7.1\n> is faster than 7.0.3 (nofsync), and that SELECT FOR UPDATE in RI\n> triggers is quite bad for performance.\n> Also, we should add new TODO item: implement dirty reads\n> and use them in RI triggers.\n\nThat would fix RI triggers, I guess, but what about plain SELECT FOR\nUPDATE being used by applications?\n\nWhy exactly is SELECT FOR UPDATE such a performance problem for 7.1,\nanyway? I wouldn't have thought it'd be a big deal...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 19:14:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.3(nofsync) vs 7.1 "
},
{
"msg_contents": "> I've run tests (with 50 .. 250 simult users) for some PG project\n> of my company. 7.1 was 3 times faster than 7.0.3 (fsync) but near\n> 3 times slower than 7.0.3 (nofsync). It was not the best day in\n> my life - WAL looked like big bottleneck -:(\n> \n> But finally I've realized that this test makes ~3 FK insertions\n> ... and FK insert means SELECT FOR UPDATE ... and this could\n> reduce # of commits per fsync.\n> \n> So, I've run simple test (below) to check this. Seems that 7.1\n> is faster than 7.0.3 (nofsync), and that SELECT FOR UPDATE in RI\n> triggers is quite bad for performance.\n> \n> Please take this into account when comparing 7.1 with 7.0.3.\n> Also, we should add new TODO item: implement dirty reads\n> and use them in RI triggers.\n> \n> Vadim\n\nI did some testings using contrib/pgbench (100k tuples, 32 concurrent\nusers) with 7.1 and 7.0.3. It seems 7.1 is 5 times faster than 7.0.3\nwith fsync, but 1.5 times slower than 7.0.3 without fsync.\n\nSo I modified access/transam/xlog.c to disable fsync() call at\nall. Now I get nearly equal performance as 7.0.3 without fsync. It\nseems the bottle neck is logging with fsync(). It might be interesting\nmoving data/pg_xlog to a separate disk drive and see how it performs\nbetter.\n\nBTW pgbench does PK insertions and updates, but does no FK things.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 09 Dec 2000 16:27:11 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.3(nofsync) vs 7.1"
},
{
"msg_contents": "At 16:27 9/12/00 +0900, Tatsuo Ishii wrote:\n>It might be interesting\n>moving data/pg_xlog to a separate disk drive and see how it performs\n>better.\n\nGood idea. One of the fundamental rules with Dec/RDB is to put the\nXLOG-equivant on a different drive from *any* database-related file.\nAnother setting you might consider is 'fsync-every-n-commits'.\n\nThe different drive provides enhanced recoverability as well as enhanced\nperformance.\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 09 Dec 2000 18:53:18 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.3(nofsync) vs 7.1"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> I've run tests (with 50 .. 250 simult users) for some PG project\n> of my company. 7.1 was 3 times faster than 7.0.3 (fsync) but near\n> 3 times slower than 7.0.3 (nofsync). It was not the best day in\n> my life - WAL looked like big bottleneck -:(\n> \n> But finally I've realized that this test makes ~3 FK insertions\n> ... and FK insert means SELECT FOR UPDATE ... and this could\n> reduce # of commits per fsync.\n> \n> So, I've run simple test (below) to check this. Seems that 7.1\n> is faster than 7.0.3 (nofsync), and that SELECT FOR UPDATE in RI\n> triggers is quite bad for performance.\n> \n> Please take this into account when comparing 7.1 with 7.0.3.\n> Also, we should add new TODO item: implement dirty reads\n> and use them in RI triggers.\n\nAdded:\n\n* Implement dirty reads and use them in RI triggers\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Dec 2000 20:44:24 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.3(nofsync) vs 7.1"
}
]
|
[
{
"msg_contents": "\nIn researching a problem I have uncovered the following bug in index\nscans when Locale support is enabled.\n\nGiven a 7.0.3 postgres installation built with Locale support enabled\nand a default US RedHat 7.0 Linux installation (meaning that the LANG\nenvironment variable is set to en_US) to enable the US english locale\nand\n\nGiven the following table and index structure with the following data:\n\ncreate table test (test_col text);\ncreate index test_index on test (test_col);\ninsert into test values ('abc.xyz');\ninsert into test values ('abcxyz');\ninsert into test values ('abc/xyz');\n\nIf you run the query:\n\nselect * from test where test_col >= 'abc.';\n\nOne would normally expect to only get one record returned, but instead\nall records are returned.\n\nThe reason for this is that in the en_US locale all non-alphanumeric\ncharacters are ignored when doing string comparisons. So the data above\ngets treated as:\nabc.xyz = abcxyz = abc/xyz (as the non-alphanumeric characters of '.'\nand '/' are ignored). This implys that the above query will then return\nall rows as the constant 'abc.' is the same as 'abc' for comparison\npurposes and all rows are >= 'abc'.\n\nNote that if you use a different locale for example en_UK, you will get\ndifferent results as this locale does not ignore the . and / in the\ncomparison.\n\nNow the real problem comes in when either the like or regex operators\nare used in a sql statement. Consider the following sql:\n\nselect * from text where test_col like 'abc/%';\n\nThis query should return one row, the row for 'abc/xyz'. However if the\nabove query is executed via an index scan it will return the wrong\nnumber of rows (0 in this case).\n\nWhy is this? Well the query plan created for the above like expression\nlooks like the following:\nselect * from text where test_col >= 'abc/' and test_col < 'abc0';\n\nIn order to use the index the like has been changed into a '>=' and a\n'<' for the constant prefix ('abc/') and the constant prefix with the\nlast character incremented by one ('/abc0') (0 is the next character\nafter / in ASCII).\n\nGiven what was shown above about how the en_US locale does comparisons\nwe know that the non-alphanumeric characters are ignored. So the query\nessentially becomes:\nselect * from text where test_col >= 'abc' and test_col < 'abc0';\nand the data it is comparing against is 'abcxyz' in all cases (once the\n.'s an /'s are removed). Therefore since 'abcxyz' > 'abc0', no rows are\nreturned.\n\nOver the last couple of months that I have been on the postgres mail\nlists there have been a few people who reported that queries of the form\n\"like '/aaa/bbb/%' don't work. From the above information I have\ndetermined that such queries don't work if:\na) database is built with Locale support enabled (--enable-locale)\nb) the database is running with locale en_US\nc) the column the like is being performed on is indexed\nd) the query execution plan uses the above index\n\n(Discovering the exact set of circumstances for how to reproduce this\nhas driven me crazy for a while now).\n\nThe current implementation for converting the like into an index scan\ndoesn't work with Locale support enabled and the en_US locale as shown\nabove. \n\nthanks,\n--Barry\n\nPS. 
my test case:\n\ndrop table test;\ncreate table test (test_col text);\ncreate index test_index on test (test_col);\ninsert into test values ('abc.xyz');\ninsert into test values ('abcxyz');\ninsert into test values ('abc/xyz');\nexplain select * from test where test_col like 'abc/%';\nselect * from test where test_col like 'abc/%';\n\n\nwhen run against postgres 7.0.3 with locale support enabled (used the\nstandard RPMs on postgresql.org for RedHat) with LANG=en_US:\n\nbarry=# drop table test;\nDROP\nbarry=# create table test (test_col text);\nCREATE\nbarry=# create index test_index on test (test_col);\nCREATE\nbarry=# insert into test values ('abc.xyz');\nINSERT 227611 1\nbarry=# insert into test values ('abcxyz');\nINSERT 227612 1\nbarry=# insert into test values ('abc/xyz');\nINSERT 227613 1\nbarry=# explain select * from test where test_col like 'abc/%';\nNOTICE: QUERY PLAN:\n\nIndex Scan using test_index on test (cost=0.00..8.14 rows=10 width=12)\n\nEXPLAIN\nbarry=# select * from test where test_col like 'abc/%';\n test_col \n----------\n(0 rows)\n\nbarry=# \n\n\n\nwhen run against postgres 7.0.3 with locale support enabled (used the\nstandard RPMs on postgresql.org) with LANG=en_UK:\n\nbarry=# drop table test;\nDROP\nbarry=# create table test (test_col text);\nCREATE\nbarry=# create index test_index on test (test_col);\nCREATE\nbarry=# insert into test values ('abc.xyz');\nINSERT 227628 1\nbarry=# insert into test values ('abcxyz');\nINSERT 227629 1\nbarry=# insert into test values ('abc/xyz');\nINSERT 227630 1\nbarry=# explain select * from test where test_col like 'abc/%';\nNOTICE: QUERY PLAN:\n\nIndex Scan using test_index on test (cost=0.00..8.14 rows=10 width=12)\n\nEXPLAIN\nbarry=# select * from test where test_col like 'abc/%';\n test_col \n----------\n abc/xyz\n(1 row)\n\nbarry=# \n\nNote the second query (under en_UK) returns the correct rows, but the\nfirst query (under en_US) returned the wrong number of rows.\n\n\nPPS. Another way to work around the problem is to turn off locale\nspecific collation using the environment variable LC_COLLATE and setting\nit to the value C.\n",
"msg_date": "Fri, 08 Dec 2000 16:55:43 -0800",
"msg_from": "Barry Lind <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug in index scans with Locale support enabled"
},
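The collation effect Barry describes can be seen without any index at all, since the LIKE optimization stands or falls with the range comparison it generates. A minimal check (the result depends on the locale the backend was started under):

-- The rewritten LIKE needs 'abc/xyz' to sort below 'abc0'.
-- Under LC_COLLATE=C this returns 't'; under en_US it returns 'f',
-- because '/' is skipped and 'abcxyz' sorts above 'abc0'.
SELECT 'abc/xyz' < 'abc0' AS range_check_holds;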
{
"msg_contents": "Barry Lind <[email protected]> writes:\n> Now the real problem comes in when either the like or regex operators\n> are used in a sql statement.\n\nRight. As of 7.1beta1 we are dealing with this by suppressing LIKE/regex\nindex optimization in all locales other than \"C\". That's a pretty crude\nanswer but it seems the only reliable one :-(.\n\n> Over the last couple of months that I have been on the postgres mail\n> lists there have been a few people who reported that queries of the form\n> \"like '/aaa/bbb/%' don't work. From the above information I have\n> determined that such queries don't work if:\n> a) database is built with Locale support enabled (--enable-locale)\n> b) the database is running with locale en_US\n> c) the column the like is being performed on is indexed\n> d) the query execution plan uses the above index\n\nen_US is not the only dangerous locale, unfortunately.\n\nI suspect that there are some non-C locales in which we could still do\nthe optimization safely. The trick is to know which ones have collation\nrules that are affected by character combinations, multi-pass ordering\nrules, etc. Do you have any info on that?\n\nBTW, thanks for the very clear explanation --- we can point people at\nthis next time the question comes up, which it does regularly...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 20:07:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in index scans with Locale support enabled "
},
{
"msg_contents": "Hi,\n\n...\n> In researching a problem I have uncovered the following bug in index\n> scans when Locale support is enabled.\n...\n> environment variable is set to en_US) to enable the US english locale\n...\n> create table test (test_col text);\n> create index test_index on test (test_col);\n> insert into test values ('abc.xyz');\n> insert into test values ('abcxyz');\n> insert into test values ('abc/xyz');\n>\n> If you run the query:\n>\n> select * from test where test_col >= 'abc.';\n>\n> One would normally expect to only get one record returned, but instead\n> all records are returned.\n\nI would expect all to be returned (maybe not \"abc/...\"). Because noice\nshould be sorted first. ie. '.' is less than '0' and 'x' (and maybe '/').\n\n...\n> The reason for this is that in the en_US locale all non-alphanumeric\n> characters are ignored when doing string comparisons. So the data above\n\n...or... *I think* they are sorted first. If that is correct in your locale,\nI do not know.\n\n...\n> Note that if you use a different locale for example en_UK, you will get\n\nThats odd, I would expect en_UK and en_US to sort the same way (same\ncharset).\n\n...\n> select * from text where test_col like 'abc/%';\n>\n> This query should return one row, the row for 'abc/xyz'. However if the\n> above query is executed via an index scan it will return the wrong\n> number of rows (0 in this case).\n\nehh index scan? test_col >= 'abc/' or test_col >= 'abc/%' ????\nThe first one should return all rows but the one with '.', while the second\nshould return 0 rows. If the first one returns zero rows, then its a bug.\n\nIf you meant what the optimizer does with LIKE, well *I think* such\noptimazion is asking for trouble (compare strings with anything else than =\nand != are, well hard to predict).\n\n...\n> \"like '/aaa/bbb/%' don't work. From the above information I have\n> determined that such queries don't work if:\n> a) database is built with Locale support enabled (--enable-locale)\n\nActually they should not work without '--enable-locale', or then Im wrong.\n\n> b) the database is running with locale en_US\n> c) the column the like is being performed on is indexed\n\nDangerous LIKE optimation.\n\n...\n> The current implementation for converting the like into an index scan\n> doesn't work with Locale support enabled and the en_US locale as shown\n\nHmm. If memory serves its dropped in the later builds (no like optimation).\n\n// Jarmo\n\n",
"msg_date": "Sat, 9 Dec 2000 11:28:35 +0100",
"msg_from": "\"Jarmo Paavilainen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "SV: Bug in index scans with Locale support enabled"
},
{
"msg_contents": "Barry Lind writes:\n\n> The reason for this is that in the en_US locale all non-alphanumeric\n> characters are ignored when doing string comparisons. So the data above\n> gets treated as:\n> abc.xyz = abcxyz = abc/xyz (as the non-alphanumeric characters of '.'\n> and '/' are ignored). This implys that the above query will then return\n> all rows as the constant 'abc.' is the same as 'abc' for comparison\n> purposes and all rows are >= 'abc'.\n>\n> Note that if you use a different locale for example en_UK, you will get\n> different results as this locale does not ignore the . and / in the\n> comparison.\n\nThe reason for that is that en_UK is not a valid locale name and will get\ntreated as the default \"C\" locale. If you use en_GB you will get the same\nbehaviour as for en_US.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 10 Dec 2000 23:02:35 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in index scans with Locale support enabled"
}
]
|
[
{
"msg_contents": "> > So, I've run simple test (below) to check this. Seems that 7.1\n> > is faster than 7.0.3 (nofsync), and that SELECT FOR UPDATE in RI\n> > triggers is quite bad for performance.\n> > Also, we should add new TODO item: implement dirty reads\n> > and use them in RI triggers.\n> \n> That would fix RI triggers, I guess, but what about plain SELECT FOR\n> UPDATE being used by applications?\n\nWhat about it? Application normally uses exclusive row locks only when\nit's really required by application logic.\n\nExclusive PK locks are not required for FK inserts by RI logic,\nwe just have no other means to ensure PK existence currently.\n\nKeeping in mind that RI is used near in every application I would\nlike to see this fixed. And ppl already complained about it.\n\n> Why exactly is SELECT FOR UPDATE such a performance problem for 7.1,\n> anyway? I wouldn't have thought it'd be a big deal...\n\nI have only one explanation: it reduces number of transactions ready\nto commit (because of the same FK writers will wait till first one\ncommitted - ie log fsynced) and WAL commit performance greatly depends\non how many commits were done by single log fsync.\n7.0.3+nofsync commit performance doesn't depend on this factor.\n\nVadim\n",
"msg_date": "Fri, 8 Dec 2000 17:12:26 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: 7.0.3(nofsync) vs 7.1 "
},
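The serialization Vadim refers to is easy to reproduce by hand; a minimal sketch with made-up table names (any two sessions inserting references to the same PK row will do):

CREATE TABLE pk (id int4 PRIMARY KEY);
CREATE TABLE fk (pkid int4 REFERENCES pk (id));
INSERT INTO pk VALUES (1);

-- Session A:
BEGIN;
INSERT INTO fk VALUES (1);  -- RI trigger locks the pk row via SELECT FOR UPDATE

-- Session B blocks here until A commits (i.e. until A's log fsync):
BEGIN;
INSERT INTO fk VALUES (1);

So concurrent FK writers against a popular PK commit one at a time, and fewer commits can share a single log fsync.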
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> I have only one explanation: it reduces number of transactions ready\n> to commit (because of the same FK writers will wait till first one\n> committed - ie log fsynced) and WAL commit performance greatly depends\n> on how many commits were done by single log fsync.\n> 7.0.3+nofsync commit performance doesn't depend on this factor.\n\nSure, but that's not exactly a fair comparison is it? 7.0.3+nofsync\nshould be compared against 7.1+nofsync. I put the pg_fsync routine\nback in a little while ago, so nofsync should work again.\n\nIt occurs to me though that disabling fsync should probably also reduce\nthe WAL commit delay to zero, no? What is the default commit delay now?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 20:32:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.3(nofsync) vs 7.1 "
},
{
"msg_contents": "Mikheev, Vadim wrote:\n> > > So, I've run simple test (below) to check this. Seems that 7.1\n> > > is faster than 7.0.3 (nofsync), and that SELECT FOR UPDATE in RI\n> > > triggers is quite bad for performance.\n> > > Also, we should add new TODO item: implement dirty reads\n> > > and use them in RI triggers.\n> >\n> > That would fix RI triggers, I guess, but what about plain SELECT FOR\n> > UPDATE being used by applications?\n>\n> What about it? Application normally uses exclusive row locks only when\n> it's really required by application logic.\n>\n> Exclusive PK locks are not required for FK inserts by RI logic,\n> we just have no other means to ensure PK existence currently.\n>\n> Keeping in mind that RI is used near in every application I would\n> like to see this fixed. And ppl already complained about it.\n\n I still don't see how dirty reads can solve the RI problems.\n If Xact A deletes a PK while Xact B inserts an FK, one of\n them will either see the new reference or the PK gone. But\n from a transactional POV it depends on if the opposite Xact\n finally commits or not to tell if that really happened.\n\n With dirty read, you only get \"maybe my PK is gone\" or \"maybe\n there is a reference\".\n\n\nJan\n\n>\n> > Why exactly is SELECT FOR UPDATE such a performance problem for 7.1,\n> > anyway? I wouldn't have thought it'd be a big deal...\n>\n> I have only one explanation: it reduces number of transactions ready\n> to commit (because of the same FK writers will wait till first one\n> committed - ie log fsynced) and WAL commit performance greatly depends\n> on how many commits were done by single log fsync.\n> 7.0.3+nofsync commit performance doesn't depend on this factor.\n>\n> Vadim\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Wed, 13 Dec 2000 08:39:57 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.3(nofsync) vs 7.1"
}
]
|
[
{
"msg_contents": "> > I have only one explanation: it reduces number of transactions ready\n> > to commit (because of the same FK writers will wait till first one\n> > committed - ie log fsynced) and WAL commit performance \n> > greatly depends on how many commits were done by single log fsync.\n> > 7.0.3+nofsync commit performance doesn't depend on this factor.\n> \n> Sure, but that's not exactly a fair comparison is it? 7.0.3+nofsync\n> should be compared against 7.1+nofsync. I put the pg_fsync routine\n> back in a little while ago, so nofsync should work again.\n\nBut I tested old 7.1 (fsync) version -:)\n\nAnd we always will have to enable fsync when comparing our\nperformance with other DBes.\n\n> It occurs to me though that disabling fsync should probably \n> also reduce the WAL commit delay to zero, no? What is the default\n\nI think so.\n\n> commit delay now?\n\nAs before 5 * 10^(-6) sec - pretty the same as sleep(0) -:)\nSeems CommitDelay is not very useful parameter now - XLogFlush\nlogic and fsync time add some delay.\n\nVadim\n",
"msg_date": "Fri, 8 Dec 2000 17:39:38 -0800 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: 7.0.3(nofsync) vs 7.1 "
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> And we always will have to enable fsync when comparing our\n> performance with other DBes.\n\nOf course, but when people say \"it's slower than 7.0.3+nofsync\"\nI think that turning off fsync makes a fairer comparison there.\n\n\n>> also reduce the WAL commit delay to zero, no? What is the default\n\n> I think so.\n\n>> commit delay now?\n\n> As before 5 * 10^(-6) sec - pretty the same as sleep(0) -:)\n> Seems CommitDelay is not very useful parameter now - XLogFlush\n> logic and fsync time add some delay.\n\nThere was a thread recently about smarter ways to handle shared fsync\nof the log --- IIRC, we talked about self-tuning commit delay, releasing\nwaiting processes as soon as someone else had fsync'd, etc. Looks like\nnone of those ideas are in the code now. Did you not like any of those\nideas, or just no time to work on it yet?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2000 20:58:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.3(nofsync) vs 7.1 "
}
]
|
[
{
"msg_contents": "I would like to mention that I met Tatsuo Ishii and Hiroshi Inoue while\nin Japan. This was the first time I met them, though I have worked with\nthem on PostgreSQL for many years. Tatsuo is really the voice of\nPostgreSQL in Japan. It was a real thrill. \n\nThere is a picture somewhere of the three of us. Hopefully Tatsuo can\nfind it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 8 Dec 2000 20:40:07 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Japan pictures"
},
{
"msg_contents": "> I would like to mention that I met Tatsuo Ishii and Hiroshi Inoue while\n> in Japan. This was the first time I met them, though I have worked with\n> them on PostgreSQL for many years. Tatsuo is really the voice of\n> PostgreSQL in Japan. It was a real thrill. \n\nI was thrilled to meet with you too. This is the first time ever for\nus (all PostgreSQL users in Japan) to have one of cores visit to\nJapan. This has been the plan in my mind since I have started to run a\nlocal mailing list in Japan in 1995!\n\n> There is a picture somewhere of the three of us. Hopefully Tatsuo can\n> find it.\n\nYes, I will upload soon.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 09 Dec 2000 13:18:13 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Japan pictures"
},
{
"msg_contents": "> > There is a picture somewhere of the three of us. Hopefully Tatsuo can\n> > find it.\n> \n> Yes, I will upload soon.\n\nDone. \n\nhttp://www.sra.co.jp/people/t-ishii/Bruce/DSCN0295.JPG\n\n From left to right: Hiroshi, Bruce, me.\n\nMore pictures are in:\n\nhttp://www.sra.co.jp/people/t-ishii/Bruce/\n\nEnjoy!\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 09 Dec 2000 16:47:21 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Japan pictures"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian\n> \n> I would like to mention that I met Tatsuo Ishii and Hiroshi Inoue while\n> in Japan. This was the first time I met them, though I have worked with\n> them on PostgreSQL for many years. Tatsuo is really the voice of\n> PostgreSQL in Japan. It was a real thrill. \n> \n\nI had a good time with Bruce and other PG users in Japan.\nGreat thanks to Bruce and Tatsuo.\nIt was also the first time for me to meet many of them. \nBruce has already met more PG users in Japan than I've met.:-)\n\nRegards.\nHiroshi Inoue \n",
"msg_date": "Sat, 9 Dec 2000 22:23:09 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Japan pictures"
}
]
|
[
{
"msg_contents": "There have been some misconceptions in previous mails.\n\n1.) A CRC is _not_ stronger than a hash. CRC is a subset of the hash domain,\ndefined as \"a fast error-check hash based on mod 2 polynomial operations\"\nwhich has typically no crypto strength (and does not need it either for most\npurposes).\n\n2.) Theoretically, an optimal MD5 implementation can't be faster than an\noptimal CRC-32 implementation. Check it yourself: download openssl\n(www.openssl.org) or Peter Gutmans cryptlib where all sorts of hashes and\nCRC-32 are implemented in a very reasonable way. Write a tiny routine\ngenerating random strings, popping them through the hash function. You will\nsee, CRC-32 is typically several times faster.\n\n3.) There are many domains where you need to protect yout database not only\nagainst random accidental glitches, but also against malicious attacks. In\nthese cases, CRC-32 (and other CRCs without any cryptographic strength) are\nno help.\nThe majority will probably be more happy with fast CRCs, but there will\nalways be some heavy weight users (such as in medical, legal and financial\ndomains) where strong hashes are required. Thus, it should be user-definable\nat runtime which one to choose.\n\n4.) Without CRC/hash facility, we will have no means of checking our data\nintegrity at all. At least in my domain (medical) most developers are\ncraving for database backends where we don't have to re-implement the\nintegrity checking stuff again and again. If postgres could provide this, it\nwould be a strong argument in favour of postgres.\n\n5.) As opposed to a previous posting (Bruce ?), MD5 has been shown to be\n\"crackable\" (deliberate collison feasible withavailable technology) - that\nwas one of the main reasons for the development of RIPEMD-160 (check the\nRIPEMD home page for more information)\n\nOnce again, I am happy to implement any number of CRC/hash methods in\npostgres if anybody (especially theone who implemented the SERIAL data type)\npoints me into the right direction within the postgres source code. No\ntakers so far :-(\n\nHorst\n\n",
"msg_date": "Sat, 9 Dec 2000 13:58:29 +1100",
"msg_from": "\"Horst Herb\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "CRC, hash & Co."
},
{
"msg_contents": "On Sat, Dec 09, 2000 at 01:58:29PM +1100, Horst Herb wrote:\n> There have been some misconceptions in previous mails.\n> \n> 1.) A CRC is _not_ stronger than a hash. CRC is a subset of the hash domain,\n> defined as \"a fast error-check hash based on mod 2 polynomial operations\"\n> which has typically no crypto strength (and does not need it either for most\n> purposes).\n\nNot true, unless your definition of strength is different than mine.\nThe point under question is if different data can produce the same hash\nas correct data. A CRC will always be different if the difference is a\nburst error of N-1 bits or less (N being the size of the CRC), and has a\n2^N chance of being different for all other errors. Cryptographic\nhashes can only claim the 2^N factor, with no guarantees.\n\n> 2.) Theoretically, an optimal MD5 implementation can't be faster than an\n> optimal CRC-32 implementation. Check it yourself: download openssl\n> (www.openssl.org) or Peter Gutmans cryptlib where all sorts of hashes and\n> CRC-32 are implemented in a very reasonable way. Write a tiny routine\n> generating random strings, popping them through the hash function. You will\n> see, CRC-32 is typically several times faster.\n\nYou check it yourself. I'll send you a copy of my benchmarking code\nunder seperate cover. All the core hashes except for CRC were taken\nfrom openssl. As per my last message, CRC on Celeron/P2/P3 sucks, and\nin the worst case would only be 1.5 times faster than MD5. MD4 would be\nnear par with CRC.\n\n> 3.) There are many domains where you need to protect yout database not only\n> against random accidental glitches, but also against malicious attacks. In\n> these cases, CRC-32 (and other CRCs without any cryptographic strength) are\n> no help.\n\nIf you have malicious attackers who can deliberately modify live data in\na database, you have problems beyond what any kind of hash can protect\nagainst.\n\n> 4.) Without CRC/hash facility, we will have no means of checking our data\n> integrity at all. At least in my domain (medical) most developers are\n> craving for database backends where we don't have to re-implement the\n> integrity checking stuff again and again. If postgres could provide this, it\n> would be a strong argument in favour of postgres.\n\nI agree that it would be useful to CRC data blocks to protect against\nbad data errors. If you're data is really that sensitive, though, you\nmay be looking for ECC, not CRC or hash facilities.\n\n> 5.) As opposed to a previous posting (Bruce ?), MD5 has been shown to be\n> \"crackable\" (deliberate collison feasible withavailable technology)\n\nNo, it hasn't, unless you can provide us a reference to a paper showing\nthat. I've seen references that there are internal collisions in the\nMD5 reduction function, but still no way to produce collisions on the\nfinal digest.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Sun, 10 Dec 2000 00:35:54 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRC, hash & Co."
},
{
"msg_contents": "On Sunday 10 December 2000 17:35, you wrote:\n\n> > 1.) A CRC is _not_ stronger than a hash. CRC is a subset of the hash\n> > domain, defined as \"a fast error-check hash based on mod 2 polynomial\n> > operations\" which has typically no crypto strength (and does not need it\n> > either for most purposes).\n>\n> Not true, unless your definition of strength is different than mine.\n\nIt is not my definition, but the definition found in any technical / IT \ndictionary I could grab. Examples:\n\n\n> > 3.) There are many domains where you need to protect yout database not\n> > only against random accidental glitches, but also against malicious\n> > attacks. In these cases, CRC-32 (and other CRCs without any cryptographic\n> > strength) are no help.\n>\n> If you have malicious attackers who can deliberately modify live data in\n> a database, you have problems beyond what any kind of hash can protect\n> against.\n\nIn the medical domain, the \"malicious attacker\" is often the user himself. \nFor medico-legal reasons, we need a complete audit trail proofing that no \nalterations have been made to medical records. For practical reasons, the \nquickest means (AFAIK) to achieve this is digestig the digests of all entries \n(or at least those of the change log) and store these externally on a trusted \nauthentication server. No matter how unlikely such a manipulation is; for a \ncourt case you always need the state-of-the-art precautions.\n\n> > 5.) As opposed to a previous posting (Bruce ?), MD5 has been shown to be\n> > \"crackable\" (deliberate collison feasible withavailable technology)\n>\n> No, it hasn't, unless you can provide us a reference to a paper showing\n> that. I've seen references that there are internal collisions in the\n> MD5 reduction function, but still no way to produce collisions on the\n> final digest.\n\nYou are partially right. It was only the compression function of MD5. But \nthat's enough. \n\"An iterated hash function is thus in this regard at most as strong as its \ncompression function\" ( A.J.Menezes, P.C.van Oorschot, S.A.Vanstone \"Handbook \nof Applied Cryptography\", CRC Press, 1999, link to online version: \nhttp://www.cacr.math.uwaterloo.ca/hac/ ).\nRead Cryptobytes Vol.2 No.2 Summer 1996; Hans Dobbertin: The status of MD5 \nafter a recent attack \n(ftp://ftp.rsasecurity.com/pub/cryptobytes/crypto2n2.pdf). \nand\nRSA Data security recommended already 1996 that MD4 should no longer be used \nand that MD5 \"should not be used for future applications that require the \nhash function to be collision resistant\" \n(ftp://ftp.rsasecurity.com/pub/pdfs/bulletn4.pdf)\nEven in S/MIME MD5 \"is only provided for backwards compatibility\" for that \nvery reason \n(http://web.elastic.org/~fche/mirrors/www.jya.com/pgpfaq-ss.htm#SubMD5Broke)\nand Bruce Schneier stated that he is \"wary of MD5\" ( B.Schneier, \"Applied \nCryptography, Second Edition\", Wiley, 1996 (cited at \nhttp://web.elastic.org/~fche/mirrors/www.jya.com/pgpfaq-ss.htm, I am still \ntrying to find the original quote in the book))\n\nFor further reference I recommend the \"Handbook of applied cryptography\" \nwhich surprisingly is available online (full text) at \nhttp://www.cacr.math.uwaterloo.ca/hac/\n\nPlease remember that the whole reason for my reasoning is that we need a \nrun-time definable choice of CRCs/digests as no one single hash will suit all \nneeds.\n\nHorst\n",
"msg_date": "Mon, 11 Dec 2000 07:13:31 +1100",
"msg_from": "Horst Herb <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRC, hash & Co."
}
]
|
[
{
"msg_contents": "\nI am also trying to port PostgreSQL to Dynix/ptx\n4.4.5. I have GNU gcc 2.95.2. \n\nI also have errors when trying to configure and make\nthe application and I am not particularily well\nqualified to sort them out. I would however be pleased\nto help where I can. \n\nBarry\n\n\n\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! Shopping - Thousands of Stores. Millions of Products.\nhttp://shopping.yahoo.com/\n",
"msg_date": "Sat, 9 Dec 2000 09:31:27 -0800 (PST)",
"msg_from": "=?iso-8859-1?q?Barry=20Jenner?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re:Postgresql on dynix/ptx system "
}
]
|
[
{
"msg_contents": "Hello,\n\nI have quite strange problem. It's lsof -i -n -P\npostmaste 20018 postgres 6u IPv4 12241380 TCP \n127.0.0.1:5432->127.0.0.1:6651 (CLOSE)\n\nAnd there is no pair for it.\n\nBacktrace of the backend shows:\n(gdb) bt\n#0 0x40198ba2 in recv () from /lib/libc.so.6\n#1 0x80ab2c5 in pq_recvbuf ()\n#2 0x80ab395 in pq_getbytes ()\n#3 0x80eaa7c in SocketBackend ()\n#4 0x80eab13 in ReadCommand ()\n#5 0x80ebb9c in PostgresMain ()\n#6 0x80d69a2 in DoBackend ()\n#7 0x80d6581 in BackendStartup ()\n#8 0x80d593a in ServerLoop ()\n#9 0x80d53c4 in PostmasterMain ()\n#10 0x80abbb6 in main ()\n#11 0x400fe9cb in __libc_start_main () at ../sysdeps/generic/libc-start.c:122\n\n[root@mx /root]# ps axwfl|grep 20018\n040 507 20018 21602 0 0 15292 0 tcp_re SW pts/0 0:00 \\_ \n[postmaster]\n\n[root@mx /root]# uname -a\nLinux mx.xxx.com 2.2.16-3 #1 Mon Jun 19 19:11:44 EDT 2000 i686 unknown\n\nLooks like it tries to read on socket which is already closed from other \nside. And it seems like recv did not return in this case. Is this OK, or \nkernel bug?\n\nOn the other side I see entries like this:\nhttpd 4260 root 4u IPv4 12173018 TCP \n127.0.0.1:3994->127.0.0.1:5432 (CLOSE_WAIT)\n\nAnd again. There is no any corresponding postmaster process. Does anyone has \nsuch expirience before? And what can be the reason of such strange things.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Sun, 10 Dec 2000 12:51:46 +0600",
"msg_from": "Denis Perchine <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange behavior of PostgreSQL on Linux"
},
{
"msg_contents": "Denis Perchine <[email protected]> writes:\n> Looks like it tries to read on socket which is already closed from other \n> side. And it seems like recv did not return in this case. Is this OK, or \n> kernel bug?\n\nSounds like a kernel bug --- recv() should *always* return immediately\nif the socket is known closed. I'd think the kernel didn't believe the\nsocket was closed, if not for your lsof evidence. That's certainly\npointing a finger at the kernel...\n\nWe've heard (undetailed) reports before of backends hanging around when\nthe client was long gone. I always assumed that the client machine had\nfailed to disconnect properly, but now I wonder. A kernel bug might\nexplain those reports.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Dec 2000 13:18:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange behavior of PostgreSQL on Linux "
}
]
|
[
{
"msg_contents": "parse_coerce.c contains the following conversation --- I believe the\nfirst XXX comment is from me and the second from you:\n\n /*\n * Still too many candidates? Try assigning types for the unknown\n * columns.\n *\n * We do this by examining each unknown argument position to see if all\n * the candidates agree on the type category of that slot. If so, and\n * if some candidates accept the preferred type in that category,\n * eliminate the candidates with other input types. If we are down to\n * one candidate at the end, we win.\n *\n * XXX It's kinda bogus to do this left-to-right, isn't it? If we\n * eliminate some candidates because they are non-preferred at the\n * first slot, we won't notice that they didn't have the same type\n * category for a later slot.\n * XXX Hmm. How else would you do this? These candidates are here because\n * they all have the same number of matches on arguments with explicit\n * types, so from here on left-to-right resolution is as good as any.\n * Need a counterexample to see otherwise...\n */\n\nThe comment is out of date anyway because it fails to mention the new\nrule about preferring STRING category. But to answer your request for\na counterexample: consider\n\n\tSELECT foo('bar', 'baz')\n\nFirst, suppose the available candidates are\n\n\tfoo(float8, int4)\n\tfoo(float8, point)\n\nIn this case, we examine the first argument position, see that all the\ncandidates agree on NUMERIC category, so we consider resolving the first\nunknown input to float8. That eliminates neither candidate so we move\non to the second argument position. Here there is a conflict of\ncategories so we can't eliminate anything, and we decide the call is\nambiguous. That's correct (or at least Operating As Designed ;-)).\n\nBut now suppose we have\n\n\tfoo(float8, int4)\n\tfoo(float4, point)\n\nHere, at the first position we will still see that all candidates agree\non NUMERIC category, and then we will eliminate candidate 2 because it\nisn't the preferred type in that category. Now when we come to the\nsecond argument position, there's only one candidate left so there's\nno category conflict. Result: this call is considered non-ambiguous.\n\nThis means there is a left-to-right bias in the algorithm. For example,\nthe exact same call *would* be considered ambiguous if the candidates'\nargument orders were reversed:\n\n\tfoo(int4, float8)\n\tfoo(point, float4)\n\nI do not like that. You could maybe argue that earlier arguments are\nmore important than later ones for functions, but it's harder to make\nthat case for binary operators --- and in any case this behavior is\nextremely difficult to explain in prose.\n\nTo fix this, I think we need to split the loop into two passes.\nThe first pass does *not* remove any candidates. What it does is to\nlook separately at each UNKNOWN-argument position and attempt to deduce\na probable category for it, using the following rules:\n\n* If any candidate has an input type of STRING category, use STRING\ncategory; else if all candidates agree on the category, use that\ncategory; else fail because no resolution can be made.\n\n* The first pass must also remember whether any candidates are of a\npreferred type within the selected category.\n\nThe probable categories and exists-preferred-type booleans are saved in\nlocal arrays. (Note this has to be done this way because\nIsPreferredType currently allows more than one type to be considered\npreferred in a category ... 
so the first pass cannot try to determine a\nunique type, only a category.)\n\nIf we find a category for every UNKNOWN arg, then we enter a second loop\nin which we discard candidates. In this pass we discard a candidate if\n(a) it is of the wrong category, or (b) it is of the right category but\nis not of preferred type in that category, *and* we found candidate(s)\nof preferred type at this slot.\n\nIf we end with exactly one candidate then we win.\n\nIt is clear in this algorithm that there is no order dependency: the\nconditions for keeping or discarding a candidate are fixed before we\nstart the second pass, and do not vary depending on which other\ncandidates were discarded before it.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Dec 2000 13:08:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unknown-type resolution rules, redux"
},
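The counterexample is reproducible at the SQL level; a sketch (the functions are hypothetical — their bodies are irrelevant, only the argument signatures matter, and numeric-looking literals are used so the resolved call can actually be executed):

CREATE FUNCTION foo(float8, int4) RETURNS int4 AS 'SELECT 1' LANGUAGE 'sql';
CREATE FUNCTION foo(float4, point) RETURNS int4 AS 'SELECT 2' LANGUAGE 'sql';

-- Both arguments are UNKNOWN literals. Left-to-right resolution keeps
-- only the float8 candidate at the first slot and so accepts the call;
-- with the argument positions swapped in both candidates, the same
-- call would be rejected as ambiguous.
SELECT foo('1.5', '2');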
{
"msg_contents": "> It is clear in this algorithm that there is no order dependency: the\n> conditions for keeping or discarding a candidate are fixed before we\n> start the second pass, and do not vary depending on which other\n> candidates were discarded before it.\n\nI won't argue strongly for either solution, but have the deep-seating\n(but vague) feeling that a left to right resolution algorithm is easier\nto explain, hence to understand, hence to predict, hence to use. An\nextra pass will solve the edge case you describe in perhaps a \"better\"\norder.\n\nI do think that the two algorithms under discussion are better than what\nwe've had in the past. Comments from others?\n\n - Thomas\n",
"msg_date": "Mon, 11 Dec 2000 18:50:26 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unknown-type resolution rules, redux"
},
{
"msg_contents": "\nAdded to TODO.detail.\n\n> parse_coerce.c contains the following conversation --- I believe the\n> first XXX comment is from me and the second from you:\n> \n> /*\n> * Still too many candidates? Try assigning types for the unknown\n> * columns.\n> *\n> * We do this by examining each unknown argument position to see if all\n> * the candidates agree on the type category of that slot. If so, and\n> * if some candidates accept the preferred type in that category,\n> * eliminate the candidates with other input types. If we are down to\n> * one candidate at the end, we win.\n> *\n> * XXX It's kinda bogus to do this left-to-right, isn't it? If we\n> * eliminate some candidates because they are non-preferred at the\n> * first slot, we won't notice that they didn't have the same type\n> * category for a later slot.\n> * XXX Hmm. How else would you do this? These candidates are here because\n> * they all have the same number of matches on arguments with explicit\n> * types, so from here on left-to-right resolution is as good as any.\n> * Need a counterexample to see otherwise...\n> */\n> \n> The comment is out of date anyway because it fails to mention the new\n> rule about preferring STRING category. But to answer your request for\n> a counterexample: consider\n> \n> \tSELECT foo('bar', 'baz')\n> \n> First, suppose the available candidates are\n> \n> \tfoo(float8, int4)\n> \tfoo(float8, point)\n> \n> In this case, we examine the first argument position, see that all the\n> candidates agree on NUMERIC category, so we consider resolving the first\n> unknown input to float8. That eliminates neither candidate so we move\n> on to the second argument position. Here there is a conflict of\n> categories so we can't eliminate anything, and we decide the call is\n> ambiguous. That's correct (or at least Operating As Designed ;-)).\n> \n> But now suppose we have\n> \n> \tfoo(float8, int4)\n> \tfoo(float4, point)\n> \n> Here, at the first position we will still see that all candidates agree\n> on NUMERIC category, and then we will eliminate candidate 2 because it\n> isn't the preferred type in that category. Now when we come to the\n> second argument position, there's only one candidate left so there's\n> no category conflict. Result: this call is considered non-ambiguous.\n> \n> This means there is a left-to-right bias in the algorithm. For example,\n> the exact same call *would* be considered ambiguous if the candidates'\n> argument orders were reversed:\n> \n> \tfoo(int4, float8)\n> \tfoo(point, float4)\n> \n> I do not like that. You could maybe argue that earlier arguments are\n> more important than later ones for functions, but it's harder to make\n> that case for binary operators --- and in any case this behavior is\n> extremely difficult to explain in prose.\n> \n> To fix this, I think we need to split the loop into two passes.\n> The first pass does *not* remove any candidates. What it does is to\n> look separately at each UNKNOWN-argument position and attempt to deduce\n> a probable category for it, using the following rules:\n> \n> * If any candidate has an input type of STRING category, use STRING\n> category; else if all candidates agree on the category, use that\n> category; else fail because no resolution can be made.\n> \n> * The first pass must also remember whether any candidates are of a\n> preferred type within the selected category.\n> \n> The probable categories and exists-preferred-type booleans are saved in\n> local arrays. 
(Note this has to be done this way because\n> IsPreferredType currently allows more than one type to be considered\n> preferred in a category ... so the first pass cannot try to determine a\n> unique type, only a category.)\n> \n> If we find a category for every UNKNOWN arg, then we enter a second loop\n> in which we discard candidates. In this pass we discard a candidate if\n> (a) it is of the wrong category, or (b) it is of the right category but\n> is not of preferred type in that category, *and* we found candidate(s)\n> of preferred type at this slot.\n> \n> If we end with exactly one candidate then we win.\n> \n> It is clear in this algorithm that there is no order dependency: the\n> conditions for keeping or discarding a candidate are fixed before we\n> start the second pass, and do not vary depending on which other\n> candidates were discarded before it.\n> \n> Comments?\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Jan 2001 23:01:45 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unknown-type resolution rules, redux"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Added to TODO.detail.\n\nMight as well take it out again; that's already done.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Jan 2001 23:21:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unknown-type resolution rules, redux "
},
{
"msg_contents": "\nRemoved. Thanks.\n\n> Bruce Momjian <[email protected]> writes:\n> > Added to TODO.detail.\n> \n> Might as well take it out again; that's already done.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Jan 2001 23:36:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unknown-type resolution rules, redux"
}
]
|
[
{
"msg_contents": "This is the last post I will make about my text search system, I'll\nrespond off line if anyone wants to correspond. I have taken up too much\nbandwidth as it is. The only reason I have been so persistent is that I\ncan't believe that I am the only person having this particular problem,\nand that some resolution will make Postgres an even better product. I am\nposting this so people trying to do what I have been trying to do, have\nsomeplace to start. ;-)\n\nResolution:\nShort of creating a new index type in Postgres, and waiting for some\nlater version which allows \"sets\" to be returned by functions, we are\nleft with few options. My best solution, as kludgy as it is, is this:\n\nselect search_exec('bla bla');\n\ncreate temp table fubar as select search_key() as \"key\", search_rank()\nas \"rank\" from search_template where search_advance()>0 limit count;\n\nselect * from table where some_table.key = fubar.key;\n\nAssumptions:\n\nsearch_template is a table which contains [n] number of rows of one or\nmore fields. [n] is some maximum number of results. search_template\nshould be sufficiently small to be as efficient as possible. A valid\ntable must be used because multiple results need to be returned. \n\n<HACKERS>\nIf the postgres hackers want to make a 'virtual table' called something\nlike \"pg_counter\" which, when queried, returns an enumeration starting\nwith '0'..'[n]' that would be cool. The danger, of course, is that a\nlimit must be specified, but it could be useful beyond just my\napplication\n</HACKERS>\n\nAside from this hack, I have been unable to find a reasonably expedient\nway to do this. I have this working on a system, and it does seem to\nwork. If anyone has a reasonable alternative, or knows of any problems\nthis approach may have, I'd love to hear it. If anyone is interested in\nthe interface code, I would be glad to supply it as GPL.\n\nI am very grateful for all the attention and patience.\n\nMark.\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Sun, 10 Dec 2000 16:31:20 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Functions returning sets, last one I promise!"
}
]
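For concreteness, a sketch of the one piece of scaffolding the hack above assumes (search_exec, search_key, search_rank and search_advance are the author's own C functions; only search_template is plain SQL, so its exact shape here is an assumption):

-- One throwaway row per potential result; e.g. 50 rows caps any
-- search at 50 hits. The column and its contents are never used.
CREATE TABLE search_template (n int4);
INSERT INTO search_template VALUES (1);
INSERT INTO search_template VALUES (2);
-- ... repeat up to the maximum result count [n].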
|
[
{
"msg_contents": "OK, I am through mail email backlog from my Japan visit. I will work on\na full 7.1 changes list this week.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Dec 2000 20:53:29 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.1 features list"
}
]
|
[
{
"msg_contents": "I thought the hackers team would be interested in knowing that SourceForge, as\nof Friday evening, is running on Postgres. Some 95,000 users and 12,500 Open\nSource projects are depending on your stuff, so I hope it's going to be stable\nfor us. ;-)\n\nThroughout the codebase we're making good use of transactions, subselects, and\nforeign keys in all the places I've been wanting them for the past year, but\nI'm running into some places where the query optimizer is not using the right\nindexes, and sometimes does sequential scans on tables.\n\nHere's a good example. If I remove the ORDER BY (which I didn't care to have),\npostgres resorts to a sequential scan of the table, instead of using one of\n3 or 4 appropriate indexes. I have an index on group_id, one on\n(group_id,status_id) and one on (group_id,status_id,assigned_to) \n\nSELECT\nbug.group_id,bug.priority,bug.bug_id,bug.summary,bug.date,users.user_name AS\nsubmitted_by,user2.user_name AS assigned_to_user\nFROM bug,users,users user2\nWHERE group_id='1'\nAND bug.status_id <> '3'\nAND users.user_id=bug.submitted_by\nAND user2.user_id=bug.assigned_to\n-- \nORDER BY bug.group_id,bug.status_id\n--\nLIMIT 51 OFFSET 0;\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems",
"msg_date": "Sun, 10 Dec 2000 21:26:35 -0800",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "SourceForge & Postgres"
},
{
"msg_contents": "Tim Perdue wrote:\n> \n> I thought the hackers team would be interested in knowing that SourceForge, as\n> of Friday evening, is running on Postgres. Some 95,000 users and 12,500 Open\n> Source projects are depending on your stuff, so I hope it's going to be stable\n> for us. ;-)\n> \n> Throughout the codebase we're making good use of transactions, subselects, and\n> foreign keys in all the places I've been wanting them for the past year, but\n> I'm running into some places where the query optimizer is not using the right\n> indexes, and sometimes does sequential scans on tables.\n> \n> Here's a good example. If I remove the ORDER BY (which I didn't care to have),\n> postgres resorts to a sequential scan of the table, instead of using one of\n> 3 or 4 appropriate indexes. I have an index on group_id, one on\n> (group_id,status_id) and one on (group_id,status_id,assigned_to)\n> \n> SELECT\n> bug.group_id,bug.priority,bug.bug_id,bug.summary,bug.date,users.user_name AS\n> submitted_by,user2.user_name AS assigned_to_user\n> FROM bug,users,users user2\n> WHERE group_id='1'\n> AND bug.status_id <> '3'\n> AND users.user_id=bug.submitted_by\n> AND user2.user_id=bug.assigned_to\n> --\n> ORDER BY bug.group_id,bug.status_id\n> --\n> LIMIT 51 OFFSET 0;\n\nThis is one of my long standing problems with Postgres, and I have\nprobably pissed of most of the Postgres guys with my views, but.....\n\nPostgres is stubborn about index selection. I have a FAQ on my website.\n\nhttp://www.mohawksoft.com/postgres/pgindex.html\n\nIn short, run vacuum analyze. If that doesn't fix it, it is because the\ndata being indexed has a lot of key fields that are probably duplicated.\nGiven a large table with a statistically significant number of records\nassigned to a relatively few unique keys, Postgres will likely calculate\nthat doing a table scan is the best path.\n\nI almost always start postmaster with the \"-o -fs\" switches because of\nthis problem.\n",
"msg_date": "Mon, 11 Dec 2000 21:44:26 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SourceForge & Postgres"
},
{
"msg_contents": "-- Start of PGP signed section.\n> I thought the hackers team would be interested in knowing that SourceForge, as\n> of Friday evening, is running on Postgres. Some 95,000 users and 12,500 Open\n> Source projects are depending on your stuff, so I hope it's going to be stable\n> for us. ;-)\n\nThis is great news. As far as the optimizer, any chance of testing 7.1\nto see if it is improved. I believe it has been over 7.0.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Dec 2000 22:27:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SourceForge & Postgres"
},
{
"msg_contents": "Bruce Momjian wrote:\n> This is great news. As far as the optimizer, any chance of testing 7.1\n> to see if it is improved. I believe it has been over 7.0.3.\n\nI just did a test of my database that exhibits this behavior, using 7.1\nfrom CVS.\n\nWhen postmaster is started with \"-o -fs\" I get this:\n\ncdinfo=# explain select * from ztitles where artistid = 0 ;\nNOTICE: QUERY PLAN:\n\nIndex Scan using ztitles_artistid_ndx on ztitles (cost=0.00..5915.01\nrows=3163 width=296)\n\nEXPLAIN\n\nWhen postmaster is started without \"-o -fs\" I get this:\n\ncdinfo=# explain select * from ztitles where artistid = 0 ;\nNOTICE: QUERY PLAN:\n \nSeq Scan on ztitles (cost=0.00..4740.75 rows=3163 width=296)\n \nEXPLAIN \n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Mon, 11 Dec 2000 22:54:09 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SourceForge & Postgres"
},
{
"msg_contents": "Tim Perdue <[email protected]> writes:\n> I thought the hackers team would be interested in knowing that SourceForge,\n> as of Friday evening, is running on Postgres.\n\nCool!\n\n> I'm running into some places where the query optimizer is not using the right\n> indexes, and sometimes does sequential scans on tables.\n\nI assume you've done a VACUUM ANALYZE at least once since loading up\nyour data?\n\nIt'd be useful to see the results of an EXPLAIN for the problem query,\nboth with and without SET ENABLE_SEQSCAN TO OFF. Also, it'd be helpful\nto see VACUUM's stats for the relevant tables. You can get those for\na table named 'FOO' with\n\nselect attname,attdisbursion,s.*\nfrom pg_statistic s, pg_attribute a, pg_class c\nwhere starelid = c.oid and attrelid = c.oid and staattnum = attnum\nand relname = 'FOO';\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2000 23:13:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SourceForge & Postgres "
},
{
"msg_contents": "mlw <[email protected]> writes:\n> cdinfo=# explain select * from ztitles where artistid = 0 ;\n> NOTICE: QUERY PLAN:\n\n> Index Scan using ztitles_artistid_ndx on ztitles (cost=0.00..5915.01\n> rows=3163 width=296)\n\n> When postmaster is started without \"-o -fs\" I get this:\n\n> cdinfo=# explain select * from ztitles where artistid = 0 ;\n> NOTICE: QUERY PLAN:\n \n> Seq Scan on ztitles (cost=0.00..4740.75 rows=3163 width=296)\n\nHow many tuples are in the table? How many are actually returned\nby this query? Also, what do you get from\n\nselect attname,attdisbursion,s.*\nfrom pg_statistic s, pg_attribute a, pg_class c\nwhere starelid = c.oid and attrelid = c.oid and staattnum = attnum\nand relname = 'ztitles';\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2000 23:19:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SourceForge & Postgres "
},
{
"msg_contents": "\none thing I've found to get around this is for any query that doesn't\nappear to use the index properly, just do:\n\nSET ENABLE_SEQSCAN=OFF;\n<query>\nSET ENABLE_SEQSCAN=ON;\n\nthat way for those queries that do work right, ou haven't forced it a\ndifferent route ..\n\nOn Mon, 11 Dec 2000, mlw wrote:\n\n> Bruce Momjian wrote:\n> > This is great news. As far as the optimizer, any chance of testing 7.1\n> > to see if it is improved. I believe it has been over 7.0.3.\n> \n> I just did a test of my database that exhibits this behavior, using 7.1\n> from CVS.\n> \n> When postmaster is started with \"-o -fs\" I get this:\n> \n> cdinfo=# explain select * from ztitles where artistid = 0 ;\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using ztitles_artistid_ndx on ztitles (cost=0.00..5915.01\n> rows=3163 width=296)\n> \n> EXPLAIN\n> \n> When postmaster is started without \"-o -fs\" I get this:\n> \n> cdinfo=# explain select * from ztitles where artistid = 0 ;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on ztitles (cost=0.00..4740.75 rows=3163 width=296)\n> \n> EXPLAIN \n> \n> -- \n> http://www.mohawksoft.com\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 12 Dec 2000 00:20:00 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SourceForge & Postgres"
},
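Applied to Tim's query, the wrapper looks like this (a sketch using a trimmed-down version of the query; SET only affects the current session):

SET ENABLE_SEQSCAN=OFF;
-- the problem query, here reduced to the indexed predicates
SELECT bug_id, summary
FROM bug
WHERE group_id='1' AND status_id <> '3';
SET ENABLE_SEQSCAN=ON;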
{
"msg_contents": "On Tue, Dec 12, 2000 at 12:20:00AM -0400, The Hermit Hacker wrote:\n> \n> one thing I've found to get around this is for any query that doesn't\n> appear to use the index properly, just do:\n> \n> SET ENABLE_SEQSCAN=OFF;\n> <query>\n> SET ENABLE_SEQSCAN=ON;\n> \n> that way for those queries that do work right, ou haven't forced it a\n> different route ..\n\nI've heard there are other ways to give clues to the\noptimizer, but haven't seen anything in the docs on it. Anyway, I have gotten\nvirtually all of the queries optimized as much as possible. Some of the\nqueries are written in such a way that they key off of things in 2 or more\ntables, so that's kinda hard to optimize in any circumstance.\n\nAny plans to optimize:\n\n-Views\n-IN (1,2,3)\n-SELECT count(*) FROM x WHERE indexed_field='z'\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems",
"msg_date": "Mon, 11 Dec 2000 22:40:14 -0800",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SourceForge & Postgres"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <[email protected]> writes:\n> > cdinfo=# explain select * from ztitles where artistid = 0 ;\n> > NOTICE: QUERY PLAN:\n> \n> > Index Scan using ztitles_artistid_ndx on ztitles (cost=0.00..5915.01\n> > rows=3163 width=296)\n> \n> > When postmaster is started without \"-o -fs\" I get this:\n> \n> > cdinfo=# explain select * from ztitles where artistid = 0 ;\n> > NOTICE: QUERY PLAN:\n> \n> > Seq Scan on ztitles (cost=0.00..4740.75 rows=3163 width=296)\n> \n> How many tuples are in the table? How many are actually returned\n> by this query? Also, what do you get from\n> \n> select attname,attdisbursion,s.*\n> from pg_statistic s, pg_attribute a, pg_class c\n> where starelid = c.oid and attrelid = c.oid and staattnum = attnum\n> and relname = 'ztitles';\n\nI have attached the output. \n\nbtw anyone trying this query should use: \"attdispersion\"\n\nThe explain I gave, there are no records that actually have an artistid\nof 0. However, I will show the explain with a valid artistid number.\n\nThis is without \"-o -fs\"\ncdinfo=# explain select * from ztitles where artistid = 100000220 ;\nNOTICE: QUERY PLAN:\n \nSeq Scan on ztitles (cost=0.00..4740.75 rows=3163 width=296)\n \nEXPLAIN \n\nAnd this is with \"-o -fs\"\n\ncdinfo=# explain select * from ztitles where artistid = 100000220 ;\nNOTICE: QUERY PLAN:\n \nIndex Scan using ztitles_artistid_ndx on ztitles (cost=0.00..5915.01\nrows=3163 width=296)\n \nEXPLAIN\n\n\nselect count(*) from ztitles where artistid = 100000220 ;\n count\n-------\n 16\n(1 row) \n\n-- \nhttp://www.mohawksoft.com",
"msg_date": "Tue, 12 Dec 2000 07:17:00 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SourceForge & Postgres"
},
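For anyone replaying this on 7.1, here is Tom's statistics query with the column-name correction mlw mentions (attdispersion rather than the pre-7.1 attdisbursion):

select attname, attdispersion, s.*
from pg_statistic s, pg_attribute a, pg_class c
where starelid = c.oid and attrelid = c.oid and staattnum = attnum
and relname = 'ztitles';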
{
"msg_contents": "mlw <[email protected]> writes:\n> btw anyone trying this query should use: \"attdispersion\"\n\nSorry about that --- I just copied-and-pasted the query from some notes\nthat are obsolete as of 7.1...\n\n> cdinfo=# explain select * from ztitles where artistid = 100000220 ;\n> NOTICE: QUERY PLAN:\n \n> Seq Scan on ztitles (cost=0.00..4740.75 rows=3163 width=296)\n \n> And this is with \"-o -fs\"\n\n> Index Scan using ztitles_artistid_ndx on ztitles (cost=0.00..5915.01\n> rows=3163 width=296)\n \n> attname | attdispersion | starelid | staattnum | staop | stanullfrac | stacommonfrac | stacommonval | staloval | stahival \n> artistid | 0.0477198 | 19274 | 2 | 97 | 0 | 0.149362 | 100050450 | 100000000 | 100055325\n\nThe reason why the thing is going for a sequential scan is that\nastonishingly high stacommonfrac statistic. Does artistid 100050450\nreally account for 14.9% of all the rows in your table? (Who is that\nanyway? ;-)) If so, a search for artistid 100050450 definitely *should*\nuse a sequential scan. The problem at hand is estimating the frequency\nof entries for some other artistid, given that we only have this much\nstatistical info available. Obviously the stats are insufficient, and\nI hope to do something about that in a release or two, but it ain't\ngonna happen for 7.1. In the meantime, if you've got huge outliers\nlike that, you could try reducing the value of NOT_MOST_COMMON_RATIO\nin src/backend/utils/adt/selfuncs.c.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Dec 2000 16:27:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SourceForge & Postgres "
},
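A quick way to check whether one value really dominates a column the way stacommonfrac claims is to compute the top values and their fractions directly. A sketch against the table from this thread (the scalar subselect recomputes the total row count, so this is a full scan and not cheap):

SELECT artistid, count(*) AS n,
       count(*) / (SELECT count(*)::float FROM ztitles) AS frac
FROM ztitles
GROUP BY artistid
ORDER BY n DESC
LIMIT 5;

If the top frac really is near 0.15, the statistics are accurate and the problem is purely the estimator's guess for the remaining values, as Tom describes.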
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <[email protected]> writes:\n> > btw anyone trying this query should use: \"attdispersion\"\n> \n> Sorry about that --- I just copied-and-pasted the query from some notes\n> that are obsolete as of 7.1...\n> \n> > cdinfo=# explain select * from ztitles where artistid = 100000220 ;\n> > NOTICE: QUERY PLAN:\n> \n> > Seq Scan on ztitles (cost=0.00..4740.75 rows=3163 width=296)\n> \n> > And this is with \"-o -fs\"\n> \n> > Index Scan using ztitles_artistid_ndx on ztitles (cost=0.00..5915.01\n> > rows=3163 width=296)\n> \n> > attname | attdispersion | starelid | staattnum | staop | stanullfrac | stacommonfrac | stacommonval | staloval | stahival\n> > artistid | 0.0477198 | 19274 | 2 | 97 | 0 | 0.149362 | 100050450 | 100000000 | 100055325\n> \n> The reason why the thing is going for a sequential scan is that\n> astonishingly high stacommonfrac statistic. Does artistid 100050450\n> really account for 14.9% of all the rows in your table? (Who is that\n> anyway? ;-)) If so, a search for artistid 100050450 definitely *should*\n> use a sequential scan.\n\nI tested this statement against the database and you are right, about 14\nseconds with the index, 4 without.\n\nBTW ID # 100050450 is \"Various Artists\"\n\nThis is sort of a point I was trying to make in previous emails. I think\nthis situation, and this sort of ratio is far more likely than the\nattention it has been given.\n\nIn about every project I have used postgres I have run into this. It is\nonly recently that I have understood what the problem was and how to get\naround it (sort of).\n\nThis one entry is destroying any intelligent performance we could hope\nto attain. As I said, I always see this sort of behavior in some\nimplementation.\n\n\n> The problem at hand is estimating the frequency\n> of entries for some other artistid, given that we only have this much\n> statistical info available. Obviously the stats are insufficient, and\n> I hope to do something about that in a release or two, but it ain't\n> gonna happen for 7.1. In the meantime, if you've got huge outliers\n> like that, you could try reducing the value of NOT_MOST_COMMON_RATIO\n> in src/backend/utils/adt/selfuncs.c.\n\nI did some playing with this value, and I can seem to have it\ndifferentiate between 100050450 and anything else.\n\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Tue, 12 Dec 2000 19:20:03 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SourceForge & Postgres"
},
{
"msg_contents": "\nTim, how is PostgreSQL working for you?\n\n-- Start of PGP signed section.\n> I thought the hackers team would be interested in knowing that SourceForge, as\n> of Friday evening, is running on Postgres. Some 95,000 users and 12,500 Open\n> Source projects are depending on your stuff, so I hope it's going to be stable\n> for us. ;-)\n> \n> Throughout the codebase we're making good use of transactions, subselects, and\n> foreign keys in all the places I've been wanting them for the past year, but\n> I'm running into some places where the query optimizer is not using the right\n> indexes, and sometimes does sequential scans on tables.\n> \n> Here's a good example. If I remove the ORDER BY (which I didn't care to have),\n> postgres resorts to a sequential scan of the table, instead of using one of\n> 3 or 4 appropriate indexes. I have an index on group_id, one on\n> (group_id,status_id) and one on (group_id,status_id,assigned_to) \n> \n> SELECT\n> bug.group_id,bug.priority,bug.bug_id,bug.summary,bug.date,users.user_name AS\n> submitted_by,user2.user_name AS assigned_to_user\n> FROM bug,users,users user2\n> WHERE group_id='1'\n> AND bug.status_id <> '3'\n> AND users.user_id=bug.submitted_by\n> AND user2.user_id=bug.assigned_to\n> -- \n> ORDER BY bug.group_id,bug.status_id\n> --\n> LIMIT 51 OFFSET 0;\n> \n> Tim\n> \n> -- \n> Founder - PHPBuilder.com / Geocrawler.com\n> Lead Developer - SourceForge\n> VA Linux Systems\n-- End of PGP section, PGP failed!\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Jan 2001 23:05:47 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SourceForge & Postgres"
},
{
"msg_contents": "Tim Perdue wrote:\n> I thought the hackers team would be interested in knowing that SourceForge, as\n> of Friday evening, is running on Postgres. Some 95,000 users and 12,500 Open\n> Source projects are depending on your stuff, so I hope it's going to be stable\n> for us. ;-)\n\nTim,\n\n the PG core team is wondering if SourceForge might still be\n running on a snapshot prior to BETA3, because there is a\n major bug in it that could result in a complete corruption of\n the system catalog.\n\n The bug is that the shared buffer cache might mix up blocks\n between different databases. As long as you only use one\n database, you're fairly safe. But a single 'createdb' or\n 'createuser' on the same instance, which is connecting to\n template1, could blow away your entire installation. It is\n fixed in BETA3.\n\n My personal recommendation should be clear.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 25 Jan 2001 16:51:44 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SourceForge & Postgres"
},
{
"msg_contents": "Tim,\n\nI've found your message in postgres hackers list and wondering if\nsourceforge db part could be improved using our recent (7.1) GiST improvements.\n\nIn short, using RD-Tree + GiST we've added index support for arrays of\nintegers. For example, in our rather busy web site we have pool\nof online news. Most complex query to construct main page is\nselect messages from given list of categories, because it requires\njoin from message_section_map (message could belong to several\ncategories).\nmessages message_section_map\n-------- -------------------\nmsg_id msg_id\ntitle sect_id\n.....\n\nWHERE clause (simplificated) looks like\n......\nmessage_section_map.sect_id in (1,13,103,10488,105,17,9,4,2,260000373,12,7,8,14,5,6,11,15,\n10339,10338,10336,10335,260000404,260000405,260000403,206) and\nmessage_section_map.msg_id = messages.msg_id order by publication_date\ndesc .....\n\nThis is really difficult query and takes a long time to execute.\n\nnow, we exclude message_section_map, just add array <sections> to\ntable messages which contains all sect_id given message belong to.\nUsing our index support for arrays of int4 our complex query\nexecutes very fast !\n\nI think sourceforge uses some kind of such queries.\n\nSome info about GiST extension and our contribution could be find\nat http://www.sai.msu.su/~megera/postgres/gist/\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 10 Feb 2001 18:08:54 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SourceForge & Postgres"
}
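A sketch of the schema change Oleg describes, assuming the opclass and overlap operator names from the 7.1 contrib/intarray module (gist__int_ops and &&); these names come from the contrib module rather than core SQL, so check its README before copying:

ALTER TABLE messages ADD COLUMN sections int4[];
CREATE INDEX messages_sections_ix ON messages
    USING gist (sections gist__int_ops);

SELECT msg_id, title
FROM messages
WHERE sections && '{1,13,103,10488,105,17}'::int4[]
ORDER BY publication_date DESC;

The join against message_section_map disappears entirely; the price is that triggers or application code must keep the sections array in sync with the mapping table.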
]
|
[
{
"msg_contents": "\n> >> Perhaps they *should* truncate if the specified length is less than\n> >> the original string length. Does Oracle do that?\n> \n> > Yes, it truncates, same as Informix.\n> \n> I went to fix this and then realized I still don't have an adequate spec\n> of how Oracle defines these functions. It would seem logical, for\n> example, that lpad might truncate on the left instead of the right,\n> ie lpad('abcd', 3, 'whatever') might yield 'bcd' not 'abc'. Would\n> someone check?\n\nreturns 'abc' on Oracle and Informix.\n\n> \n> Also, what happens if the specified length is less than zero? Error,\n> or is it treated as zero?\n\nReturns NULL in both if length <= 0. I would see the < 0 case as proper,\nbut the == 0 case sure looks weird to me.\n\nVery good catch, Tom !! :-)\nAndreas \n",
"msg_date": "Mon, 11 Dec 2000 09:41:10 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Oracle-compatible lpad/rpad behavior"
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n>> Also, what happens if the specified length is less than zero? Error,\n>> or is it treated as zero?\n\n> Returns NULL in both if length <= 0. I would see the < 0 case as proper,\n> but the == 0 case sure looks weird to me.\n\nSince Oracle fails to distinguish NULL from empty string, it's hard to\ntell what they have in mind here. I've implemented it as empty-string\nresult for length <= 0. You could possibly make a case for empty string\nat length = 0 and NULL for length < 0, but I'm not sure it's worth the\ntrouble...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2000 10:32:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] AW: Oracle-compatible lpad/rpad behavior "
}
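A few psql-style examples of the semantics settled on in this exchange (Oracle/Informix-compatible truncation, and an empty string rather than NULL for non-positive lengths):

SELECT lpad('abcd', 3, 'xy');   -- 'abc'  (truncation keeps the leftmost characters)
SELECT rpad('abcd', 3, 'xy');   -- 'abc'
SELECT lpad('abcd', 7, 'xy');   -- 'xyxabcd'
SELECT lpad('abcd', 0, 'xy');   -- ''  (empty string, not NULL)
SELECT lpad('abcd', -1, 'xy');  -- ''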
]
|
[
{
"msg_contents": "\n> > The cons side of processes model is not the startup time. It is about\n> > kernel resource and context-switch cost. Processes consume much more\n> > kernel resource than threads, and have a much higher cost for context\n> > switch. The scalability of threads model is much better than that of\n> > processes model.\n> \n> My question here is how much do we really context switch. We \n> do quite a\n> bit of work for each query, and I don't see us giving up the CPU very\n> often, as would be the case for a GUI where each thread does a little\n> work and goes to sleep.\n\nEvery IO gives up the CPU ! In a threaded model the process could use up it's \nCPU timeslice for other clients.\n\nThe optimum would be a multi process multi threaded server. But this only starts to \nshow an effect if there are a lot of clients (thousands).\n\nAndreas\n",
"msg_date": "Mon, 11 Dec 2000 11:54:03 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Using Threads?"
}
]
|
[
{
"msg_contents": "\n> This brings up a good point. Threads are mostly useful when you have\n> multiple processes that need to share lots of data, and the interprocess\n> overhead is excessive. Because we already have that shared memory area,\n> this benefit of threads doesn't buy us much. We sort of already have\n> done the _shared_ part, and the addition of sharing our data pages is\n> not much of a win.\n\nI agree, that this is not the issue. The only issue for PostgreSQL would be to \nefficiently support tenthousands of users.\n\nAndreas\n",
"msg_date": "Mon, 11 Dec 2000 11:58:57 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Using Threads?"
}
]
|
[
{
"msg_contents": "Sorry, but I just found out that many of my mails bounced because I was using \nmy secondary email address =8-0\n\n---------- Forwarded Message ----------\nSubject: Re: [HACKERS] CRC, hash & Co.\nDate: Mon, 11 Dec 2000 07:13:31 +1100\nFrom: Horst Herb <[email protected]>\nTo: [email protected]\n\n\nOn Sunday 10 December 2000 17:35, you wrote:\n> > 1.) A CRC is _not_ stronger than a hash. CRC is a subset of the hash\n> > domain, defined as \"a fast error-check hash based on mod 2 polynomial\n> > operations\" which has typically no crypto strength (and does not need it\n> > either for most purposes).\n>\n> Not true, unless your definition of strength is different than mine.\n\nIt is not my definition, but the definition found in any technical / IT\n\ndictionary I could grab. Examples:\n> > 3.) There are many domains where you need to protect your database not\n> > only against random accidental glitches, but also against malicious\n> > attacks. In these cases, CRC-32 (and other CRCs without any cryptographic\n> > strength) are no help.\n>\n> If you have malicious attackers who can deliberately modify live data in\n> a database, you have problems beyond what any kind of hash can protect\n> against.\n\nIn the medical domain, the \"malicious attacker\" is often the user himself.\nFor medico-legal reasons, we need a complete audit trail proofing that no\nalterations have been made to medical records. For practical reasons, the\nquickest means (AFAIK) to achieve this is digesting the digests of all entries\n(or at least those of the change log) and store these externally on a trusted\nauthentication server. No matter how unlikely such a manipulation is; for a\ncourt case you always need the state-of-the-art precautions.\n\n> > 5.) As opposed to a previous posting (Bruce ?), MD5 has been shown to be\n> > \"crackable\" (deliberate collision feasible with available technology)\n>\n> No, it hasn't, unless you can provide us a reference to a paper showing\n> that. I've seen references that there are internal collisions in the\n> MD5 reduction function, but still no way to produce collisions on the\n> final digest.\n\nYou are partially right. It was only the compression function of MD5. 
But\nthat's enough.\n\"An iterated hash function is thus in this regard at most as strong as its\ncompression function\" ( A.J.Menezes, P.C.van Oorschot, S.A.Vanstone \"Handbook\nof Applied Cryptography\", CRC Press, 1999, link to online version:\nhttp://www.cacr.math.uwaterloo.ca/hac/ ).\nRead Cryptobytes Vol.2 No.2 Summer 1996; Hans Dobbertin: The status of MD5\nafter a recent attack\n(ftp://ftp.rsasecurity.com/pub/cryptobytes/crypto2n2.pdf).\nand\nRSA Data security recommended already 1996 that MD4 should no longer be used\nand that MD5 \"should not be used for future applications that require the\nhash function to be collision resistant\"\n(ftp://ftp.rsasecurity.com/pub/pdfs/bulletn4.pdf)\nEven in S/MIME MD5 \"is only provided for backwards compatibility\" for that\nvery reason\n(http://web.elastic.org/~fche/mirrors/www.jya.com/pgpfaq-ss.htm#SubMD5Broke)\nand Bruce Schneier stated that he is \"wary of MD5\" ( B.Schneier, \"Applied\nCryptography, Second Edition\", Wiley, 1996 (cited at\nhttp://web.elastic.org/~fche/mirrors/www.jya.com/pgpfaq-ss.htm, I am still\ntrying to find the original quote in the book))\n\nFor further reference I recommend the \"Handbook of applied cryptography\"\nwhich surprisingly is available online (full text) at\nhttp://www.cacr.math.uwaterloo.ca/hac/\n\nPlease remember that the whole reason for my reasoning is that we need a\nrun-time definable choice of CRCs/digests as no one single hash will suit all\nneeds.\n\nHorst\n\n-------------------------------------------------------\n",
"msg_date": "Mon, 11 Dec 2000 23:59:34 +1100",
"msg_from": "Horst Herb <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Re: CRC, hash & Co."
}
]
|
[
{
"msg_contents": "Hi,\nI have been using Postgres-7.0.2 on Solaris 8 for the past few months, and \nwas about to upgrade to 7.1-test, and after following carefully the docs, I \nget this:\n\npostgres@ultra31:~ > /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data\nIpcSemaphoreCreate: semget(key=5432004, num=17, 03600) failed: No space left \non\ndevice\n \nThis error does *not* mean that you have run out of disk space.\n \nIt occurs either because system limit for the maximum number of\nsemaphore sets (SEMMNI), or the system wide maximum number of\nsemaphores (SEMMNS), would be exceeded. You need to raise the\nrespective kernel parameter. Look into the PostgreSQL documentation\nfor details.\n \npostgres@ultra31:~ > \n\nI looked at the FAQ_Solaris, but found nothing on this case. I remember \nmaking changes to the kernel parameters when I fist installed postgres, but \ncan't remember where I found that info.\n\nAny clues?\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Mon, 11 Dec 2000 12:07:01 -0300",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "No postgres on Solaris"
},
{
"msg_contents": "\"Martin A. Marques\" <[email protected]> writes:\n\n> It occurs either because system limit for the maximum number of\n> semaphore sets (SEMMNI), or the system wide maximum number of\n> semaphores (SEMMNS), would be exceeded. You need to raise the\n> respective kernel parameter. Look into the PostgreSQL documentation\n> for details.\n[...]\n> I looked at the FAQ_Solaris, but found nothing on this case. I remember \n> making changes to the kernel parameters when I fist installed postgres, but \n> can't remember where I found that info.\n\nI don't remember where on the PG site I found this, but this is what\nI'm using currently:\n\nset shmsys:shminfo_shmmax=0x2000000\nset shmsys:shminfo_shmmin=1\nset shmsys:shminfo_shmmni=256\nset shmsys:shminfo_shmseg=256\nset semsys:seminfo_semmap=256\nset semsys:seminfo_semmni=512\nset semsys:seminfo_semmns=512\nset semsys:seminfo_semmsl=32\n\nThese lines are all at the bottom of /etc/system.\n\nChris\n\n-- \n----------------------------------------------------- [email protected]\nChris Jones SRI International, Inc.\n",
"msg_date": "12 Dec 2000 10:00:49 -0700",
"msg_from": "Chris Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: No postgres on Solaris"
},
{
"msg_contents": "El Lun 11 Dic 2000 12:07, Martin A. Marques escribi�:\n> Hi,\n> I have been using Postgres-7.0.2 on Solaris 8 for the past few months, and\n> was about to upgrade to 7.1-test, and after following carefully the docs, I\n> get this:\n>\n> postgres@ultra31:~ > /usr/local/pgsql/bin/postmaster -D\n> /usr/local/pgsql/data IpcSemaphoreCreate: semget(key=5432004, num=17,\n> 03600) failed: No space left on\n> device\n\nSorry, checked the FAQ (I thought this would be in the FAQ_Solaris, but it \nwas in the general), and I just recompiled without the --with-maxbackends=64, \nso I ran out of semaphores.\n\nFixed. ;-)\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Wed, 13 Dec 2000 11:41:47 -0300",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: No postgres on Solaris"
},
{
"msg_contents": "Martin A. Marques writes:\n\n> IpcSemaphoreCreate: semget(key=5432004, num=17, 03600) failed: No space left\n> on\n> device\n\nhttp://www.postgresql.org/devel-corner/docs/postgres/kernel-resources.htm#SYSVIPC\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 13 Dec 2000 18:58:12 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: No postgres on Solaris"
},
{
"msg_contents": "I found it in the PostgreSQL Administrator manual under \"Managing Kernel \nResources\".\n\nWade Oberpriller\n\n> \n> Hi,\n> I have been using Postgres-7.0.2 on Solaris 8 for the past few months, and \n> was about to upgrade to 7.1-test, and after following carefully the docs, I \n> get this:\n> \n> postgres@ultra31:~ > /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data\n> IpcSemaphoreCreate: semget(key=5432004, num=17, 03600) failed: No space left \n> on\n> device\n> \n> This error does *not* mean that you have run out of disk space.\n> \n> It occurs either because system limit for the maximum number of\n> semaphore sets (SEMMNI), or the system wide maximum number of\n> semaphores (SEMMNS), would be exceeded. You need to raise the\n> respective kernel parameter. Look into the PostgreSQL documentation\n> for details.\n> \n> postgres@ultra31:~ > \n> \n> I looked at the FAQ_Solaris, but found nothing on this case. I remember \n> making changes to the kernel parameters when I fist installed postgres, but \n> can't remember where I found that info.\n> \n> Any clues?\n> \n> -- \n> System Administration: It's a dirty job, \n> but someone told I had to do it.\n> -----------------------------------------------------------------\n> Mart�n Marqu�s\t\t\temail: \[email protected]\n> Santa Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\n> Administrador de sistemas en math.unl.edu.ar\n> -----------------------------------------------------------------\n> \n\n",
"msg_date": "Wed, 13 Dec 2000 13:29:29 -0600 (CST)",
"msg_from": "[email protected] (Wade D. Oberpriller)",
"msg_from_op": false,
"msg_subject": "Re: No postgres on Solaris"
},
{
"msg_contents": "\n\nSorry...that url for easy PostgreSQL (windowsm\n\n",
"msg_date": "Thu, 14 Dec 2000 17:59:48 +0800 (PST)",
"msg_from": "Chris Ian Capon Fiel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v.7.0.2 for Windows 98,2000,NT"
},
{
"msg_contents": "\n\nSorry for the website is not accessble that time .... but now it can be\naccess at this url http://208.160.255.143\n\nthis include an easy installation of PostgreSQL v.7.0.2 for windows\n98,2000 and NT. there is a pg guardian that automatically start and setup\nur server and many more...:) hope you like this piece of program....\n\n\nian\n\n",
"msg_date": "Thu, 14 Dec 2000 18:11:31 +0800 (PST)",
"msg_from": "Chris Ian Capon Fiel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL for Windows 98, 2000, NT"
}
]
|
[
{
"msg_contents": "I suggest to remove the following elog from line 943 of xlog.c. \nIt does not give real useful info and is repeated for each checkpoint,\nthus filling the log of an otherwise idle server.\n\n elog(LOG, \"MoveOfflineLogs: skip %s\", xlde->d_name);\n\nDEBUG: MoveOfflineLogs: skip 0000000000000001\n\nGreetings :-)\nAndreas\n",
"msg_date": "Mon, 11 Dec 2000 16:09:03 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "suggest remove of elog in xlog.c"
}
]
|
[
{
"msg_contents": "> Hiroshi Inoue <[email protected]> writes:\n> > When VACUUM for a table starts, the transaction is not\n> > committed yet of cource. After *commit* VACUUM has handled\n> > heap/index tuples very carefully to be crash-safe before\n> > 7.1. Currently another vacuum could be invoked in the\n> > already committed transaction. There has been no such\n> > situation before 7.1. Yes,VACUUM isn't crash-safe now.\n> \n> Vadim, do you agree with this argument? If so, I think it's\n> something we need to fix. I don't see what Hiroshi is worried\n> about, myself, but if there really is an issue here...\n\nIf we move tuples in already committed state, a page with new\ntuple position goes to disk and backend crashes before page with\nold tuple position updated then we'll have two version of tuple\nafter restart (new tuple with HEAP_MOVED_IN is valid and there is\nno HEAP_MOVED_OFF in old tuple version).\nI don't know how bad is it for TOAST tables though.\n\nVadim\nP.S. I had no Inet access on weekends - my home phone line was broken...\n",
"msg_date": "Mon, 11 Dec 2000 09:19:34 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Is VACUUM still crash-safe?"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> If we move tuples in already committed state, a page with new\n> tuple position goes to disk and backend crashes before page with\n> old tuple position updated then we'll have two version of tuple\n> after restart (new tuple with HEAP_MOVED_IN is valid and there is\n> no HEAP_MOVED_OFF in old tuple version).\n\nThat's not good. Perhaps VACUUM still needs to fsync the file before\nits internal commit?\n\n> I don't know how bad is it for TOAST tables though.\n\nI still don't see anything here that affects the handling of TOAST\ntables, which was Hiroshi's original complaint.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2000 12:36:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is VACUUM still crash-safe? "
}
]
|
[
{
"msg_contents": "> I suggest to remove the following elog from line 943 of xlog.c. \n> It does not give real useful info and is repeated for each checkpoint,\n> thus filling the log of an otherwise idle server.\n> \n> elog(LOG, \"MoveOfflineLogs: skip %s\", \n> xlde->d_name);\n> \n> DEBUG: MoveOfflineLogs: skip 0000000000000001\n> \n> Greetings :-)\n\nNo problems -:)\n\nVadim\n",
"msg_date": "Mon, 11 Dec 2000 09:20:41 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: suggest remove of elog in xlog.c"
}
]
|
[
{
"msg_contents": "> > If we move tuples in already committed state, a page with new\n> > tuple position goes to disk and backend crashes before page with\n> > old tuple position updated then we'll have two version of tuple\n> > after restart (new tuple with HEAP_MOVED_IN is valid and there is\n> > no HEAP_MOVED_OFF in old tuple version).\n> \n> That's not good. Perhaps VACUUM still needs to fsync the file before\n> its internal commit?\n\nOps, sorry - this case is not relevant to 7.1: WAL guarantees that\nboth pages will be updated on restart. Seems we are safe now.\n\nVadim\n",
"msg_date": "Mon, 11 Dec 2000 10:00:54 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Is VACUUM still crash-safe? "
},
{
"msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > > If we move tuples in already committed state, a page with new\n> > > tuple position goes to disk and backend crashes before page with\n> > > old tuple position updated then we'll have two version of tuple\n> > > after restart (new tuple with HEAP_MOVED_IN is valid and there is\n> > > no HEAP_MOVED_OFF in old tuple version).\n> >\n> > That's not good. Perhaps VACUUM still needs to fsync the file before\n> > its internal commit?\n> \n> Ops, sorry - this case is not relevant to 7.1: WAL guarantees that\n> both pages will be updated on restart. Seems we are safe now.\n> \n\nFirst,already committed state isn't a normal state at least without WAL.\nWe must have access to db as less as possible in the state without WAL.\nAFAIK there has been no proof that we are sufficently safe in the \nstate under WAL. Don't you have to prove it if you dare to do another\nvacuum in the state ?\n\nSecond,isn't the following an example that VACUUM isn't crash-safe.\n\n VACUUM of a toast table crashed immediately after the movement\n of a tuple(and before inserting corresponding index tuples).\n Unfortunately the movement of a tuple is directly committed in\n already committed state but corresponding index tuples aren't\n inserted.\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Tue, 12 Dec 2000 08:59:07 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is VACUUM still crash-safe?"
},
{
"msg_contents": "Hiroshi Inoue <[email protected]> writes:\n> VACUUM of a toast table crashed immediately after the movement\n> of a tuple(and before inserting corresponding index tuples).\n> Unfortunately the movement of a tuple is directly committed in\n> already committed state but corresponding index tuples aren't\n> inserted.\n\nAh, *now* I see what you're talking about. You're right, the TOAST\ntable has to be vacuumed under a separate transaction number.\n\nI still don't like releasing the lock on the master table though.\nVACUUM cheats on the commit already, could it start a new transaction\nnumber without releasing the lock?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2000 20:46:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is VACUUM still crash-safe? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <[email protected]> writes:\n> > VACUUM of a toast table crashed immediately after the movement\n> > of a tuple(and before inserting corresponding index tuples).\n> > Unfortunately the movement of a tuple is directly committed in\n> > already committed state but corresponding index tuples aren't\n> > inserted.\n> \n> Ah, *now* I see what you're talking about. You're right, the TOAST\n> table has to be vacuumed under a separate transaction number.\n> \n> I still don't like releasing the lock on the master table though.\n> VACUUM cheats on the commit already, could it start a new transaction\n> number without releasing the lock?\n> \n\nIt is also preferable that we could replace current intermediate\n*commit*\nof vacuum by real commit(without releaseing the lock).\nIIRC,Vadim and I talked about it a little once before.\n\nWe could avoid releasing the lock at commit time but probably\nthe next StartTransaction() has to change xid-s of LockTable\nentries. I'm not sure if it's sufficient or not. For example\nWe could hardly keep row-level lock. We could acquire the lock\non a row which is already committed.\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Tue, 12 Dec 2000 13:13:57 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is VACUUM still crash-safe?"
}
]
|
[
{
"msg_contents": "> One thing we should look at before going with a 64-bit method is the\n> extra storage space for the larger checksum. We can clearly afford\n> an extra 32 bits for a checksum on an 8K disk page, but if Vadim is\n> envisioning checksumming each individual XLOG record then the extra\n> space is more annoying.\n\nWe need in checksum for each record. But there is no problem with\n64bit CRC: log record header is 8byte aligned, so CRC addition\nwill add 8bytes to header anyway. Is there any CRC64 code?\n\nVadim\n",
"msg_date": "Mon, 11 Dec 2000 10:09:01 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Re: CRC "
},
{
"msg_contents": "On Mon, Dec 11, 2000 at 10:09:01AM -0800, Mikheev, Vadim wrote:\n> > One thing we should look at before going with a 64-bit method is the\n> > extra storage space for the larger checksum. We can clearly afford\n> > an extra 32 bits for a checksum on an 8K disk page, but if Vadim is\n> > envisioning checksumming each individual XLOG record then the extra\n> > space is more annoying.\n> We need in checksum for each record. But there is no problem with\n> 64bit CRC: log record header is 8byte aligned, so CRC addition\n> will add 8bytes to header anyway. Is there any CRC64 code?\n\nAll you need is a good 64-bit polynomial. Unfortunately, I've been\nunable to find one that's been analyzed to any amount.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/",
"msg_date": "Mon, 11 Dec 2000 14:42:31 -0600",
"msg_from": "Bruce Guenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: CRC"
}
]
|
[
{
"msg_contents": "Hello all,\n\nGreat Bridge formally announced its first product and service offerings \ntoday. Here are the highlights:\n\n * QA-tested distribution of PostgreSQL 7.0.3 for Linux (free, source \n and binaries available at http://www.greatbridge.com/download)\n * Automated graphical installer (free, source and binaries available \n at http://www.greatbridge.org/project/gbinstaller/)\n * 500+ pages of documentation (free, available at \n http://www.greatbridge.com/docs)\n * professional support offerings ranging all the way up to 24 \n hours/7 days\n * consulting services ranging from planning and design to porting \n and implementation\n \nI'd be happy to answer any questions on- or off-list. Or of course you \ncan talk to John Rickman, our VP Sales, [email protected].\n\nHere's a link to the announcement:\n\nhttp://www.greatbridge.com/about/press.php?content_id=23\nRegards,\nNed\n\n-- \n----------------------------------------------------\nNed Lilly e: [email protected]\nVice President w: www.greatbridge.com\nEvangelism / Hacker Relations v: 757.233.5523\nGreat Bridge, LLC f: 757.233.5555\n\n",
"msg_date": "Mon, 11 Dec 2000 12:17:51 -0600",
"msg_from": "Ned Lilly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Great Bridge PostgreSQL products and services"
},
{
"msg_contents": "Looks good.\n\n> Hello all,\n> \n> Great Bridge formally announced its first product and service offerings \n> today. Here are the highlights:\n> \n> * QA-tested distribution of PostgreSQL 7.0.3 for Linux (free, source \n> and binaries available at http://www.greatbridge.com/download)\n> * Automated graphical installer (free, source and binaries available \n> at http://www.greatbridge.org/project/gbinstaller/)\n> * 500+ pages of documentation (free, available at \n> http://www.greatbridge.com/docs)\n> * professional support offerings ranging all the way up to 24 \n> hours/7 days\n> * consulting services ranging from planning and design to porting \n> and implementation\n> \n> I'd be happy to answer any questions on- or off-list. Or of course you \n> can talk to John Rickman, our VP Sales, [email protected].\n> \n> Here's a link to the announcement:\n> \n> http://www.greatbridge.com/about/press.php?content_id=23\n> Regards,\n> Ned\n> \n> -- \n> ----------------------------------------------------\n> Ned Lilly e: [email protected]\n> Vice President w: www.greatbridge.com\n> Evangelism / Hacker Relations v: 757.233.5523\n> Great Bridge, LLC f: 757.233.5555\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Dec 2000 13:46:41 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Great Bridge PostgreSQL products and services"
},
{
"msg_contents": "On Mon, Dec 11, 2000 at 12:17:51PM -0600, Ned Lilly wrote:\n> Great Bridge formally announced its first product and service offerings \n> today. Here are the highlights:\n>...\n \nNed, do you have any plans towards Europe? For instance will anyone of you\nbe at the CeBit next year?\n\nI think we should start talking about some sort of partnership.\n\nLater\nMichael\n-- \nDr. Michael Meskes | Tel.: +49 (2461) 69071-0\nGesch�ftsf�hrer, credativ GmbH | Fax: +49 (2461) 69071-1\nKarl-Heinz-Beckurts-Str. 13, | Mobil: +49 (170) 1857143\n52428 J�lich, Deutschland | Email: [email protected]\n",
"msg_date": "Sat, 16 Dec 2000 17:27:18 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Great Bridge PostgreSQL products and services"
},
{
"msg_contents": "ARGH!\n\nSorry, forget about my last mail on this topic. It was supposed to go to Ned\nand not the list.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Sat, 16 Dec 2000 17:32:59 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Great Bridge PostgreSQL products and services"
}
]
|
[
{
"msg_contents": "Ok, after taking in feedback from several RPM users over the last six\nmonths or so, I have come up with a list of possible changes to the\nRPMset for PostgreSQL 7.1. I'd like comments on these changes. If\nthere are now comments, I'll go ahead and make the changes.\n\n1.)\tAddition of a postgresql-lib subpackage. Rationale: those using\njust the Perl, Python, or Tcl clients may not want the full psql cli and\ndocumetation installed just to use their client. This package would\nsimply be the shared object dynamic load libraries necessary for any\nclient.\n\n2.)\tAddition of a postgresql-pltcl subpackage. Rationale: pl/tcl is\ncurrently included as part of the postgresql-tcl package. If someone\nhas the need for a tcl-client ONLY installation, they currently cannot\ndo so due to the postgresql-tcl package's dependency upon the server\nsubpackage being loaded. Likewise, answering the question 'why not put\npl/tcl in the main server package', someone needing the server package,\nbut not pl/tcl, bight not want to have the full Tcl client installed\njust to run a server.\n\n3.)\tCross-distribution mechanisms. This is a work-in-progress -- but\nthe eventual goal is a source RPM that will build working packages on\nthe most popular RPM-based Linux distributions, as well as on other OS's\nthat have RPM installed (RPM is not just for Linux).\n\n4.)\tAddition of a postgresql-docs subpackage. The main package is\nrather large -- there may be those that want the docs but not the other\nprograms that are in the main package. Now, the argument can be made\nthat, if the docs are split out, then there is much less need for a\nseparate libs subpackage -- and I am inclined to agree. I am certainly\nopen to suggestions -- but the gist is that I am looking at ways for the\nRPM installation to have the minimum necessary footprint for what the\nuser wants to do. \n\nI am a minimalist not because of a desire to save disk space -- disk is\ncheap, after all. I am a minimalist because of security -- if I don't\n_need_ Perl installed, then I shouldn't be forced to install Perl, for\ninstance. A _full_ PostgreSQL RPM installation requires Python, Perl,\nTcl/Tk, and X installed -- and many folk will not need nor will they\nwant to have X installed on their production database server. The\nsecurity angle is also why I use my own RPM's -- on a machine that has\nno compiler. It is hard for a malicious cracker to compile cracking\ntools or rootkits on my machine when no compiler exists on my machine. \nI have been cracked into once -- and I don't plan on being so obliging\nto the cracker next time one does get in.\n\nThe from-source configure allows much flexibility in what parts get\nbuilt -- I just want to provide the same or similar flexibility in the\nprebuilt RPM installation.\n\nI am cross-posting (via blind copy) this to -hackers since the community\nin -hackers is well-versed in system issues such as these.\n\nReplies should go to the -ports list, or directly to me.\n\nMany thanks for the feedback I have received thus far!\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 11 Dec 2000 14:09:35 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "RPM changes for 7.1."
},
{
"msg_contents": "Lamar Owen writes:\n\n> 1.)\tAddition of a postgresql-lib subpackage.\n\nWhat exactly will be in this one?\n\nOne major gripe about the RPMs I have is that the client package is named\n\"postgresql\". If I install \"postgresql\" I'd sort of expect a database\nserver. I suggest naming the package \"postgresql-clients\".\n\n> It is hard for a malicious cracker to compile cracking tools or\n> rootkits on my machine when no compiler exists on my machine.\n\nWhy would a cracker need to compile the rootkit on your machine? He can\njust build it on his and upload it. If your intent is on making things\ninconvenient for crackers then you're missing the point. But I\ndisgress...\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 11 Dec 2000 22:00:56 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RPM changes for 7.1."
},
{
"msg_contents": "On Mon, Dec 11, 2000 at 02:09:35PM -0500, Lamar Owen wrote:\n> \n> I am cross-posting (via blind copy) this to -hackers since the community\n> in -hackers is well-versed in system issues such as these.\n\nAh, _that's_ why my procmail filter missed this one! Couldn't think of\nwhy you'd email me about RPMs directly, since I use Oliver's debs.\n\nLooks good, though, and the analysis applies regardless of the details of\nwhich packing system is used. You and Oliver would be well advised to \nconsider using the same scheme, as much as possible.\n\nHere's the current set of Debian packages derived from the main postgresql \nsource, with their descriptions:\n\npgaccess - Tk/Tcl front-end for PostgreSQL database\npostgresql-slink - Package to ease upgrade of postgresql from Debian 2.1 to 2.2\nlibpgsql2 - Shared library libpq.so.2 for PostgreSQL\npostgresql-contrib - Additional facilities for PostgreSQL\npostgresql-test - Regression test suite for PostgreSQL\nlibpgjava - Java database (JDBC) driver for PostgreSQL\npostgresql-pl - A procedural language for PostgreSQL\necpg - Embedded SQL for PostgreSQL\nlibpgtcl - Tcl/Tk library and front-end for PostgreSQL.\nlibpgsql - Library for connecting to PostgreSQL 6.3 backend\npostgresql-dev - Header files for libpq (postgresql library)\npostgresql - Object-relational SQL database, descended from POSTGRES.\npostgresql-doc - Documentation for the PostgreSQL database.\npostgresql-client - Front-end programs for PostgreSQL\nlibpgperl - Perl modules for PostgreSQL.\nodbc-postgresql - ODBC support for PostgreSQL\ntask-database-pg - PostgreSQL database\n\nthe -pl package contains both plpgsql and pltcl. The descripton there\nneeds updating. The task- package is an empty package, with dependencies\nthat pull in a 'recommended set' of packages to fullfill a given task, in \nthis case, running a postgresql database installation. It pulls in:\n\npostgresql\npostgresql-client\npgaccess\necpg\npostgresql-doc\npostgresql-pl\npostgresql-contrib\nphp-pgsql\n\nAs you can see, Oliver has also taken the route of allowing the admin\nto load the minimum set of software for any configuration, except for\nthe procedural languages on the server itself. Debian has package naming\npolicies for library and development packages. Libraries are named libfoo,\nwhile development packages (usually header files) are foo-dev.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Mon, 11 Dec 2000 15:28:49 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RPM changes for 7.1."
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > 1.) Addition of a postgresql-lib subpackage.\n> What exactly will be in this one?\n\nlibpq and associated symlinks.\n \n> One major gripe about the RPMs I have is that the client package is named\n> \"postgresql\". If I install \"postgresql\" I'd sort of expect a database\n> server. I suggest naming the package \"postgresql-clients\".\n\nI'm actually considering removing the 'postgresql' package altogether --\nand replace it with postgresql-libs, postgresql-docs, and\npostgresql-psql. Not pretty, though. But I do get your point -- and\nagree with it to an extent.\n\nThe postgresql-client package existed once -- and the current RPM's\nObsoletes: it. That can be changed if need be.\n \n> > It is hard for a malicious cracker to compile cracking tools or\n> > rootkits on my machine when no compiler exists on my machine.\n \n> Why would a cracker need to compile the rootkit on your machine? He can\n> just build it on his and upload it. If your intent is on making things\n> inconvenient for crackers then you're missing the point.\n\nThe time I was cracked into the cracker compiled the rootkit on my box\n-- libc versioning difficulties and all of older RedHat's. This was\nwhen there were major changes between RedHat 4.x (which I was running at\nthe time) and RedHat 5.0, which was more popular by that time. The more\nstumblingblocks I put in the cracker's way, the better I like it.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 11 Dec 2000 16:39:25 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RPM changes for 7.1."
},
{
"msg_contents": "> > 1.) Addition of a postgresql-lib subpackage.\n> What exactly will be in this one?\n> One major gripe about the RPMs I have is that the client package is named\n> \"postgresql\". If I install \"postgresql\" I'd sort of expect a database\n> server. I suggest naming the package \"postgresql-clients\".\n\nWe had it this way for some time, and I found it annoying for at least a\ncouple of reasons stemming from the observation that in a real\ndistributed system, there will be more clients than servers:\n\n1) The docs etc should colocate with clients, and RPMs make that more\ndifficult if the \"primary package\" does not have the base name of the\ntotal package. If the docs (or at least some docs) are traveling with\nthe clients, and if it would be easiest to find them in\n/usr/doc/postgresql-7.1/, then that package should have the docs (it\ndoes). If Lamar moves them to a -docs package, then they will show up in\n/usr/doc/postgresql-docs-7.1/ which is redundantly named and somewhat\nobscure to guessing.\n\n2) The base package should be able to be installed in a useful way by\nitself. For a single-machine installation, both will be installed\nanyway, but in general a server cannot be accessed or configured without\nthe client interfaces available.\n\n - Thomas\n",
"msg_date": "Tue, 12 Dec 2000 07:06:57 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RPM changes for 7.1."
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n >Here's the current set of Debian packages derived from the main postgresql \n >source, with their descriptions:\n...\n >\n >the -pl package contains both plpgsql and pltcl. The descripton there\n >needs updating. \n\nThis is a mistake, and the dependencies aren't right. pltcl ought\nto be in with libpgtcl, to share the dependency on tcl. I suppose\nI originally classed the PLs together, but there's plperl as well now. \nI'll be fixing this soon.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Be of good courage, and he shall strengthen your \n heart, all ye that hope in the LORD.\" \n Psalms 31:24 \n\n\n",
"msg_date": "Tue, 12 Dec 2000 14:29:25 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RPM changes for 7.1. "
},
{
"msg_contents": "Thomas Lockhart wrote:\n> > > 1.) Addition of a postgresql-lib subpackage.\n> > What exactly will be in this one?\n> > One major gripe about the RPMs I have is that the client package is named\n> > \"postgresql\". If I install \"postgresql\" I'd sort of expect a database\n> > server. I suggest naming the package \"postgresql-clients\".\n \n> We had it this way for some time, and I found it annoying for at least a\n> couple of reasons stemming from the observation that in a real\n> distributed system, there will be more clients than servers:\n \n> 1) The docs etc should colocate with clients, and RPMs make that more\n[snip]\n> does). If Lamar moves them to a -docs package, then they will show up in\n> /usr/doc/postgresql-docs-7.1/ which is redundantly named and somewhat\n> obscure to guessing.\n\nWe already have the problem with postgresql-tk -- the pgaccess docs get\nplaced in the postgresql-tk-x.y.z directory under the docs. There are\nsome things I can do with that -- I'll have to investigate. The %doc\ndirective can be made do some other things.\n\nHowever, I am not leaning towards a separate docs subpackage -- it was\nsuggested to me, and I placed it on my list for discussion. However, I\n_am_ inclined to put the PostScript docs in separate packaging, once I\nget the %doc directive to do what I want it to do. But the man pages and\nHTML docs (plus and index page I need to put together) should, IMO, stay\nin the 'main' package. But what else belongs there?\n\nSuSE is already shipping a separate libs RPM (although their naming has\nthus far been somewhat different due to legacy considerations --\nhopefully that will change to allow naming consistency amongst other\ncross-distribution consistencies). The suggestion for a separate -libs\nsubpackage (which would be very small, BTW) was made by the SuSE\nmaintainer, Reinhard Max. And I tend to agree with his reasoning.\n \n> 2) The base package should be able to be installed in a useful way by\n> itself. For a single-machine installation, both will be installed\n> anyway, but in general a server cannot be accessed or configured without\n> the client interfaces available.\n\nThis is most definitely a valid point. But, I still go back to the FAQ\nof 'I installed the postgresql RPM, and I can't find initdb to start a\ndatabase!' \n\nMaking the postgresql package depend upon the postgresql-libs package is\neasy enough. That means you do have at leats two packages to install. \nAnd, that dependency won't interfere with OS installs, as they can\nautomatically resolve dependencies such as that.\n\nNo, better instructions on the download page, detailing which package to\ninstall for functionality required, should be applied. I can write that\ndoc easy enough.\n\nAs to 'superpackages' that simply require other packages in order to\ninstall, well, that may be a possibility. In the RPM context, that will\nmean an empty RPM with an ugly marriage of a dependency list, and a\n%post scriptlet that does an 'rpm -e' of itself. I cringe thinking\nabout recursively calling RPM inside a scriptlet :-).... Although, I\n_have_ seen it done.\n\nI am a firm beliver in the client-server split -- but I am open to\nsuggestion as to the best way of documenting said split.\n\nOne example of a split that seems to work well (AFAIK) is the amanda\nnetwork backup tool.\nThere are four packages:\namanda\namanda-client\namanda-server\namanda-devel.\n\nThe main package contains files common to the client and server. 
Amanda\n(the Advanced Maryland Automatic Network Disk Archiver) is similar\nenough to the postgresql situation to be a good analogy -- the client\ncan exist on a machine with no server, and vice-versa. \n\nThe client subpackage contains files only needed by the client, and the\nserver package contains files only needed by the server. The devel\nsubpackage contains the static libs and headers for building custom apps\nthat might need to connect to an amanda server.\n\nHowever, we have a different split: the main package is also the\nclient. But, just what files are really _common_ to a server\ninstallation versus a client installation? Well, the psql cli is\n_required_ for a server installation, really -- unless you want to not\nrun pgdump and the standard restore on the server machine. The problem\nis that the upgrade to a new major version requires a dump/restore --\nmeaning the server package really _needs_ the cli, as well as the libpq\nclient-side libs.\n\nSo, it _is_ appropriate for us to have the main package of common files\nhave the client stuff as well.\n\nAlthough there is precedent for a 'postgresql-common' RPM -- see the\nnetscape packages -- you have netscape-common, netscape-navigator, and\nnetscape-communicator packages -- but there are some differences there\nas well.\n\nHaving the libs split out means someone can install postgresql-libs and\npostgresql-perl and have a meaningful perl client, for very little\nspace. Of course, you can install the main postgresql RPM with the\n--nodocs directive and not have any of the docs install for a lean main\npackage installation.\n\nOr postgresql-libs and php (and php-pgsql) for a PHP/Apache\ninstallation. No need to pull in _any_ cli or much of the docs. \nAlthough if they want the docs, then they can install the main package\n(which is predominately docs). The rest of the main package is\ncurrently the c and c++ libs and the base client software (the create*\nand drop* scripts, pg_dump, pg_dumpall, vacuumdb, and psql).\n\nComments?\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 12 Dec 2000 14:30:01 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RPM changes for 7.1."
},
{
"msg_contents": "> We already have the problem with postgresql-tk -- the pgaccess docs get\n> placed in the postgresql-tk-x.y.z directory under the docs.\n\nI find it somewhat odd that pgaccess is thrown together with pgtksh.\nMostly they have nothing to do with each other besides both being somehow\nconnected with Tcl/Tk. That's like a postgresql-c package: nonsense.\n\nSimilarly, I find it not useful that PL/Perl and thrown together with\nPg.pm, and PL/Tcl is thrown together with pgtclsh. Maybe you want to make\na separate postgresql-server-{perl,tcl} package. I think you already\nsuggested that.\n\n> However, I am not leaning towards a separate docs subpackage -- it was\n> suggested to me, and I placed it on my list for discussion.\n\nI don't think this is a bad idea. Maybe people only want to install the\ndocs once in their network and make them available via a web server. I\ndid it that way.\n\n> Making the postgresql package depend upon the postgresql-libs package is\n> easy enough. That means you do have at leats two packages to install.\n\n(On a quiet night you can hear the Debian users laughing...)\n\n> One example of a split that seems to work well (AFAIK) is the amanda\n> network backup tool.\n\n> The main package contains files common to the client and server.\n\nIn PostgreSQL there are, strictly speaking, no files in common to client\nand server.\n\nTwo more points:\n\n* createlang, droplang, and pg_id should be in the server package.\n\n* Maybe you want to create a postgresql-server-devel package with the\nbackend header files. These are needed rather seldom.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 13 Dec 2000 21:05:03 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RPM changes for 7.1."
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Similarly, I find it not useful that PL/Perl and thrown together with\n> Pg.pm, and PL/Tcl is thrown together with pgtclsh. Maybe you want to make\n> a separate postgresql-server-{perl,tcl} package. I think you already\n> suggested that.\n\nI agree -- I use DBD, and thus do not feel the need for Pg.pm, but I do use\nPL/Perl.\n\n> > However, I am not leaning towards a separate docs subpackage -- it was\n> > suggested to me, and I placed it on my list for discussion.\n> \n> I don't think this is a bad idea. Maybe people only want to install the\n> docs once in their network and make them available via a web server. I\n> did it that way.\n\nI have created docs packages in the past. Until I was reminded of the\n--excludedocs option for rpm. Actually, I may have been one who suggested a\ndocs package for PostgreSQL)\n\n> > Making the postgresql package depend upon the postgresql-libs package is\n> > easy enough. That means you do have at leats two packages to install.\n> \n> (On a quiet night you can hear the Debian users laughing...)\n> \n> > One example of a split that seems to work well (AFAIK) is the amanda\n> > network backup tool.\n> \n> > The main package contains files common to the client and server.\n> \n> In PostgreSQL there are, strictly speaking, no files in common to client\n> and server.\n> \n> Two more points:\n> \n> * createlang, droplang, and pg_id should be in the server package.\n> \n> * Maybe you want to create a postgresql-server-devel package with the\n> backend header files. These are needed rather seldom.\n\nWould we have postgresql-server-devel and postgresql-clients-devel?\nThis splits things up rather finely, but it seems consistent, and I \ntend to like that -- overall the way Lamar is going sounds very good \nto me. And supports a point from an old discussion -- no matter how \ngood the developers are (and they are great) -- it really helps to \nhave a good packager as well.\n\n-- \nKarl DeBisschop [email protected]\nLearning Network/Information Please http://www.infoplease.com\nNetsaint Plugin Developer [email protected]\n",
"msg_date": "Wed, 13 Dec 2000 18:21:24 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RPM changes for 7.1."
},
{
"msg_contents": "Btw., there's a bug in the RPMs distributed with Red Hat 7.0.\n\nIf the postmaster that is under RPM and SysV init control is not running,\nthen a call '/etc/init.d/postgresql stop' will kill all other postmasters\non the system, for example those that I have running for development.\n\nYou should rely on the postmaster.pid file in the data area to find the\nprocess to kill.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 15 Dec 2000 19:23:05 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RPM changes for 7.1."
},
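A minimal sketch of the pidfile approach Peter describes -- written in C
for concreteness, though the real fix belongs in the shell init script.
The data directory path is the Red Hat RPM default and is an assumption
here; the logic is simply to read the first line of postmaster.pid and
signal only that process, leaving any other postmasters alone.

#include <sys/types.h>
#include <signal.h>
#include <stdio.h>

int
main(void)
{
    const char *pidpath = "/var/lib/pgsql/data/postmaster.pid";
    FILE       *fp = fopen(pidpath, "r");
    long        pid;

    if (fp == NULL)
    {
        /* no pidfile: the RPM-controlled postmaster is not running */
        fprintf(stderr, "no %s -- nothing to stop\n", pidpath);
        return 0;
    }
    if (fscanf(fp, "%ld", &pid) != 1 || pid <= 0)
    {
        fprintf(stderr, "could not parse a pid from %s\n", pidpath);
        fclose(fp);
        return 1;
    }
    fclose(fp);

    /* SIGTERM asks the postmaster for a smart shutdown; only the
     * process named in the pidfile is touched */
    if (kill((pid_t) pid, SIGTERM) != 0)
        perror("kill");
    return 0;
}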
{
"msg_contents": "Peter Eisentraut wrote:\n> Btw., there's a bug in the RPMs distributed with Red Hat 7.0. \n> If the postmaster that is under RPM and SysV init control is not running,\n> then a call '/etc/init.d/postgresql stop' will kill all other postmasters\n> on the system, for example those that I have running for development.\n \n> You should rely on the postmaster.pid file in the data area to find the\n> process to kill.\n\nDuly noted, and my apologies. I'll see what I can do.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 15 Dec 2000 15:57:39 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RPM changes for 7.1."
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n\n> 1.)\tAddition of a postgresql-lib subpackage. Rationale: those using\n> just the Perl, Python, or Tcl clients may not want the full psql cli and\n> documetation installed just to use their client. This package would\n> simply be the shared object dynamic load libraries necessary for any\n> client.\n\nSounds like a good idea.\n\n\n> 2.)\tAddition of a postgresql-pltcl subpackage. Rationale: pl/tcl is\n> currently included as part of the postgresql-tcl package. If someone\n> has the need for a tcl-client ONLY installation, they currently cannot\n> do so due to the postgresql-tcl package's dependency upon the server\n> subpackage being loaded. Likewise, answering the question 'why not put\n> pl/tcl in the main server package', someone needing the server package,\n> but not pl/tcl, bight not want to have the full Tcl client installed\n> just to run a server.\n\nThe package is 64 k. Why split it?\n \n> I am a minimalist not because of a desire to save disk space -- disk is\n> cheap, after all. I am a minimalist because of security -- if I don't\n> _need_ Perl installed, then I shouldn't be forced to install Perl, for\n> instance. \n\nI've yet to see a security hole from documentation :)\n> \n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "20 Dec 2000 17:35:29 -0500",
"msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RPM changes for 7.1."
}
]
|
[
{
"msg_contents": "I know you guys are pretty busy with the upcoming release but I\nwas hoping for more interest in this work.\n\nWith this (which needs forward porting) we're able to cut\nvacuum time down from ~10minutes to under 30 seconds.\n\nThe code is a nop unless you compile with special options(MMNB)\nspecify the special vacuum flag (VLAZY) and doesn't look like it\nmesses with anything otherwise.\n\nI was hoping to see it go into 7.0.x because of the non-intrusiveness\nof it and also because Vadim did it so he should understand it so\nthat it won't cause any problems (and on the slight chance that it\ndoes, he should be able to fix it).\n\nBasically Vadim left it up to me to campaign for acceptance of this\nwork and he said he wouldn't have a problem bringing it in as long\nas it was ok with the rest of the development team.\n\nSo can we get a go-ahead on this? :)\n\nthanks,\n-Alfred\n\n----- Forwarded message from Alfred Perlstein <[email protected]> -----\n\nFrom: Alfred Perlstein <[email protected]>\nTo: [email protected]\nSubject: [HACKERS] Patches with vacuum fixes available for 7.0.x\nDate: Thu, 7 Dec 2000 14:57:32 -0800\nMessage-ID: <[email protected]>\nUser-Agent: Mutt/1.2.5i\nSender: [email protected]\n\nWe recently had a very satisfactory contract completed by\nVadim.\n\nBasically Vadim has been able to reduce the amount of time\ntaken by a vacuum from 10-15 minutes down to under 10 seconds.\n\nWe've been running with these patches under heavy load for\nabout a week now without any problems except one:\n don't 'lazy' (new option for vacuum) a table which has just\n had an index created on it, or at least don't expect it to\n take any less time than a normal vacuum would.\n\nThere's three patchsets and they are available at:\n\nhttp://people.freebsd.org/~alfred/vacfix/\n\ncomplete diff:\nhttp://people.freebsd.org/~alfred/vacfix/v.diff\n\nonly lazy vacuum option to speed up index vacuums:\nhttp://people.freebsd.org/~alfred/vacfix/vlazy.tgz\n\nonly lazy vacuum option to only scan from start of modified\ndata:\nhttp://people.freebsd.org/~alfred/vacfix/mnmb.tgz\n\nAlthough the patches are for 7.0.x I'm hoping that they\ncan be forward ported (if Vadim hasn't done it already)\nto 7.1.\n\nenjoy!\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n\n----- End forwarded message -----\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Mon, 11 Dec 2000 12:32:10 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "(one more time) Patches with vacuum fixes available."
},
{
"msg_contents": "\nSounds good to me. [Of course, I never met a patch I didn't like, or so\nthey say.]\n\n\n> I know you guys are pretty busy with the upcoming release but I\n> was hoping for more interest in this work.\n> \n> With this (which needs forward porting) we're able to cut\n> vacuum time down from ~10minutes to under 30 seconds.\n> \n> The code is a nop unless you compile with special options(MMNB)\n> specify the special vacuum flag (VLAZY) and doesn't look like it\n> messes with anything otherwise.\n> \n> I was hoping to see it go into 7.0.x because of the non-intrusiveness\n> of it and also because Vadim did it so he should understand it so\n> that it won't cause any problems (and on the slight chance that it\n> does, he should be able to fix it).\n> \n> Basically Vadim left it up to me to campaign for acceptance of this\n> work and he said he wouldn't have a problem bringing it in as long\n> as it was ok with the rest of the development team.\n> \n> So can we get a go-ahead on this? :)\n> \n> thanks,\n> -Alfred\n> \n> ----- Forwarded message from Alfred Perlstein <[email protected]> -----\n> \n> From: Alfred Perlstein <[email protected]>\n> To: [email protected]\n> Subject: [HACKERS] Patches with vacuum fixes available for 7.0.x\n> Date: Thu, 7 Dec 2000 14:57:32 -0800\n> Message-ID: <[email protected]>\n> User-Agent: Mutt/1.2.5i\n> Sender: [email protected]\n> \n> We recently had a very satisfactory contract completed by\n> Vadim.\n> \n> Basically Vadim has been able to reduce the amount of time\n> taken by a vacuum from 10-15 minutes down to under 10 seconds.\n> \n> We've been running with these patches under heavy load for\n> about a week now without any problems except one:\n> don't 'lazy' (new option for vacuum) a table which has just\n> had an index created on it, or at least don't expect it to\n> take any less time than a normal vacuum would.\n> \n> There's three patchsets and they are available at:\n> \n> http://people.freebsd.org/~alfred/vacfix/\n> \n> complete diff:\n> http://people.freebsd.org/~alfred/vacfix/v.diff\n> \n> only lazy vacuum option to speed up index vacuums:\n> http://people.freebsd.org/~alfred/vacfix/vlazy.tgz\n> \n> only lazy vacuum option to only scan from start of modified\n> data:\n> http://people.freebsd.org/~alfred/vacfix/mnmb.tgz\n> \n> Although the patches are for 7.0.x I'm hoping that they\n> can be forward ported (if Vadim hasn't done it already)\n> to 7.1.\n> \n> enjoy!\n> \n> -- \n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \"I have the heart of a child; I keep it in a jar on my desk.\"\n> \n> ----- End forwarded message -----\n> \n> -- \n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \"I have the heart of a child; I keep it in a jar on my desk.\"\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Dec 2000 15:57:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available."
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> Basically Vadim left it up to me to campaign for acceptance of this\n> work and he said he wouldn't have a problem bringing it in as long\n> as it was ok with the rest of the development team.\n> So can we get a go-ahead on this? :)\n\nIf Vadim isn't sufficiently confident of it to commit it on his own\nauthority, I'm inclined to leave it out of 7.1. My concern is mostly\nschedule. We are well into beta cycle now and this seems like way too\ncritical (not to say high-risk) a feature to be adding after start of\nbeta.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2000 16:00:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available. "
},
{
"msg_contents": "> Alfred Perlstein <[email protected]> writes:\n> > Basically Vadim left it up to me to campaign for acceptance of this\n> > work and he said he wouldn't have a problem bringing it in as long\n> > as it was ok with the rest of the development team.\n> > So can we get a go-ahead on this? :)\n> \n> If Vadim isn't sufficiently confident of it to commit it on his own\n> authority, I'm inclined to leave it out of 7.1. My concern is mostly\n> schedule. We are well into beta cycle now and this seems like way too\n> critical (not to say high-risk) a feature to be adding after start of\n> beta.\n\nI was wondering if Vadim was hesitant because he had done this under\ncontract. Vadim, are you concerned about reliability or are there other\nissues?\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Dec 2000 16:46:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available."
},
{
"msg_contents": "On Mon, 11 Dec 2000, Tom Lane wrote:\n\n> Alfred Perlstein <[email protected]> writes:\n> > Basically Vadim left it up to me to campaign for acceptance of this\n> > work and he said he wouldn't have a problem bringing it in as long\n> > as it was ok with the rest of the development team.\n> > So can we get a go-ahead on this? :)\n> \n> If Vadim isn't sufficiently confident of it to commit it on his own\n> authority, I'm inclined to leave it out of 7.1. My concern is mostly\n> schedule. We are well into beta cycle now and this seems like way too\n> critical (not to say high-risk) a feature to be adding after start of\n> beta.\n\nAs this is post-beta release, and this isn't a bug fix, I concur with this\n... its something we can easily put up on the web site as a patch to v7.1,\nbut would prefer this not going mainstream until *after* release ...\n\n\n",
"msg_date": "Mon, 11 Dec 2000 18:21:10 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available."
},
{
"msg_contents": "On Mon, 11 Dec 2000, Bruce Momjian wrote:\n\n> > Alfred Perlstein <[email protected]> writes:\n> > > Basically Vadim left it up to me to campaign for acceptance of this\n> > > work and he said he wouldn't have a problem bringing it in as long\n> > > as it was ok with the rest of the development team.\n> > > So can we get a go-ahead on this? :)\n> > \n> > If Vadim isn't sufficiently confident of it to commit it on his own\n> > authority, I'm inclined to leave it out of 7.1. My concern is mostly\n> > schedule. We are well into beta cycle now and this seems like way too\n> > critical (not to say high-risk) a feature to be adding after start of\n> > beta.\n> \n> I was wondering if Vadim was hesitant because he had done this under\n> contract. Vadim, are you concerned about reliability or are there other\n> issues?\n\nIrrelevant .. we are post-beta release, and this doesn't fix a bug, so it\ndoesn't go in ...\n\n\n",
"msg_date": "Mon, 11 Dec 2000 18:21:29 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available."
},
{
"msg_contents": "* The Hermit Hacker <[email protected]> [001211 14:27] wrote:\n> On Mon, 11 Dec 2000, Bruce Momjian wrote:\n> \n> > > Alfred Perlstein <[email protected]> writes:\n> > > > Basically Vadim left it up to me to campaign for acceptance of this\n> > > > work and he said he wouldn't have a problem bringing it in as long\n> > > > as it was ok with the rest of the development team.\n> > > > So can we get a go-ahead on this? :)\n> > > \n> > > If Vadim isn't sufficiently confident of it to commit it on his own\n> > > authority, I'm inclined to leave it out of 7.1. My concern is mostly\n> > > schedule. We are well into beta cycle now and this seems like way too\n> > > critical (not to say high-risk) a feature to be adding after start of\n> > > beta.\n> > \n> > I was wondering if Vadim was hesitant because he had done this under\n> > contract. Vadim, are you concerned about reliability or are there other\n> > issues?\n> \n> Irrelevant .. we are post-beta release, and this doesn't fix a bug, so it\n> doesn't go in ...\n\nI'm hoping this just means it won't be investigated until the release\nis made?\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Mon, 11 Dec 2000 16:29:48 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available."
}
]
|
[
{
"msg_contents": "> > Ops, sorry - this case is not relevant to 7.1: WAL guarantees that\n> > both pages will be updated on restart. Seems we are safe now.\n> \n> First,already committed state isn't a normal state at least \n> without WAL. We must have access to db as less as possible in the\n> state without WAL.\n> AFAIK there has been no proof that we are sufficently safe in the \n> state under WAL. Don't you have to prove it if you dare to do another\n> vacuum in the state ?\n> \n> Second,isn't the following an example that VACUUM isn't crash-safe.\n> \n> VACUUM of a toast table crashed immediately after the movement\n> of a tuple(and before inserting corresponding index tuples).\n> Unfortunately the movement of a tuple is directly committed in\n> already committed state but corresponding index tuples aren't\n> inserted.\n\nNow you've won -:)\n\nVadim\n",
"msg_date": "Mon, 11 Dec 2000 16:07:35 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Is VACUUM still crash-safe?"
}
]
|
[
{
"msg_contents": "> > > If Vadim isn't sufficiently confident of it to commit it \n> > > on his own authority, I'm inclined to leave it out of 7.1.\n> > > My concern is mostly schedule. We are well into beta cycle\n> > > now and this seems like way too critical (not to say high-risk)\n> > > a feature to be adding after start of beta.\n> >\n> > I was wondering if Vadim was hesitant because he had done this\n> > under contract. Vadim, are you concerned about reliability or\n> > are there other issues?\n>\n> Irrelevant .. we are post-beta release, and this doesn't fix \n> a bug, so it doesn't go in ...\n\n1. I'm confident about code quality.\n2. Copyright issue is resolved.\n3. But we're in beta now.\n\nIf there are no objections then I'm ready to add changes to 7.1.\nElse, I'll produce patches for 7.1 just after release and incorporate\nchanges into 7.2.\n\nComments?\n\nVadim\n",
"msg_date": "Mon, 11 Dec 2000 16:33:24 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: (one more time) Patches with vacuum fixes available\n\t."
},
{
"msg_contents": "On Mon, 11 Dec 2000, Mikheev, Vadim wrote:\n\n> > > > If Vadim isn't sufficiently confident of it to commit it \n> > > > on his own authority, I'm inclined to leave it out of 7.1.\n> > > > My concern is mostly schedule. We are well into beta cycle\n> > > > now and this seems like way too critical (not to say high-risk)\n> > > > a feature to be adding after start of beta.\n> > >\n> > > I was wondering if Vadim was hesitant because he had done this\n> > > under contract. Vadim, are you concerned about reliability or\n> > > are there other issues?\n> >\n> > Irrelevant .. we are post-beta release, and this doesn't fix \n> > a bug, so it doesn't go in ...\n> \n> 1. I'm confident about code quality.\n> 2. Copyright issue is resolved.\n> 3. But we're in beta now.\n> \n> If there are no objections then I'm ready to add changes to 7.1.\n> Else, I'll produce patches for 7.1 just after release and incorporate\n> changes into 7.2.\n> \n> Comments?\n\nIMHO, we are in beta now and this doesn't fix a bug ... if we could make\nthis available as a patch in /contrib until 7.1 is released, i would much\nprefer that ...\n\n\n\n",
"msg_date": "Mon, 11 Dec 2000 21:12:55 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: (one more time) Patches with vacuum fixes available\n ."
},
{
"msg_contents": "> > If there are no objections then I'm ready to add changes to 7.1.\n> > Else, I'll produce patches for 7.1 just after release and incorporate\n> > changes into 7.2.\n> > \n> > Comments?\n> \n> IMHO, we are in beta now and this doesn't fix a bug ... if we could make\n> this available as a patch in /contrib until 7.1 is released, i would much\n> prefer that ...\n\nBut we just entered beta. Can't we let this slide? Seems like a nice\nfeature. I can't image it delaying anything.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Dec 2000 20:34:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available ."
},
{
"msg_contents": "\nGo for it Vadim ... it is only a couple of days in, and I know there are\nseveral places I could personally use it ... \n\nOn Mon, 11 Dec 2000, Bruce Momjian wrote:\n\n> > > If there are no objections then I'm ready to add changes to 7.1.\n> > > Else, I'll produce patches for 7.1 just after release and incorporate\n> > > changes into 7.2.\n> > > \n> > > Comments?\n> > \n> > IMHO, we are in beta now and this doesn't fix a bug ... if we could make\n> > this available as a patch in /contrib until 7.1 is released, i would much\n> > prefer that ...\n> \n> But we just entered beta. Can't we let this slide? Seems like a nice\n> feature. I can't image it delaying anything.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 11 Dec 2000 21:37:17 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available\n ."
},
{
"msg_contents": "Thanks. The other good item is that is already being used in production\nuse, so it seems it is pretty well tested.\n\n> \n> Go for it Vadim ... it is only a couple of days in, and I know there are\n> several places I could personally use it ... \n> \n> On Mon, 11 Dec 2000, Bruce Momjian wrote:\n> \n> > > > If there are no objections then I'm ready to add changes to 7.1.\n> > > > Else, I'll produce patches for 7.1 just after release and incorporate\n> > > > changes into 7.2.\n> > > > \n> > > > Comments?\n> > > \n> > > IMHO, we are in beta now and this doesn't fix a bug ... if we could make\n> > > this available as a patch in /contrib until 7.1 is released, i would much\n> > > prefer that ...\n> > \n> > But we just entered beta. Can't we let this slide? Seems like a nice\n> > feature. I can't image it delaying anything.\n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Dec 2000 21:01:47 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available ."
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> If there are no objections then I'm ready to add changes to 7.1.\n> Else, I'll produce patches for 7.1 just after release and incorporate\n> changes into 7.2.\n\nI'd vote for the second choice. I do not think we should be adding new\nfeatures now. Also, I don't know about you, but I have enough bug fix,\ntesting, and documentation work to keep me busy till January even\nwithout any new features...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2000 21:05:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available . "
},
{
"msg_contents": "On Mon, 11 Dec 2000, Tom Lane wrote:\n\n> \"Mikheev, Vadim\" <[email protected]> writes:\n> > If there are no objections then I'm ready to add changes to 7.1.\n> > Else, I'll produce patches for 7.1 just after release and incorporate\n> > changes into 7.2.\n> \n> I'd vote for the second choice. I do not think we should be adding new\n> features now. Also, I don't know about you, but I have enough bug fix,\n> testing, and documentation work to keep me busy till January even\n> without any new features...\n\nA few points in favor of including this ... first, when Vadim does do the\nstorage manager rewrite for v7.2, the patch is essentially useless ... and\nsecond, its currently being used in production on a server that is/will\ntax it heavily, so it isn't untested ...\n\nI'd almost extend that to the point that it is probably more tested right\nnow then any other feature that has been added to v7.1 pre-beta,\nconsidering my knowledge of Alfred's environment ...\n\n\n",
"msg_date": "Mon, 11 Dec 2000 22:42:10 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available\n ."
},
{
"msg_contents": "> A few points in favor of including this ... first, when Vadim does do the\n> storage manager rewrite for v7.2, the patch is essentially useless ... and\n> second, its currently being used in production on a server that is/will\n> tax it heavily, so it isn't untested ...\n> \n> I'd almost extend that to the point that it is probably more tested right\n> now then any other feature that has been added to v7.1 pre-beta,\n> considering my knowledge of Alfred's environment ...\n\nEwe, that is a strong argument.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Dec 2000 22:23:03 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available ."
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> But we just entered beta. Can't we let this slide? Seems like a nice\n> feature. I can't image it delaying anything.\n\nI can imagine it *breaking* lots of things.\n\nWe just today discovered that we didn't understand the interaction\nbetween VACUUM and TOAST well enough. How well do you think we\nunderstand this new VACUUM behavior that no one but Vadim has even\nseen?\n\nWith all due respect to Vadim, this scares me a lot. I vote NO.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2000 22:56:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available . "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> I'd almost extend that to the point that it is probably more tested right\n> now then any other feature that has been added to v7.1 pre-beta,\n> considering my knowledge of Alfred's environment ...\n\nAnd wasn't Alfred hollering yesterday about sudden instability?\nSee the Re: [COMMITTERS] pgsql/src/include (config.h.in) thread\nover in committers.\n\nMy initial guess is that he's seeing the VACUUM patch tickling a\nsyscache-entry-drop problem --- if so, it shouldn't be a problem in 7.1\n--- but I have no evidence for that guess yet. I'm sure not confident\nenough of the guess to not be afraid of dropping the patch into 7.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2000 23:06:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available . "
},
{
"msg_contents": "\nOn Mon, 11 Dec 2000, Tom Lane wrote:\n\n> \"Mikheev, Vadim\" <[email protected]> writes:\n> > If there are no objections then I'm ready to add changes to 7.1.\n> > Else, I'll produce patches for 7.1 just after release and incorporate\n> > changes into 7.2.\n> \n> I'd vote for the second choice. I do not think we should be adding new\n> features now. Also, I don't know about you, but I have enough bug fix,\n> testing, and documentation work to keep me busy till January even\n> without any new features...\n\nIt'd be really naughty to add it to the beta at this stage. Would it be\npossible to add it to the 7.1 package with some kind of compile-time option?\nSo that those of us who do want to use it, can.\n\n\n- Andrew\n\n\n",
"msg_date": "Tue, 12 Dec 2000 15:12:12 +1100 (EST)",
"msg_from": "Andrew Snow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available\n ."
},
{
"msg_contents": "On Mon, 11 Dec 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > I'd almost extend that to the point that it is probably more tested right\n> > now then any other feature that has been added to v7.1 pre-beta,\n> > considering my knowledge of Alfred's environment ...\n> \n> And wasn't Alfred hollering yesterday about sudden instability?\n> See the Re: [COMMITTERS] pgsql/src/include (config.h.in) thread\n> over in committers.\n> \n> My initial guess is that he's seeing the VACUUM patch tickling a\n> syscache-entry-drop problem --- if so, it shouldn't be a problem in 7.1\n> --- but I have no evidence for that guess yet. I'm sure not confident\n> enough of the guess to not be afraid of dropping the patch into 7.1.\n\nshould be easily testable though, no? worst case, we pull it out\nafterwards ...\n\n\n",
"msg_date": "Tue, 12 Dec 2000 00:18:41 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available\n ."
},
{
"msg_contents": "> > I'd vote for the second choice. I do not think we should be adding new\n> > features now. Also, I don't know about you, but I have enough bug fix,\n> > testing, and documentation work to keep me busy till January even\n> > without any new features...\n> \n> It'd be really naughty to add it to the beta at this stage. Would it be\n> possible to add it to the 7.1 package with some kind of compile-time option?\n> So that those of us who do want to use it, can.\n\nThat usually adds confusion.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Dec 2000 23:19:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available ."
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> should be easily testable though, no?\n\nWhat makes you think that? \n\n> worst case, we pull it out afterwards ...\n\nNo, worst case is that we release a seriously broken 7.1, and don't\nfind out till afterwards.\n\nThere are plenty of new features on my to-do list, if beta no longer\nmeans anything...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2000 23:32:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available . "
},
{
"msg_contents": "On Mon, 11 Dec 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > should be easily testable though, no?\n> \n> What makes you think that? \n\nAlfred could volunteer to move to v7.1? *grin*\n\n\n",
"msg_date": "Tue, 12 Dec 2000 00:41:20 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available\n ."
},
{
"msg_contents": "* Andrew Snow <[email protected]> [001211 20:21] wrote:\n> \n> On Mon, 11 Dec 2000, Tom Lane wrote:\n> \n> > \"Mikheev, Vadim\" <[email protected]> writes:\n> > > If there are no objections then I'm ready to add changes to 7.1.\n> > > Else, I'll produce patches for 7.1 just after release and incorporate\n> > > changes into 7.2.\n> > \n> > I'd vote for the second choice. I do not think we should be adding new\n> > features now. Also, I don't know about you, but I have enough bug fix,\n> > testing, and documentation work to keep me busy till January even\n> > without any new features...\n> \n> It'd be really naughty to add it to the beta at this stage. Would it be\n> possible to add it to the 7.1 package with some kind of compile-time option?\n> So that those of us who do want to use it, can.\n\nOne is a compile time option (CFLAGS+=-DMMNB), the other doesn't\nhappen unless you ask for it:\n\nvacuum lazy <table>;\n\nI don't understand what the deal here is, as I said, it's optional\ncode that you won't see unless you ask for it.\n\n [children: 0 12/11/2000 21:57:20 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx]\nVacuuming link.\n [children: 0 12/11/2000 21:57:54 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx]\n\n-rw------- 1 pgsql pgsql 134627328 Dec 11 21:57 link\n-rw------- 1 pgsql pgsql 261201920 Dec 11 21:57 link_triple_idx\n\nYup, 30 seconds, the table is 134 megabytes and the index is 261 megs.\n\nI think normally this takes about 10 or so _minutes_.\n\nOn our faster server:\n\n [children: 0 12/11/2000 22:17:50 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx]\nVacuuming referer_link.\n [children: 0 12/11/2000 22:18:09 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx]\n\n-rw------- 1 pgsql wheel 273670144 Dec 11 22:15 link\n-rw------- 1 pgsql wheel 641048576 Dec 11 22:15 link_triple_idx\n\ntime is ~19seconds, table is 273 megs, and index 641 megs.\n\ndual 800mhz, raid 5 disks.\n\nI think the users deserve this patch. :)\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Mon, 11 Dec 2000 22:20:24 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available ."
},
{
"msg_contents": "On Mon, Dec 11, 2000 at 11:32:17PM -0500, Tom Lane wrote:\n> The Hermit Hacker <[email protected]> writes:\n> > worst case, we pull it out afterwards ...\n> \n> No, worst case is that we release a seriously broken 7.1, and don't\n> find out till afterwards.\n> \n> There are plenty of new features on my to-do list, if beta no longer\n> means anything...\n\nDoes this mean the code gets put in contrib/, with a prominent\npointer in the release notes?\n\nNathan Myers\[email protected]\n",
"msg_date": "Tue, 12 Dec 2000 15:51:43 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available ."
},
{
"msg_contents": "\nDid we decide against LAZY? Seems we have a number of people concerned\nabout vacuum downtime, and I can see this as a win for them. If they\ndon't specify LAZY, the code is not run.\n\n> The Hermit Hacker <[email protected]> writes:\n> > should be easily testable though, no?\n> \n> What makes you think that? \n> \n> > worst case, we pull it out afterwards ...\n> \n> No, worst case is that we release a seriously broken 7.1, and don't\n> find out till afterwards.\n> \n> There are plenty of new features on my to-do list, if beta no longer\n> means anything...\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 09:44:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available ."
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Did we decide against LAZY?\n\nI thought the core consensus was that it was too risky to install\npost-beta. On the other hand, we're installing some other pretty\nmajor fixes. Do we want to re-open that discussion?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Jan 2001 10:16:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available . "
},
{
"msg_contents": "On Wed, 24 Jan 2001, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > Did we decide against LAZY?\n>\n> I thought the core consensus was that it was too risky to install\n> post-beta. On the other hand, we're installing some other pretty\n> major fixes. Do we want to re-open that discussion?\n\nI would like to see it in myself ... it will be useless post v7.1, with\nall the work planned for v7.2 ... if there is some way we could get it in\nbefore beta4 goes out, I'd feel even better about it ... Vadim? :)\n\n",
"msg_date": "Wed, 24 Jan 2001 11:52:38 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available\n ."
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Did we decide against LAZY? Seems we have a number of people concerned\n> about vacuum downtime, and I can see this as a win for them. If they\n> don't specify LAZY, the code is not run.\n\nI see a number of possibilities:\n1.)\tA tested 'feature patch' available for separate download;\n2.)\tA configure switch '--enable-lazy-vacuum' perhaps;\n3.)\tThe (marginally if at all documented) LAZY parameter, with the code\nsitting there dormant until the parameter is passed. Those who need it\nprobably read this list anyway.\n\nAre we anywhere near comfortable that the code to support LAZY doesn't\nimpact standard VACUUM in any way? That for me is the acid test -- if\nVACUUM proper gets a bug due to the addition, that would be _bad_. But,\nif someone either applied the feature patch or enabled the configure\nswitch, well, then that's their choice, and their problem.\n\nThen those who don't need it can still have reasonable confidence in the\nsolidity of the VACUUM code. The fact that LAZY will get really lazy\nand go away at 7.2 is a factor, as well. But the fact that 7.2, which\nwill obviate all this (hopefully), is several months at the very least\ndown the road makes it desireable NOW to have the LAZY behavior.\n\nI for one don't _need_ the LAZY behavior -- my VACUUMs take seconds, not\nhours.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 24 Jan 2001 13:50:10 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (one more time) Patches with vacuum fixes available ."
}
]
|
[
{
"msg_contents": "> I have an index on group_id, one on\n> (group_id,status_id) and one on (group_id,status_id,assigned_to) \n\nAs an aside notice: you should definitely only need the last of the\nthree indices, since it can perfectly work on group_id\nor group_id + status_id only restrictions.\n\nAndreas\n",
"msg_date": "Tue, 12 Dec 2000 09:32:35 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: SourceForge & Postgres"
}
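Why the one composite index suffices: a B-tree orders its keys
lexicographically, so every restriction on group_id alone, or on
group_id + status_id, selects one contiguous range of the same index.
A toy comparator (column names borrowed from the message above) shows
the ordering a (group_id, status_id, assigned_to) index maintains:

#include <stdio.h>

typedef struct
{
    int group_id;
    int status_id;
    int assigned_to;
} Key;

static int
key_cmp(const Key *a, const Key *b)
{
    /* compare column by column, leftmost first -- exactly why all
     * keys sharing a group_id (or a group_id + status_id prefix)
     * end up adjacent in the index */
    if (a->group_id != b->group_id)
        return (a->group_id < b->group_id) ? -1 : 1;
    if (a->status_id != b->status_id)
        return (a->status_id < b->status_id) ? -1 : 1;
    if (a->assigned_to != b->assigned_to)
        return (a->assigned_to < b->assigned_to) ? -1 : 1;
    return 0;
}

int
main(void)
{
    /* every key with group_id = 42 and status_id = 1 falls between
     * these two bounds, so one range scan finds them all */
    Key lo = {42, 1, 0};
    Key hi = {42, 1, 99};

    printf("%d\n", key_cmp(&lo, &hi));  /* prints -1 */
    return 0;
}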
]
|
[
{
"msg_contents": "The following example worked in previous versions (7.0.2 was the last I \ntested), but not in 7.1 any more:\n\ncreate table parent (\nglobal_id serial\n);\n\ncreate table child (\nanything text\n) inherits (parent);\n\ncreate table foreign (\nfk_id int4 references parent(global_id) on update cascade on delete no action\n) inherits (parent);\n\ntest.sql:83: ERROR: UNIQUE constraint matching given keys for referenced \ntable \"child\" not found\nWHY ???\n\nI would appreciate any help. Our database depends heavily on this.\n\nHorst\n",
"msg_date": "Tue, 12 Dec 2000 23:37:43 +1100",
"msg_from": "Horst Herb <[email protected]>",
"msg_from_op": true,
"msg_subject": "HELP! foreign eys & inheritance"
},
{
"msg_contents": "\n> btw anyone trying this query should use: \"attdispersion\"\n> \n\nI see it, yes. Was this an intended change ? I am quite sure, that it was \nattdisbursion in 7.0 ?\n\nAndreas\n",
"msg_date": "Tue, 12 Dec 2000 13:59:40 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": false,
"msg_subject": "AW: SourceForge & Postgres (attdispursion)"
},
{
"msg_contents": "> I see it, yes. Was this an intended change ? I am quite sure, that it was \n> attdisbursion in 7.0 ?\n\nYes, I couldn't spell dispersion in the past. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Dec 2000 09:01:21 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: SourceForge & Postgres (attdispursion)"
}
]
|
[
{
"msg_contents": "Ooops, sorry, error in this example:\n> The following example worked in previous versions (7.0.2 was the last I\n> tested), but not in 7.1 any more:\n>\n> create table parent (\n> global_id serial\n> );\n>\n> create table child (\n> anything text\n> ) inherits (parent);\n>\n> create table foreign (\n> fk_id int4 references child(global_id) on update cascade on delete no\n\naction\n ^^^^^ child, of course, not parent!\n\n> ) inherits (parent);\n>\n> test.sql:83: ERROR: UNIQUE constraint matching given keys for referenced\n> table \"child\" not found\n> WHY ???\n>\n> I would appreciate any help. Our database depends heavily on this.\n>\n> Horst\n\n-------------------------------------------------------\n",
"msg_date": "Wed, 13 Dec 2000 01:25:40 +1100",
"msg_from": "Horst Herb <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Re: HELP! foreign eys & inheritance"
},
{
"msg_contents": "\nYou'll need to make a unique index/unique constraint on the fields\nof child you wish to constrain. The unique constraint check wasn't\nchecked in 7.0, and also unique constraints are not inherited so\nit has to be on the actual table you want to reference.\n\nStephan Szabo\[email protected]\n\nOn Wed, 13 Dec 2000, Horst Herb wrote:\n\n> Ooops, sorry, error in this example:\n> > The following example worked in previous versions (7.0.2 was the last I\n> > tested), but not in 7.1 any more:\n> >\n> > create table parent (\n> > global_id serial\n> > );\n> >\n> > create table child (\n> > anything text\n> > ) inherits (parent);\n> >\n> > create table foreign (\n> > fk_id int4 references child(global_id) on update cascade on delete no\n> \n> action\n> ^^^^^ child, of course, not parent!\n> \n> > ) inherits (parent);\n> >\n> > test.sql:83: ERROR: UNIQUE constraint matching given keys for referenced\n> > table \"child\" not found\n> > WHY ???\n> >\n> > I would appreciate any help. Our database depends heavily on this.\n> >\n> > Horst\n> \n> -------------------------------------------------------\n> \n\n",
"msg_date": "Tue, 12 Dec 2000 10:10:23 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Re: HELP! foreign eys & inheritance"
}
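A minimal libpq sketch of the workaround Stephan describes, using the
table names from Horst's example; the connection string and the index
name are assumptions. The point is that the UNIQUE index must be on the
child table itself, since constraints are not inherited:

#include <stdio.h>
#include <libpq-fe.h>

static void
run(PGconn *conn, const char *sql)
{
    PGresult   *res = PQexec(conn, sql);

    if (res == NULL || PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "%s: %s", sql, PQerrorMessage(conn));
    PQclear(res);
}

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=test");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* the referenced columns need a UNIQUE index on child itself */
    run(conn, "CREATE UNIQUE INDEX child_global_id_key ON child (global_id)");

    /* now the REFERENCES clause can find a matching UNIQUE constraint */
    run(conn, "CREATE TABLE foreigntest ("
              " fk_id int4 REFERENCES child (global_id)"
              " ON UPDATE CASCADE ON DELETE NO ACTION"
              ") INHERITS (parent)");

    PQfinish(conn);
    return 0;
}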
]
|
[
{
"msg_contents": "I think the newC function idea is pretty good, however, what would be\ngreat is just one more step of protocol, perhaps an API verson 2 or 3:\n\nOne thing than makes writing a non-trivial function a bit problematic,\nand perhaps even less efficient, is that the function does not know when\nit is first run and when it is finished, and there is no facility to\nmanage contextual information. This limits external functons having to\nbe fairly simple, or overly complex.\n\nI propose that when the newC structure is allocated that a function\nspecific \"Init\" function be called, and when the structure is being\nfreed, calling a \"Exit\" function. The new C structure should also have a\nvoid pointer that allows persistent information to be passed around.\n\ntypedef struct\n{\n FmgrInfo *flinfo; /* ptr to lookup info used for this call\n*/\n Node *context; /* pass info about context of call */\n Node *resultinfo; /* pass or return extra info about\nresult */\n bool isnull; /* function must set true if result is\nNULL */\n short nargs; /* # arguments actually passed */\n Datum arg[FUNC_MAX_ARGS]; /* Arguments passed to function */\n bool argnull[FUNC_MAX_ARGS]; /* T if arg[i] is actually NULL\n*/\n\n void * userparam; /* to be used by he function */\n\n} FunctionCallInfoData;\ntypedef FunctionCallInfoData* FunctionCallInfo;\n\nThe userparam can be used to store data, or a count, or whatever.\n\nDatum function(PG_FUNCTION_ARGS) ;\nbool function_Init(PG_FUNCTION_ARGS);\nvoid function_Exit(PG_FUNCTION_ARGS);\n\nThis protocol would make writing some really cool features much easier.\nAs a C++ guy, I could execute \"new\" at Init and \"delete\" at Exit. ;-)\n\n\nMark.\n\n",
"msg_date": "Tue, 12 Dec 2000 11:27:33 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "external function proposal for 7.2"
},
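A sketch of how a function might use the proposed protocol. The
Init/Exit hooks and the userparam field are the proposal above, not an
existing fmgr API, and only the field this sketch needs is shown; all
names are illustrative. The win is that per-call-site setup (buffers,
parsed patterns, connections to a search daemon) happens once instead
of once per tuple:

#include <stdlib.h>

typedef struct
{
    /* ... the other proposed FunctionCallInfoData fields ... */
    void       *userparam;      /* per-call-site state, per the proposal */
} FunctionCallInfoData;
typedef FunctionCallInfoData *FunctionCallInfo;

typedef struct
{
    long        calls;          /* number of invocations so far */
    char       *scratch;        /* work buffer reused across calls */
} MatchState;

/* would run once, when the call-site structure is allocated */
static int
match_Init(FunctionCallInfo fcinfo)
{
    MatchState *st = malloc(sizeof(MatchState));

    if (st == NULL)
        return 0;
    st->calls = 0;
    st->scratch = malloc(8192);
    if (st->scratch == NULL)
    {
        free(st);
        return 0;
    }
    fcinfo->userparam = st;
    return 1;
}

/* would run once per tuple, with no setup cost left */
static long
match(FunctionCallInfo fcinfo)
{
    MatchState *st = (MatchState *) fcinfo->userparam;

    st->calls++;
    /* ... do the real work using st->scratch ... */
    return st->calls;
}

/* would run once, when the call-site structure is freed */
static void
match_Exit(FunctionCallInfo fcinfo)
{
    MatchState *st = (MatchState *) fcinfo->userparam;

    free(st->scratch);
    free(st);
    fcinfo->userparam = NULL;
}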
{
"msg_contents": "\nAs a lurker on the list this post caught my eye somewhat. I think this\nwould be excellent functionality to have in postgres, i was considering\ndoing something like this in a non intruse manner, by manipulating\n_init() and _fini functions of shared libraries. But what you have\ndescribed below is a much better interface. In particular i was looking\nat a way of getting async notifications when a row had been inserted, and\npasing out to my other applications enough data, to be able to query back\nin for the complete row.\n\nThe ability to have an init/exit for an external function would be a big\nwin, you could even have the init() create a thread for passing results\nto, and performing what ever voodoo magic you wanted.\n\ni'll go back to lurking and listening now.\n\n\n\nOn Tue, 12 Dec 2000, mlw wrote:\n\n> I think the newC function idea is pretty good, however, what would be\n> great is just one more step of protocol, perhaps an API verson 2 or 3:\n> \n> One thing than makes writing a non-trivial function a bit problematic,\n> and perhaps even less efficient, is that the function does not know when\n> it is first run and when it is finished, and there is no facility to\n> manage contextual information. This limits external functons having to\n> be fairly simple, or overly complex.\n> \n> I propose that when the newC structure is allocated that a function\n> specific \"Init\" function be called, and when the structure is being\n> freed, calling a \"Exit\" function. The new C structure should also have a\n> void pointer that allows persistent information to be passed around.\n> \n> typedef struct\n> {\n> FmgrInfo *flinfo; /* ptr to lookup info used for this call\n> */\n> Node *context; /* pass info about context of call */\n> Node *resultinfo; /* pass or return extra info about\n> result */\n> bool isnull; /* function must set true if result is\n> NULL */\n> short nargs; /* # arguments actually passed */\n> Datum arg[FUNC_MAX_ARGS]; /* Arguments passed to function */\n> bool argnull[FUNC_MAX_ARGS]; /* T if arg[i] is actually NULL\n> */\n> \n> void * userparam; /* to be used by he function */\n> \n> } FunctionCallInfoData;\n> typedef FunctionCallInfoData* FunctionCallInfo;\n> \n> The userparam can be used to store data, or a count, or whatever.\n> \n> Datum function(PG_FUNCTION_ARGS) ;\n> bool function_Init(PG_FUNCTION_ARGS);\n> void function_Exit(PG_FUNCTION_ARGS);\n> \n> This protocol would make writing some really cool features much easier.\n> As a C++ guy, I could execute \"new\" at Init and \"delete\" at Exit. ;-)\n> \n> \n> Mark.\n> \n\n\n\nPGP key: http://codex.net/pgp/pgp.asc\n\n",
"msg_date": "Tue, 12 Dec 2000 23:48:50 +0000 (GMT)",
"msg_from": "Vincent AE Scott <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: external function proposal for 7.2"
},
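For the shared-library route Vincent mentions, gcc's constructor and
destructor attributes give load/unload hooks without any help from the
fmgr at all -- a sketch, with placeholder hook bodies and illustrative
names:

#include <stdio.h>

/* runs when the backend dlopen()s the library */
__attribute__((constructor)) static void
mylib_load(void)
{
    fprintf(stderr, "mylib: loaded, setting up state\n");
}

/* runs when the library is dlclose()d or the process exits */
__attribute__((destructor)) static void
mylib_unload(void)
{
    fprintf(stderr, "mylib: unloading, tearing down state\n");
}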
{
"msg_contents": "Vincent AE Scott wrote:\n> \n> As a lurker on the list this post caught my eye somewhat. I think this\n> would be excellent functionality to have in postgres, i was considering\n> doing something like this in a non intruse manner, by manipulating\n> _init() and _fini functions of shared libraries. But what you have\n> described below is a much better interface. In particular i was looking\n> at a way of getting async notifications when a row had been inserted, and\n> pasing out to my other applications enough data, to be able to query back\n> in for the complete row.\n> \n> The ability to have an init/exit for an external function would be a big\n> win, you could even have the init() create a thread for passing results\n> to, and performing what ever voodoo magic you wanted.\n> \n> i'll go back to lurking and listening now.\n\nI did some code spelunking today. It will not be easy, but I think it is\nquite doable. Currently, in the code, a function pointer is passed\naround. If I resurrect some of the \"old\" C code a bit, and do some\nmerging with the new code we could do it. I just have to find where I\ncall the exit function.\n\nAs far as I can see, the code passes around a function pointer, but\nseems to mostly call a small number of localized functions to dispatch\nthe call. So, I was thinking, rather than pass the function, why not\npass the structure? The old C code stuff does this, why not keep it\naround, and pass around the finfo struct instead? and call\n(*finfo->funct)(args)?\n\n> \n> On Tue, 12 Dec 2000, mlw wrote:\n> \n> > I think the newC function idea is pretty good, however, what would be\n> > great is just one more step of protocol, perhaps an API verson 2 or 3:\n> >\n> > One thing than makes writing a non-trivial function a bit problematic,\n> > and perhaps even less efficient, is that the function does not know when\n> > it is first run and when it is finished, and there is no facility to\n> > manage contextual information. This limits external functons having to\n> > be fairly simple, or overly complex.\n> >\n> > I propose that when the newC structure is allocated that a function\n> > specific \"Init\" function be called, and when the structure is being\n> > freed, calling a \"Exit\" function. The new C structure should also have a\n> > void pointer that allows persistent information to be passed around.\n> >\n> > typedef struct\n> > {\n> > FmgrInfo *flinfo; /* ptr to lookup info used for this call\n> > */\n> > Node *context; /* pass info about context of call */\n> > Node *resultinfo; /* pass or return extra info about\n> > result */\n> > bool isnull; /* function must set true if result is\n> > NULL */\n> > short nargs; /* # arguments actually passed */\n> > Datum arg[FUNC_MAX_ARGS]; /* Arguments passed to function */\n> > bool argnull[FUNC_MAX_ARGS]; /* T if arg[i] is actually NULL\n> > */\n> >\n> > void * userparam; /* to be used by he function */\n> >\n> > } FunctionCallInfoData;\n> > typedef FunctionCallInfoData* FunctionCallInfo;\n> >\n> > The userparam can be used to store data, or a count, or whatever.\n> >\n> > Datum function(PG_FUNCTION_ARGS) ;\n> > bool function_Init(PG_FUNCTION_ARGS);\n> > void function_Exit(PG_FUNCTION_ARGS);\n> >\n> > This protocol would make writing some really cool features much easier.\n> > As a C++ guy, I could execute \"new\" at Init and \"delete\" at Exit. ;-)\n> >\n> >\n> > Mark.\n> >\n> \n> PGP key: http://codex.net/pgp/pgp.asc\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Tue, 12 Dec 2000 19:41:36 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: external function proposal for 7.2"
},
{
"msg_contents": "mlw <[email protected]> writes:\n> I just have to find where I call the exit function.\n\nThat will be the hard part.\n\nFmgrInfo is not currently considered a durable data structure, and I\nthink you will be in for grief if you try to make any guarantees about\nwhat will happen when one disappears. If you need a cleanup proc to\nbe called, I'd suggest looking into registering it to be called at\nquery completion and/or transaction cleanup/abort, as needed.\n\nMost of the sorts of resources you might need to clean up already have\ncleanup mechanisms, so it's not entirely clear that you even *need*\na cleanup proc. Maybe a different way to say that is that Postgres\nalready has a pretty well-defined cleanup philosophy, and it's geared\nto particular resources (memory, open files, etc) not to individual\ncalled functions. You should consider swimming with that tide rather\nthan against it.\n\nI have no objection to adding another field to FmgrInfo for the callee's\nuse, if you can show an example or two where it'd be useful. I'm only\nconcerned about the callback-on-delete part. That sounds like a recipe\nfor fragility...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2000 00:20:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: external function proposal for 7.2 "
},
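A sketch of the registration style Tom suggests. The at_xact_end() /
run_xact_end_callbacks() pair is invented here to stand in for whatever
query- or transaction-end callback list the backend would expose; the
point is that cleanup hangs off a well-defined lifecycle event rather
than off the disappearance of an FmgrInfo:

#include <stdlib.h>

typedef void (*cleanup_fn) (void *arg);

/* toy callback list standing in for a real backend hook */
static struct
{
    cleanup_fn  fn;
    void       *arg;
}           callbacks[32];
static int  ncallbacks = 0;

static void
at_xact_end(cleanup_fn fn, void *arg)
{
    callbacks[ncallbacks].fn = fn;
    callbacks[ncallbacks].arg = arg;
    ncallbacks++;
}

/* the backend would invoke this at commit or abort time */
static void
run_xact_end_callbacks(void)
{
    int         i;

    for (i = 0; i < ncallbacks; i++)
        callbacks[i].fn(callbacks[i].arg);
    ncallbacks = 0;
}

/* a function's expensive state then hangs off the lifecycle event */
static void *search_ctx = NULL;

static void
search_ctx_cleanup(void *arg)
{
    free(arg);
    search_ctx = NULL;
}

static void *
get_search_ctx(void)
{
    if (search_ctx == NULL)
    {
        search_ctx = malloc(8192);      /* set up once per transaction */
        at_xact_end(search_ctx_cleanup, search_ctx);
    }
    return search_ctx;
}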
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <[email protected]> writes:\n> > I just have to find where I call the exit function.\n> \n> That will be the hard part.\n> \n> FmgrInfo is not currently considered a durable data structure, and I\n> think you will be in for grief if you try to make any guarantees about\n> what will happen when one disappears. If you need a cleanup proc to\n> be called, I'd suggest looking into registering it to be called at\n> query completion and/or transaction cleanup/abort, as needed.\n\nI think making this structure durable with be fairly 'easy' assuming\nthat fmgr_info(...) is called only once prior to operations which\nrequires the function. If this is not the case, then you are 100% right. \n\nIf my assumption is correct, and please correct me if I am wrong, then\nall that should be needed to be done is allocate the structure at this\npoint, and pass it around as the function pointer, and just make sure\nthat it is always 'FunctionCallInvoke' that calls the function. \n\nAssuming all my assumptions are correct, (and I can't see how that is\npossible ;-), I should also call the Init function at this time.\n\nThe big problem is calling the \"Exit\" function. I am sure that will not\nbe easily done, or even doable, but we can dream.\n\n> \n> Most of the sorts of resources you might need to clean up already have\n> cleanup mechanisms, so it's not entirely clear that you even *need*\n> a cleanup proc. Maybe a different way to say that is that Postgres\n> already has a pretty well-defined cleanup philosophy, and it's geared\n> to particular resources (memory, open files, etc) not to individual\n> called functions. You should consider swimming with that tide rather\n> than against it.\n\nBelieve me I understand what you are saying, but, I think Postgres, with\na few tweaks here and there, targeted at efficient extension mechanisms,\ncould blow away the DB market. I have harped on my text search engine, I\nknow, but I am not the only one that wants to do these sorts of things,\nand it is discouraging how little information is available.\n\nMaking it easy for guys like me, to bring functionality into Posgres,\nwill make Postgres the hands down winner for so many projects that\notherwise would have to resort to using some crappy db library.\n\nPostgres has it all, it has query language, presentation mechanisms,\nODBC, tools, etc. Rather than having to write an application around some\ncrappy db library, we could write a few neat functions in a powerful SQL\ndatabase.\n\nI think a little focus on this area will pay off hugely.\n\n> \n> I have no objection to adding another field to FmgrInfo for the callee's\n> use, if you can show an example or two where it'd be useful. I'm only\n> concerned about the callback-on-delete part. That sounds like a recipe\n> for fragility...\n\nYes, this is a concern for sure, if it is a problem, then, absolutely,\nit should be dropped.\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Wed, 13 Dec 2000 08:11:41 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: external function proposal for 7.2"
},
{
"msg_contents": "<HIGH HORSE>\n\nLet me explain why I think the changes I mentioned are a good thing.\n\n(BTW gateway.mohawksoft.com seems to going to an old IP address that I\nhaven't had for years, something is strange.)\n\nSo, using the IP address, go to this web site. \n\thttp://216.41.12.226/search.php3\n\nThis is a test page, not a production page. I'll leave it up for a few\ndays barring power outages and other such non-sense.\n\nI have harped about it before, it is a music search system. There is\nbased on an external daemon which does the full text searching. The\nsearch is completely independent of Postgres, but I use Postgres as the\ndata source and the presentation system. I use PHP/Apache to interface\nwith Postgres and display data.\n\n(One added goody about the design is that the text search engine can be\nrun on a different machine than the Postgres DB, this allows better\nscalability with common hardware.)\n\nThe code looks like http://216.41.12.226/testmuze.html (please look at\npage source, the table strings screw up the page)\n\nIt takes three select statements and a temp table, to do what one should\nbe able to do with a single select statement and good function support.\n\nPlease don't get me wrong, I'm not dumping on Postgres at all, but it\nwould be nice to be able to create this sort of application much easier.\nSupport for these sorts of constructs will put Postgres in the real\n\"world class\" database category, not just a very strong contender.\n\nIt has been suggested that I create a Postgres Index, but that is a lot\nof code that many would not be able to justify to use Postgres. If the\nfunction mechanisms were just a little more powerful, this sort of\napplication would be much easier and more efficient, thus a better\nchoice.\n\n</HIGH HORSE> \n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Wed, 13 Dec 2000 09:25:40 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: external function proposal for 7.2"
},
{
"msg_contents": "On Wed, 13 Dec 2000, mlw wrote:\n\n> Assuming all my assumptions are correct, (and I can't see how that is\n> possible ;-), I should also call the Init function at this time.\n> \n> The big problem is calling the \"Exit\" function. I am sure that will not\n> be easily done, or even doable, but we can dream.\n\n\nOk, i don't know the complete syntax of the 'load external function'\nstuff, but how about something like :\n\n... ON LOAD CALL 'init()' on UNLOAD CALL 'fini()' ...\n\nwhen the functions is loaded, you specify a setup function and when it's\nunloaded( im not actually sure if this exists) call the finish function.\n\nsorry, if any of that sounds dumb.\n\n-vince\n(going back to lurk mode)\n\n\nPGP key: http://codex.net/pgp/pgp.asc\n\n",
"msg_date": "Wed, 13 Dec 2000 19:11:12 +0000 (GMT)",
"msg_from": "Vincent AE Scott <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: external function proposal for 7.2"
},
{
"msg_contents": "mlw wrote:\n> \n> Let me explain why I think the changes I mentioned are a good thing.\n> \n> (BTW gateway.mohawksoft.com seems to going to an old IP address that I\n> haven't had for years, something is strange.)\n> \n> So, using the IP address, go to this web site.\n> http://216.41.12.226/search.php3\n> \n> This is a test page, not a production page. I'll leave it up for a few\n> days barring power outages and other such non-sense.\n\nDoes it search from some hidden fields too ?\n\nWhen I searched for \"allison\", I got lot of allisons, but also a lot of \nlines with no allison in them, like \"The Janet Lawson Quintet: Sunday\nAfternoon\"\n\n----------\nHannu\n",
"msg_date": "Thu, 14 Dec 2000 00:35:56 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: external function proposal for 7.2"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> mlw wrote:\n> >\n> > Let me explain why I think the changes I mentioned are a good thing.\n> >\n> > (BTW gateway.mohawksoft.com seems to going to an old IP address that I\n> > haven't had for years, something is strange.)\n> >\n> > So, using the IP address, go to this web site.\n> > http://216.41.12.226/search.php3\n> >\n> > This is a test page, not a production page. I'll leave it up for a few\n> > days barring power outages and other such non-sense.\n> \n> Does it search from some hidden fields too ?\n> \n> When I searched for \"allison\", I got lot of allisons, but also a lot of\n> lines with no allison in them, like \"The Janet Lawson Quintet: Sunday\n> Afternoon\"\n\nActually, that's a metaphone search. \"Lawson\" metaphones to \"lsn\" and\nallison will also metaphone to \"lsn.\" The metaphone is optional, but\nworks well when at least two words are specified.\n\nA search of \"costello allison\" will find exactly what you want.\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Wed, 13 Dec 2000 17:50:34 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: external function proposal for 7.2"
}
]
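A note for readers of this thread: the single-statement shape mlw is after can be sketched in plain SQL once a metaphone function is available server-side. The table and function names below are hypothetical placeholders, not part of any stock release:

-- assumes a user-defined C function metaphone(text) returning text,
-- and a hypothetical music table muze(artist text, title text)
SELECT m.artist, m.title
FROM   muze m
WHERE  metaphone(m.artist) = metaphone('allison');

Whether this can use an index is exactly the point raised above: unless the right-hand call can be folded to a constant, it forces a sequential scan.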
|
[
{
"msg_contents": "\n> I take it from the smiley that you're not serious, but actually it seems\n> like it might not be a bad idea. I could see appending a CRC to each\n> tuple record. Comments anyone?\n\nLet's not get paranoid. If you compress the output the file will get checksummed\nanyway. I am against a CRC in binary copy output :-)\n\nAndreas\n",
"msg_date": "Tue, 12 Dec 2000 17:28:20 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Re: COPY BINARY file format proposal "
}
]
|
[
{
"msg_contents": "This problem with foreign keys has been reported to me, and I have confirmed\nthe bug exists in current sources. The DELETE should succeed:\n\n---------------------------------------------------------------------------\n\nCREATE TABLE primarytest2 (\n col1 INTEGER, \n col2 INTEGER, \n PRIMARY KEY(col1, col2)\n );\n\nCREATE TABLE foreigntest2 (col3 INTEGER, \n col4 INTEGER,\n FOREIGN KEY (col3, col4) REFERENCES primarytest2\n );\ntest=> BEGIN;\nBEGIN\ntest=> INSERT INTO primarytest2 VALUES (5,5);\nINSERT 27618 1\ntest=> DELETE FROM primarytest2 WHERE col1 = 5 AND col2 = 5;\nERROR: triggered data change violation on relation \"primarytest2\"\n\n \n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Dec 2000 12:13:34 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug in FOREIGN KEY"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> ERROR: triggered data change violation on relation \"primarytest2\"\n\nWe're getting this report about once every 48 hours, which would make it a\nFAQ. (hint, hint)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 13 Dec 2000 21:09:17 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in FOREIGN KEY"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > ERROR: triggered data change violation on relation \"primarytest2\"\n> \n> We're getting this report about once every 48 hours, which would make it a\n> FAQ. (hint, hint)\n> \n\n\nFirst time I heard of it. Does anyone know more details?\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 13 Dec 2000 23:17:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bug in FOREIGN KEY"
},
{
"msg_contents": "Bruce Momjian wrote:\n> > Bruce Momjian writes:\n> >\n> > > ERROR: triggered data change violation on relation \"primarytest2\"\n> >\n> > We're getting this report about once every 48 hours, which would make it a\n> > FAQ. (hint, hint)\n> >\n>\n>\n> First time I heard of it. Does anyone know more details?\n\n Think I misinterpreted the SQL3 specs WR to this detail. The\n checks must be made per statement, not at the transaction\n level. I'll try to fix it, but we need to define what will\n happen with referential actions in the case of conflicting\n actions on the same key - there are some possible conflicts:\n\n 1. DEFERRED ON DELETE NO ACTION or RESTRICT\n\n Do the referencing rows reference to the new PK row with\n the same key now, or is this still a constraint\n violation? I would say it's not, because the constraint\n condition is satisfied at the end of the transaction. How\n do other databases behave?\n\n 2. DEFERRED ON DELETE CASCADE, SET NULL or SET DEFAULT\n\n Again I'd say that the action should be suppressed\n because a matching PK row is present at transaction end -\n it's not the same old row, but the constraint itself is\n still satisfied.\n\n Implementing it that way (if it is correct that way) requires\n that the RI-triggers check that the key in question really\n disappeared from the PK table, at least for the deferred\n invocation at transaction end. This lookup is not required in\n the immediate case, so it would be possible to retain the\n current performance here, but we'd need a mechanism that\n tells the trigger if it is actually invoked in immediate or\n deferred mode. Don't know how to do that right now.\n\n To fix it now, I'd tend to remove the triggered data change\n check in the trigger queue (where the error is coming from)\n and add the extra PK lookup to the triggers for 7.1. Then\n think about the suppress of it with an immediate/deferred\n flag mechanism for 7.2.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Thu, 14 Dec 2000 07:02:24 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in FOREIGN KEY"
},
{
"msg_contents": "\nCan someone tell me where we are on this?\n\n> This problem with foreign keys has been reported to me, and I have confirmed\n> the bug exists in current sources. The DELETE should succeed:\n> \n> ---------------------------------------------------------------------------\n> \n> CREATE TABLE primarytest2 (\n> col1 INTEGER, \n> col2 INTEGER, \n> PRIMARY KEY(col1, col2)\n> );\n> \n> CREATE TABLE foreigntest2 (col3 INTEGER, \n> col4 INTEGER,\n> FOREIGN KEY (col3, col4) REFERENCES primarytest2\n> );\n> test=> BEGIN;\n> BEGIN\n> test=> INSERT INTO primarytest2 VALUES (5,5);\n> INSERT 27618 1\n> test=> DELETE FROM primarytest2 WHERE col1 = 5 AND col2 = 5;\n> ERROR: triggered data change violation on relation \"primarytest2\"\n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Jan 2001 23:16:48 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bug in FOREIGN KEY"
},
{
"msg_contents": "\nThis is Jan's reply to the issue.\n\n> Bruce Momjian wrote:\n> > > Bruce Momjian writes:\n> > >\n> > > > ERROR: triggered data change violation on relation \"primarytest2\"\n> > >\n> > > We're getting this report about once every 48 hours, which would make it a\n> > > FAQ. (hint, hint)\n> > >\n> >\n> >\n> > First time I heard of it. Does anyone know more details?\n> \n> Think I misinterpreted the SQL3 specs WR to this detail. The\n> checks must be made per statement, not at the transaction\n> level. I'll try to fix it, but we need to define what will\n> happen with referential actions in the case of conflicting\n> actions on the same key - there are some possible conflicts:\n> \n> 1. DEFERRED ON DELETE NO ACTION or RESTRICT\n> \n> Do the referencing rows reference to the new PK row with\n> the same key now, or is this still a constraint\n> violation? I would say it's not, because the constraint\n> condition is satisfied at the end of the transaction. How\n> do other databases behave?\n> \n> 2. DEFERRED ON DELETE CASCADE, SET NULL or SET DEFAULT\n> \n> Again I'd say that the action should be suppressed\n> because a matching PK row is present at transaction end -\n> it's not the same old row, but the constraint itself is\n> still satisfied.\n> \n> Implementing it that way (if it is correct that way) requires\n> that the RI-triggers check that the key in question really\n> disappeared from the PK table, at least for the deferred\n> invocation at transaction end. This lookup is not required in\n> the immediate case, so it would be possible to retain the\n> current performance here, but we'd need a mechanism that\n> tells the trigger if it is actually invoked in immediate or\n> deferred mode. Don't know how to do that right now.\n> \n> To fix it now, I'd tend to remove the triggered data change\n> check in the trigger queue (where the error is coming from)\n> and add the extra PK lookup to the triggers for 7.1. Then\n> think about the suppress of it with an immediate/deferred\n> flag mechanism for 7.2.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #================================================== [email protected] #\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Jan 2001 23:17:13 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bug in FOREIGN KEY"
},
{
"msg_contents": "hi, there!\n\nOn Mon, 22 Jan 2001, Bruce Momjian wrote:\n\n> \n> > This problem with foreign keys has been reported to me, and I have confirmed\n> > the bug exists in current sources. The DELETE should succeed:\n> > \n> > ---------------------------------------------------------------------------\n> > \n> > CREATE TABLE primarytest2 (\n> > col1 INTEGER, \n> > col2 INTEGER, \n> > PRIMARY KEY(col1, col2)\n> > );\n> > \n> > CREATE TABLE foreigntest2 (col3 INTEGER, \n> > col4 INTEGER,\n> > FOREIGN KEY (col3, col4) REFERENCES primarytest2\n> > );\n> > test=> BEGIN;\n> > BEGIN\n> > test=> INSERT INTO primarytest2 VALUES (5,5);\n> > INSERT 27618 1\n> > test=> DELETE FROM primarytest2 WHERE col1 = 5 AND col2 = 5;\n> > ERROR: triggered data change violation on relation \"primarytest2\"\n\nI have another (slightly different) example:\n--- cut here ---\ntest=> CREATE TABLE pr(obj_id int PRIMARY KEY);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'pr_pkey' for\ntable 'pr'\nCREATE\ntest=> CREATE TABLE fr(obj_id int REFERENCES pr ON DELETE CASCADE);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nCREATE\ntest=> BEGIN;\nBEGIN\ntest=> INSERT INTO pr (obj_id) VALUES (1);\nINSERT 200539 1\ntest=> INSERT INTO fr (obj_id) SELECT obj_id FROM pr;\nINSERT 200540 1\ntest=> DELETE FROM fr;\nERROR: triggered data change violation on relation \"fr\"\ntest=> \n--- cut here ---\n\nwe are running postgresql 7.1 beta3\n\n/fjoe\n\n",
"msg_date": "Tue, 23 Jan 2001 14:31:24 +0600 (NS)",
"msg_from": "Max Khon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in FOREIGN KEY"
},
{
"msg_contents": "\n> > Think I misinterpreted the SQL3 specs WR to this detail. The\n> > checks must be made per statement, not at the transaction\n> > level. I'll try to fix it, but we need to define what will\n> > happen with referential actions in the case of conflicting\n> > actions on the same key - there are some possible conflicts:\n> > \n> > 1. DEFERRED ON DELETE NO ACTION or RESTRICT\n> > \n> > Do the referencing rows reference to the new PK row with\n> > the same key now, or is this still a constraint\n> > violation? I would say it's not, because the constraint\n> > condition is satisfied at the end of the transaction. How\n> > do other databases behave?\n> > \n> > 2. DEFERRED ON DELETE CASCADE, SET NULL or SET DEFAULT\n> > \n> > Again I'd say that the action should be suppressed\n> > because a matching PK row is present at transaction end -\n> > it's not the same old row, but the constraint itself is\n> > still satisfied.\n\nI'm not actually sure on the cascade, set null and set default. The\nway they are written seems to imply to me that it's based on the state\nof the database before/after the command in question as opposed to the\ndeferred state of the database because of the stuff about updating the\nstate of partially matching rows immediately after the delete/update of\nthe row which wouldn't really make sense when deferred. Does anyone know\nwhat other systems do with a case something like this all in a\ntransaction:\n\ncreate table a (a int primary key);\ncreate table b (b int references a match full on update cascade\n\t\t on delete cascade deferrable initially deferred);\ninsert into a values (1);\ninsert into a values (2);\ninsert into b values (1);\ndelete from a where a=1;\nselect * from b;\ncommit;\n\n",
"msg_date": "Tue, 23 Jan 2001 10:41:21 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in FOREIGN KEY"
},
{
"msg_contents": "\nWe have to decide how to address this, perhaps with a clearer error\nmessage and a TODO item.\n\n> Bruce Momjian wrote:\n> > > Bruce Momjian writes:\n> > >\n> > > > ERROR: triggered data change violation on relation \"primarytest2\"\n> > >\n> > > We're getting this report about once every 48 hours, which would make it a\n> > > FAQ. (hint, hint)\n> > >\n> >\n> >\n> > First time I heard of it. Does anyone know more details?\n> \n> Think I misinterpreted the SQL3 specs WR to this detail. The\n> checks must be made per statement, not at the transaction\n> level. I'll try to fix it, but we need to define what will\n> happen with referential actions in the case of conflicting\n> actions on the same key - there are some possible conflicts:\n> \n> 1. DEFERRED ON DELETE NO ACTION or RESTRICT\n> \n> Do the referencing rows reference to the new PK row with\n> the same key now, or is this still a constraint\n> violation? I would say it's not, because the constraint\n> condition is satisfied at the end of the transaction. How\n> do other databases behave?\n> \n> 2. DEFERRED ON DELETE CASCADE, SET NULL or SET DEFAULT\n> \n> Again I'd say that the action should be suppressed\n> because a matching PK row is present at transaction end -\n> it's not the same old row, but the constraint itself is\n> still satisfied.\n> \n> Implementing it that way (if it is correct that way) requires\n> that the RI-triggers check that the key in question really\n> disappeared from the PK table, at least for the deferred\n> invocation at transaction end. This lookup is not required in\n> the immediate case, so it would be possible to retain the\n> current performance here, but we'd need a mechanism that\n> tells the trigger if it is actually invoked in immediate or\n> deferred mode. Don't know how to do that right now.\n> \n> To fix it now, I'd tend to remove the triggered data change\n> check in the trigger queue (where the error is coming from)\n> and add the extra PK lookup to the triggers for 7.1. Then\n> think about the suppress of it with an immediate/deferred\n> flag mechanism for 7.2.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #================================================== [email protected] #\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 09:44:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bug in FOREIGN KEY"
},
{
"msg_contents": "Here is another bug:\n\ntest=> begin;\nBEGIN\ntest=> INSERT INTO primarytest2 VALUES (5,5);\nINSERT 18757 1\ntest=> UPDATE primarytest2 SET col2=1 WHERE col1 = 5 AND col2 = 5;\nERROR: deferredTriggerGetPreviousEvent: event for tuple (0,10) not\nfound\n\n> Bruce Momjian wrote:\n> > > Bruce Momjian writes:\n> > >\n> > > > ERROR: triggered data change violation on relation \"primarytest2\"\n> > >\n> > > We're getting this report about once every 48 hours, which would make it a\n> > > FAQ. (hint, hint)\n> > >\n> >\n> >\n> > First time I heard of it. Does anyone know more details?\n> \n> Think I misinterpreted the SQL3 specs WR to this detail. The\n> checks must be made per statement, not at the transaction\n> level. I'll try to fix it, but we need to define what will\n> happen with referential actions in the case of conflicting\n> actions on the same key - there are some possible conflicts:\n> \n> 1. DEFERRED ON DELETE NO ACTION or RESTRICT\n> \n> Do the referencing rows reference to the new PK row with\n> the same key now, or is this still a constraint\n> violation? I would say it's not, because the constraint\n> condition is satisfied at the end of the transaction. How\n> do other databases behave?\n> \n> 2. DEFERRED ON DELETE CASCADE, SET NULL or SET DEFAULT\n> \n> Again I'd say that the action should be suppressed\n> because a matching PK row is present at transaction end -\n> it's not the same old row, but the constraint itself is\n> still satisfied.\n> \n> Implementing it that way (if it is correct that way) requires\n> that the RI-triggers check that the key in question really\n> disappeared from the PK table, at least for the deferred\n> invocation at transaction end. This lookup is not required in\n> the immediate case, so it would be possible to retain the\n> current performance here, but we'd need a mechanism that\n> tells the trigger if it is actually invoked in immediate or\n> deferred mode. Don't know how to do that right now.\n> \n> To fix it now, I'd tend to remove the triggered data change\n> check in the trigger queue (where the error is coming from)\n> and add the extra PK lookup to the triggers for 7.1. Then\n> think about the suppress of it with an immediate/deferred\n> flag mechanism for 7.2.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #================================================== [email protected] #\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 26 Jan 2001 16:10:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bug in FOREIGN KEY"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Here is another bug:\n>\n> test=> begin;\n> BEGIN\n> test=> INSERT INTO primarytest2 VALUES (5,5);\n> INSERT 18757 1\n> test=> UPDATE primarytest2 SET col2=1 WHERE col1 = 5 AND col2 = 5;\n> ERROR: deferredTriggerGetPreviousEvent: event for tuple (0,10) not\n> found\n\n Schema?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 26 Jan 2001 17:02:32 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in FOREIGN KEY"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > Here is another bug:\n> >\n> > test=> begin;\n> > BEGIN\n> > test=> INSERT INTO primarytest2 VALUES (5,5);\n> > INSERT 18757 1\n> > test=> UPDATE primarytest2 SET col2=1 WHERE col1 = 5 AND col2 = 5;\n> > ERROR: deferredTriggerGetPreviousEvent: event for tuple (0,10) not\n> > found\n> \n> Schema?\n> \n\nCREATE TABLE primarytest2 (\n col1 INTEGER, \n col2 INTEGER, \n PRIMARY KEY(col1, col2)\n );\n\nCREATE TABLE foreigntest2 (col3 INTEGER, \n col4 INTEGER,\n FOREIGN KEY (col3, col4) REFERENCES primarytest2\n );\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 26 Jan 2001 17:03:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bug in FOREIGN KEY"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian\n> \n> > Bruce Momjian wrote:\n> > > Here is another bug:\n> > >\n\nISTM commands/trigger.c is broken.\nThe behabior seems to be changed by recent changes made by Tom.\n\n * Check if we're interested in this row at all \n * ---------- * ---------- \n */ */ \n ntriggers = rel->trigdesc->n_after_row[event]; \n if (ntriggers <= 0) \n \nRegards,\nHiroshi Inoue\n\n> > > test=> begin;\n> > > BEGIN\n> > > test=> INSERT INTO primarytest2 VALUES (5,5);\n> > > INSERT 18757 1\n> > > test=> UPDATE primarytest2 SET col2=1 WHERE col1 = 5 AND col2 = 5;\n> > > ERROR: deferredTriggerGetPreviousEvent: event for tuple (0,10) not\n> > > found\n> > \n> > Schema?\n> > \n> \n> CREATE TABLE primarytest2 (\n> col1 INTEGER, \n> col2 INTEGER, \n> PRIMARY KEY(col1, col2)\n> );\n> \n> CREATE TABLE foreigntest2 (col3 INTEGER, \n> col4 INTEGER,\n> FOREIGN KEY (col3, col4) REFERENCES \n> primarytest2\n> );\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n",
"msg_date": "Sat, 27 Jan 2001 07:43:59 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Bug in FOREIGN KEY"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> ISTM commands/trigger.c is broken.\n> The behabior seems to be changed by recent changes made by Tom.\n\nHm. I changed the code to not log an AFTER event unless there is\nactually a trigger of the relevant type, thus suppressing what I\nconsidered a very serious memory leak in the non-deferred-trigger case.\nAre there cases where we must log an event anyway, and if so what are\nthey? It didn't look to me like the deferred event executor would do\nanything with a logged event that has no triggers ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Jan 2001 18:13:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in FOREIGN KEY "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > ISTM commands/trigger.c is broken.\n> > The behabior seems to be changed by recent changes made by Tom.\n> \n> Hm. I changed the code to not log an AFTER event unless there is\n> actually a trigger of the relevant type, thus suppressing what I\n> considered a very serious memory leak in the non-deferred-trigger case.\n> Are there cases where we must log an event anyway, and if so what are\n> they? It didn't look to me like the deferred event executor would do\n> anything with a logged event that has no triggers ...\n> \n\nBecause I don't know details about trigger stuff, I may be\nmisunderstanding. As far as I see, KEY_CHANGED stuff\nrequires to log every event about logged tuples.\n\nHowever I'm suspicious if KEY_CHANGED check is necessary.\nRemoving KEY_CHANGED stuff seems to solve the TODO \n FOREIGN KEY INSERT & UPDATE/DELETE in transaction \"change violation\"\nthough it may introduce other bugs. \n\nRegards,\nHiroshi Inoue\n",
"msg_date": "Sat, 27 Jan 2001 13:29:30 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Bug in FOREIGN KEY "
},
{
"msg_contents": "I wrote:\n> Are there cases where we must log an event anyway, and if so what are\n> they? It didn't look to me like the deferred event executor would do\n> anything with a logged event that has no triggers ...\n\nOops, I missed the uses of deferredTriggerGetPreviousEvent(). Fixed\nnow.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Jan 2001 00:20:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in FOREIGN KEY "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Because I don't know details about trigger stuff, I may be\n> misunderstanding. As far as I see, KEY_CHANGED stuff\n> requires to log every event about logged tuples.\n\nI just realized that myself. The code was still doing it the hard\nway (eg, logging *both* before and after events for each tuple),\nbut it does seem necessary to log all events if there is either an\nUPDATE or DELETE deferred trigger.\n\n> However I'm suspicious if KEY_CHANGED check is necessary.\n> Removing KEY_CHANGED stuff seems to solve the TODO \n> FOREIGN KEY INSERT & UPDATE/DELETE in transaction \"change violation\"\n> though it may introduce other bugs. \n\nI suspect it just masks the problem by preventing the trigger code\nfrom executing ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Jan 2001 00:25:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in FOREIGN KEY "
},
{
"msg_contents": "-----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> > However I'm suspicious if KEY_CHANGED check is necessary.\n> > Removing KEY_CHANGED stuff seems to solve the TODO \n> > FOREIGN KEY INSERT & UPDATE/DELETE in transaction \"change violation\"\n> > though it may introduce other bugs. \n> \n> I suspect it just masks the problem by preventing the trigger code\n> from executing ...\n>\n\nI've examined the new TODO\n * FOREIGN KEY INSERT & UPDATE/DELETE in transaction \"change violation\"\na little and am now wondering why it has remained unsolved until now.\n\nISTM there are 2 different RI related issues.\n1) \"begin; insert; delete(or update pk of) the inserted tuple\"\n causes a \"change violation\" error.\n2) For deferred RI constraints\n \"begin;delete a pk;insert the same pk;commit;\"\n fails(or misbehaves) in case the corresponding fk\n exist.\n\nShouldn't we distinguish above 2 issues clearly ?\nAnd doesn't the new TODO correspond to 1) ?\nThe issue 1) seems to be caused due to the transaction-wide\nKEY_CHANGED check. Isn't it sufficient to check KEY_CHANGED\nper query. For example, how about clearing KEY_CHANGED after\nevery DeferredTriggerEndQeury() ?\n\nRegards,\nHiroshi Inoue\n",
"msg_date": "Sun, 28 Jan 2001 07:21:26 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Bug in FOREIGN KEY "
}
]
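While the per-statement semantics Jan describes are being worked out, one workaround for the first reproduction (an assumption drawn from the transaction-wide nature of the check, not a verified fix) is simply not to delete a row inserted in the same transaction:

BEGIN;
INSERT INTO primarytest2 VALUES (5,5);
COMMIT;
BEGIN;
DELETE FROM primarytest2 WHERE col1 = 5 AND col2 = 5;  -- no longer touches a row changed in this transaction
COMMIT;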
|
[
{
"msg_contents": "> >> What is the default commit delay now?\n> \n> > As before 5 * 10^(-6) sec - pretty the same as sleep(0) -:)\n> > Seems CommitDelay is not very useful parameter now - XLogFlush\n> > logic and fsync time add some delay.\n> \n> There was a thread recently about smarter ways to handle shared fsync\n> of the log --- IIRC, we talked about self-tuning commit delay,\n> releasing waiting processes as soon as someone else had fsync'd, etc.\n> Looks like none of those ideas are in the code now. Did you not like \n> any of those ideas, or just no time to work on it yet?\n\nWe're in beta - it's better to test WAL to find/fix bugs than make\nfurther improvements.\n\nAlso, I've run test with 100 clients inserting records into 100 tables\n(to minimize contentions) - 915 tps with fsync and 1190 tps without fsync.\nSo, we do ~ 18 commits per fsync now and probably we'll be able to\nincrease commit performance by ~ 30%, no more.\n\nVadim\n",
"msg_date": "Tue, 12 Dec 2000 10:30:04 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: 7.0.3(nofsync) vs 7.1 "
}
]
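For anyone wanting to reproduce the trade-off Vadim measured, the relevant 7.1 run-time settings can be tried per session. A sketch only, with illustrative values (this assumes both variables are exposed as user-settable options in the release):

SET commit_delay = 5;       -- microseconds to wait before flushing the log
SET commit_siblings = 5;    -- only delay if at least this many other xacts are active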
|
[
{
"msg_contents": "I'm having trouble with like et.al. as there is no single character \nin et_EE locale (on linux at least) that is bigger than all the others.\n\nI would like to modify my locale definition files so that char(255) \nwould always sort after all others but I can't find docs on modifying \nthe locales\n\nSo i have a couple of questions:\n\n1) what file must i change and how to get char(255) out of the Y-group \n and into a separate group that sorts after all others \n (I guess the file is /usr/share/i18n/locales/et_EE, but I'm not so \n sure about the how part)\n\n2) how can I then turn this file into the various LC_* files\n\n----------\nHannu\n",
"msg_date": "Wed, 13 Dec 2000 00:25:21 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need help with redefining locales"
}
]
|
[
{
"msg_contents": "Hi,\n\nI am trying to emulate MySQL's SET type, by creating a new postgresql type.\n\nHowever, is it possible to create a type that has different parameters\nwherever it is used.\n\nFor instance - the varchar type takes as a parameter the max characters in\nthe field. Although there is only one varchar type, it has different\nproperties depending on whether or not it is varchar(5) or varchar(20).\n\nI wish to be able to declare:\n\nbitset('LOW','MEDIUM','HIGH') // Not sure of exact syntax\n\nInternally stored as an int4.\n\nThe trouble is in writing the in and out functions. They need to be able to\nstore a list of token names in order to recreate the comma delimited list of\ntokens from the internal bitset, and vice versa...\n\nAny help?\n\nThanks,\n\nChris\n\n--\nChristopher Kings-Lynne\nFamily Health Network (ACN 089 639 243)\n\n",
"msg_date": "Wed, 13 Dec 2000 13:02:17 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Creating a 'SET' type"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> However, is it possible to create a type that has different parameters\n> wherever it is used.\n> For instance - the varchar type takes as a parameter the max characters in\n> the field. Although there is only one varchar type, it has different\n> properties depending on whether or not it is varchar(5) or varchar(20).\n\nRight now, that support is hard-wired into the parser for each such type\n(and there aren't many). It might be interesting to look at what it\nwould take to make a generalized mechanism whereby a type name could\naccept parameters, with a type-specific routine being responsible for\nreducing the parameters down to a typmod value. One problem you'd run\ninto, I think, is creation of parsing ambiguities --- is NUMERIC(9,2)\na type specification, or a function call? Right now it's a type spec\nbecause NUMERIC is a keyword in the grammar, but that won't do for an\nextensible mechanism.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2000 11:55:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Creating a 'SET' type "
}
]
|
[
{
"msg_contents": "I propose we modify C functions for 7.2. \n\n( I'll volunteer to do as much as I can figure out ;-)\n\n(1) C functions should be able to return multiple values.\n\n(2) A setup and breakdown function should be able to be called\nsurrounding the query set in which a function is called. This allows\nconstructors and destructors.\n\n(3) A function should be able to tell Postgres how to use it. For\ninstance:\n\nselect * from table where column = function();\n\nShould be able to instruct Postgres to either take the value returned\nand search that one value (allowing index match against the value), or\nperform a table scan against the function each time. Both behaviors are\nimportant. Currently a function seems to force a table scan.\n\nEstimates:\n1 may be difficult. 2 should be easy enough. 3, depending on the code\ndependencies, could either be very hard or easy. (my guess is that it\nwould be hard)\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Wed, 13 Dec 2000 00:31:38 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "C function proposal redux"
},
{
"msg_contents": "\nOn Wed, 13 Dec 2000, mlw wrote:\n\n> I propose we modify C functions for 7.2. \n\n Too simple imagine anything, but often difficult do it :-)\n\n> (1) C functions should be able to return multiple values.\n\n for 7.2 / 7.3 are planned functions return tuples, but do it \nis really diffucult. See the current source...\n\n> (2) A setup and breakdown function should be able to be called\n> surrounding the query set in which a function is called. This allows\n> constructors and destructors.\n\n Why? Can you show any example where is it needful? If you really\nneed an init/destroy, you can use:\n\nSELECT my_init();\n..query...\nSELECT my_destroy();\n\n> (3) A function should be able to tell Postgres how to use it. For\n> instance:\n> \n> select * from table where column = function();\n> \n> Should be able to instruct Postgres to either take the value returned\n> and search that one value (allowing index match against the value), or\n> perform a table scan against the function each time. Both behaviors are\n> important. Currently a function seems to force a table scan.\n\nHere I not undestand. We have 'iscacheable' - or what you mean? \n\n\n\t\t\t\tKarel\n\n",
"msg_date": "Wed, 13 Dec 2000 09:03:24 +0100 (CET)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: C function proposal redux"
}
]
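For point (3), the hint Karel refers to is declared at function creation time. A minimal sketch in 7.0/7.1 syntax (the function name and body here are placeholders):

CREATE FUNCTION current_region() RETURNS int4
    AS 'SELECT 1' LANGUAGE 'sql'
    WITH (iscachable);

-- the planner may now evaluate the call once and match the
-- result against an index instead of calling it per row
SELECT * FROM some_table WHERE some_column = current_region();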
|
[
{
"msg_contents": "Hi,\n\nI have just tried using the ILIKE function in 7.0.3. I assume that it is\njust a case-insensitive version of LIKE. (Please correct me if I am wrong\non this assumption.)\n\nThis is my example test case:\n\nusa=# select 'test' LIKE '%es%';\n ?column?\n----------\n t\n(1 row)\n\nusa=# select 'test' ILIKE '%es%';\nERROR: parser: parse error at or near \"ilike\"\nusa=#\n\nHEre is a dump (\\do) of the some of the tilde operators in 7.0.3:\n\n ~* | bpchar | text | bool | matches regex.,\ncase-insensitive\n ~* | name | text | bool | matches regex.,\ncase-insensitive\n ~* | text | text | bool | matches regex.,\ncase-insensitive\n ~* | varchar | text | bool | matches regex.,\ncase-insensitive\n ~= | box | box | bool | same as\n ~= | circle | circle | bool | same as\n ~= | point | point | bool | same as\n ~= | polygon | polygon | bool | same as\n ~= | tinterval | tinterval | bool | same as\n ~~ | bpchar | text | bool | matches LIKE expression\n ~~ | name | text | bool | matches LIKE expression\n ~~ | text | text | bool | matches LIKE expression\n ~~ | varchar | text | bool | matches LIKE expression\n\nNotice that there's no ILIKE operators, (~~*), at all!\n\nIs this documented, but not implemented or what????\n\nChris\n\n",
"msg_date": "Wed, 13 Dec 2000 15:27:44 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug in ILIKE function?"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> \n> Hi,\n> \n> I have just tried using the ILIKE function in 7.0.3. I assume that it is\n> just a case-insensitive version of LIKE. (Please correct me if I am wrong\n> on this assumption.)\n\nAFAIK postgres 7.0.3 does not have it, ILIKE appeared in 7.1 \n\nBut you could use the case-independant regular expressions.\n\n------------\nHannu\n",
"msg_date": "Wed, 13 Dec 2000 10:58:25 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in ILIKE function?"
},
{
"msg_contents": "Christopher Kings-Lynne writes:\n\n> I have just tried using the ILIKE function in 7.0.3.\n\nThere is no ILIKE function in 7.0.3.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 13 Dec 2000 20:28:52 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in ILIKE function?"
},
{
"msg_contents": "That's odd, because it's in the 7.0.3 documentation...\n\nChris\n\n> -----Original Message-----\n> From: Peter Eisentraut [mailto:[email protected]]\n> Sent: Thursday, December 14, 2000 3:29 AM\n> To: Christopher Kings-Lynne\n> Cc: Pgsql-Hackers\n> Subject: Re: [HACKERS] Bug in ILIKE function?\n> \n> \n> Christopher Kings-Lynne writes:\n> \n> > I have just tried using the ILIKE function in 7.0.3.\n> \n> There is no ILIKE function in 7.0.3.\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n",
"msg_date": "Thu, 14 Dec 2000 09:06:57 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Bug in ILIKE function?"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> That's odd, because it's in the 7.0.3 documentation...\n\nWhere? A quick grep doesn't find it there anywhere.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2000 21:35:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in ILIKE function? "
}
]
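For completeness, the usual 7.0.x substitutes for ILIKE - both built from operators that do exist there, per the \do listing above - are:

SELECT 'test' ~* 'es';                     -- case-insensitive regex match
SELECT lower('test') LIKE lower('%ES%');   -- fold both sides, then LIKE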
|
[
{
"msg_contents": "Hello all,\nsorry, but I haven't received any replies to my previous message... and\nit's important for me to solve it.\n\nWhen I perform an action on a psql database (e.g. insert into a table),\nsome more action could be induced, via trigger firing:\n - is it possible to know at any time the exact action chain?\n - is it possible to know at any time if the control is inside a\ntrigger (and which one)?\nSorry, I tried to search in www.postgresql.org but I wasn't able to\nfind anything useful.\n\nThese questions arise because I'm trying to keep in sync two identical\npsql databases; I have audited tables and an audit trail. I'm facing the\nproblem of recognising which actions in the trail were due to a trigger\nfiring, rather than explicitly commanded.\n\nThanks again\nFabio\n",
"msg_date": "Wed, 13 Dec 2000 11:35:57 +0200",
"msg_from": "Fabio Nanni <[email protected]>",
"msg_from_op": true,
"msg_subject": "triggers and actions tree/2"
}
]
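A partial answer to the first question - the static set of triggers on a table, though not the runtime call chain - can be read from the system catalogs. A sketch (the table name is a placeholder):

SELECT t.tgname
FROM   pg_trigger t, pg_class c
WHERE  t.tgrelid = c.oid
  AND  c.relname = 'audited_table';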
|
[
{
"msg_contents": "\n> > anyway? ;-)) If so, a search for artistid 100050450 definitely *should*\n> > use a sequential scan.\n> \n> I tested this statement against the database and you are right, about 14\n> seconds with the index, 4 without.\n\nNow I don't understand the problem any more. Are you complaining, that\nthe optimizer is choosing a faster path ? Or are you saying, that you also\nget the seq scan for other very infrequent values ?\n\nAndreas\n",
"msg_date": "Wed, 13 Dec 2000 11:17:43 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: SourceForge & Postgres"
}
]
|
[
{
"msg_contents": "\nHi!\n\nI download, configure and install postgresql-7.1beta1 _exactly_ the\nsame way as my previous version - 7.0.2:\n\n\n\n./configure --enable-multibyte=KOI8 --enable-locale\ngmake\ngmake install\n\ninitdb\n\n\n\nBut it seems to me locale support gone out. In particulary\n\nselect upper('О©╫О©╫О©╫О©╫О©╫О©╫О©╫ О©╫О©╫О©╫О©╫О©╫ - Russian text');\n\n(first two words are russian in lowercase in KOI8 encoding) don't give\nuppercase russian text - russian letters don't change. But when I do\nthe _same_ steps with 7.0.2 - all is OK. May anyone help me? I work\nunder FreeBSD 4.0. \n\n-- \nAnatoly K. Lasareff Email: [email protected] \nhttp://tolikus.hq.aaanet.ru:8080 Phone: (8632)-710071\n",
"msg_date": "13 Dec 2000 14:06:16 +0300",
"msg_from": "[email protected] (Anatoly K. Lasareff)",
"msg_from_op": true,
"msg_subject": "Locale and multibyte support in 7.1"
},
{
"msg_contents": "On 13 Dec 2000, Anatoly K. Lasareff wrote:\n\n> Date: 13 Dec 2000 14:06:16 +0300\n> From: \"Anatoly K. Lasareff\" <[email protected]>\n> To: [email protected]\n> Subject: [HACKERS] Locale and multibyte support in 7.1\n> \n> \n> Hi!\n> \n> I download, configure and install postgresql-7.1beta1 _exactly_ the\n> same way as my previous version - 7.0.2:\n> \n> \n> \n> ./configure --enable-multibyte=KOI8 --enable-locale\n> gmake\n> gmake install\n> \n> initdb\n> \n\nYou need to do \n\n\tinitb -E KOI8\n\nand setup environment properly\nLet me know if you still have a problem\n\n\n\tRegards,\n\t\tOleg\n> \n> \n> But it seems to me locale support gone out. In particulary\n> \n> select upper('О©╫О©╫О©╫О©╫О©╫О©╫О©╫ О©╫О©╫О©╫О©╫О©╫ - Russian text');\n> \n> (first two words are russian in lowercase in KOI8 encoding) don't give\n> uppercase russian text - russian letters don't change. But when I do\n> the _same_ steps with 7.0.2 - all is OK. May anyone help me? I work\n> under FreeBSD 4.0. \n> \n> -- \n> Anatoly K. Lasareff Email: [email protected] \n> http://tolikus.hq.aaanet.ru:8080 Phone: (8632)-710071\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 13 Dec 2000 14:46:15 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Locale and multibyte support in 7.1"
}
]
|
[
{
"msg_contents": "I stated this before, but I did not get a helpful answer. I might have \nmisunderstood tghe documentation on foreign keys:\n\ncreate table global(id serial);\ncreate table child(anything text) inherits(global);\ninsert into child(anything) values ('test);\n\nNow, a select * from child shows\nid\tanything\n-------------\n1\ttest\n\nSo far, so good.\n\ncreate table dependend(globid int4 references child(id) on update cascade on \ndelete cascade);\n\ngives me an error: \nCREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: UNIQUE constraint matching given keys for referenced table \"child\" \nnot found \n\nOnce again, _why_ is this? What would inheritance be good for if I can't use \nit this way? Bad enough that inheritance of triggers or constraints doesn't \nwork, but a simple refernce to a attribute should be possible, shouldn't it?\n\nIf there is a good reason not to allow it, I would like to know. If not, I \nwould be willing to help out implementing it, if somebody points me into the \nright direction in the code (or documentation)\n\nHorst\n",
"msg_date": "Wed, 13 Dec 2000 22:34:27 +1100",
"msg_from": "Horst Herb <[email protected]>",
"msg_from_op": true,
"msg_subject": "PLEASE help with foreign key and inheritance problem"
}
]
|
[
{
"msg_contents": "hi, there!\n\ntest=# create table a(id int primary key);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'a_pkey' for\ntable 'a'\nCREATE\ntest=# create table b(id int references a);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nCREATE\ntest=# insert into a values(45);\nINSERT 34924 1\ntest=# insert into a values(43);\nINSERT 34925 1\ntest=# insert into a values(34);\nINSERT 34926 1\ntest=# select a.id, b.id from a left join b using(id);\n id | id \n----+----\n 43 | \n 45 | \n(2 rows)\n\ntest=# select * from a;\n id \n----\n 45\n 43\n 34\n(3 rows)\n\ntest=# select * from b;\n id \n----\n(0 rows)\n\ntest=# insert into b values(34);\nINSERT 34927 1\ntest=# select a.id, b.id from a left join b using(id);\n id | id \n----+----\n 34 | 34\n 43 | \n 45 | \n(3 rows)\n\ntest=# \n \nlark:~$psql --version\npsql (PostgreSQL) 7.1beta1\ncontains readline, history, multibyte support\n[...]\nlark:~$uname -a\nFreeBSD xxx 4.2-STABLE FreeBSD 4.2-STABLE #0: Wed Dec\n6 17:16:57 NOVT 2000 xxx:/usr/obj/usr/src/sys/alf i386\n\nsorry, if it has already been fixed\n\n/fjoe\n\n",
"msg_date": "Wed, 13 Dec 2000 18:45:55 +0600 (NS)",
"msg_from": "Max Khon <[email protected]>",
"msg_from_op": true,
"msg_subject": "left join bug?"
},
{
"msg_contents": "Max Khon <[email protected]> writes:\n> test=# select a.id, b.id from a left join b using(id);\n> id | id \n> ----+----\n> 43 | \n> 45 | \n> (2 rows)\n\n> test=# select * from a;\n> id \n> ----\n> 45\n> 43\n> 34\n> (3 rows)\n\nUgh. It looks like mergejoin must be mishandling the first left-side\nitem when the right side is empty. Will take a look...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2000 11:37:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: left join bug? "
},
{
"msg_contents": "Got it --- was the proverbial one-line fix --- thanks for the report!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2000 18:47:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: left join bug? "
}
]
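For reference, with the fix applied the first query should return all three rows from a (shown here in no particular order):

test=# select a.id, b.id from a left join b using(id);
 id | id 
----+----
 34 |    
 43 |    
 45 |    
(3 rows)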
|
[
{
"msg_contents": "\n> I stated this before, but I did not get a helpful answer. I \n> might have \n> misunderstood tghe documentation on foreign keys:\n> \n> create table global(id serial);\n> create table child(anything text) inherits(global);\n\nneed:\n\tcreate unique index child_id_index on child (id);\n\n> gives me an error: \n> CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> ERROR: UNIQUE constraint matching given keys for referenced \n> table \"child\" \n> not found \n\nThen the above works. \nActually the error message sounds sufficiently clear to me, no?\n\nAndreas\n",
"msg_date": "Wed, 13 Dec 2000 13:59:02 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: PLEASE help with foreign key and inheritance proble\n\tm"
}
]
|
[
{
"msg_contents": ">>\n>> \tcreate unique index child_id_index on child (id);\n\n>Thanks a lot. You saved my day :-)))\n\nAlways feels good to be able to help :-)\n\n> > > CREATE TABLE will create implicit trigger(s) for FOREIGN \n> KEY check(s)\n> > > ERROR: UNIQUE constraint matching given keys for referenced\n> > > table \"child\"\n> > > not found\n> >\n> > Then the above works.\n> > Actually the error message sounds sufficiently clear to me, no?\n> \n> I retrospect, yes. Still, I think inheritance could/should do that for me \n> automatically. Is there a good reason why it doesn't ?\n\nNone, other that 1. noone implemented it and 2nd there was no generally \naccepted plan on how this should work.\n\ne.g. should the unique index for the serial span the whole hierarchy,\nor should a separate index be created for each table ?\n\nAs a hint I would keep my fingers off inheritance as it stands now, \nsince all it is good for is to save you some typing for the create table\nstatements. It currently has almost no other functionality except to \ngive you the supertable columns for all rows in the hierarchy if you\nselect * from supertable.\n\nAndreas\n",
"msg_date": "Wed, 13 Dec 2000 15:05:34 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: PLEASE help with foreign key and inheritance pr\n\toble m"
}
]
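Putting the fix together with the original schema, the full working sequence is (the same statements as above, collected in one place):

CREATE TABLE global (id serial);
CREATE TABLE child (anything text) INHERITS (global);
CREATE UNIQUE INDEX child_id_index ON child (id);  -- supplies the required UNIQUE constraint

CREATE TABLE dependend (
    globid int4 REFERENCES child (id)
           ON UPDATE CASCADE ON DELETE CASCADE
);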
|
[
{
"msg_contents": "I'm trying to delete all the records or only one record from a table but\ni'm having this message:\nERROR: Unable to identify an operator '=' for types 'int4' and 'text'\n You will have to retype this query using an explicit cast\n\nWhat's this means ???\n\nThanks\n\nLuis Sousa\n\n",
"msg_date": "Wed, 13 Dec 2000 16:29:17 +0000",
"msg_from": "Luis Sousa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem when deleting a record from a table"
}
]
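The message means one side of the '=' is int4 and the other text, and 7.0 will not coerce between them implicitly. An explicit cast on the text side resolves it; a sketch against a hypothetical table items(id int4):

DELETE FROM items WHERE id = '5'::int4;          -- PostgreSQL-style cast
DELETE FROM items WHERE id = CAST('5' AS int4);  -- SQL92 spelling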
|
[
{
"msg_contents": "Hi!\n\nMy name is Daniel �kerud, a swedish studen, writing an essay for my exam.\nThe label will be something like: \"Database algorithms\".\nI know it is a complex task, and will ofcourse, as soon as possible,\nspecify more preciesly what it will be about.\n\nI have thoughts about writing about, for example, how searching a\ndatabase will go faster by indexing certain columns in a table.\nAnd what makes this same procedure slower by indexing wrong, or\ntoo many. (Correct me if I am wrong).\n\nI assume that there is a cascade of algorithms inside the code\nof a databasemanager. There is no doubt work for me :)\n\nDo you have any tips of places where I can gather information?\nDo you recommend a book in this topic?\n\nI have plans of investingating some of the code in several of the Open \nSource databasemanagers out there.\n\nThank you,\nI really appreciate your help!\n\nDaniel �kerud\nSoftwareEngineering, Malm� University.\[email protected]\n\n------------------------------------------------------------\n Get your FREE web-based e-mail and newsgroup access at:\n http://MailAndNews.com\n\n Create a new mailbox, or access your existing IMAP4 or\n POP3 mailbox from anywhere with just a web browser.\n------------------------------------------------------------\n\n",
"msg_date": "Wed, 13 Dec 2000 11:54:24 -0500",
"msg_from": "\"D=?ISO-8859-1?Q?=C5?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Writing essay, please help!"
}
]