[
{
"msg_contents": "Hi!\n\nMy name is Daniel �kerud, a swedish studen, writing an essay for my exam.\nThe label will be something like: \"Database algorithms\".\nI know it is a complex task, and will ofcourse, as soon as possible,\nspecify more preciesly what it will be about.\n\nI have thoughts about writing about, for example, how searching a\ndatabase will go faster by indexing certain columns in a table.\nAnd what makes this same procedure slower by indexing wrong, or\ntoo many. (Correct me if I am wrong).\n\nI assume that there is a cascade of algorithms inside the code\nof a databasemanager. There is no doubt work for me :)\n\nDo you have any tips of places where I can gather information?\nDo you recommend a book in this topic?\n\nI have plans of investingating some of the code in several of the Open \nSource databasemanagers out there.\n\nThank you,\nI really appreciate your help!\n\nDaniel �kerud\nSoftwareEngineering, Malm� University.\[email protected]\n\n------------------------------------------------------------\n Get your FREE web-based e-mail and newsgroup access at:\n http://MailAndNews.com\n\n Create a new mailbox, or access your existing IMAP4 or\n POP3 mailbox from anywhere with just a web browser.\n------------------------------------------------------------\n\n",
"msg_date": "Wed, 13 Dec 2000 12:16:04 -0500",
"msg_from": "\"D=?ISO-8859-1?Q?=C5?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "DB Algorithm Essay, please help"
},
{
"msg_contents": "Database research papers at berkeley are at:\nhttp://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/\n\n\nOn Wednesday 13 December 2000 12:16, D� wrote:\n> Hi!\n>\n> My name is Daniel �kerud, a swedish studen, writing an essay for my exam.\n> The label will be something like: \"Database algorithms\".\n> I know it is a complex task, and will ofcourse, as soon as possible,\n> specify more preciesly what it will be about.\n>\n> I have thoughts about writing about, for example, how searching a\n> database will go faster by indexing certain columns in a table.\n> And what makes this same procedure slower by indexing wrong, or\n> too many. (Correct me if I am wrong).\n>\n> I assume that there is a cascade of algorithms inside the code\n> of a databasemanager. There is no doubt work for me :)\n>\n> Do you have any tips of places where I can gather information?\n> Do you recommend a book in this topic?\n>\n> I have plans of investingating some of the code in several of the Open\n> Source databasemanagers out there.\n>\n> Thank you,\n> I really appreciate your help!\n>\n> Daniel �kerud\n> SoftwareEngineering, Malm� University.\n> [email protected]\n>\n> ------------------------------------------------------------\n> Get your FREE web-based e-mail and newsgroup access at:\n> http://MailAndNews.com\n>\n> Create a new mailbox, or access your existing IMAP4 or\n> POP3 mailbox from anywhere with just a web browser.\n> ------------------------------------------------------------\n\n-- \n-------- Robert B. Easter [email protected] ---------\n- CompTechNews Message Board http://www.comptechnews.com/ -\n- CompTechServ Tech Services http://www.comptechserv.com/ -\n---------- http://www.comptechnews.com/~reaster/ ------------\n",
"msg_date": "Wed, 13 Dec 2000 17:08:03 -0500",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB Algorithm Essay, please help"
}
]
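A quick illustration of the indexing trade-off Daniel asks about, sketched in SQL (the table, column, and index names here are invented for illustration; they do not come from the thread):

    -- Without an index, a lookup must sequentially scan the whole table.
    CREATE TABLE employees (id integer, name char(20), salary integer);
    SELECT name FROM employees WHERE id = 4711;          -- sequential scan

    -- An index on the searched column lets the planner use an index scan;
    -- EXPLAIN shows which plan gets chosen.
    CREATE INDEX employees_id_idx ON employees (id);
    EXPLAIN SELECT name FROM employees WHERE id = 4711;  -- index scan

    -- The flip side: every index must also be maintained on INSERT, UPDATE,
    -- and DELETE, so indexing rarely searched columns, or too many columns,
    -- slows writes without speeding up any query.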
[
{
"msg_contents": "Hi,\nI came across an anomaly yesterday which I would like clarified. I am using\n6.5.3 but I don't know whether this is version specific.\n\nI created a new db user with a name that was 14 characters long, I allowed\ncreateuser to also create a dbase of the dame name. So far so good. There\nis no corresponding unix user so everything seemed fine.\n\nI then tried to add this user to an existing pg_passwd file. I didn't get\nan error, but it didn't add it either. What pg_passwd did do was to truncate\nthe name to 8 characters, it then found an existing user as the root of my\nlong name is an existing unix/postgres user and changed the password for\nthat entry. This isn't what I wanted to happen. After some experimentation\nI found that I could add a user xxx to the file with pg_passwd and then edit\nthe file and change the xxx to the correct 14 character name.\n\nIs there a defined max length for postgreSQL user names ?\nIf there is and it is 8, then createuser should enforce it.\nIf the length is > 8 then pg_passwd needs fixing so that it doesn't\ntruncate.\n\nIn either case, a message should have been given rather than the silent\ntruncation, as it had me foxed for a while as to why my new user couldn't\nuse the database and the existing user couldn't either.\n--\nGlen and Rosanne Eustace,\nGodZone Internet Services, a division of AGRE Enterprises Ltd.,\nP.O. Box 8020, Palmerston North, New Zealand\nPh: +64 6 357 8168, Mobile: +64 21 424 015\n\n",
"msg_date": "Thu, 14 Dec 2000 06:34:33 +1300",
"msg_from": "\"Glen and Rosanne Eustace\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "User names"
},
{
"msg_contents": "\"Glen and Rosanne Eustace\" <[email protected]> writes:\n> Is there a defined max length for postgreSQL user names ?\n> If there is and it is 8, then createuser should enforce it.\n> If the length is > 8 then pg_passwd needs fixing so that it doesn't\n> truncate.\n\nA quick look in the pg_passwd sources indeed shows some hardwired limits\non both username and password length. This is bogus. Anyone want to\ncontribute a patch?\n\nFor the record, the max username length should be NAMEDATALEN-1, and\nthe password field in pg_shadow is 'text', so it should be happy to\ntake any password that you're willing to type ;-). However, crypt\npassword processing depends on crypt(3), which more than likely ignores\ncharacters beyond the eighth on most platforms.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Dec 2000 00:41:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: User names "
},
{
"msg_contents": "On Thu, Dec 14, 2000 at 12:41:34AM -0500, Tom Lane wrote:\n> For the record, the max username length should be NAMEDATALEN-1, and\n> the password field in pg_shadow is 'text', so it should be happy to\n> take any password that you're willing to type ;-). However, crypt\n> password processing depends on crypt(3), which more than likely ignores\n> characters beyond the eighth on most platforms.\n\nalso be aware that as of FreeBSD 4.1 or so, the crypt() function on freebsd\nis actually doing some kinda MD5 thingy unless you specifically use\ncrypt_set_format().\n\n-- \n[ Jim Mercer [email protected] ]\n[ Reptilian Research -- Longer Life through Colder Blood ]\n[ aka [email protected] +1 416 410-5633 ]\n",
"msg_date": "Thu, 14 Dec 2000 01:06:53 -0500",
"msg_from": "Jim Mercer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: User names"
},
{
"msg_contents": "\nCan someone comment on this?\n\n> \"Glen and Rosanne Eustace\" <[email protected]> writes:\n> > Is there a defined max length for postgreSQL user names ?\n> > If there is and it is 8, then createuser should enforce it.\n> > If the length is > 8 then pg_passwd needs fixing so that it doesn't\n> > truncate.\n> \n> A quick look in the pg_passwd sources indeed shows some hardwired limits\n> on both username and password length. This is bogus. Anyone want to\n> contribute a patch?\n> \n> For the record, the max username length should be NAMEDATALEN-1, and\n> the password field in pg_shadow is 'text', so it should be happy to\n> take any password that you're willing to type ;-). However, crypt\n> password processing depends on crypt(3), which more than likely ignores\n> characters beyond the eighth on most platforms.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Jan 2001 23:43:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: User names"
},
{
"msg_contents": "\nSeems like a bug that needs fixing.\n\n> \"Glen and Rosanne Eustace\" <[email protected]> writes:\n> > Is there a defined max length for postgreSQL user names ?\n> > If there is and it is 8, then createuser should enforce it.\n> > If the length is > 8 then pg_passwd needs fixing so that it doesn't\n> > truncate.\n> \n> A quick look in the pg_passwd sources indeed shows some hardwired limits\n> on both username and password length. This is bogus. Anyone want to\n> contribute a patch?\n> \n> For the record, the max username length should be NAMEDATALEN-1, and\n> the password field in pg_shadow is 'text', so it should be happy to\n> take any password that you're willing to type ;-). However, crypt\n> password processing depends on crypt(3), which more than likely ignores\n> characters beyond the eighth on most platforms.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 09:45:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] User names"
}
]
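A sketch of the limits discussed above (the user name is a placeholder; the 31-character figure assumes the default NAMEDATALEN of 32 in PostgreSQL of this era):

    -- CREATE USER accepts names up to NAMEDATALEN-1 characters and stores
    -- them untruncated in pg_shadow; only the pg_passwd utility truncated.
    CREATE USER fourteen_chars WITH PASSWORD 'secret';
    SELECT usename FROM pg_shadow WHERE usename = 'fourteen_chars';

As Tom notes, the password column itself is text, so only crypt(3)-based authentication imposes a practical limit on password length.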
[
{
"msg_contents": "Hi!\n\nMy name is Daniel �kerud, a swedish studen, writing an essay for my exam.\nThe label will be something like: \"Database algorithms\".\nI know it is a complex task, and will ofcourse, as soon as possible,\nspecify more preciesly what it will be about.\n\nI have thoughts about writing about, for example, how searching a\ndatabase will go faster by indexing certain columns in a table.\nAnd what makes this same procedure slower by indexing wrong, or\ntoo many. (Correct me if I am wrong).\n\nI assume that there is a cascade of algorithms inside the code\nof a databasemanager. There is no doubt work for me :)\n\nDo you have any tips of places where I can gather information?\nDo you recommend a book in this topic?\n\nI have plans of investingating some of the code in several of the Open\nSource databasemanagers out there.\n\nThank you,\nI really appreciate your help!\n\nDaniel �kerud\nSoftwareEngineering, Malm� University.\[email protected]\n\n",
"msg_date": "Wed, 13 Dec 2000 13:37:16 -0500",
"msg_from": "\"D=?ISO-8859-1?Q?=C5?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "DB-Manager Algorithms Essay. Please help!"
},
{
"msg_contents": "Hi all,\n\nCan anyone point the doc explaining how to assign a new admin level\naccount name and password? I can't seem to find it in docs or search\nengines. Thanks.\n\nR. Smith\n\n",
"msg_date": "Fri, 15 Dec 2000 10:46:09 -0800",
"msg_from": "Roger Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "How to assign a new admin account name and password for 7.02?"
}
]
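Roger's question at the end of this thread is normally answered with ALTER USER; a minimal sketch (user names and passwords are placeholders):

    -- Change the password of the existing superuser account:
    ALTER USER postgres WITH PASSWORD 'newpassword';

    -- Or create an additional administrative account:
    CREATE USER admin WITH PASSWORD 'secret' CREATEDB CREATEUSER;

Note that pg_hba.conf must use 'password' or 'crypt' authentication (rather than 'trust') for the password to actually be checked.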
[
{
"msg_contents": "I noticed the other day that one of my pg databases was slow, so I ran\nvacuum on it, which brought a question to mind: why the need? I looked\nat my oracle server and we aren't doing anything of the sort (that I can\nfind), so why does pg need it? Any info?\n\nThanks,\n- brandon\n\n\nb. palmer, [email protected]\npgp: www.crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Wed, 13 Dec 2000 14:41:21 -0500 (EST)",
"msg_from": "bpalmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why vacuum?"
},
{
"msg_contents": "El Mi� 13 Dic 2000 16:41, bpalmer escribi�:\n> I noticed the other day that one of my pg databases was slow, so I ran\n> vacuum on it, which brought a question to mind: why the need? I looked\n> at my oracle server and we aren't doing anything of the sort (that I can\n> find), so why does pg need it? Any info?\n\nI know nothing about Oracle, but I can tell you that Informix has an update \nstatistics, which I don't know if it's similar to vacuum, but....\nWhat vacuum does is clean the database from rows that were left during \nupdates and deletes, non the less, the tables get shrincked, so searches get \nfaster.\n\nSaludos... :-)\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Wed, 13 Dec 2000 20:08:05 -0300",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "* Martin A. Marques <[email protected]> [001213 15:15] wrote:\n> El Mi� 13 Dic 2000 16:41, bpalmer escribi�:\n> > I noticed the other day that one of my pg databases was slow, so I ran\n> > vacuum on it, which brought a question to mind: why the need? I looked\n> > at my oracle server and we aren't doing anything of the sort (that I can\n> > find), so why does pg need it? Any info?\n> \n> I know nothing about Oracle, but I can tell you that Informix has an update \n> statistics, which I don't know if it's similar to vacuum, but....\n> What vacuum does is clean the database from rows that were left during \n> updates and deletes, non the less, the tables get shrincked, so searches get \n> faster.\n\nYes, postgresql requires vacuum quite often otherwise queries and\nupdates start taking ungodly amounts of time to complete. If you're\nhaving problems because vacuum locks up your tables for too long\nyou might want to check out:\n\nhttp://people.freebsd.org/~alfred/vacfix/\n\nIt has some tarballs that have patches to speed up vacuum depending\non how you access your tables you can see up to a 20x reduction in\nvacuum time.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Wed, 13 Dec 2000 15:42:46 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "bpalmer wrote:\n> \n> I noticed the other day that one of my pg databases was slow, so I ran\n> vacuum on it, which brought a question to mind: why the need? I looked\n> at my oracle server and we aren't doing anything of the sort (that I can\n> find), so why does pg need it? Any info?\n\nHi,\n\nI'm one of the people beeing slightly bitten by the current vacuum\nbehaviour :), so i take the chance to add my suggestions to this\nquestion.\n\nFWIW, my thought is about a vacuumer process that, in background, scans\neach table for available blocks (for available I mean a block full of\ndeleted rows whose tid is commited) and fills a cache of those blocks\navailable to the backends.\n\nWhenever a backend needs to allocate a new block it looks for a free\nblock in the cache, if it finds any, it can use it, else it proceeds as\nusual appending the block at the tail.\n\nThe vacuumer would run with a very low priority, so that it doesn't suck\nprecious CPU and I/O when the load on the machine is high.\n\nA small flag on each table would avoid the vacuumer to scan the table if\nno empty block is found and no tuple has been deleted.\n\nOk, now tell me where this is badly broken :))\n\nJust my .02 euro :)\n\nBye!\n\n-- \n Daniele Orlandi\n",
"msg_date": "Wed, 13 Dec 2000 23:46:41 +0000",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "I have this nasty problem too, in early time, I don't know the problem, but we used it for a while,\nthan we found our table growing too fast without insert any record( we use update), this behaviour \nmost like M$ MSACCESS database I had used a long time ago which don't reuse deleted record \nspace and full fill your hard disk after several hours, the nasty vaccum block any other users to operate\non table, this is a big problem for a large table, because it will block tooo long to let other user to run\nquery. we have a project affected by this problem, and sadly we decide to use closure source database\n - SYBASE on linux, we havn't any other selections. :(\n\nnote that SYBASE and Informix both have 'update statistics' command, but they run it fast in seconds,\nnot block any other user, this is pretty. ya, what's good technology!\n\nXuYifeng\n\n----- Original Message ----- \nFrom: Martin A. Marques <[email protected]>\nTo: bpalmer <[email protected]>; <[email protected]>\nSent: Thursday, December 14, 2000 7:08 AM\nSubject: Re: [HACKERS] Why vacuum?\n\n\nEl Mi� 13 Dic 2000 16:41, bpalmer escribi�:\n> I noticed the other day that one of my pg databases was slow, so I ran\n> vacuum on it, which brought a question to mind: why the need? I looked\n> at my oracle server and we aren't doing anything of the sort (that I can\n> find), so why does pg need it? Any info?\n\nI know nothing about Oracle, but I can tell you that Informix has an update \nstatistics, which I don't know if it's similar to vacuum, but....\nWhat vacuum does is clean the database from rows that were left during \nupdates and deletes, non the less, the tables get shrincked, so searches get \nfaster.\n\nSaludos... :-)\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s email: [email protected]\nSanta Fe - Argentina http://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n\n\n",
"msg_date": "Thu, 14 Dec 2000 09:24:35 +0800",
"msg_from": "\"xuyifeng\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "I have this nasty problem too, in early time, I don't know the problem, but we used it for a while,\nthan we found our table growing too fast without insert any record( we use update), this behaviour \nmost like M$ MSACCESS database I had used a long time ago which don't reuse deleted record \nspace and full fill your hard disk after several hours, the nasty vaccum block any other users to operate\non table, this is a big problem for a large table, because it will block tooo long to let other user to run\nquery. we have a project affected by this problem, and sadly we decide to use closure source database\n - SYBASE on linux, we havn't any other selections. :(\n\nnote that SYBASE and Informix both have 'update statistics' command, but they run it fast in seconds,\nnot block any other user, this is pretty. ya, what's good technology!\n\nXuYifeng\n\n----- Original Message ----- \nFrom: Martin A. Marques <[email protected]>\nTo: bpalmer <[email protected]>; <[email protected]>\nSent: Thursday, December 14, 2000 7:08 AM\nSubject: Re: [HACKERS] Why vacuum?\n\n\nEl Mi� 13 Dic 2000 16:41, bpalmer escribi�:\n> I noticed the other day that one of my pg databases was slow, so I ran\n> vacuum on it, which brought a question to mind: why the need? I looked\n> at my oracle server and we aren't doing anything of the sort (that I can\n> find), so why does pg need it? Any info?\n\nI know nothing about Oracle, but I can tell you that Informix has an update \nstatistics, which I don't know if it's similar to vacuum, but....\nWhat vacuum does is clean the database from rows that were left during \nupdates and deletes, non the less, the tables get shrincked, so searches get \nfaster.\n\nSaludos... :-)\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s email: [email protected]\nSanta Fe - Argentina http://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n\n",
"msg_date": "Thu, 14 Dec 2000 09:40:17 +0800",
"msg_from": "\"xuyifeng\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "* xuyifeng <[email protected]> [001213 18:54] wrote:\n> I have this nasty problem too, in early time, I don't know the problem, but we used it for a while,\n> than we found our table growing too fast without insert any record( we use update), this behaviour \n> most like M$ MSACCESS database I had used a long time ago which don't reuse deleted record \n> space and full fill your hard disk after several hours, the nasty vaccum block any other users to operate\n> on table, this is a big problem for a large table, because it will block tooo long to let other user to run\n> query. we have a project affected by this problem, and sadly we decide to use closure source database\n> - SYBASE on linux, we havn't any other selections. :(\n> \n> note that SYBASE and Informix both have 'update statistics' command, but they run it fast in seconds,\n> not block any other user, this is pretty. ya, what's good technology!\n\nhttp://people.freebsd.org/~alfred/vacfix/\n\n-Alfred\n",
"msg_date": "Wed, 13 Dec 2000 19:09:23 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "> But why? I don't know of other databases that need to be 'vacuum'ed. Do\n> all others just do it internaly on a regular basis?\n>\n> What am I missing here?\n\nPlenty of other databases need to be 'vacuumed'. For instance, if you have\nan ms access database with 5 MB of data in it, and then delete all the data,\nleaving only the forms, etc - you will be left with a 5MB mdb file still!\n\nIf you then run 'Compact Database' (which is another word for 'vacuum'), the\nmdb file will be reduced down to 500k...\n\nChris\n\n",
"msg_date": "Thu, 14 Dec 2000 11:42:08 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Why vacuum?"
},
{
"msg_contents": "> Yes, postgresql requires vacuum quite often otherwise queries and\n> updates start taking ungodly amounts of time to complete. If you're\n> having problems because vacuum locks up your tables for too long\n> you might want to check out:\n\nBut why? I don't know of other databases that need to be 'vacuum'ed. Do\nall others just do it internaly on a regular basis?\n\nWhat am I missing here?\n\n\n\nb. palmer, [email protected]\npgp: www.crimelabs.net/bpalmer.pgp5\n\n\n",
"msg_date": "Wed, 13 Dec 2000 22:45:05 -0500 (EST)",
"msg_from": "bpalmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "On Wed, 13 Dec 2000, bpalmer wrote:\n\n> > Yes, postgresql requires vacuum quite often otherwise queries and\n> > updates start taking ungodly amounts of time to complete. If you're\n> > having problems because vacuum locks up your tables for too long\n> > you might want to check out:\n> \n> But why? I don't know of other databases that need to be 'vacuum'ed. Do\n> all others just do it internaly on a regular basis?\n> \n> What am I missing here?\n\nPgSQL's storage manager is currently such that it doesn't overwrite\n'deleted' records, but just keeps appending to the end of the table\n... so, for instance, a client of ours whose table had 5 records in it\nthat are updated *alot* grew a table to 64Meg that only contains ~8k worth\nof data ...\n\nvacuum'ng cleans out the cruft and truncates the file ...\n\nvadim, for v7.2, is planning on re-writing the storage manager to do\nproper overwriting of deleted space, which will reduce the requirement for\nvacuum to almost never ... \n\n",
"msg_date": "Wed, 13 Dec 2000 23:47:50 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "On Thu, 14 Dec 2000, Christopher Kings-Lynne wrote:\n\n> Plenty of other databases need to be 'vacuumed'. For instance, if you have\n> an ms access database with 5 MB of data in it, and then delete all the data,\n> leaving only the forms, etc - you will be left with a 5MB mdb file still!\n> \n> If you then run 'Compact Database' (which is another word for 'vacuum'), the\n> mdb file will be reduced down to 500k...\n\nOoh... Hope MS Access isn't going to be taken seriously as a benchmark\nhere :-). The same is also true of MapInfo, by the way, but I'm not\nholding that up as a benchmark either ;-).\n\n> Chris\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen [email protected]\nProximity Pty Ltd http://www.proximity.com.au/\n http://www4.tpg.com.au/users/rita_tim/\n\n",
"msg_date": "Thu, 14 Dec 2000 14:58:25 +1100 (EST)",
"msg_from": "Tim Allen <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Why vacuum?"
},
{
"msg_contents": "Tim Allen wrote:\n> \n> On Thu, 14 Dec 2000, Christopher Kings-Lynne wrote:\n> \n> > Plenty of other databases need to be 'vacuumed'. For instance, if you have\n> > an ms access database with 5 MB of data in it, and then delete all the data,\n> > leaving only the forms, etc - you will be left with a 5MB mdb file still!\n> >\n> > If you then run 'Compact Database' (which is another word for 'vacuum'), the\n> > mdb file will be reduced down to 500k...\n> \n> Ooh... Hope MS Access isn't going to be taken seriously as a benchmark\n> here :-). The same is also true of MapInfo, by the way, but I'm not\n> holding that up as a benchmark either ;-).\n\n:-)\n\nI think that the non-overwriting storage manager actually bought a lot\nmore for PostgreSQL than it does for MS Access.\n\nIn earlier versions of PostgreSQL it was possible to \"time travel\" your\ndatabase and so run your query agains the database as it was at a\nparticular time / date. This advanced feature turns out to be useful in\nvery few situations, and is very expensive in terms of storage.\n\nStill, \"if it works, don't fix it\" also applies. The PostgreSQL storage\nmanager is quite efficient as it is now, and most of us do have quiet\nperiods when we can safely vacuum the database, which is why it has had\nto wait until now.\n\nThis will be quite a big change for 7.2, and getting the performance\nright will no doubt challenge these hackers whom we are all greatly\nindebted to.\n\nCheers,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n",
"msg_date": "Thu, 14 Dec 2000 23:28:38 +1300",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "Hello,\n\nAnother question about vacuum. Will vacuum/drop/create deadlocks be fixed in \n7.0.x branch? That's really annoying. I cannot run vacuum automatically due \nto this. Just a patch will be really great. Is it so hard to fix?\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Thu, 14 Dec 2000 17:57:32 +0600",
"msg_from": "Denis Perchine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "El Mi� 13 Dic 2000 22:24, xuyifeng escribi�:\n> I have this nasty problem too, in early time, I don't know the problem,\n> but we used it for a while, than we found our table growing too fast\n> without insert any record( we use update), this behaviour most like M$\n> MSACCESS database I had used a long time ago which don't reuse deleted\n> record space and full fill your hard disk after several hours, the nasty\n> vaccum block any other users to operate on table, this is a big problem\n> for a large table, because it will block tooo long to let other user to run\n> query. we have a project affected by this problem, and sadly we decide to\n> use closure source database - SYBASE on linux, we havn't any other\n> selections. :(\n>\n> note that SYBASE and Informix both have 'update statistics' command, but\n> they run it fast in seconds, not block any other user, this is pretty. ya,\n> what's good technology!\n\nI have to say that 'update statistics' does not take a few seconds if the \ndatabases have grownto be a bit large. At least thats what I have seen.\n\nSaludos... :-)\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Thu, 14 Dec 2000 09:01:59 -0300",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "\"Martin A. Marques\" wrote:\n> \n> El Mi� 13 Dic 2000 16:41, bpalmer escribi�:\n> > I noticed the other day that one of my pg databases was slow, so I ran\n> > vacuum on it, which brought a question to mind: why the need? I looked\n> > at my oracle server and we aren't doing anything of the sort (that I can\n> > find), so why does pg need it? Any info?\n> \n> I know nothing about Oracle, but I can tell you that Informix has an update\n> statistics, which I don't know if it's similar to vacuum, but....\n> What vacuum does is clean the database from rows that were left during\n> updates and deletes, non the less, the tables get shrincked, so searches get\n> faster.\n> \n\nWhile I would like Postgres to perform statistics, one and a while, on\nit own. I like vacuum in general.\n\nI would rather trade unused disk space for performace. The last thing\nyou need during high loads is the database thinking that it is time to\nclean up.\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Thu, 14 Dec 2000 08:11:25 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> On Wed, 13 Dec 2000, bpalmer wrote:\n> \n> > > Yes, postgresql requires vacuum quite often otherwise queries and\n> > > updates start taking ungodly amounts of time to complete. If you're\n> > > having problems because vacuum locks up your tables for too long\n> > > you might want to check out:\n> >\n> > But why? I don't know of other databases that need to be 'vacuum'ed. Do\n> > all others just do it internaly on a regular basis?\n> >\n> > What am I missing here?\n> \n> PgSQL's storage manager is currently such that it doesn't overwrite\n> 'deleted' records, but just keeps appending to the end of the table\n> ... so, for instance, a client of ours whose table had 5 records in it\n> that are updated *alot* grew a table to 64Meg that only contains ~8k worth\n> of data ...\n> \n> vacuum'ng cleans out the cruft and truncates the file ...\n> \n> vadim, for v7.2, is planning on re-writing the storage manager to do\n> proper overwriting of deleted space, which will reduce the requirement for\n> vacuum to almost never ...\n\nI hope that he does it in a way that allows it to retain the old\nbehaviour \nfor some tables if there is need for it.\n\nAlso, vacuum and analyze should be separated (i.e. one should be able to \nanalyze a table without vacuuming it.) \n\nMaybe use \"ALTER TABLE/DATABASE UPDATE STATISTICS\" for VACUUM ANALYZE as\nsyntax.\n\nTime travel is/was an useful feature that is difficult to emulate\nefficiently \nusing \"other\" means like rules/triggers \n\n------------\nHannu\n",
"msg_date": "Thu, 14 Dec 2000 13:16:20 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "On Thu, Dec 14, 2000 at 01:16:20PM +0000, Hannu Krosing wrote:\n> The Hermit Hacker wrote:\n\n<snip>\n\n> > vadim, for v7.2, is planning on re-writing the storage manager to do\n> > proper overwriting of deleted space, which will reduce the requirement for\n> > vacuum to almost never ...\n> \n> I hope that he does it in a way that allows it to retain the old\n> behaviour \n> for some tables if there is need for it.\n\nHere as well. The framework is still mostly there for multiple storage\nmanagers: I hope Vadim takes advantage of it.\n\n<snip>\n\n> Time travel is/was an useful feature that is difficult to emulate\n> efficiently using \"other\" means like rules/triggers \n\nI've actually been doing this very thing this week. It's not _that_\nhoribble, but does interact really poorly with RI constraints: suddenly,\nall those unique PK columns aren't so unique! This is probably the biggest\nreason to do time travel in the backend. Having it on a per-table basis\nwould be cool.\n\nHmm, seems the biggest problem to doing it per table would be needing\na couple optional system attributes (e.g. tt_start and tt_stop),\nand different indices, that know how to skip not-current tuples. Extra\nsyntax to do queries at a particular time in the past would be nice, but\nnot an inital requirement.\n\nSounds like there's something in common here with the per tuple CRC\ndiscusson, as well as optional OID: a generic need for optional system\nattributes.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n",
"msg_date": "Thu, 14 Dec 2000 10:14:13 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "* mlw <[email protected]> [001214 09:30] wrote:\n> \"Martin A. Marques\" wrote:\n> > \n> > El Mi� 13 Dic 2000 16:41, bpalmer escribi�:\n> > > I noticed the other day that one of my pg databases was slow, so I ran\n> > > vacuum on it, which brought a question to mind: why the need? I looked\n> > > at my oracle server and we aren't doing anything of the sort (that I can\n> > > find), so why does pg need it? Any info?\n> > \n> > I know nothing about Oracle, but I can tell you that Informix has an update\n> > statistics, which I don't know if it's similar to vacuum, but....\n> > What vacuum does is clean the database from rows that were left during\n> > updates and deletes, non the less, the tables get shrincked, so searches get\n> > faster.\n> > \n> \n> While I would like Postgres to perform statistics, one and a while, on\n> it own. I like vacuum in general.\n> \n> I would rather trade unused disk space for performace. The last thing\n> you need during high loads is the database thinking that it is time to\n> clean up.\n\nEven worse is having to scan a file that has grown 20x the size\nbecause you havne't vacuum'd in a while.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Thu, 14 Dec 2000 09:38:55 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
}
]
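The maintenance routine this thread keeps referring to, as it would typically be run from cron during quiet hours; a short sketch (the table name is a placeholder):

    VACUUM;                  -- reclaim space held by deleted/updated tuples
    VACUUM ANALYZE;          -- also refresh the planner's statistics
    VACUUM VERBOSE mytable;  -- vacuum one table and report what was done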
[
{
"msg_contents": "For example\n\ncreate table testTable (\nid integer,\nname char(20)\n);\n\nan ASCII file format with field separator \"|\" is\n\n1|Hello|\n2|Again|\n......\n\nThere is a way to do this in Oracle, Sybase, Informix and MySQL.\nBut I want to know how to do this in PostgreSQL.\n\nPlease don't tell me use pg_dump, because it is not a correct answer for\n\nmy question!\n\nThank you!",
"msg_date": "Wed, 13 Dec 2000 16:17:31 -0500",
"msg_from": "Raymond Chui <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to import/export data from/to an ASCII file?"
},
{
"msg_contents": "Use\n\n COPY table FROM 'filename' USING DELIMITERS '|';\n\n - Thomas\n",
"msg_date": "Thu, 14 Dec 2000 04:34:43 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to import/export data from/to an ASCII file?"
},
{
"msg_contents": "> For example\n> \n> create table testTable (\n> id integer,\n> name char(20)\n> );\n> \n> an ASCII file format with field separator \"|\" is\n> \n> 1|Hello|\n> 2|Again|\n> ......\n> \n> There is a way to do this in Oracle, Sybase, Informix and MySQL.\n> But I want to know how to do this in PostgreSQL.\n> \n> Please don't tell me use pg_dump, because it is not a correct answer\n> for\n\nLook at the documentation for COPY\n\n\t\\h COPY\n\nQuestions like this should be posted to pgsql-general, not pgsql-\nhackers.\n--\nJoel Burton, Director of Information Systems -*- [email protected]\nSupport Center of Washington (www.scw.org)\n",
"msg_date": "Thu, 14 Dec 2000 15:14:48 -0500",
"msg_from": "\"Joel Burton\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to import/export data from/to an ASCII file?"
},
{
"msg_contents": "Try \\copy or copy commands in psql:\n\nI normally use \\copy for tab-delimited files.\nBut copy also works and has help....\n\n\\h copy\nCommand: COPY\nDescription: Copies data between files and tables\nSyntax:\nCOPY [ BINARY ] table [ WITH OIDS ]\n FROM { 'filename' | stdin }\n [ [USING] DELIMITERS 'delimiter' ]\n [ WITH NULL AS 'null string' ]\nCOPY [ BINARY ] table [ WITH OIDS ]\n TO { 'filename' | stdout }\n [ [USING] DELIMITERS 'delimiter' ]\n [ WITH NULL AS 'null string' ]\n\n\nRaymond Chui wrote:\n\n> For example\n> \n> create table testTable (\n> id integer,\n> name char(20)\n> );\n> \n> an ASCII file format with field separator \"|\" is\n> \n> 1|Hello|\n> 2|Again|\n> .......\n> \n> There is a way to do this in Oracle, Sybase, Informix and MySQL.\n> But I want to know how to do this in PostgreSQL.\n> \n> Please don't tell me use pg_dump, because it is not a correct answer for\n> \n> my question!\n> \n> Thank you!\n\n",
"msg_date": "Fri, 15 Dec 2000 02:54:08 GMT",
"msg_from": "Josh Rovero <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to import/export data from/to an ASCII file?"
}
]
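Putting Thomas's answer together with Raymond's example table; one caveat: COPY expects exactly one delimiter between fields and none at the end of a line, so the sample rows would need to be '1|Hello' rather than '1|Hello|'. The file paths below are placeholders, and server-side paths are resolved on the machine running the backend:

    COPY testTable FROM '/tmp/data.txt' USING DELIMITERS '|';   -- import
    COPY testTable TO '/tmp/export.txt' USING DELIMITERS '|';   -- export

As Josh notes, psql's \copy variant does the same thing client-side, reading and writing files on the client machine instead.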
[
{
"msg_contents": "Hi,\n\t\tWhere can I find documentation on WAL, TOAST and how to configure the \npg_hda.conf file?\n\nSaludos... ;-)\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n\n",
"msg_date": "Wed, 13 Dec 2000 18:51:28 -0300",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "docs"
}
]
[
{
"msg_contents": "I've been looking into Brian Hirt's complaint that 7.0.3 and 7.1 are\nlots slower than 7.0.2 in planning big joins. The direct cause is that\nsince we now deduce implied equality clauses, the system has more\npotential join paths than it used to --- for example, given \"WHERE\na.x = b.y AND b.y = c.z\", the old system would not consider joining a to\nc and then adding b, because it didn't have a joinqual relating a and c.\nNow it does. There's not a lot to be done about that, but I've been\nlooking to see if we can make any offsetting speedups.\n\nWhile digging around, I've realized that the planner is still carrying\naround a lot more paths than it really needs to. The rule in add_path()\nis that we will keep a path if it is cheaper OR differently sorted/\nbetter sorted than any other path for the same rel. But what is not\ntaken into account is whether the sort ordering of a path actually has\nany potential usefulness. Before saving a path on the grounds that it's\ngot an otherwise unobtainable sort ordering, we should check to see if\nthat sort ordering is really going to be useful for a later mergejoin\nor for the final output ordering. It turns out we already have code to\ncheck that for basic indexscan paths --- see useful_for_mergejoin() and\nuseful_for_ordering() in indxpath.c. But we fail to make the same sort\nof test on paths for join relations, with the result that we carry along\na lot more paths than we could possibly need, and that costs huge\namounts of time.\n\nAn example of what's going on is that given a query with\n\tFROM a, b, ... other rels ...\n\tWHERE a.w = 1 AND a.x = 2 AND b.y = 3 AND b.z = 4 ...\nif w,x,y,z all have indexes we will consider indexscans on all four\nof those indexes. Which is fine. But we will then consider nestloops\nand mergejoins of a with b that use these four indexscans as the outer\npath, and therefore yield results that are sorted by w,x,y,z\nrespectively. Those paths will be carried as possible paths for a+b\nbecause they offer different sort orders, even if we have no further\nuse for those sort orderings. And then we have a combinatorial blowup\nin the number of paths considered at higher join levels. We should\ninstead consider that these paths have no useful sort ordering, and\nthrow away all but the cheapest.\n\nWhat I'm thinking of doing is truncating the recorded pathkeys of a path\nat the first sortkey that's not useful for either a higher-level\nmergejoin clause or the requested final output sort ordering. Then the\nlogic inside add_path() wouldn't change, but it would only be considering\nuseful pathkeys and not useless ones.\n\nI'm trying to resist the temptation to make this change right now :-).\nIt's not quite a bug fix --- well, maybe you could call it a performance\nbug fix --- so I'm kind of thinking it shouldn't be done during beta.\nOTOH I seem to have lost the argument that Vadim shouldn't commit VACUUM\nperformance improvements during beta, so maybe this should go in too.\nWhat do you think?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2000 18:14:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Idea for reducing planning time"
},
{
"msg_contents": "* Tom Lane <[email protected]> [001213 15:18] wrote:\n> \n> I'm trying to resist the temptation to make this change right now :-).\n> It's not quite a bug fix --- well, maybe you could call it a performance\n> bug fix --- so I'm kind of thinking it shouldn't be done during beta.\n> OTOH I seem to have lost the argument that Vadim shouldn't commit VACUUM\n> performance improvements during beta, so maybe this should go in too.\n> What do you think?\n\nIf you're saying that you're OK with the work Vadim has done please\nlet him know, I'm assuming he hasn't committed out of respect for your\nstill standing objection.\n\nIf you're terribly against it then say so again, I just would rather\nit not happen because you objected rather than missed communication.\n\nAs far as the work you're proposing, how much of a gain is it over\nthe current code? 2x? 3x? 20x? :) There's a difference between a\nslight performance increase and something too good to pass up.\n\nthanks,\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Wed, 13 Dec 2000 16:04:34 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Idea for reducing planning time"
},
{
"msg_contents": "\nsorry, meant to respond to the original and deleted it too fast ... \n\nTom, if the difference between 7.0 and 7.1 is such that there is a\nperformance decrease, *please* apply the fix ... with the boon that OUTER\nJOINs will provide, would hate to see us with a performance hit reducing\nthat impact ...\n\nOne thing I would like to suggest for this stage of the beta, though, is\nthat a little 'peer review' before committing the code might be something\nthat would help 'ease' implementing stuff like this and Vadim's VACUUM\ncode ... read through Vadim's code and see if it looks okay to you ... get\nVadim to read through your code/patch and see if it looks okay to him\n... it adds a day or two to the commit cycle, but at least you can say it\nwas reviewed before committed ...\n\n\nOn Wed, 13 Dec 2000, Alfred Perlstein wrote:\n\n> * Tom Lane <[email protected]> [001213 15:18] wrote:\n> > \n> > I'm trying to resist the temptation to make this change right now :-).\n> > It's not quite a bug fix --- well, maybe you could call it a performance\n> > bug fix --- so I'm kind of thinking it shouldn't be done during beta.\n> > OTOH I seem to have lost the argument that Vadim shouldn't commit VACUUM\n> > performance improvements during beta, so maybe this should go in too.\n> > What do you think?\n> \n> If you're saying that you're OK with the work Vadim has done please\n> let him know, I'm assuming he hasn't committed out of respect for your\n> still standing objection.\n> \n> If you're terribly against it then say so again, I just would rather\n> it not happen because you objected rather than missed communication.\n> \n> As far as the work you're proposing, how much of a gain is it over\n> the current code? 2x? 3x? 20x? :) There's a difference between a\n> slight performance increase and something too good to pass up.\n> \n> thanks,\n> -- \n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \"I have the heart of a child; I keep it in a jar on my desk.\"\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 13 Dec 2000 20:21:58 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Idea for reducing planning time"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> If you're saying that you're OK with the work Vadim has done please\n> let him know, I'm assuming he hasn't committed out of respect for your\n> still standing objection.\n\nWell, I'm still against committing it now, but I only have one core\nvote, and I seem to be losing 3:1. I know when to concede ;-)\n\n> As far as the work you're proposing, how much of a gain is it over\n> the current code? 2x? 3x? 20x? :) There's a difference between a\n> slight performance increase and something too good to pass up.\n\nHard to tell without doing the work. But we already know that extra\npaths inside the planner pose a combinatorial penalty --- think\nexponential behavior, not linear speedups...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2000 19:46:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Idea for reducing planning time "
},
{
"msg_contents": "> What I'm thinking of doing is truncating the recorded pathkeys of a path\n> at the first sortkey that's not useful for either a higher-level\n> mergejoin clause or the requested final output sort ordering. Then the\n> logic inside add_path() wouldn't change, but it would only be considering\n> useful pathkeys and not useless ones.\n\n[ Sorry for the delay in replying. I was talking at Compaq.]\n\nOK, here is my idea. Do the patch and post it to the hackers list so\npeople can see the scope of the change. Then, if no one objects, apply\nit. We can always back it out, because no one else will be playing in\nthe optimizer. We can even back it out of a minor release if we find it\nis a problem. Seems we don't want reports that queries are _slower_\nthan 7.0.X, and I can see how it would happen.\n\nMy quick question is that if we have a1=b1 and b1=c1, isn't the join\nsorted by a1, b1, and c1, and threfore we don't get more sorted plans?\n\n> \n> I'm trying to resist the temptation to make this change right now :-).\n> It's not quite a bug fix --- well, maybe you could call it a performance\n> bug fix --- so I'm kind of thinking it shouldn't be done during beta.\n> OTOH I seem to have lost the argument that Vadim shouldn't commit VACUUM\n> performance improvements during beta, so maybe this should go in too.\n> What do you think?\n\nDid you lose the argument with Vadim? I haven't seen his vacuum commit\nyet, though I certainly would like to. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 15 Dec 2000 13:32:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Idea for reducing planning time"
},
{
"msg_contents": "> \n> sorry, meant to respond to the original and deleted it too fast ... \n> \n> Tom, if the difference between 7.0 and 7.1 is such that there is a\n> performance decrease, *please* apply the fix ... with the boon that OUTER\n> JOINs will provide, would hate to see us with a performance hit reducing\n> that impact ...\n> \n> One thing I would like to suggest for this stage of the beta, though, is\n> that a little 'peer review' before committing the code might be something\n> that would help 'ease' implementing stuff like this and Vadim's VACUUM\n> code ... read through Vadim's code and see if it looks okay to you ... get\n> Vadim to read through your code/patch and see if it looks okay to him\n> ... it adds a day or two to the commit cycle, but at least you can say it\n> was reviewed before committed ...\n> \n\nTotally agree. In the old days, we posted all our patches to the list\nso people could see. We used to make cvs commits only on the main\nserver, so we had the patch handy, and it made sense to post it. Now\nthat we have remote cvs, we don't do it as much, but in this case, cvs\ndiff -c is a big help.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 15 Dec 2000 13:34:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Idea for reducing planning time"
},
{
"msg_contents": "* Bruce Momjian <[email protected]> [001215 10:34] wrote:\n> > \n> > sorry, meant to respond to the original and deleted it too fast ... \n> > \n> > Tom, if the difference between 7.0 and 7.1 is such that there is a\n> > performance decrease, *please* apply the fix ... with the boon that OUTER\n> > JOINs will provide, would hate to see us with a performance hit reducing\n> > that impact ...\n> > \n> > One thing I would like to suggest for this stage of the beta, though, is\n> > that a little 'peer review' before committing the code might be something\n> > that would help 'ease' implementing stuff like this and Vadim's VACUUM\n> > code ... read through Vadim's code and see if it looks okay to you ... get\n> > Vadim to read through your code/patch and see if it looks okay to him\n> > ... it adds a day or two to the commit cycle, but at least you can say it\n> > was reviewed before committed ...\n> > \n> \n> Totally agree. In the old days, we posted all our patches to the list\n> so people could see. We used to make cvs commits only on the main\n> server, so we had the patch handy, and it made sense to post it. Now\n> that we have remote cvs, we don't do it as much, but in this case, cvs\n> diff -c is a big help.\n\nIt seems that Tom has committed his fixups but we're still waiting\non Vadim?\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n",
"msg_date": "Fri, 15 Dec 2000 10:57:58 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Idea for reducing planning time"
},
{
"msg_contents": "On Fri, 15 Dec 2000, Alfred Perlstein wrote:\n\n> * Bruce Momjian <[email protected]> [001215 10:34] wrote:\n> > > \n> > > sorry, meant to respond to the original and deleted it too fast ... \n> > > \n> > > Tom, if the difference between 7.0 and 7.1 is such that there is a\n> > > performance decrease, *please* apply the fix ... with the boon that OUTER\n> > > JOINs will provide, would hate to see us with a performance hit reducing\n> > > that impact ...\n> > > \n> > > One thing I would like to suggest for this stage of the beta, though, is\n> > > that a little 'peer review' before committing the code might be something\n> > > that would help 'ease' implementing stuff like this and Vadim's VACUUM\n> > > code ... read through Vadim's code and see if it looks okay to you ... get\n> > > Vadim to read through your code/patch and see if it looks okay to him\n> > > ... it adds a day or two to the commit cycle, but at least you can say it\n> > > was reviewed before committed ...\n> > > \n> > \n> > Totally agree. In the old days, we posted all our patches to the list\n> > so people could see. We used to make cvs commits only on the main\n> > server, so we had the patch handy, and it made sense to post it. Now\n> > that we have remote cvs, we don't do it as much, but in this case, cvs\n> > diff -c is a big help.\n> \n> It seems that Tom has committed his fixups but we're still waiting\n> on Vadim?\n\nWe can't force Vadim to commit them ... only encourage him to :)\n\n\n",
"msg_date": "Fri, 15 Dec 2000 14:01:12 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Idea for reducing planning time"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> My quick question is that if we have a1=b1 and b1=c1, isn't the join\n> sorted by a1, b1, and c1, and threfore we don't get more sorted plans?\n\nThat's not the issue. See, before the transitive-equality patch,\nif you wrote\n\n\tselect * from a,b,c where a.x = b.y and b.y = c.z\n\nthen the system would not consider plans of the structure {a c} b\n(ie, start with a join of a to c and then add b). It would only\nconsider {a b} c and {b c} a plans, because it follows join clauses\nif any are available.\n\nNow that we deduce the additional WHERE clause a.x = c.z, we will\nalso consider {a c} b. So we may get a better plan as a result ...\nbut whether we give back the same plan or not, it takes longer,\nbecause more paths will be considered. Brian Hirt was unhappy because\nplanning was taking significantly longer on his seven-way join.\n\n\n> Did you lose the argument with Vadim? I haven't seen his vacuum commit\n> yet, though I certainly would like to. :-)\n\nI'm assuming he's going to commit it, though I haven't seen him say so\nin so many words.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2000 14:06:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Idea for reducing planning time "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Totally agree. In the old days, we posted all our patches to the list\n> so people could see. We used to make cvs commits only on the main\n> server, so we had the patch handy, and it made sense to post it. Now\n> that we have remote cvs, we don't do it as much, but in this case, cvs\n> diff -c is a big help.\n\nIf you like I'll post the patch, but it strikes me as a waste of list\nbandwidth --- anyone who is likely to actually review it is perfectly\ncapable of doing cvs diff for themselves ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2000 14:08:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Idea for reducing planning time "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Totally agree. In the old days, we posted all our patches to the list\n> > so people could see. We used to make cvs commits only on the main\n> > server, so we had the patch handy, and it made sense to post it. Now\n> > that we have remote cvs, we don't do it as much, but in this case, cvs\n> > diff -c is a big help.\n> \n> If you like I'll post the patch, but it strikes me as a waste of list\n> bandwidth --- anyone who is likely to actually review it is perfectly\n> capable of doing cvs diff for themselves ...\n\nPosting patch is only useful if you want people to review it. They are\nmore likely to if you send it to patches, already diff'ed. In fact, how\ndo you pull out a patch set from CVS? You have to use -D and specify a\ndate/time range, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 15 Dec 2000 14:27:10 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Idea for reducing planning time"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> If you like I'll post the patch, but it strikes me as a waste of list\n>> bandwidth --- anyone who is likely to actually review it is perfectly\n>> capable of doing cvs diff for themselves ...\n\n> Posting patch is only useful if you want people to review it. They are\n> more likely to if you send it to patches, already diff'ed. In fact, how\n> do you pull out a patch set from CVS? You have to use -D and specify a\n> date/time range, right?\n\nThat's a good point --- there doesn't seem to be any real easy way of\nextracting a set of changes to different files except to use -D. And\neven that doesn't do anything to separate unrelated patches applied at\nabout the same time.\n\nFor the record, you can get a diff of this kind with a command like\n\n\tcvs diff -c -D '2000-12-14 17:00' -D '2000-12-14 18:00'\n\nexecuted in the top level of the tree you want to search.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2000 14:45:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Idea for reducing planning time "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> If you like I'll post the patch, but it strikes me as a waste of list\n> >> bandwidth --- anyone who is likely to actually review it is perfectly\n> >> capable of doing cvs diff for themselves ...\n> \n> > Posting patch is only useful if you want people to review it. They are\n> > more likely to if you send it to patches, already diff'ed. In fact, how\n> > do you pull out a patch set from CVS? You have to use -D and specify a\n> > date/time range, right?\n> \n> That's a good point --- there doesn't seem to be any real easy way of\n> extracting a set of changes to different files except to use -D. And\n> even that doesn't do anything to separate unrelated patches applied at\n> about the same time.\n> \n> For the record, you can get a diff of this kind with a command like\n> \n> \tcvs diff -c -D '2000-12-14 17:00' -D '2000-12-14 18:00'\n> \n> executed in the top level of the tree you want to search.\n\nShame there isn't a -u (user) option supported by diff.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 15 Dec 2000 15:07:46 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Idea for reducing planning time"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> If you like I'll post the patch, but it strikes me as a waste of list\n> >> bandwidth --- anyone who is likely to actually review it is perfectly\n> >> capable of doing cvs diff for themselves ...\n> \n> > Posting patch is only useful if you want people to review it. They are\n> > more likely to if you send it to patches, already diff'ed. In fact, how\n> > do you pull out a patch set from CVS? You have to use -D and specify a\n> > date/time range, right?\n> \n> That's a good point --- there doesn't seem to be any real easy way of\n> extracting a set of changes to different files except to use -D. And\n> even that doesn't do anything to separate unrelated patches applied at\n> about the same time.\n> \n> For the record, you can get a diff of this kind with a command like\n> \n> \tcvs diff -c -D '2000-12-14 17:00' -D '2000-12-14 18:00'\n> \n> executed in the top level of the tree you want to search.\n> \n> \t\t\tregards, tom lane\n\nTom, do you have a plan to make a back patch for 7.0.3? I got a bug\nreport from a user with a script to reproduce the problem. Seems the\nbackend consumes infinite memory.\n\nI don't want to recommend her to use 7.0.2 since it has a merge join\nproblem...\n--\nTatsuo Ishii",
"msg_date": "Tue, 27 Feb 2001 18:55:04 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Idea for reducing planning time "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Tom, do you have a plan to make a back patch for 7.0.3?\n\nNo, I don't. No time for it now.\n\n> I got a bug report from a user with a script to reproduce the\n> problem. Seems the backend consumes infinite memory.\n\nNot infinite, surely ;-) ... but possibly more than her kernel will\nallow. As a workaround, I'd suggest setting geqo_threshold smaller,\nperhaps 8 or 9. IIRC, the way to do that in 7.0 is\n\tset geqo='on=8';\nNot sure if it's possible to set up a system-wide default for that\nin 7.0.\n\nBTW, the main reason planning this join in 7.0 is so slow is that\nall the options look exactly alike and so the planner has no reason to\ndiscard any paths. As soon as you create some indexes, load up some\ndata and VACUUM, the symmetry would be broken and performance should\nimprove (geqo or not). Or at least it'd usually be broken. Is it\npossible that all her tables are exactly the same size?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Feb 2001 10:45:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Idea for reducing planning time "
},
{
"msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > Tom, do you have a plan to make a back patch for 7.0.3?\n> \n> No, I don't. No time for it now.\n> \n> > I got a bug report from a user with a script to reproduce the\n> > problem. Seems the backend consumes infinite memory.\n> \n> Not infinite, surely ;-) ... but possibly more than her kernel will\n> allow. As a workaround, I'd suggest setting geqo_threshold smaller,\n> perhaps 8 or 9. IIRC, the way to do that in 7.0 is\n> \tset geqo='on=8';\n> Not sure if it's possible to set up a system-wide default for that\n> in 7.0.\n\nYes, I thought about geqo too. However, a standard geqo settings\ndidn't help. It still took 0:49 (7.0.2, 7.1 takes only ~3 seconds).\n\nThen I set:\n\n Pool_Size 128\n Generations 10\n\nand now the query takes only 5 seconds!\n\n> BTW, the main reason planning this join in 7.0 is so slow is that\n> all the options look exactly alike and so the planner has no reason to\n> discard any paths. As soon as you create some indexes, load up some\n> data and VACUUM, the symmetry would be broken and performance should\n> improve (geqo or not). Or at least it'd usually be broken. Is it\n> possible that all her tables are exactly the same size?\n\nYes. t_cyuubunrui has four rows, and the others has only a row.\n--\nTatsuo Ishii\n \n",
"msg_date": "Wed, 28 Feb 2001 22:00:05 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Idea for reducing planning time "
}
]
|
[
{
"msg_contents": "Hi.\n\nI need a little help on the format of the postgres tables.\n\nI've got this wonderfully corrupted database where just about everything is\nfubar. I've tried a number of things to get it back using postgres and\nrelated tools with no success. It looks like most of the data is there, but\nthere may be a small amount of corruption that's causing all kinds of\nproblems.\n\nI've broken down and begin development of a tool to allow examination of the\ndata within the table files. This could actually be useful for recovering\nand undoing changes (or at least until the row-reuse code goes into\nproduction).\n\nI've been hacking the file format and trying to find stuff in the source and\ndocs as much as possible, but here goes...\n\na) tuples cannot span multiple pages (yet).\nb) the data is not platform independant??? Ie the data from a sun looks\ndifferent from an intel?\n\nFor every page, I see that the first 2 words are for the end of the tuple\npointers and the beginning of the tuple data.\n\nWhat are the next 2 words used for? In all my cases they appear to be set to\n0x2000.\n\nFollowing that I find the 2 word tuple pointers.\nThe first is the transactionid that, if comitted gives this tuple\nvisibility???\nThe second word appears to be the offset in the page where the tuple can be\nfound.\n\nAre these tuple pointers always stored in order of last to first? Or should\nI be loading and sorting them according to offset?\n\nNow on to the tuple data... I have my tool to the point where it extracts\nall the tuple data from the table, but I haven't been able to find the place\nin the postgres source that explains the format. I assume a tuple contains a\nnumber of attributes (referencing pg_attribute). Those not found in the\ntuple would be assumed to be NULL.\n\nSince I'm ignoring transaction ids right now, I'm planning on extracting all\nthe tuple and ordering them by oid so you can see all the comitted and\nuncomitted changes. I may even make it look good once I've recovered my\ndata...\n\n-Michael\n\n",
"msg_date": "Wed, 13 Dec 2000 18:43:38 -0500",
"msg_from": "\"Michael Richards\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Table File Format"
}
]
|
[
{
"msg_contents": " I need a little help on the format of the postgres tables.\n\n I've got this wonderfully corrupted database where just about everything is\n fubar. I've tried a number of things to get it back using postgres and\n related tools with no success. It looks like most of the data is there, but\n there may be a small amount of corruption that's causing all kinds of\n problems.\n\n I've broken down and begin development of a tool to allow examination of\nthe\n data within the table files. This could actually be useful for recovering\n and undoing changes (or at least until the row-reuse code goes into\n production).\n\n I've been hacking the file format and trying to find stuff in the source\nand\n docs as much as possible, but here goes...\n\n a) tuples cannot span multiple pages (yet).\n b) the data is not platform independant??? Ie the data from a sun looks\n different from an intel?\n\n For every page, I see that the first 2 words are for the end of the tuple\n pointers and the beginning of the tuple data.\n\n What are the next 2 words used for? In all my cases they appear to be set\nto\n 0x2000.\n\n Following that I find the 2 word tuple pointers.\nThe first word appears to be the offset in the page where the tuple can be\nfound but the MSB has to be stripped off (haven't found it's function in the\nsource yet).\nThe second is the transactionid that, if comitted gives this tuple\nvisibility???\n\nAre these tuple pointers always stored in order of last to first? Or should\nI be loading and sorting them according to offset?\n\n Now on to the tuple data... I have my tool to the point where it extracts\nall the tuple data from the table, but I haven't been able to find the place\nin the postgres source that explains the format. I assume a tuple contains a\nnumber of attributes (referencing pg_attribute). Those not found in the\ntuple would be assumed to be NULL.\n\n Since I'm ignoring transaction ids right now, I'm planning on extracting\nall\nthe tuple and ordering them by oid so you can see all the comitted and\nuncomitted changes. I may even make it look good once I've recovered my\ndata...\n\n -Michael\n\n\n\n",
"msg_date": "Wed, 13 Dec 2000 19:01:26 -0500",
"msg_from": "\"Michael Richards\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "(Updated) Table File Format"
},
{
"msg_contents": "\"Michael Richards\" <[email protected]> writes:\n> Following that I find the 2 word tuple pointers.\n> The first word appears to be the offset in the page where the tuple can be\n> found but the MSB has to be stripped off (haven't found it's function in the\n> source yet).\n> The second is the transactionid that, if comitted gives this tuple\n> visibility???\n\nNo, offset and length --- there is also a 2-bit flags field. Look at\nthe page and item declarations in src/include/storage/\n\nSomeone else was recently working on a bit-level dump tool, but I've\nforgotten who.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Dec 2000 00:44:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (Updated) Table File Format "
},
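For readers following along without a source tree, here is a hedged C sketch of the declarations Tom points at, modeled on the bufpage.h/itemid.h of that era (treat the exact field widths as approximate, from memory, rather than authoritative). It also accounts for the two mystery words: when a page has no special space, pd_special points at the page end, and 0x2000 is simply 8192, the page size.

    typedef unsigned short uint16;

    typedef struct ItemIdData       /* one line pointer, 4 bytes */
    {
        unsigned    lp_off:15,      /* byte offset of the tuple in the page;
                                     * read as a 16-bit word, the top bit
                                     * belongs to the flags -- hence the
                                     * "strip the MSB" observation */
                    lp_flags:2,     /* the 2-bit flags field (used/deleted) */
                    lp_len:15;      /* length of the tuple in bytes */
    } ItemIdData;

    typedef struct PageHeaderData   /* start of every 8K page */
    {
        uint16      pd_lower;       /* offset to start of free space */
        uint16      pd_upper;       /* offset to end of free space */
        uint16      pd_special;     /* start of special space; 0x2000 = 8192,
                                     * i.e. the page end, when there is none */
        uint16      pd_opaque;      /* page size information; also 0x2000 */
        ItemIdData  pd_linp[1];     /* line pointer array grows from here */
    } PageHeaderData;

Line pointers are allocated forward from the header while tuple data grows backward from the page end, so pointer order and physical offset order need not agree; sorting by lp_off, as Michael suspects, is the safer choice for a dump tool.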
{
"msg_contents": "Michael Richards wrote:\n> \n> I need a little help on the format of the postgres tables.\n> \n> I've got this wonderfully corrupted database where just about everything is\n> fubar. I've tried a number of things to get it back using postgres and\n> related tools with no success. It looks like most of the data is there, but\n> there may be a small amount of corruption that's causing all kinds of\n> problems.\n\nFind attached a python script that I used to get deleted (actually all\n;) \nrecords from a database table.\n\nIt was not corrupted, just a simple programming error in client software \nhad deleted more than needed. \n\nFortunately it was not vacuumed so the data itself (info for web-based\npaper \npostcard sending system) was there \n\nIt works as-is only for my table as the field extraction code is\nhard-coded, but \nit should be quite easy to modify for your needs\n\nIt worked 1 year ago probably on 6.4.x . I hope that the structure had\nnot \nchanged since.\n\nsendcard.py is the actual script used, pgtabdump.py is a somewhat\ncleaned-up version\n\n\n---------------\nHannu",
"msg_date": "Thu, 14 Dec 2000 13:01:46 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (Updated) Table File Format"
},
{
"msg_contents": "Okay,\n\nWhere would I find a definition of the tuple data? I didn't see anything\npromising in include/storage?\n\nI've found a definition for the page inside pagebuf.h That clears up all the\npage stuff. I'm still having a little trouble decoding the tuple data\nwithin. Hannu Krosing sent me a python script to do the extract, but having\nnever seen a line of Python before in my life, I'm having a little trouble\nwith the actual tuple data. I can see where the actual transaction\nvisibility info is in the tuple data, but the actual data... nope. My\nprogram (c++) is at the point where it will create tuple objects for every\nblock of \"tuple\" data within the page.\n\nthanks\n-Michael\n\n----- Original Message -----\nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Michael Richards\" <[email protected]>\nCc: <[email protected]>\nSent: Thursday, December 14, 2000 12:44 AM\nSubject: Re: [HACKERS] (Updated) Table File Format\n\n\n> \"Michael Richards\" <[email protected]> writes:\n> > Following that I find the 2 word tuple pointers.\n> > The first word appears to be the offset in the page where the tuple can\nbe\n> > found but the MSB has to be stripped off (haven't found it's function in\nthe\n> > source yet).\n> > The second is the transactionid that, if comitted gives this tuple\n> > visibility???\n>\n> No, offset and length --- there is also a 2-bit flags field. Look at\n> the page and item declarations in src/include/storage/\n>\n> Someone else was recently working on a bit-level dump tool, but I've\n> forgotten who.\n>\n> regards, tom lane\n\n",
"msg_date": "Thu, 14 Dec 2000 10:55:32 -0500",
"msg_from": "\"Michael Richards\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: (Updated) Table File Format "
},
{
"msg_contents": "Michael Richards wrote:\n> \n> Okay,\n> \n> Where would I find a definition of the tuple data? I didn't see anything\n> promising in include/storage?\n> \n> I've found a definition for the page inside pagebuf.h That clears up all the\n> page stuff. I'm still having a little trouble decoding the tuple data\n> within. Hannu Krosing sent me a python script to do the extract, but having\n> never seen a line of Python before in my life, I'm having a little trouble\n> with the actual tuple data. I can see where the actual transaction\n> visibility info is in the tuple data, but the actual data... nope. My\n> program (c++) is at the point where it will create tuple objects for every\n> block of \"tuple\" data within the page.\n\nIIRC, the data field format for individual fields is the same as defined\nin \nthe back-end/front-end protocol for binary cursors.\n\nif there are any NULL fields in the record then there is a flag\nsomewhere in \nthe tuple header and a bitmap of N*32 bits (N=no_of_fields/32) for\nmissing .\nIt is possible that there is no flag and you must deduce the presence of \nbitmap from the tuple-header length, im not sure which way it was.\n\nThe actual fields in a table and their order must be extracted from\npg_class \nand pg_attribute tables.\n\n------------\nHannu\n",
"msg_date": "Fri, 15 Dec 2000 17:34:06 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (Updated) Table File Format"
}
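On the detail Hannu is unsure about: there is indeed an explicit flag in that era's htup.h, so a dump tool does not have to infer the bitmap from the header length. A hedged C sketch follows (names follow HeapTupleHeaderData, but only the pieces a dump tool needs are shown; the on-disk bitmap may be padded beyond the minimal (t_natts + 7) / 8 bytes, so rely on t_hoff for the true header size):

    #include <stdbool.h>

    typedef short int16;
    typedef unsigned short uint16;
    typedef unsigned char uint8;

    #define HEAP_HASNULL  0x0001    /* t_infomask bit: null bitmap present */

    typedef struct TupleHeaderSketch
    {
        /* xmin, xmax, cmin, cmax, ctid and oid precede these fields */
        int16       t_natts;        /* number of attributes in the tuple */
        uint16      t_infomask;     /* flag bits, including HEAP_HASNULL */
        uint8       t_hoff;         /* offset of user data from tuple start */
        uint8       t_bits[1];      /* null bitmap, at least
                                     * (t_natts + 7) / 8 bytes, present only
                                     * when HEAP_HASNULL is set */
    } TupleHeaderSketch;

    /* attno counts from 0; a clear bit means the attribute is NULL */
    static bool
    att_is_null(const TupleHeaderSketch *tup, int attno)
    {
        if (!(tup->t_infomask & HEAP_HASNULL))
            return false;           /* no bitmap: the tuple has no NULLs */
        return (tup->t_bits[attno >> 3] & (1 << (attno & 7))) == 0;
    }

Field values then start at t_hoff, laid out in pg_attribute order with each value aligned and sized per its type, which is why the column list really does have to come from pg_class and pg_attribute, as Hannu says.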
]
|
[
{
"msg_contents": "> I still don't see how dirty reads can solve the RI problems.\n> If Xact A deletes a PK while Xact B inserts an FK, one of\n> them will either see the new reference or the PK gone. But\n> from a transactional POV it depends on if the opposite Xact\n> finally commits or not to tell if that really happened.\n> \n> With dirty read, you only get \"maybe my PK is gone\" or \"maybe\n> there is a reference\".\n\nYes, and so we'll write special function(s) to check was PK really\ngone/FK inserted or not. This funcs will call\nXactLockTableWait(t_xmin|t_xmax) for questionable tuple to wait for\nconcurrent transaction commit/rollback. It will work as long as we\ncall RI triggers *after* INSERT/UPDATE/DELETE op, so triggers can\nsee concurrent changes with dirty reads.\n\nVadim\n",
"msg_date": "Wed, 13 Dec 2000 16:14:17 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: 7.0.3(nofsync) vs 7.1"
}
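A hedged pseudo-C sketch of what one such check function could look like (every helper name below is invented for illustration; only XactLockTableWait is named in the message). The point is the wait-and-recheck loop around an in-doubt tuple seen under a dirty read:

    #include <stdbool.h>

    typedef unsigned TransactionId;
    typedef struct Tuple { TransactionId xmin; } Tuple;

    /* Hypothetical helpers standing in for real backend routines: */
    extern Tuple *dirty_scan_next(void);          /* sees uncommitted tuples */
    extern void   dirty_scan_restart(void);
    extern bool   xact_is_in_progress(TransactionId xid);
    extern bool   tuple_is_committed(Tuple *t);
    extern void   XactLockTableWait(TransactionId xid);

    /* After deleting a PK: does some (possibly uncommitted) FK reference it? */
    static bool
    fk_reference_exists(void)
    {
        Tuple *t;

        while ((t = dirty_scan_next()) != NULL)
        {
            if (xact_is_in_progress(t->xmin))
            {
                XactLockTableWait(t->xmin);   /* sleep until that xact ends */
                dirty_scan_restart();         /* then look again */
                continue;
            }
            if (tuple_is_committed(t))
                return true;                  /* a live reference exists */
        }
        return false;
    }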
]
|
[
{
"msg_contents": "I seem to miss the announce of beta testing of 7.1. Could someone\nplease give me a copy of the announcement?\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 14 Dec 2000 10:00:45 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Beta1 starting date?"
},
{
"msg_contents": "\nbeta1 was very low key ... it was announced here on the list as \"its\npackaged, try it out\" ... there was no big hype about this one, but there\nwill be for beta2, which will most likely be after Vadim gets those vacuum\nfixes in place, and Tom gets those planner fixes ...\n\nOn Thu, 14 Dec 2000, Tatsuo Ishii wrote:\n\n> I seem to miss the announce of beta testing of 7.1. Could someone\n> please give me a copy of the announcement?\n> --\n> Tatsuo Ishii\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 13 Dec 2000 21:35:35 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beta1 starting date?"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> beta1 was very low key ... it was announced here on the list as \"its\n> packaged, try it out\" ... there was no big hype about this one, but there\n> will be for beta2, which will most likely be after Vadim gets those vacuum\n> fixes in place, and Tom gets those planner fixes ...\n\nSince there are going to already be a number of fixes in beta2, I will\nwait until beta2 to release any RPM's. I am also continuing to get\nfeedback about the packaging -- many thanks to all who are participating\nin that discussion! I remember the one or two comments I got on the 7.0\nRPM changes (from 6.5), and the change is positive this time.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 15 Dec 2000 10:59:14 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beta1 starting date?"
},
{
"msg_contents": "> Since there are going to already be a number of fixes in beta2, I will\n> wait until beta2 to release any RPM's. I am also continuing to get\n> feedback about the packaging -- many thanks to all who are participating\n> in that discussion! I remember the one or two comments I got on the 7.0\n> RPM changes (from 6.5), and the change is positive this time.\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n\nI would like to make a comment on the stop script in the RPMS. It uses\nkillproc function to kill postmaster. killproc sends SIGTERM to stop\npostmaster. It's good. However if postmaster won't die (for example,\nsome users are still using psql), it then sends SIGKILL signal to\npostmaster. This is extremely dangerous since there is no chance for\npostmaster to clean up resources in this case. I think the signal\nnumber should be SIGINT (fast shutdown) or SIGQUIT (immediate\nshutdown).\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 30 Dec 2000 16:41:11 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Beta1 starting date?"
}
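For illustration, a minimal C sketch of the escalation Tatsuo recommends (the real change belongs in the shell init script; the signal semantics are the point): after SIGTERM times out, fall back to SIGINT -- a fast shutdown that still cleans up -- rather than SIGKILL.

    #include <sys/types.h>
    #include <signal.h>
    #include <unistd.h>

    /* Illustrative only: stop the postmaster without ever using SIGKILL. */
    static void
    stop_postmaster(pid_t pm)
    {
        int i;

        kill(pm, SIGTERM);              /* smart shutdown: wait for clients */
        for (i = 0; i < 30; i++)        /* allow up to ~30 seconds */
        {
            if (kill(pm, 0) != 0)
                return;                 /* process is gone */
            sleep(1);
        }
        kill(pm, SIGINT);               /* fast shutdown: abort sessions but
                                         * still release shared resources */
    }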
]
|
[
{
"msg_contents": "pg_options.sample coming with 7.0.x does not work because:\n\n1) it exceeds 4096 bytes while read_pg_options() reads only first 4096\n bytes of it.\n\n2) it allows spaces around \"=\" while parese_options() does not.\n\nApparently the sample file was brought in without enough testings when\n7.0 was developed. What should we do now?\n\nShould we fix pg_options code so that PostgreSQL accepts the sample\nfile? Or just leave it and add a new entry to the FAQ?\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 14 Dec 2000 10:00:53 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_options.sample"
},
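A hedged sketch of a reader that avoids both problems at once -- it iterates over the whole file instead of doing a single 4096-byte read, and it tolerates spaces around "=". Illustrative only; this is not the actual read_pg_options()/parse_options() code.

    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>

    static void
    read_options(const char *path)
    {
        FILE *f = fopen(path, "r");
        char  line[1024];

        if (f == NULL)
            return;
        while (fgets(line, sizeof(line), f) != NULL)  /* no 4096-byte cap */
        {
            char *eq, *name, *value, *end;

            if (line[0] == '#' || (eq = strchr(line, '=')) == NULL)
                continue;                /* comment or not a "name=value" */
            *eq = '\0';
            name = line;
            value = eq + 1;
            /* trim blanks on both sides of '=' */
            while (isspace((unsigned char) *name))
                name++;
            end = name + strlen(name);
            while (end > name && isspace((unsigned char) end[-1]))
                *--end = '\0';
            while (isspace((unsigned char) *value))
                value++;
            value[strcspn(value, " \t\r\n")] = '\0';
            printf("option '%s' = '%s'\n", name, value);
        }
        fclose(f);
    }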
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> pg_options.sample coming with 7.0.x does not work because:\n> 1) it exceeds 4096 bytes while read_pg_options() reads only first 4096\n> bytes of it.\n\nOliver Elphick posted a patch for this recently (pghackers 28-Nov)\nand noted that it seemed already fixed in 7.1 sources.\n\n> What should we do now?\n\nNothing, I think. If you want to apply Oliver's patch to the\nREL7_0_PATCHES branch, go ahead --- but I don't think there'll be\na 7.0.4 release, so it's probably wasted effort.\n\nIf the bug still exists in 7.1 sources, then of course we need to\nfix it there...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2000 21:32:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_options.sample "
},
{
"msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > pg_options.sample coming with 7.0.x does not work because:\n> > 1) it exceeds 4096 bytes while read_pg_options() reads only first 4096\n> > bytes of it.\n> \n> Oliver Elphick posted a patch for this recently (pghackers 28-Nov)\n> and noted that it seemed already fixed in 7.1 sources.\n\nThanks for poting it out.\n\n> > What should we do now?\n> \n> Nothing, I think. If you want to apply Oliver's patch to the\n> REL7_0_PATCHES branch, go ahead --- but I don't think there'll be\n> a 7.0.4 release, so it's probably wasted effort.\n> \n> If the bug still exists in 7.1 sources, then of course we need to\n> fix it there...\n> \n> \t\t\tregards, tom lane\n\nAgreed.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 14 Dec 2000 14:02:26 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_options.sample "
}
]
|
[
{
"msg_contents": "Althoug this happens on old 6.5.3, I would like to know if this has\nbeen already fixed...\n\nHere is the scenario:\n\n1) before vacuum, table A has 8850 tuples.\n\n2) vacuum on table A makes postgres crashed.\n\n3) it crashes at line 1758:\n\n\tAssert(num_moved == checked_moved);\n\n\tI examined variables using gdb. num_moved == 8849, check_moved ==\n\t8813, num_tuples == 18.\n\n4) if PostgreSQL is not compiled with assertion, vacuum does not\n crash. However, after vacuum, the number of tuples descreases from\n 8850 to 8814!! (I am not sure which number is correct, though)\n\nI think this is an important problem since a data loss might\nhappen. Any idea?\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 14 Dec 2000 16:38:01 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum crash on 6.5.3"
},
{
"msg_contents": "> Althoug this happens on old 6.5.3, I would like to know if this has\n> been already fixed...\n> \n> Here is the scenario:\n> \n> 1) before vacuum, table A has 8850 tuples.\n> \n> 2) vacuum on table A makes postgres crashed.\n> \n> 3) it crashes at line 1758:\n> \n> \tAssert(num_moved == checked_moved);\n> \n> \tI examined variables using gdb. num_moved == 8849, check_moved ==\n> \t8813, num_tuples == 18.\n> \n> 4) if PostgreSQL is not compiled with assertion, vacuum does not\n> crash. However, after vacuum, the number of tuples descreases from\n> 8850 to 8814!! (I am not sure which number is correct, though)\n> \n> I think this is an important problem since a data loss might\n> happen. Any idea?\n\nIt turns out that this was caused by vacuum's bug. Thanks to Hiroshi,\nhe has identified the problem. I have checked other version of\nPostgreSQL, and found that at we have had the bug at least since\n6.3.2, and it has been fixed in 7.0. Included are patches for 6.5.3 and\na test sript to reproduce the bug. Both of them are made by Hiroshi.\n--\nTatsuo Ishii",
"msg_date": "Fri, 29 Dec 2000 23:16:36 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuum crash on 6.5.3"
},
{
"msg_contents": "Just a supplement.\nEssentially this isn't a crash bug.\nThis had been a disastrous bug that causes data loss silently.\n(This is known as 'HEAP_MOVED_IN was not expected' bug\nbut the result could be more serious than I've recognized.) \n\nPlease apply the patch if you still have pre-7.0 pg db-s and\nyou don't love data loss.\n\nRegards.\nHiroshi Inoue\n\n> -----Original Message-----\n> From: Tatsuo Ishii\n> \n> > Althoug this happens on old 6.5.3, I would like to know if this has\n> > been already fixed...\n> > \n> > Here is the scenario:\n> > \n> > 1) before vacuum, table A has 8850 tuples.\n> > \n> > 2) vacuum on table A makes postgres crashed.\n> > \n> > 3) it crashes at line 1758:\n> > \n> > \tAssert(num_moved == checked_moved);\n> > \n> > \tI examined variables using gdb. num_moved == 8849, check_moved ==\n> > \t8813, num_tuples == 18.\n> > \n> > 4) if PostgreSQL is not compiled with assertion, vacuum does not\n> > crash. However, after vacuum, the number of tuples descreases from\n> > 8850 to 8814!! (I am not sure which number is correct, though)\n> > \n> > I think this is an important problem since a data loss might\n> > happen. Any idea?\n> \n> It turns out that this was caused by vacuum's bug. Thanks to Hiroshi,\n> he has identified the problem. I have checked other version of\n> PostgreSQL, and found that at we have had the bug at least since\n> 6.3.2, and it has been fixed in 7.0. Included are patches for 6.5.3 and\n> a test sript to reproduce the bug. Both of them are made by Hiroshi.\n> --\n> Tatsuo Ishii\n> \n",
"msg_date": "Sun, 31 Dec 2000 13:12:35 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: vacuum crash on 6.5.3"
}
]
|
[
{
"msg_contents": "Hi all,\n\nI am adressing this email to this mailinglist due to the lack of some\naddress to post euphoric feedback to. Let me tell you my story:\n\nThis february I began working on a web-application using PHP and a\nSQL-Database. My problem was to evaluate the right one:\n\nInterbase was at the state of being developed then. There were only promises\nto make open source of it. They did not really fulfill them.\n\nPostgres (version 6.5) was known to be quite slow and unstable.\n\nSo I used MySQL and was quite happy with it; possibly because I didn't\nreally know all the features and possibilities of true SQL (not just that\ncrappy subset MySQL gives its users). Last September I deceided to get a\nREAL database and I began looking araound a bit (I desperately needed UNIONs\nand this stinking workaround with temp. TABLEs MySQL provides you with was\nnot really an alternative). I deeply looked at Postgres (I think 7.0.2 or\n7.0.3) and deceided to give it a try.\n\nAnd it was great: There is nothing you cannot do with it. My only problem:\nThe limitation of the row-size and the VACUUM-stuff.\n\nAfter having read the article on Postgres at phpbuilder.com, I deceided to\nuse the current CVS-version. And this let me fully fall in love with your\ndatabase... What impressed me most: You seem to know all my thinkings: One\nday, I needed a LIMIT within one subselect of a UNION. It did not work\n(Although I used the brackets and there was no Syntax-Error; the query\nsimply did not return any rows). One day and a \"cvs update\" later, the\nfeature was implemented.\n\nAnother example: Jody, the guy that writes a searchengine for our database,\nneeded UNIONs in subselects. Somwhere on your homepage one can read about\nthis being implemented in PostgreSQL 7.2 but he just forwarded me a mail\nfrom one of you that this feature acutally is implemented in 7.1dev.\n\nAnd you cannot imagine the fun I had with compiling perl into the database\nand with writing the stored-procedure to do a string-manipulation. \n\nIf only I had time to have a closer look at the PostgreSQL-Code: I would\nspend my whole free time for helping you to make postgres even better.\n\nMany thanks to you. You did and are doing a great job. Postgres is the best\nOpenSource-Database out there (and better than many commercional products\n(that ability to use Perl as language for stored procedures is uniqe)).\n\nThanks.\n\nPilif\n",
"msg_date": "Thu, 14 Dec 2000 11:42:11 +0100",
"msg_from": "Philip Hofstetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Thanks!"
}
]
|
[
{
"msg_contents": "\n> > Yes, postgresql requires vacuum quite often otherwise queries and\n> > updates start taking ungodly amounts of time to complete. If you're\n> > having problems because vacuum locks up your tables for too long\n> > you might want to check out:\n> \n> But why? I don't know of other databases that need to be \n> 'vacuum'ed. Do\n> all others just do it internaly on a regular basis?\n> \n> What am I missing here?\n\nThey all have an overwriting storage manager. The current storage manager\nof PostgreSQL is non overwriting, which has other advantages.\n\nThere seem to be 2 answers to the problem:\n1. change to an overwrite storage manager\n2. make vacuum concurrent capable\n\nThe tendency here seems to be towards an improved smgr.\nBut, it is currently extremely cheap to calculate where a new row\nneeds to be located physically. This task is *a lot* more expensive\nin an overwrite smgr. It needs to maintain a list of pages with free slots,\nwhich has all sorts of concurrency and persistence problems.\n\nAndreas\n",
"msg_date": "Thu, 14 Dec 2000 12:07:00 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Why vacuum?"
},
{
"msg_contents": "On Thu, Dec 14, 2000 at 12:07:00PM +0100, Zeugswetter Andreas SB wrote:\n> \n> They all have an overwriting storage manager. The current storage manager\n> of PostgreSQL is non overwriting, which has other advantages.\n> \n> There seem to be 2 answers to the problem:\n> 1. change to an overwrite storage manager\n> 2. make vacuum concurrent capable\n> \n> The tendency here seems to be towards an improved smgr.\n> But, it is currently extremely cheap to calculate where a new row\n> needs to be located physically. This task is *a lot* more expensive\n> in an overwrite smgr. It needs to maintain a list of pages with free slots,\n> which has all sorts of concurrency and persistence problems.\n> \n\nNot to mention the recent thread here about people recovering data that\nwas accidently deleted, or from damaged db files: the old tuples serve\nas redundant backup, in a way. Not a real compelling reason to keep a\nnon-overwriting smgr, but still a surprise bonus for those who need it.\n\nRoss\n",
"msg_date": "Thu, 14 Dec 2000 09:57:32 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "* Ross J. Reedstrom <[email protected]> [001214 07:57] wrote:\n> On Thu, Dec 14, 2000 at 12:07:00PM +0100, Zeugswetter Andreas SB wrote:\n> > \n> > They all have an overwriting storage manager. The current storage manager\n> > of PostgreSQL is non overwriting, which has other advantages.\n> > \n> > There seem to be 2 answers to the problem:\n> > 1. change to an overwrite storage manager\n> > 2. make vacuum concurrent capable\n> > \n> > The tendency here seems to be towards an improved smgr.\n> > But, it is currently extremely cheap to calculate where a new row\n> > needs to be located physically. This task is *a lot* more expensive\n> > in an overwrite smgr. It needs to maintain a list of pages with free slots,\n> > which has all sorts of concurrency and persistence problems.\n> > \n> \n> Not to mention the recent thread here about people recovering data that\n> was accidently deleted, or from damaged db files: the old tuples serve\n> as redundant backup, in a way. Not a real compelling reason to keep a\n> non-overwriting smgr, but still a surprise bonus for those who need it.\n\nOne could make vacuum optional such that it either:\n\n1) always overwrites\n2) will not overwrite data until a vacuum is called (perhaps with\n a date option to specify how much deleted data you wish to\n reclaim) data can be marked free but not free for re-use\n until vacuum is run.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Thu, 14 Dec 2000 08:25:14 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n> \n> Not to mention the recent thread here about people recovering data that\n> was accidently deleted, or from damaged db files: the old tuples serve\n> as redundant backup, in a way. Not a real compelling reason to keep a\n> non-overwriting smgr, but still a surprise bonus for those who need it.\n\nThe optimal would be a configurable behaviour. I wouldn't enable it on a\nusers table, neither on a log-type table (the former is a slowly\nchanging table, the second is a table with few updates/deletes), but a\nfast-changing table like an active sessions table would benefit a lot.\n\nCurrently, my active sessions table grows by 100K every 20 seconds, I\nhave to constantly vacuum it to keep the things reasonable. Other tables\nwould benefit a lot, pg_listener for example.\n\nBye!\n",
"msg_date": "Thu, 14 Dec 2000 18:13:44 +0100",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n> \n> On Thu, Dec 14, 2000 at 12:07:00PM +0100, Zeugswetter Andreas SB wrote:\n> >\n> > The tendency here seems to be towards an improved smgr.\n> > But, it is currently extremely cheap to calculate where a new row\n> > needs to be located physically. This task is *a lot* more expensive\n> > in an overwrite smgr.\n\nI don't agree. If (as I have proposed) the search is made in the\nbackground by a low priority process, you just have to lookup a cache\nentry to find out where to write.\n\n> > It needs to maintain a list of pages with free slots,\n> > which has all sorts of concurrency and persistence problems.\n\nConcurrency is a problem, but a spinlock on a shared-memory table should\nsuffice in the majority of the cases[1]. I may be wrong... but I think\nit should be discussed.\n\n[1] I believe that already there's a similar problem to synchronize the\nbackends when the want to append a new page.\n\nBye!\n",
"msg_date": "Thu, 14 Dec 2000 18:24:26 +0100",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
}
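A hedged C sketch of the shared structure under discussion (every name below is invented; the spinlock calls stand in for whatever primitives the backend provides): a background scanner refills a small shared-memory cache of pages known to have room, and writers drain it in O(1) under a briefly-held lock.

    #include <stdbool.h>

    #define FREE_CACHE_SIZE 64

    typedef unsigned BlockNumber;

    typedef struct FreePageCache
    {
        volatile int lock;                  /* stand-in for a real spinlock */
        int          nentries;
        BlockNumber  page[FREE_CACHE_SIZE]; /* pages believed to have room */
    } FreePageCache;

    extern void spin_acquire(volatile int *lock);   /* hypothetical */
    extern void spin_release(volatile int *lock);   /* hypothetical */

    /* Writer side: constant-time lookup; on a miss, extend the relation
     * exactly as the current code does. */
    static bool
    get_page_with_room(FreePageCache *c, BlockNumber *blk)
    {
        bool found = false;

        spin_acquire(&c->lock);
        if (c->nentries > 0)
        {
            *blk = c->page[--c->nentries];
            found = true;
        }
        spin_release(&c->lock);
        return found;
    }

Under sustained load the cache simply runs dry and behaviour degrades to today's append-to-the-end, which is exactly the trade-off Andreas and Daniele are debating.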
]
|
[
{
"msg_contents": "Hello,\n\nI just have installed Vadim's patch for speeding up vacuum.\nAnd have quite strange problem.\nI have quite small table:\n[root@mx src]# ls -l /home/postgres/data/base/db/users*\n-rw------- 1 postgres postgres 4120576 Dec 14 08:48 \n/home/postgres/data/base/db/users\n-rw------- 1 postgres postgres 483328 Dec 14 08:46 \n/home/postgres/data/base/db/users_id_key\n-rw------- 1 postgres postgres 8192 Dec 14 08:20 \n/home/postgres/data/base/db/users_id_seq\n\nI did vacuum verbose analyze lazy;, and it locks up (or better say do \nsomething for a long time.\n\nBefore started vacuuming users.\n19379 pts/0 R 2:24 /home/postgres/bin/postgres localhost postgres \ndb VACUUM\n\nBefore I kill the backend.\n19379 pts/0 R 4:41 /home/postgres/bin/postgres localhost postgres \ndb VACUUM\n\nIt spends at least 2 minutes trying vacuuming users. Usually this table is \nvacuumed & analyzed in few seconds.\n\nHere is the output of vacuum verbose analyze, I done after this problem \narises.\n\ndb=# vacuum verbose analyze users;\nNOTICE: --Relation users --\nNOTICE: Pages 978: Changed 1, reaped 832, Empty 0, New 0; Tup 14280: Vac \n13541, Keep/VTL 0/0, Crash 0, UnUsed 265, MinLen 248, MaxLen 340; Re-using: \nFree/Avail. Space 3891320/3854076; EndEmpty/Avail. Pages 0/666. CPU \n0.09s/1.25u sec.\nNOTICE: Index users_id_key: Pages 35; Tuples 14280: Deleted 82. CPU \n0.00s/0.05u sec.\nNOTICE: Index ix_users_account_name: Pages 56; Tuples 14280: Deleted 82. CPU \n0.01s/0.05u sec.\nNOTICE: Index ix_users_blocked: Pages 31; Tuples 14280: Deleted 82. CPU \n0.00s/0.05u sec.\nNOTICE: Rel users: Pages: 978 --> 503; Tuple(s) moved: 640. CPU 0.22s/0.22u \nsec.\nNOTICE: Index users_id_key: Pages 59; Tuples 14280: Deleted 640. CPU \n0.00s/0.04u sec.\nNOTICE: Index ix_users_account_name: Pages 93; Tuples 14280: Deleted 640. \nCPU 0.00s/0.04u sec.\nNOTICE: Index ix_users_blocked: Pages 32; Tuples 14280: Deleted 640. CPU \n0.00s/0.04u sec.\nVACUUM\n\nI wouldn't consider it's a bug, but from my point of view it is quite strange.\nAny comments?\n\nBTW, I did a backup, and can supply anyone interested with original table.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Thu, 14 Dec 2000 20:10:24 +0600",
"msg_from": "Denis Perchine <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum verbose analyze lazy problem."
},
{
"msg_contents": "> I wouldn't consider it's a bug, but from my point of view it is quite strange.\n> Any comments?\n> \n> BTW, I did a backup, and can supply anyone interested with original table.\n\nPlease send it me.\n\nVadim\n\n\n",
"msg_date": "Thu, 14 Dec 2000 09:04:19 -0800",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum verbose analyze lazy problem."
},
{
"msg_contents": "On Thursday 14 December 2000 23:04, Vadim Mikheev wrote:\n> > I wouldn't consider it's a bug, but from my point of view it is quite\n> > strange. Any comments?\n> >\n> > BTW, I did a backup, and can supply anyone interested with original\n> > table.\n>\n> Please send it me.\n\nAnother small comment. I did not made initdb. And did not made dump/restore.\nMaybe this is a problem. I just installed a patch.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Fri, 15 Dec 2000 13:11:25 +0600",
"msg_from": "Denis Perchine <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuum verbose analyze lazy problem."
}
]
|
[
{
"msg_contents": "I'm trying to delete all the records or only one record or insert one\nrecord in a table but\ni'm having this message:\nERROR: Unable to identify an operator '=' for types 'int4' and 'text'\n You will have to retype this query using an explicit cast\n\nWhat's this means ???\n\nThanks\n\nLuis Sousa\n\n",
"msg_date": "Thu, 14 Dec 2000 15:33:27 +0000",
"msg_from": "Luis Sousa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ocasional problems !!!!"
},
{
"msg_contents": "> i'm having this message:\n> ERROR: Unable to identify an operator '=' for types 'int4' and 'text'\n> You will have to retype this query using an explicit cast\n\nWithout knowing your schema and query, we can't tell you exactly. But\nyour query is trying to compare a string to an integer, which you can do\nby using an explicit cast. For example,\n\n select text '123' = 123;\n\nwill fail, but\n\n select cast((text '123') as integer) = 123;\n\nwill succeed.\n\n - Thomas\n",
"msg_date": "Thu, 14 Dec 2000 16:14:53 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ocasional problems !!!!"
},
{
"msg_contents": "\nWhat is the schema of the table involved and what are the queries you\nare trying to run?\n\nStephan Szabo\[email protected]\n\nOn Thu, 14 Dec 2000, Luis Sousa wrote:\n\n> I'm trying to delete all the records or only one record or insert one\n> record in a table but\n> i'm having this message:\n> ERROR: Unable to identify an operator '=' for types 'int4' and 'text'\n> You will have to retype this query using an explicit cast\n> \n> What's this means ???\n> \n> Thanks\n> \n> Luis Sousa\n> \n\n",
"msg_date": "Thu, 14 Dec 2000 10:07:57 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ocasional problems !!!!"
},
{
"msg_contents": "I just trying to execute a simple query in a table to delete a simpe record\nor all of them, like:\n\nDELETE * FROM table;\n\nI have a schema of more or less 25 tables, that are created using a script.\nWhen i'm trying to use that table (and only happens in this table) when\ncreated by the script i receive the message below.\nThe most strange is that i droped the table and i created again,\nmaintaining the structure created with the script and i didn't had any\nproblems !!!\n\nBest Regards\n\nLuis Sousa\n\n\nStephan Szabo wrote:\n\n> What is the schema of the table involved and what are the queries you\n> are trying to run?\n>\n> Stephan Szabo\n> [email protected]\n>\n> On Thu, 14 Dec 2000, Luis Sousa wrote:\n>\n> > I'm trying to delete all the records or only one record or insert one\n> > record in a table but\n> > i'm having this message:\n> > ERROR: Unable to identify an operator '=' for types 'int4' and 'text'\n> > You will have to retype this query using an explicit cast\n> >\n> > What's this means ???\n> >\n> > Thanks\n> >\n> > Luis Sousa\n> >\n\n",
"msg_date": "Fri, 15 Dec 2000 09:33:07 +0000",
"msg_from": "Luis Sousa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Ocasional problems !!!!"
},
{
"msg_contents": "I received that message working directly in psql.\n\nI already send another email, please see.\n\nRegards\n\nLuis Sousa\n\n\nThomas Lockhart wrote:\n\n> > But i'm not making any compare.\n> > I just wrote delete from table; and i receive that message.\n>\n> Hmm. We will need to know more about your setup, since a simple\n>\n> delete from table;\n>\n> in psql does not involve *any* comparisons, and should never provoke the\n> message you are receiving.\n>\n> If you think that you are doing something as simple as that, then\n> perhaps the application you are using is doing extra stuff underneath?\n>\n> - Thomas\n\n",
"msg_date": "Fri, 15 Dec 2000 09:46:00 +0000",
"msg_from": "Luis Sousa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Ocasional problems !!!!"
},
{
"msg_contents": ">>>> But i'm not making any compare.\n>>>> I just wrote delete from table; and i receive that message.\n>> \n>> Hmm. We will need to know more about your setup, since a simple\n>> \n>> delete from table;\n>> \n>> in psql does not involve *any* comparisons, and should never provoke the\n>> message you are receiving.\n\nPerhaps there is a foreign key constraint, or some such, being invoked\nby this command?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2000 10:59:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ocasional problems !!!! "
},
{
"msg_contents": "\nDid you perhaps have foreign key constraints with an on delete\nclause defined on a table that referenced this one? Postgres doesn't\ncurrently check that the types are comparable before making the\nconstraint. I'm working on adding a check for that now.\n\nOr for that matter, any other rules or triggers could do it.\n\nOn Fri, 15 Dec 2000, Luis Sousa wrote:\n\n> I just trying to execute a simple query in a table to delete a simpe record\n> or all of them, like:\n> \n> DELETE * FROM table;\n> \n> I have a schema of more or less 25 tables, that are created using a script.\n> When i'm trying to use that table (and only happens in this table) when\n> created by the script i receive the message below.\n> The most strange is that i droped the table and i created again,\n> maintaining the structure created with the script and i didn't had any\n> problems !!!\n> \n> Best Regards\n> \n> Luis Sousa\n> \n> \n> Stephan Szabo wrote:\n> \n> > What is the schema of the table involved and what are the queries you\n> > are trying to run?\n> >\n> > Stephan Szabo\n> > [email protected]\n> >\n> > On Thu, 14 Dec 2000, Luis Sousa wrote:\n> >\n> > > I'm trying to delete all the records or only one record or insert one\n> > > record in a table but\n> > > i'm having this message:\n> > > ERROR: Unable to identify an operator '=' for types 'int4' and 'text'\n> > > You will have to retype this query using an explicit cast\n> > >\n> > > What's this means ???\n> > >\n> > > Thanks\n> > >\n> > > Luis Sousa\n> > >\n> \n\n",
"msg_date": "Fri, 15 Dec 2000 09:17:47 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ocasional problems !!!!"
},
{
"msg_contents": "I really have constraints of foreign keys but not on delete, only on update\n\nStephan Szabo wrote:\n\n> Did you perhaps have foreign key constraints with an on delete\n> clause defined on a table that referenced this one? Postgres doesn't\n> currently check that the types are comparable before making the\n> constraint. I'm working on adding a check for that now.\n>\n> Or for that matter, any other rules or triggers could do it.\n>\n> On Fri, 15 Dec 2000, Luis Sousa wrote:\n>\n> > I just trying to execute a simple query in a table to delete a simpe record\n> > or all of them, like:\n> >\n> > DELETE * FROM table;\n> >\n> > I have a schema of more or less 25 tables, that are created using a script.\n> > When i'm trying to use that table (and only happens in this table) when\n> > created by the script i receive the message below.\n> > The most strange is that i droped the table and i created again,\n> > maintaining the structure created with the script and i didn't had any\n> > problems !!!\n> >\n> > Best Regards\n> >\n> > Luis Sousa\n> >\n> >\n> > Stephan Szabo wrote:\n> >\n> > > What is the schema of the table involved and what are the queries you\n> > > are trying to run?\n> > >\n> > > Stephan Szabo\n> > > [email protected]\n> > >\n> > > On Thu, 14 Dec 2000, Luis Sousa wrote:\n> > >\n> > > > I'm trying to delete all the records or only one record or insert one\n> > > > record in a table but\n> > > > i'm having this message:\n> > > > ERROR: Unable to identify an operator '=' for types 'int4' and 'text'\n> > > > You will have to retype this query using an explicit cast\n> > > >\n> > > > What's this means ???\n> > > >\n> > > > Thanks\n> > > >\n> > > > Luis Sousa\n> > > >\n> >\n\n--\nLuis Sousa\nTecnico Superior de Informatica\nGabinete de Assessoria e Planeamento\nUniversidade do Algarve\n\n\n\n",
"msg_date": "Mon, 18 Dec 2000 09:16:42 +0000",
"msg_from": "Luis Sousa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Ocasional problems !!!!"
},
{
"msg_contents": "\nActually, it's not going to matter since all foreign keys have a delete\nportion (realized after seeing your response) that checks to make sure\nthe one you are deleting is not being referenced.\nI'm surprised you're not seeing this on inserts into the fk table or\non updates to the pk table. What are the types of the columns on \nboth tables?\n\nStephan Szabo\[email protected]\n\nOn Mon, 18 Dec 2000, Luis Sousa wrote:\n\n> I really have constraints of foreign keys but not on delete, only on update\n> \n> Stephan Szabo wrote:\n> \n> > Did you perhaps have foreign key constraints with an on delete\n> > clause defined on a table that referenced this one? Postgres doesn't\n> > currently check that the types are comparable before making the\n> > constraint. I'm working on adding a check for that now.\n> >\n> > Or for that matter, any other rules or triggers could do it.\n> >\n> > On Fri, 15 Dec 2000, Luis Sousa wrote:\n> >\n> > > I just trying to execute a simple query in a table to delete a simpe record\n> > > or all of them, like:\n> > >\n> > > DELETE * FROM table;\n> > >\n> > > I have a schema of more or less 25 tables, that are created using a script.\n> > > When i'm trying to use that table (and only happens in this table) when\n> > > created by the script i receive the message below.\n> > > The most strange is that i droped the table and i created again,\n> > > maintaining the structure created with the script and i didn't had any\n> > > problems !!!\n> > >\n> > > Best Regards\n> > >\n> > > Luis Sousa\n> > >\n> > >\n> > > Stephan Szabo wrote:\n> > >\n> > > > What is the schema of the table involved and what are the queries you\n> > > > are trying to run?\n> > > >\n> > > > Stephan Szabo\n> > > > [email protected]\n> > > >\n> > > > On Thu, 14 Dec 2000, Luis Sousa wrote:\n> > > >\n> > > > > I'm trying to delete all the records or only one record or insert one\n> > > > > record in a table but\n> > > > > i'm having this message:\n> > > > > ERROR: Unable to identify an operator '=' for types 'int4' and 'text'\n> > > > > You will have to retype this query using an explicit cast\n> > > > >\n> > > > > What's this means ???\n> > > > >\n> > > > > Thanks\n> > > > >\n> > > > > Luis Sousa\n> > > > >\n> > >\n> \n> --\n> Luis Sousa\n> Tecnico Superior de Informatica\n> Gabinete de Assessoria e Planeamento\n> Universidade do Algarve\n> \n> \n> \n\n\n",
"msg_date": "Mon, 18 Dec 2000 09:19:45 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ocasional problems !!!!"
},
{
"msg_contents": "I think i already discovered what's the problem !! At least the problem is not\nhappening again.\n\nIt was some problems in some triggers that are implementend in the database.\n\nAnyway, i appreciatte all the time that you took with my problem\n\n\n\nBest Regards\n\nLuis Sousa\n\n\nStephan Szabo wrote:\n\n> Actually, it's not going to matter since all foreign keys have a delete\n> portion (realized after seeing your response) that checks to make sure\n> the one you are deleting is not being referenced.\n> I'm surprised you're not seeing this on inserts into the fk table or\n> on updates to the pk table. What are the types of the columns on\n> both tables?\n>\n> Stephan Szabo\n> [email protected]\n>\n> On Mon, 18 Dec 2000, Luis Sousa wrote:\n>\n> > I really have constraints of foreign keys but not on delete, only on update\n> >\n> > Stephan Szabo wrote:\n> >\n> > > Did you perhaps have foreign key constraints with an on delete\n> > > clause defined on a table that referenced this one? Postgres doesn't\n> > > currently check that the types are comparable before making the\n> > > constraint. I'm working on adding a check for that now.\n> > >\n> > > Or for that matter, any other rules or triggers could do it.\n> > >\n> > > On Fri, 15 Dec 2000, Luis Sousa wrote:\n> > >\n> > > > I just trying to execute a simple query in a table to delete a simpe record\n> > > > or all of them, like:\n> > > >\n> > > > DELETE * FROM table;\n> > > >\n> > > > I have a schema of more or less 25 tables, that are created using a script.\n> > > > When i'm trying to use that table (and only happens in this table) when\n> > > > created by the script i receive the message below.\n> > > > The most strange is that i droped the table and i created again,\n> > > > maintaining the structure created with the script and i didn't had any\n> > > > problems !!!\n> > > >\n> > > > Best Regards\n> > > >\n> > > > Luis Sousa\n> > > >\n> > > >\n> > > > Stephan Szabo wrote:\n> > > >\n> > > > > What is the schema of the table involved and what are the queries you\n> > > > > are trying to run?\n> > > > >\n> > > > > Stephan Szabo\n> > > > > [email protected]\n> > > > >\n> > > > > On Thu, 14 Dec 2000, Luis Sousa wrote:\n> > > > >\n> > > > > > I'm trying to delete all the records or only one record or insert one\n> > > > > > record in a table but\n> > > > > > i'm having this message:\n> > > > > > ERROR: Unable to identify an operator '=' for types 'int4' and 'text'\n> > > > > > You will have to retype this query using an explicit cast\n> > > > > >\n> > > > > > What's this means ???\n> > > > > >\n> > > > > > Thanks\n> > > > > >\n> > > > > > Luis Sousa\n> > > > > >\n> > > >\n> >\n> > --\n> > Luis Sousa\n> > Tecnico Superior de Informatica\n> > Gabinete de Assessoria e Planeamento\n> > Universidade do Algarve\n> >\n> >\n> >\n\n--\nLuis Sousa\nTecnico Superior de Informatica\nGabinete de Assessoria e Planeamento\nUniversidade do Algarve\n\n\n\n",
"msg_date": "Wed, 20 Dec 2000 15:41:42 +0000",
"msg_from": "Luis Sousa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Ocasional problems !!!!"
}
]
|
[
{
"msg_contents": "\nWhen the filesystem fills and the txlog cannot be written, then the postmaster dies.\n\npostgres@s0188000zeu:/usr/postgres> time echo \"copy journaleintrag from '/tmp/j.unl' delimiters '|';\" | psql cusejoua\nFATAL 2: copy: line 64902, write(logfile 0 seg 2 off 6094848) failed: No space left on device\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nconnection to server was lost\n\nI think it shoud stay up and running, but deny any modification until the filesystem is extended.\n\nAndreas\n",
"msg_date": "Thu, 14 Dec 2000 16:41:04 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "fs full stops postmaster"
}
]
|
[
{
"msg_contents": "\n> When the filesystem fills and the txlog cannot be written, \n> then the postmaster dies.\n> \n> postgres@s0188000zeu:/usr/postgres> time echo \"copy \n> journaleintrag from '/tmp/j.unl' delimiters '|';\" | psql cusejoua\n> FATAL 2: copy: line 64902, write(logfile 0 seg 2 off \n> 6094848) failed: No space left on device\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> connection to server was lost\n> \n> I think it shoud stay up and running, but deny any \n> modification until the filesystem is extended.\n\nThis is an effect of the fact, that the file is not really created in its full size\neven if ls shows the 16 mb. As an additional effect, the db does not \nstart up any more --> txlog corrupted.\n\nAndreas\n",
"msg_date": "Thu, 14 Dec 2000 16:49:58 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: fs full stops postmaster"
}
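A hedged C sketch of the remedy this report points to: physically zero-fill each 16MB segment when it is created, so that a full filesystem fails at segment-creation time, where it can be handled, instead of mid-commit, and no sparse short segment is left behind. Illustrative only -- not the actual xlog.c code.

    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>

    #define XLOG_SEG_SIZE (16 * 1024 * 1024)

    static int
    create_log_segment(const char *path)
    {
        char buf[8192];
        int  fd = open(path, O_RDWR | O_CREAT | O_EXCL, 0600);
        int  i;

        if (fd < 0)
            return -1;
        memset(buf, 0, sizeof(buf));
        for (i = 0; i < XLOG_SEG_SIZE / (int) sizeof(buf); i++)
        {
            if (write(fd, buf, sizeof(buf)) != (ssize_t) sizeof(buf))
            {
                close(fd);
                unlink(path);   /* don't leave a short segment behind */
                return -1;      /* ENOSPC surfaces here, recoverably */
            }
        }
        if (fsync(fd) != 0)
        {
            close(fd);
            unlink(path);
            return -1;
        }
        return fd;              /* segment really occupies all 16 MB now */
    }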
]
|
[
{
"msg_contents": "when doing txlog switches there seems to be a problem with remembering the \ncorrect = active logfile, when the postmaster crashes.\n\nThis is one of the problems I tried to show up previously:\n\tYou cannot rely on writes to other files except the txlog itself !!!\n\nThus the current way of recording the active txlog seg and position in \npg_control is busted, and must be avoided. I would try to not use pg_control\nfor this at all, but scan the pg_xlog directory for this purpose.\n\ncusejoua=# update journaleintrag set txt_funktion=trim(txt_funktion);\nFATAL 2: write(logfile 0 seg 2 off 4612096) failed: No such file or directory\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nAndreas\n\nPS: I am using -F (bad boy that I am :-)\n",
"msg_date": "Thu, 14 Dec 2000 17:14:58 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "switching txlog file in 7.1beta"
},
{
"msg_contents": "> when doing txlog switches there seems to be a problem with remembering the \n> correct = active logfile, when the postmaster crashes.\n> \n> This is one of the problems I tried to show up previously:\n> You cannot rely on writes to other files except the txlog itself !!!\n\nWhy? If you handle those files specifically, as txlog itself.\n\n> Thus the current way of recording the active txlog seg and position in \n> pg_control is busted, and must be avoided. I would try to not use pg_control\n> for this at all, but scan the pg_xlog directory for this purpose.\n> \n> cusejoua=# update journaleintrag set txt_funktion=trim(txt_funktion);\n> FATAL 2: write(logfile 0 seg 2 off 4612096) failed: No such file or directory\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n\nCan you start up db with --wal_debug=1 and send me output?\n\nVadim\n\n\n",
"msg_date": "Thu, 14 Dec 2000 09:10:02 -0800",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: switching txlog file in 7.1beta"
}
]
|
[
{
"msg_contents": "\n> > > The tendency here seems to be towards an improved smgr.\n> > > But, it is currently extremely cheap to calculate where a new row\n> > > needs to be located physically. This task is *a lot* more expensive\n> > > in an overwrite smgr.\n> \n> I don't agree. If (as I have proposed) the search is made in the\n> background by a low priority process, you just have to lookup a cache\n> entry to find out where to write.\n\nIf the priority is too low you will end up with the same behavior as current,\nbecause the cache will be emptied by high priority multiple new rows,\nthus writing to the end anyways. Conclusio: In those cases where overwrite would\nbe most advantageous (high volume modified table) your system won't work,\nunless you resort to my concern and make it *very* expensive (=high priority).\n\nAndreas\n",
"msg_date": "Thu, 14 Dec 2000 17:39:08 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Why vacuum?"
},
{
"msg_contents": "* Daniele Orlandi <[email protected]> [001214 09:10] wrote:\n> Zeugswetter Andreas SB wrote:\n> > \n> > If the priority is too low you will end up with the same behavior as current,\n> \n> Yes, and it is the intended behaviour. I'd use idle priority for it.\n\nIf you're talking about vacuum, you really don't want to do this,\nwhat's going to happen is that since you have an exclusive lock on\nthe file during your vacuum and no way to do priority lending you\ncan deadlock.\n\n> > because the cache will be emptied by high priority multiple new rows,\n> > thus writing to the end anyways.\n> \n> Yes, but this only happens when you don't have enought spare idle CPU\n> time. If you are in such situation for long periods, there's nothing you\n> can do, you already have problems.\n> \n> My approach in winning here because it allows you to have bursts of CPU\n> utilization without being affected by the overhead of a overwriting smgr\n> that (without hacks) will always try to find available slots, even in\n> high load situations.\n> \n> > Conclusio: In those cases where overwrite would be most advantageous (high\n> > volume modified table) your system won't work\n> \n> Why ? I have plenty of CPU time available on my server, even if one of\n> my table is highly volatile, fast-changing.\n\nWhen your table grows to be very large you'll see what we're talking \nabout.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Thu, 14 Dec 2000 09:15:00 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> If the priority is too low you will end up with the same behavior as current,\n\nYes, and it is the intended behaviour. I'd use idle priority for it.\n\n> because the cache will be emptied by high priority multiple new rows,\n> thus writing to the end anyways.\n\nYes, but this only happens when you don't have enought spare idle CPU\ntime. If you are in such situation for long periods, there's nothing you\ncan do, you already have problems.\n\nMy approach in winning here because it allows you to have bursts of CPU\nutilization without being affected by the overhead of a overwriting smgr\nthat (without hacks) will always try to find available slots, even in\nhigh load situations.\n\n> Conclusio: In those cases where overwrite would be most advantageous (high\n> volume modified table) your system won't work\n\nWhy ? I have plenty of CPU time available on my server, even if one of\nmy table is highly volatile, fast-changing.\n\nBye!\n",
"msg_date": "Thu, 14 Dec 2000 19:04:59 +0100",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Why vacuum?"
},
{
"msg_contents": "Alfred Perlstein wrote:\n> \n> If you're talking about vacuum, you really don't want to do this,\n\nNo, I'm not talking about vacuum as it is intended now, it's only a\nprocess that scans tables to find available blocks/tuples. It is\nvirtually optional, if it doesn't run, the database will behave just\nlike now.\n\n> what's going to happen is that since you have an exclusive lock on\n> the file during your vacuum and no way to do priority lending you\n> can deadlock.\n\nNo exclusive lock, it's just a reader.\n\n> When your table grows to be very large you'll see what we're talking\n> about.\n\nI see this as an optimization issue. If the scanner isn't smart and\nloses time scanning areas of the table that have not been emptied, you\ngo back to the current behaviour.\n\nBye!\n",
"msg_date": "Thu, 14 Dec 2000 20:04:29 +0100",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum?"
}
]
|
[
{
"msg_contents": "\nI have obtained the CRC-64 code used in the SWISS-PROT genetic\ndatabase:\n\n ftp://ftp.ebi.ac.uk/pub/software/swissprot/Swissknife/SPcrc.tar.gz\n\n(Thanks go to Henning Hermjakob <[email protected]> of the European\nBioinformatics Institute.) From the README:\n\n The code in this package has been derived from the BTLib package \n obtained from Christian Iseli <[email protected]>.\n From his mail:\n\n The reference is: W. H. Press, S. A. Teukolsky, W. T. Vetterling, and \n B. P. Flannery, \"Numerical recipes in C\", 2nd ed., Cambridge University \n Press. Pages 896ff.\n\n The generator polynomial is x64 + x4 + x3 + x1 + 1.\n\nChoosing a good polynomial is considered a black art. A good tutorial \non CRC practice is at\n\n http://www.repairfaq.org/filipg/LINK/F_crc_v3.html\n\nand there's a clear exposition of the theory in Tanenbaum's \n\"Computer Networks\". Note that initialization is important, and \noften neglected: one quality to check is, does a block of all zeroes, \nand a zero CRC, come out as an error?\n\nNathan Myers\[email protected]\n",
"msg_date": "Thu, 14 Dec 2000 13:08:10 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": true,
"msg_subject": "CRC-64"
}
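For readers who want to see the arithmetic, here is a bit-at-a-time CRC-64 in C over that generator, written MSB-first with the register initialized to all ones -- the initialization choice that makes a block of all zeroes with a zero CRC register as an error. This is a hand-written sketch for illustration, not the SPcrc code itself (a production version would be table-driven for speed):

    #include <stddef.h>
    #include <stdint.h>

    /*
     * x^64 + x^4 + x^3 + x + 1: the x^64 term is implicit in a 64-bit
     * register; the remaining terms are bits 4, 3, 1, 0 = 0x1B.
     */
    #define CRC64_POLY  UINT64_C(0x1B)
    #define CRC64_INIT  UINT64_C(0xFFFFFFFFFFFFFFFF)
    #define CRC64_MSB   UINT64_C(0x8000000000000000)

    uint64_t
    crc64(const unsigned char *buf, size_t len)
    {
        uint64_t crc = CRC64_INIT;
        size_t   i;
        int      bit;

        for (i = 0; i < len; i++)
        {
            crc ^= (uint64_t) buf[i] << 56;    /* feed next byte, MSB first */
            for (bit = 0; bit < 8; bit++)
                crc = (crc & CRC64_MSB) ? (crc << 1) ^ CRC64_POLY
                                        : (crc << 1);
        }
        return crc;
    }

With an initial register of zero instead, an all-zero buffer of any length would produce a zero CRC, and the corruption case mentioned above would pass undetected.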
]
|
[
{
"msg_contents": "Hi,\n I have written a 'C' function to be called during INSERT trigger on a\ntable.\n\n Are there any restrictions on the functions that can be called?\n\n I know you can call SPI_* functions. But, can I call PQ* functions ?\ne.g PQsetdb.\n\n Is there any document which describes how the functions written for\ntrigger are executed?\nDo they executed within the same process as server process or another\nprocess is started to execute the commands?\n\nregards,\n\nSandeep\n\n",
"msg_date": "Thu, 14 Dec 2000 17:26:21 -0800",
"msg_from": "Sandeep Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "create trigger : functions"
},
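On the question above: a trigger function executes inside the same backend process that runs the triggering statement, which is why the SPI_* calls are the supported interface there; PQsetdb would open a brand-new client connection from within the server, which the trigger machinery is not designed around. A minimal sketch of a C trigger function in the 7.1-style fmgr convention (details vary across versions, so treat it as illustration):

    #include "postgres.h"
    #include "commands/trigger.h"
    #include "executor/spi.h"

    PG_FUNCTION_INFO_V1(trigf);

    Datum
    trigf(PG_FUNCTION_ARGS)
    {
        TriggerData *trigdata = (TriggerData *) fcinfo->context;
        HeapTuple    rettuple;

        if (!CALLED_AS_TRIGGER(fcinfo))
            elog(ERROR, "trigf: not called by trigger manager");

        /* tg_newtuple exists only for UPDATE events */
        if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
            rettuple = trigdata->tg_newtuple;
        else
            rettuple = trigdata->tg_trigtuple;

        if (SPI_connect() < 0)
            elog(ERROR, "trigf: SPI_connect failed");

        /* SPI_exec(), SPI_getvalue() and friends are usable here */

        SPI_finish();

        return PointerGetDatum(rettuple);
    }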
{
"msg_contents": "\nHi,\nCan anyone help me .\nI want to know how \nto write triggers.\ni am using java and postgresql\n \n\n\n\n \n-------------------------------------------------------------------\n\nFrom:- | \n Ms. Manika Dey. |Ph.No:--\n Engineer-SC (Comp. Tech.) | IPR -- 02712 - 69276 \n I.P.R | EXT 336,315\n BHAT, GANDHINAGAR | Residence -- 079 - 6619967\n Gujrat -- 382 428 | FAX --- 69017\n ------------------------------------------------------------------ \n\n\n\n\n \n \n\n\n\n",
"msg_date": "Fri, 15 Dec 2000 11:36:08 -0500 (GMT)",
"msg_from": "Manika dey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Trigger"
},
{
"msg_contents": "\nOn Fri, 15 Dec 2000, Manika dey wrote:\n\n> \n> Hi,\n> Can anyone help me .\n> I want to know how \n> to write triggers.\n> i am using java and postgresql\n\n not probably in java.... you can write function in some\n\"internal-interpreted-language\": C, Perl, Tcl, SQL, PL/SQL\n\n BTW, What is bad on PostgreSQL's docs?\n\n\t\t\t\t\tKarel\n\nPS. -hackers: What happen with PL/Python? Before 1/2 of year I ask if \n anyone works on this and answer was: \"yes, but 'he' is waiting for new \n fmgr design\". Tom's fmgr is done... IMHO it's big worse - The Python \n has very good design for integration to other programs.\n\n",
"msg_date": "Mon, 18 Dec 2000 08:37:21 +0100 (CET)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Trigger"
},
{
"msg_contents": "> PS. -hackers: What happen with PL/Python? Before 1/2 of year I ask if \n> anyone works on this and answer was: \"yes, but 'he' is waiting for new \n> fmgr design\". Tom's fmgr is done... IMHO it's big worse - The Python \n> has very good design for integration to other programs.\n\nGood question. I don't remember this old message, though.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Dec 2000 21:48:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] Trigger"
},
{
"msg_contents": "\nOn Tue, 19 Dec 2000, Bruce Momjian wrote:\n\n> > PS. -hackers: What happen with PL/Python? Before 1/2 of year I ask if \n> > anyone works on this and answer was: \"yes, but 'he' is waiting for new \n> > fmgr design\". Tom's fmgr is done... IMHO it's big worse - The Python \n> > has very good design for integration to other programs.\n> \n> Good question. I don't remember this old message, though.\n\n ... but I remember, in the archive is following message:\n\n> Re: Hello PL/Python\n> ____________________________\n> \n> * From: Hannu Krosing <[email protected]>\n> * To: Karel Zak <[email protected]>\n> * Subject: Re: Hello PL/Python\n> * Date: Thu, 20 Jul 2000 12:30:54 +0300\n> _________________________________________________________________\n> \n> Karel Zak wrote:\n>> \n>> Today afternoon I a little study libpython1.5 and I mean create\n>> new PL language is not a problem.\n>> \n>> I a little play with it, and here is effect:\n>> \n>> test=# CREATE FUNCTION py_test() RETURNS text AS '\n>> test'# a = ''Hello '';\n>> test'# b = ''PL/Python'';\n>> test'# plpython.retval( a + b );\n>> test'# ' LANGUAGE 'plpython';\n>> CREATE\n>> test=#\n>> test=#\n>> test=# SELECT py_test();\n>> py_test\n>> -----------------\n>> Hello PL/Python\n>> (1 row)\n>> \n>> Comments? Works on this already anyone?\n> \n> There is a semi-complete implementation (i.e. no trigger procedures)\n> by Vello Kadarpik ([email protected]).\n> \n> He is probably waiting for fmgr redesign or somesuch to complete before\n> releasing it.\n> \n> ---------\n> Hannu\n\n\n Where is possible found it? IMHO it's really interesting feature.\n\n\t\t\t\t\tKarel\n\n\n\n\n--ELM980551534-4410-0_--\n",
"msg_date": "Wed, 20 Dec 2000 08:56:43 +0100 (CET)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "PL/Python (was: Re: [GENERAL] Re: [HACKERS] Trigger)"
},
{
"msg_contents": "Karel Zak wrote:\n> \n> On Tue, 19 Dec 2000, Bruce Momjian wrote:\n> \n> > > PS. -hackers: What happen with PL/Python? Before 1/2 of year I ask if\n> > > anyone works on this and answer was: \"yes, but 'he' is waiting for new\n> > > fmgr design\". Tom's fmgr is done... IMHO it's big worse - The Python\n> > > has very good design for integration to other programs.\n> >\n> > Good question. I don't remember this old message, though.\n> \n> ... but I remember, in the archive is following message:\n> \n> > There is a semi-complete implementation (i.e. no trigger procedures)\n> > by Vello Kadarpik ([email protected]).\n> >\n> > He is probably waiting for fmgr redesign or somesuch to complete before\n> > releasing it.\n> >\n> > ---------\n> > Hannu\n> \n> Where is possible found it? IMHO it's really interesting feature.\n\nPerhaps Vello will answer directly, but IIRC he stopped working on it\nafter \na more clean implementation of the same was posted on this list.\n\n--------------\nHannu\n",
"msg_date": "Wed, 20 Dec 2000 13:08:59 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Python (was: Re: [GENERAL] Re: Trigger)"
},
{
"msg_contents": "\nComments anyone?\n\n> \n> On Tue, 19 Dec 2000, Bruce Momjian wrote:\n> \n> > > PS. -hackers: What happen with PL/Python? Before 1/2 of year I ask if \n> > > anyone works on this and answer was: \"yes, but 'he' is waiting for new \n> > > fmgr design\". Tom's fmgr is done... IMHO it's big worse - The Python \n> > > has very good design for integration to other programs.\n> > \n> > Good question. I don't remember this old message, though.\n> \n> ... but I remember, in the archive is following message:\n> \n> > Re: Hello PL/Python\n> > ____________________________\n> > \n> > * From: Hannu Krosing <[email protected]>\n> > * To: Karel Zak <[email protected]>\n> > * Subject: Re: Hello PL/Python\n> > * Date: Thu, 20 Jul 2000 12:30:54 +0300\n> > _________________________________________________________________\n> > \n> > Karel Zak wrote:\n> >> \n> >> Today afternoon I a little study libpython1.5 and I mean create\n> >> new PL language is not a problem.\n> >> \n> >> I a little play with it, and here is effect:\n> >> \n> >> test=# CREATE FUNCTION py_test() RETURNS text AS '\n> >> test'# a = ''Hello '';\n> >> test'# b = ''PL/Python'';\n> >> test'# plpython.retval( a + b );\n> >> test'# ' LANGUAGE 'plpython';\n> >> CREATE\n> >> test=#\n> >> test=#\n> >> test=# SELECT py_test();\n> >> py_test\n> >> -----------------\n> >> Hello PL/Python\n> >> (1 row)\n> >> \n> >> Comments? Works on this already anyone?\n> > \n> > There is a semi-complete implementation (i.e. no trigger procedures)\n> > by Vello Kadarpik ([email protected]).\n> > \n> > He is probably waiting for fmgr redesign or somesuch to complete before\n> > releasing it.\n> > \n> > ---------\n> > Hannu\n> \n> \n> Where is possible found it? IMHO it's really interesting feature.\n> \n> \t\t\t\t\tKarel\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Jan 2001 23:47:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Python (was: Re: [GENERAL] Re: Trigger)"
},
{
"msg_contents": "\n> Comments anyone?\n\n Yes Bruce, I already told about it with Peter in private mails. Because \nit's 1/2 of year and nobody answer I already start work on PL/Python.\n\n The PL/Python will in 7.2 - as soon as I can I send some proposal to \nhackers list. \n\n\t\t\t\tKarel\n\n> > \n> > On Tue, 19 Dec 2000, Bruce Momjian wrote:\n> > \n> > > > PS. -hackers: What happen with PL/Python? Before 1/2 of year I ask if \n> > > > anyone works on this and answer was: \"yes, but 'he' is waiting for new \n> > > > fmgr design\". Tom's fmgr is done... IMHO it's big worse - The Python \n> > > > has very good design for integration to other programs.\n> > > \n> > > Good question. I don't remember this old message, though.\n> > \n\n",
"msg_date": "Tue, 23 Jan 2001 16:48:58 +0100 (CET)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Python (was: Re: [GENERAL] Re: Trigger)"
}
]
|
[
{
"msg_contents": "Hi there,\nI went thru all the mailing list archives and other\ndocumentation and found the way to increase the max\ntuple size clearly as \" change the BLCKSZ to ....\"\nI did that and even created a new data directory and\ncluster by using init_db but the error \npsql:ins.sql:173: ERROR: Tuple is too big: size 9148,\nmax size 8140 \nstill persists. \n\nMy question is that is there a way to reflect the\nchages to config.h in the database without\nre-istalling postgresql. I have a large database\nrunning on 7.03 and do not want to reinstall. \nPLEASE HELP !!!!!!!\nThanks\nNick\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! Shopping - Thousands of Stores. Millions of Products.\nhttp://shopping.yahoo.com/\n",
"msg_date": "Thu, 14 Dec 2000 22:30:19 -0800 (PST)",
"msg_from": "Nick Wayne <[email protected]>",
"msg_from_op": true,
"msg_subject": "BLCKSZ:Max tuple size problem"
},
{
"msg_contents": "Nick Wayne <[email protected]> writes:\n> I went thru all the mailing list archives and other\n> documentation and found the way to increase the max\n> tuple size clearly as \" change the BLCKSZ to ....\"\n> I did that and even created a new data directory and\n> cluster by using init_db but the error \n> psql:ins.sql:173: ERROR: Tuple is too big: size 9148,\n> max size 8140 \n> still persists. \n\nSounds like you didn't do the rebuild correctly. The full recipe is\n\n1. Change BLCKSZ in include/config.h (after running configure).\n2. Do a *full* compilation. If you compiled already, do \"make clean\"\n before \"make all\". Do not rely on make dependencies, because there\n aren't any.\n3. Install.\n4. initdb.\n\n> My question is that is there a way to reflect the\n> chages to config.h in the database without\n> re-istalling postgresql.\n\nNo. Page size is a pretty fundamental thing...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2000 11:12:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BLCKSZ:Max tuple size problem "
}
]
|
[
{
"msg_contents": "\n> > because the cache will be emptied by high priority multiple new rows,\n> > thus writing to the end anyways.\n> \n> Yes, but this only happens when you don't have enought spare idle CPU\n> time. If you are in such situation for long periods, there's nothing you\n> can do, you already have problems.\n\nI think such a process would not need a lot of CPU, but a lot of IO. The common\nschedulers do not take IO into account (at least not in the here needed sense), \nthus you cannot use the process priority mechanism here :-(\n\nAn idea could be to only fill the freepage cache from pages that currently reside in the \npage buffer. This would also somehow improve cache efficiency, since pages\nthat are often accessed would get a higher fill level.\nA problem with this is, that an empty page (basically an optimal candidate for the list) \nwould not get into the freelist unless somebody does a seq scan on the table.\n\nAndreas\n",
"msg_date": "Fri, 15 Dec 2000 10:04:07 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: Why vacuum?"
}
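To make the freepage-cache idea above concrete, here is a sketch of the structure being discussed: a small list of candidate pages, fed only when a page already resident in the buffer pool is seen to have space, and consulted cheaply at insert time. Every name here is invented for illustration; nothing like this exists in the sources being discussed:

    #define FREELIST_SIZE 64

    typedef struct FreePageEntry
    {
        unsigned int blkno;       /* block number within the relation */
        unsigned int freeBytes;   /* usable space on that page */
    } FreePageEntry;

    static FreePageEntry freelist[FREELIST_SIZE];
    static int           nfree = 0;

    /* Called when a buffered page is examined, e.g. at unpin time. */
    void
    note_free_space(unsigned int blkno, unsigned int freeBytes)
    {
        if (nfree < FREELIST_SIZE)
        {
            freelist[nfree].blkno = blkno;
            freelist[nfree].freeBytes = freeBytes;
            nfree++;
        }
    }

    /* Cheap lookup at insert time; returns 0 to fall back to extending. */
    int
    get_free_page(unsigned int need, unsigned int *blkno)
    {
        int i;

        for (i = nfree - 1; i >= 0; i--)
        {
            if (freelist[i].freeBytes >= need)
            {
                *blkno = freelist[i].blkno;
                freelist[i] = freelist[--nfree];
                return 1;
            }
        }
        return 0;   /* caller appends at relation end, as current code does */
    }

The tradeoff raised in the thread is visible here: under sustained insert load the list drains faster than an idle-priority scanner can refill it, and get_free_page() degenerates to the append-at-end behavior.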
]
|
[
{
"msg_contents": "\n> > cusejoua=# update journaleintrag set txt_funktion=trim(txt_funktion);\n> > FATAL 2: write(logfile 0 seg 2 off 4612096) failed: No such file or directory\n> > pqReadData() -- backend closed the channel unexpectedly.\n> > This probably means the backend terminated abnormally\n> > before or while processing the request.\n> > The connection to the server was lost. Attempting reset: Failed.\n> \n> Can you start up db with --wal_debug=1 and send me output?\n\nSorry, can't reproduce this exact case. The last log was actually number 3.\nTo reproduce similar states simply fill the data filesystem with any sql, \n(may need to be one that writes more than 16 Mb txlog). \n\nCompressed log something like:\n\nINSERT @ 0/31853840: prev 0/31853648; xprev 0/31853648; xid 596: Heap - update: node 18719/18720; cid 0; tid 1005/26; new 3511/48\nXLogFlush: rqst 0/22738576; wrt 0/31842304; flsh 0/24307040\nERROR: cannot extend journaleintrag: No space left on device.\n Check free disk space.\nINSERT @ 0/31854040: prev 0/31853840; xprev 0/31853840; xid 596: Transaction - abort: 2000-12-15 11:30:38\nXLogFlush: rqst 0/29779696; wrt 0/0; flsh 0/0\nFATAL 2: write(logfile 0 seg 1 off 15065088) failed: No space left on device\nServer process (pid 23444) exited with status 512 at Fri Dec 15 11:33:12 2000\nTerminating any active server processes...\n\nServer processes were terminated at Fri Dec 15 11:33:12 2000\nReinitializing shared memory and semaphores\nDEBUG: starting up\nDEBUG: database system was interrupted at 2000-12-15 11:33:12\nDEBUG: CheckPoint record at (0, 316272)\nDEBUG: Redo record at (0, 316272); Undo record at (0, 0); Shutdown TRUE\nDEBUG: NextTransactionId: 590; NextOid: 18719\nDEBUG: database system was not properly shut down; automatic recovery in progress...\nDEBUG: redo starts at (0, 316328)\nREDO @ 0/316328; LSN 0/316360: prev 0/316272; xprev 0/0; xid 590: XLOG - nextOid: 26911\n..........\nREDO @ 0/327312; LSN 0/327560: prev 0/327064; xprev 0/327064; xid 594: Heap - insert: node 1\n8719/18720; cid 0; tid 0/18\nDEBUG: ReadRecord: there is no subrecord flag in logfile 0 seg 0 off 40\nDEBUG: Formatting logfile 0 seg 0 block 39 at offset 8072\nDEBUG: The last logId/logSeg is (0, 0)\nDEBUG: Set logId/logSeg in control file\nDEBUG: redo done at (0, 327312)\nXLogFlush: rqst 0/0; wrt 0/327560; flsh 0/327560\n...........\nINSERT @ 0/327560: prev 0/327312; xprev 0/0; xid 0: XLOG - checkpoint: redo 0/327560; undo 0/0; sui 21; xid 595; oid 26911; shutdown\nXLogFlush: rqst 0/327616; wrt 0/327560; flsh 0/327560\nDEBUG: database system is in production state\n\nSeems ReadRecord should switch to seg 1 above and not 0. \nThen txlog file 0000001 somehow gets deleted. \n\n(all rows from table journaleintrag are lost) test server only of course :-)\n\nAndreas\n",
"msg_date": "Fri, 15 Dec 2000 12:02:37 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: switching txlog file in 7.1beta"
},
{
"msg_contents": "> REDO @ 0/327312; LSN 0/327560: prev 0/327064; xprev 0/327064; xid 594: Heap - insert: node 1\n> 8719/18720; cid 0; tid 0/18\n> DEBUG: ReadRecord: there is no subrecord flag in logfile 0 seg 0 off 40\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> DEBUG: Formatting logfile 0 seg 0 block 39 at offset 8072\n> DEBUG: The last logId/logSeg is (0, 0)\n> DEBUG: Set logId/logSeg in control file\n> DEBUG: redo done at (0, 327312)\n> XLogFlush: rqst 0/0; wrt 0/327560; flsh 0/327560\n> ...........\n> INSERT @ 0/327560: prev 0/327312; xprev 0/0; xid 0: XLOG - checkpoint: redo 0/327560; undo 0/0; sui 21; xid 595; oid 26911;\nshutdown\n> XLogFlush: rqst 0/327616; wrt 0/327560; flsh 0/327560\n> DEBUG: database system is in production state\n>\n> Seems ReadRecord should switch to seg 1 above and not 0.\n> Then txlog file 0000001 somehow gets deleted.\n\nSomething wrong was written into seg 0. ReadRecord assumes that this is end of log and\nso removes anything after this place (and seg 1 too, of course).\nIt doesn't look like related to txlog switching. Something bad in XLogInsert/XLogWrite.\nThanks.\n\nVadim\n\n\n",
"msg_date": "Fri, 15 Dec 2000 09:17:06 -0800",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: switching txlog file in 7.1beta"
}
]
|
[
{
"msg_contents": "\nA heap page corruption is not very likely in PostgreSQL because of the\nunderlying page design. Not even on flakey hardware/ossoftware.\n(I once read a page design note from pg 4 but don't exactly remember \nwere or when)\n\nThe point is, that the heap page is only modified in places that were\npreviously empty (except header). All previous row data stays exactly \nin the same place. Thus if a page is only partly written \n(any order of page segments) only a new row is affected. But those rows\nwill be fixed during redo anyway. The only source of serious problems is \nthus a bogus write of a page segment (100 bytes ok 412 bytes chunk \nactually written to disk), but this case is imho sufficiently guarded or at least \ndetected by disk hardware. \n(I assume that the page header fits into one atomic block and has no problem \nwith beeing one step behind or ahead of redo).\n\nI thus doubt that we really need \"physical log\" for heap pages in PostgreSQL\nwith the current non-overwrite smgr. If we could detect corruption in index pages\nwe would not need physical log at all, since an index can always be recreated.\n\nWhat do you think ? I ask because \"physical log\" is a substantial amount of \nadditional IO that we imho only want if it is absolutely necessary.\n\nAndreas\n",
"msg_date": "Fri, 15 Dec 2000 12:41:59 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "heap page corruption not easy"
}
]
|
[
{
"msg_contents": "Grzegorz Mucha ([email protected]) reports a bug with a severity of 2\nThe lower the number the more severe it is.\n\nShort Description\nOuter joins aren't working with views\n\nLong Description\nIt seems outer joins are not working at all(they work as inner joins so far).\nFor example, see below:\n(the result is identical for inner and outer join) - two rows fetched from db(as I recall, there should be one more row having t1.id=3)\n\nSample Code\ncreate table t1(id serial primary key);\ncreate table t2(id2 serial primary key, id int);\ninsert into t1 values (1);\ninsert into t1 values (2);\ninsert into t1 values (3);\ninsert into t2 (id) values(1);\ninsert into t2 (id) values(2);\nselect t1.*, t2.* from t1 natural left outer join t2;\n\nNo file was uploaded with this report\n\n",
"msg_date": "Fri, 15 Dec 2000 07:44:47 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Outer joins aren't working with views"
},
{
"msg_contents": "It works for me:\n\nregression=# select t1.*, t2.* from t1 natural left outer join t2;\n id | id2 | id\n----+-----+----\n 1 | 1 | 1\n 2 | 2 | 2\n 3 | |\n(3 rows)\n\nWhat version are you using?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2000 11:07:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Outer joins aren't working with views "
},
{
"msg_contents": "\nWhat version are you using? The sample code works for me \non current sources, three rows with the last one as 3|null|null\n\n\nStephan Szabo\[email protected]\n\nOn Fri, 15 Dec 2000 [email protected] wrote:\n\n> Grzegorz Mucha ([email protected]) reports a bug with a severity of 2\n> The lower the number the more severe it is.\n> \n> Short Description\n> Outer joins aren't working with views\n> \n> Long Description\n> It seems outer joins are not working at all(they work as inner joins so far).\n> For example, see below:\n> (the result is identical for inner and outer join) - two rows fetched from db(as I recall, there should be one more row having t1.id=3)\n> \n> Sample Code\n> create table t1(id serial primary key);\n> create table t2(id2 serial primary key, id int);\n> insert into t1 values (1);\n> insert into t1 values (2);\n> insert into t1 values (3);\n> insert into t2 (id) values(1);\n> insert into t2 (id) values(2);\n> select t1.*, t2.* from t1 natural left outer join t2;\n> \n> No file was uploaded with this report\n> \n\n",
"msg_date": "Fri, 15 Dec 2000 09:23:19 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Outer joins aren't working with views"
},
{
"msg_contents": "> It works for me:\n> regression=# select t1.*, t2.* from t1 natural left outer join t2;\n> id | id2 | id\n> ----+-----+----\n> 1 | 1 | 1\n\nMy recollection is that SQL9x requires that the join result lose the\nlink to the original table names. That is,\n\n select id, id2 from t1 natural left outer join t2;\n\nis legal, but\n\n select t1.id, ...\n\nis not.\n\nIf one needs to label the join product, then one uses aliases, as\n\n select tx.* from (t1 natural left outer join t2) as tx;\n\nor\n\n select tx.* from (t1 natural left outer join t2) as tx (c1, c2);\n\nI could see allowing in the target list explicit reference to the\nunderlying tables as an extension when there is no ambiguity.\n\nHowever, in this case should the natural join in the original example do\nthe join on the column \"id\", and not have two columns of name \"id\"\navailable after the join?\n\nHow do you read the spec and this example? My original reading was from\nthe Date and Darwen book, and the SQL99 spec we have is (to put it\nnicely) a bit harder to follow. I'll write some of this up for the\nsyntax section of the user's guide once I'm clear on it...\n\nref:\nansi-iso-9075-2-1999.txt from the draft documents we found on the web\nlast year.\n\nISO/IEC 9075-2:1999 SQL - Part 2: SQL/Foundation-\nSeptember 23, 1999\n[Compiled using SQL3_ISO option]\nsection 7.7 rule 7\n\n - Thomas\n",
"msg_date": "Sat, 16 Dec 2000 05:16:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Outer joins aren't working with views"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> It works for me:\n>> regression=# select t1.*, t2.* from t1 natural left outer join t2;\n>> id | id2 | id\n>> ----+-----+----\n>> 1 | 1 | 1\n\n> My recollection is that SQL9x requires that the join result lose the\n> link to the original table names. That is,\n> select id, id2 from t1 natural left outer join t2;\n> is legal, but\n> select t1.id, ...\n> is not.\n\nHm. This is one of the areas that I had put down on my personal TODO\nlist as needing a close look before release. So, let's get to it.\n\nMy first scan of SQL92 looks like our current behavior is right.\nI find these paras that seem to be relevant to the scope of a\n<correlation name> (ie, a table alias):\n\n5.4 Names and identifiers, syntax rule 12:\n\n 12)An <identifier> that is a <correlation name> is associated with\n a table within a particular scope. The scope of a <correlation\n name> is either a <select statement: single row>, <subquery>, or\n <query specification> (see Subclause 6.3, \"<table reference>\").\n Scopes may be nested. In different scopes, the same <correlation\n name> may be associated with different tables or with the same\n table.\n\n6.3 <table reference>, syntax rule 2:\n\n 2) Case:\n\n a) If a <table reference> TR is contained in a <from clause> FC\n with no intervening <derived table>, then the scope clause\n SC of TR is the <select statement: single row> or innermost\n <query specification> that contains FC. The scope clause of\n the exposed <correlation name> or exposed <table name> of TR\n is the <select list>, <where clause>, <group by clause>, and\n <having clause> of SC, together with the <join condition> of\n all <joined table>s contained in SC that contains TR.\n\n b) Otherwise, the scope clause SC of TR is the outermost <joined\n table> that contains TR with no intervening <derived table>.\n The scope of the exposed <correlation name> or exposed <table\n name> of TR is the <join condition> of SC and of all <joined\n table>s contained in SC that contain TR.\n\n(Note that <derived table> means subselect-in-FROM, cf 6.3 and 7.11.)\n\nThe first and second items here seem to be perfectly clear that the\nnames t1 and t2 have scope across the whole SELECT statement and are not\nhidden within the <joined table> formed by the OUTER JOIN clause.\n\nOn the other hand, the third item leaves me confused again. I don't\nsee how it applies at all, ie, when is the \"If\" of 2(a) ever false?\nHow is it *possible* to have a <table reference> that's not directly\ncontained in a <from clause>? The business about a <derived table>\nseems like horsepucky, because a table ref inside a subselect would be\ncontained in the subselect's from-clause and its scope would be that\nsubselect. Where in the spec does it allow a table reference that's\nnot in a from-clause? (Our PostQuel extensions do not count ;-))\n\nIt'd be useful to check the above example against Oracle and other\nimplementations, but the parts of the spec that I can follow seem\nto say that we've got the right behavior now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Dec 2000 01:38:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Table name scope (was Re: Outer joins aren't working with views)"
},
{
"msg_contents": "> The first and second items here seem to be perfectly clear that the\n> names t1 and t2 have scope across the whole SELECT statement and are not\n> hidden within the <joined table> formed by the OUTER JOIN clause.\n\nYou are right. If there is a \"correlation name\", then those underlying\ntable names become invisible, but that was not in the example here.\nRereading my Date and Darwen clarified this for me. However, there are\n*some* columns for which this explicit table qualification is not\nallowed, including in the example of NATURAL JOIN.\n\nDate and Darwen, 4th ed, pp 142-144 discuss various aspects of join\nscope and behavior. For NATURAL JOIN, the columns with common names\nforming the join columns *lose* their underlying table name, since they\ncan't be traced back to a column from a specific table (the table of\norigin is ambiguous). And for a NATURAL JOIN, it is impossible to get\nback two columns with the same name, since those columns were unified by\nthe join process.\n\nThe process is required to join on the columns with names in common, and\nto swallow one of each pair in the result. How should you refer to the\ncolumn that remains?\n\n create table t1 (id int, id2 int);\n create table t2 (id int, name text);\n select * from t1 natural left outer join t2;\n\nmust return something from the set of columns (id, id2, name), and two\ncolumns of name \"id\" will not be visible. Also, column \"id\" cannot be\nqualified with a table name. So\n\n select t1.id from t1 natural join t2;\n\nis not legal (though perhaps could be justified as an extension). The\ncolumns *not* involved in the join operation, id2 and name, *can* be\nqualified by the underlying table name, but the only way to get the same\nfor \"id\" after the natural join is to use a correlation name. e.g.\n\n select tx.id from (t1 natural join t2) as tx;\n select t1.id2 from t1 natural join t2;\n\nare both legal.\n\n> It'd be useful to check the above example against Oracle and other\n> implementations, but the parts of the spec that I can follow seem\n> to say that we've got the right behavior now.\n\nOracle does not support SQL9x join syntax, so we can't ask it for an\nexample. Not sure about the others.\n\nComments?\n\n - Thomas\n",
"msg_date": "Sat, 16 Dec 2000 07:38:20 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table name scope (was Re: Outer joins aren't working with views)"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> The first and second items here seem to be perfectly clear that the\n>> names t1 and t2 have scope across the whole SELECT statement and are not\n>> hidden within the <joined table> formed by the OUTER JOIN clause.\n\n> You are right. If there is a \"correlation name\", then those underlying\n> table names become invisible, but that was not in the example here.\n\nRight, either the table's real name or its alias (\"correlation name\") is\nintroduced into the query's scope, not both. AFAICT the scope rules\nare the same for either one, though.\n\n> Rereading my Date and Darwen clarified this for me. However, there are\n> *some* columns for which this explicit table qualification is not\n> allowed, including in the example of NATURAL JOIN.\n\nI disagree on that. The table's real/alias name is certainly supposed\nto be accessible, and I see nothing in the spec that says that only some\nof its columns are accessible via qualification. What the spec does say\nis that the *output* of the join has only one copy of the joined column.\nIn other words, given table A with columns ID and CA, and table B with\ncolumns ID and CB, I believe the correct behavior is\n\nSELECT * FROM (A NATURAL JOIN B) J\t\tproduces ID, CA, CB\n\nSELECT J.* FROM (A NATURAL JOIN B) J\t\tproduces ID, CA, CB\n\nSELECT A.* FROM (A NATURAL JOIN B) J\t\tproduces ID, CA\n\nSELECT B.* FROM (A NATURAL JOIN B) J\t\tproduces ID, CB\n\nIf it's an outer join then J.ID is subtly different from A.ID and/or\nB.ID --- the spec defines the output column as COALESCE(A.ID,B.ID)\n(cf SQL92 7.5 <joined table>, syntax rule 6.d) to get rid of introduced\nnulls. BTW, our implementation simplifies that to A.ID for an inner or\nleft join, or B.ID for a right join, and only uses the full COALESCE\nexpression for a full join.\n\nAnyway, I believe it's true that you can't get at A.ID or B.ID in\nthis example except by qualifying the column name with the table name\n--- but I don't see where it says that you shouldn't be able to get\nat them at all. If that were true then the definition in 7.5.6.d\nwouldn't be legal, because that's exactly the syntax it uses to define\nthe joined column.\n\n> Date and Darwen, 4th ed, pp 142-144 discuss various aspects of join\n> scope and behavior. For NATURAL JOIN, the columns with common names\n> forming the join columns *lose* their underlying table name, since they\n> can't be traced back to a column from a specific table (the table of\n> origin is ambiguous).\n\nMy reading is that the output columns are qualified with the JOIN\nclause's correlation name, if any (J in my example). If you didn't\nbother to stick a correlation name on the join clause, you couldn't\nrefer to them with a qualified name.\n\nIn an example like\n\nSELECT * FROM (A NATURAL LEFT JOIN (B NATURAL FULL JOIN C));\n\nsupposing that all three tables have a column ID, then the output ID\ncolumn of the B/C join has no qualified name, and it would indeed be\nimpossible to refer to it from the SELECT list. The only IDs accessible\nfrom the SELECT list are the also-qualified-name-less output of the\nleft join and A.ID, B.ID, C.ID, none of which are quite the same as\nthe output of the full join. Perhaps what Date and Darwen are talking\nabout is cases like this? \n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Dec 2000 12:46:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table name scope (was Re: Outer joins aren't working with views) "
},
{
"msg_contents": "> I disagree on that. The table's real/alias name is certainly supposed\n> to be accessible, and I see nothing in the spec that says that only some\n> of its columns are accessible via qualification.\n\nDate and Darwen disagree circa 1997, and I believe that SQL99 does not\nradically alter the spec in this regard. All of my interpretations below\nare based on D&D, not the draft spec we have available (though I look to\nthat to support their interpretation, which imho it does).\n\n> What the spec does say\n> is that the *output* of the join has only one copy of the joined column.\n> In other words, given table A with columns ID and CA, and table B with\n> columns ID and CB, I believe the correct behavior is\n> \n> SELECT * FROM (A NATURAL JOIN B) J produces ID, CA, CB\n\nYes.\n\n> SELECT J.* FROM (A NATURAL JOIN B) J produces ID, CA, CB\n\nYes.\n\n> SELECT A.* FROM (A NATURAL JOIN B) J produces ID, CA\n\nNo, since there is a range variable J, no columns explicitly qualified\nwith A or B are visible. If the range variable J is omitted, then the\nresult will produce only CA. See one of the D&D cases I include below.\n\n> SELECT B.* FROM (A NATURAL JOIN B) J produces ID, CB\n\nSame as for the previous case. B.* is not visible since a range variable\nis specified, and if J is not there then B.* produces CB only.\n\n> If it's an outer join then J.ID is subtly different from A.ID and/or\n> B.ID --- the spec defines the output column as COALESCE(A.ID,B.ID)\n> (cf SQL92 7.5 <joined table>, syntax rule 6.d) to get rid of introduced\n> nulls. BTW, our implementation simplifies that to A.ID for an inner or\n> left join, or B.ID for a right join, and only uses the full COALESCE\n> expression for a full join.\n\nRight, the result is the same for these cases. The only issue is the\nscoping on the name allowed for external reference.\n\n> Anyway, I believe it's true that you can't get at A.ID or B.ID in\n> this example except by qualifying the column name with the table name\n> --- but I don't see where it says that you shouldn't be able to get\n> at them at all. If that were true then the definition in 7.5.6.d\n> wouldn't be legal, because that's exactly the syntax it uses to define\n> the joined column.\n\n7.7.7.d seems to define SLCC pretty clearly, without a table name\nqualification. I think that this is consistant with D&D's\ninterpretation.\n\n> > Date and Darwen, 4th ed, pp 142-144 discuss various aspects of join\n> > scope and behavior. For NATURAL JOIN, the columns with common names\n> > forming the join columns *lose* their underlying table name, since they\n> > can't be traced back to a column from a specific table (the table of\n> > origin is ambiguous).\n> My reading is that the output columns are qualified with the JOIN\n> clause's correlation name, if any (J in my example). If you didn't\n> bother to stick a correlation name on the join clause, you couldn't\n> refer to them with a qualified name.\n\nSure. But without a correlation name, you are not allowed to qualify\nwith the underlying table name for \"join columns\" from NATURAL or JOIN\nON joins. See below...\n\n> In an example like\n> \n> SELECT * FROM (A NATURAL LEFT JOIN (B NATURAL FULL JOIN C));\n> \n> supposing that all three tables have a column ID, then the output ID\n> column of the B/C join has no qualified name, and it would indeed be\n> impossible to refer to it from the SELECT list. 
The only IDs accessible\n> from the SELECT list are the also-qualified-name-less output of the\n> left join and A.ID, B.ID, C.ID, none of which are quite the same as\n> the output of the full join. Perhaps what Date and Darwen are talking\n> about is cases like this?\n\nNo, they are talking about simpler cases, and very clearly they disagree\nwith the current behavior of the PostgreSQL parser. Now, it may be that\nSQL99 has changed the scoping rules for these cases, but instead I would\nlook for support for Date and Darwen's interpretation in the spec,\nrather than reading the spec from first principles. Date and Darwen can\nexplain it in a couple of pages, and give examples, where the spec is\njust way too convoluted for a clear reading istm.\n\nAnyway, here are two cases discussed by D&D -- note that table sp has\ncolumns (sno, pno, qty) and table s has columns (sno, sname, status,\ncity, primary):\n\n(p142, after a discussion of other cases)\n\"One very counterintuitive consequence of this unorthodox scoping rule\nis illustrated by the following example: The result of the expression\n\n select distinct sp.* from sp natural join s;\n\nwill include columns PNO and QTY but *not* column SNO, because --\nbelieve it or not -- there is no column \"SP.SNO\" in the result of the\njoin expression (indeed specifying SP.SNO in the SELECT clause would be\na syntax error).\"\n\nThe emphasis is D&D's, not mine ;) For natural joins, or other joins\nwhere two columns are subsumed into one (anything with a USING clause?)\nthe scoping rules are clear, at least to D&D: it is not possible to\nreference one of these columns by qualifying with the name of an\nunderlying table.\n\n\nAnother case (p143-144, following some simpler cases which show how\nscoping progresses through more deeply nested joins):\n\"Now let us modify the example once again to introduce an explicit range\nvariable TC for the overall result:\n\n ( ( T1 JOIN T2 ON cond-1 ) AS TA\n JOIN\n ( T3 JOIN T4 ON cond-2 ) AS TB\n ON cond-3 ) AS TC\n\nThe rules are now as follows:\n\ncond-1 can reference T1 and T2 but not T3, T4, TA, TB, or TC\ncond-2 can reference T3 and T4 but not T1, T2, TA, TB, or TC\ncond-3 can reference TA and TB but not T1, T2, T3, T4, or TC\n\nand (once again) if the overall expression appears as the operand of a\nFROM clause, then the associated SELECT clause, WHERE clause, etc. can\nreference TC but not T1, T2, T3, T4, TA, or TB.\"\n\nSo the two D&D cases cited above illustrate the \"with range variables\"\nand \"without range variables\" expected behavior. Comments?\n\n - Thomas\n",
"msg_date": "Sun, 17 Dec 2000 06:32:09 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table name scope (was Re: Outer joins aren't working with views)"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> (p142, after a discussion of other cases)\n> \"One very counterintuitive consequence of this unorthodox scoping rule\n> is illustrated by the following example: The result of the expression\n\n> select distinct sp.* from sp natural join s;\n\n> will include columns PNO and QTY but *not* column SNO, because --\n> believe it or not -- there is no column \"SP.SNO\" in the result of the\n> join expression (indeed specifying SP.SNO in the SELECT clause would be\n> a syntax error).\"\n\n> The emphasis is D&D's, not mine ;)\n\nHm. After further digging in the spec, it seems that their\ninterpretation rests on SQL92's section 6.4 <column reference> syntax\nrule 2.b. Rule 2 in full is:\n\n 2) If CR contains a <qualifier> Q, then CR shall appear within the\n scope of one or more <table name>s or <correlation name>s that\n are equal to Q. If there is more than one such <table name> or\n <correlation name>, then the one with the most local scope is\n specified. Let T be the table associated with Q.\n\n a) T shall include a column whose <column name> is CN.\n\n b) If T is a <table reference> in a <joined table> J, then CN\n shall not be a common column name in J.\n\n Note: Common column name is defined in Subclause 7.5, \"<joined\n table>\".\n\n2.b strikes me as a completely unnecessary and counterintuitive\nrestriction. Do D&D provide any justification for it? I'm not\nespecially inclined to make our implementation substantially more\ncomplex in order to enforce what seems a bogus restriction.\n\nWhat's even more interesting is that I can find no equivalent\ntext in SQL99.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Dec 2000 01:44:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table name scope (was Re: Outer joins aren't working with views) "
},
{
"msg_contents": "> > (p142, after a discussion of other cases)\n> > \"One very counterintuitive consequence of this unorthodox scoping rule\n> > is illustrated by the following example: The result of the expression\n> \n> > select distinct sp.* from sp natural join s;\n> \n> > will include columns PNO and QTY but *not* column SNO, because --\n> > believe it or not -- there is no column \"SP.SNO\" in the result of the\n> > join expression (indeed specifying SP.SNO in the SELECT clause would be\n> > a syntax error).\"\n> \n> > The emphasis is D&D's, not mine ;)\n> \n> Hm. After further digging in the spec, it seems that their\n> interpretation rests on SQL92's section 6.4 <column reference> syntax\n> rule 2.b. Rule 2 in full is:\n> \n> 2) If CR contains a <qualifier> Q, then CR shall appear within the\n> scope of one or more <table name>s or <correlation name>s that\n> are equal to Q. If there is more than one such <table name> or\n> <correlation name>, then the one with the most local scope is\n> specified. Let T be the table associated with Q.\n> \n> a) T shall include a column whose <column name> is CN.\n> \n> b) If T is a <table reference> in a <joined table> J, then CN\n> shall not be a common column name in J.\n> \n> Note: Common column name is defined in Subclause 7.5, \"<joined\n> table>\".\n> \n> 2.b strikes me as a completely unnecessary and counterintuitive\n> restriction. Do D&D provide any justification for it? I'm not\n> especially inclined to make our implementation substantially more\n> complex in order to enforce what seems a bogus restriction.\n\nHmm. istm that the D&D interpretation is entirely clear, and that for\nNATURAL and USING joins there is no other way to carry along join\nresults as intermediate \"tables\". If\n\n select * from t1 natural join t2;\n\nproduces, say, three columns, how can any other specification of the\ntarget list using only wildcards produce *more* columns? In particular,\nhow can\n\n select t1.*, t2.* from t1 natural join t2;\n\nproduce columns from t1 and t2 which are *not present* in the join \"t1\nnatural join t2\"?\n\n> What's even more interesting is that I can find no equivalent\n> text in SQL99.\n\nOf course. When they bloated the spec by a factor of three or four, they\nhad to leave out the clear parts to save space ;)\n\nI'm pretty sure that the sections I quoted (in 7.7.7 in the draft\ndocument I have -- hopefully the same as what you have available?)\ncover this topic. In particular, NATURAL and USING joins are not the\nsame as other inner or outer joins in the resulting set of available\ncolumns. So there are two issues here which I hope to clarify: scoping\non joins, and NATURAL and USING join column sets.\n\n - Thomas\n",
"msg_date": "Sun, 17 Dec 2000 07:49:13 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table name scope (was Re: Outer joins aren't working with views)"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> In particular, how can\n\n> select t1.*, t2.* from t1 natural join t2;\n\n> produce columns from t1 and t2 which are *not present* in the join \"t1\n> natural join t2\"?\n\nVery easily ;-)\n\n>> What's even more interesting is that I can find no equivalent\n>> text in SQL99.\n\n> Of course. When they bloated the spec by a factor of three or four, they\n> had to leave out the clear parts to save space ;)\n\nOr they realized they blew it the first time.\n\n> I'm pretty sure that the sections I quoted (in 7.7.7 in the draft\n> document I have -- hopefully the same as what you have available?)\n> cover this topic. In particular, NATURAL and USING joins are not the\n> same as other inner or outer joins in the resulting set of available\n> columns.\n\nThere's no question about what happens as far as the output of the join\nis concerned. However, 7.7.7 does not say word one about what is\nimplied by direct access (ie, qualified-name access) to the component\ntables of the join.\n\nI've been through the SQL99 draft again, and there is quite clearly NOT\nany restriction corresponding to the old 6.4.2.b; so under SQL99 it is\nlegal to refer to A.ID and B.ID. However, they do still have the idea\nthat A.* should omit ID: 7.11 <query specification> syntax rule 7.g.i\n(concerning expansion of qualified asterisks) says\n\n i) If the basis is a <table or query name> or <correlation\n name>, then let TQ be the table associated with the basis.\n The <select sublist> is equivalent to a <value expression>\n sequence in which each <value expression> is a column\n reference CR that references a column of TQ that is not\n a common column of a <joined table>. Each column of TQ\n that is not a referenced common column shall be referenced\n exactly once. The columns shall be referenced in the\n ascending sequence of their ordinal positions within TQ.\n\nwhich is essentially taken from 7.9.4 of the old spec. This is a mess;\nI wonder if the discrepancy between qualified-name access and asterisk\nexpansion is deliberate? (Perhaps they felt that allowing qualified\nname access was an extension that wouldn't break old code, but that they\ncouldn't change the asterisk expansion rule without breaking backwards\ncompatibility?) It'd be nice to see if this is still true in SQL99\nfinal.\n\n> So there are two issues here which I hope to clarify: scoping\n> on joins, and NATURAL and USING join column sets.\n\nTwo issues? I thought we were only arguing about the latter one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Dec 2000 13:05:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table name scope (was Re: Outer joins aren't working with views) "
},
{
"msg_contents": "I said:\n> The <select sublist> is equivalent to a <value expression>\n> sequence in which each <value expression> is a column\n> reference CR that references a column of TQ that is not\n> a common column of a <joined table>.\n\n> which is essentially taken from 7.9.4 of the old spec. This is a mess;\n\nIn fact, after looking at it again, I realize that the quoted text is\n*wrong*, because it does not say what they presumably intended. As\nwritten, it appears that\n\tSELECT J.* FROM (A NATURAL JOIN B) J\nshould omit the common column(s). They're common columns of a <joined\ntable>, aren't they?\n\nA lawyer would probably point out that 7.7 does not define the phrase\n\"common column\". It defines \"common column name\". Common column name\nclearly applies to all three tables involved (both input tables and the\noutput table), but it's not so clear whether \"common column\" is intended\nto do so.\n\nOne could also wonder about the intended behavior of multi-level joins.\nDoes a column of a base table become inaccessible if it is used as a\ncommon column several JOIN levels up?\n\nAt best, this part of the spec is extremely poorly written.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Dec 2000 13:34:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table name scope (was Re: Outer joins aren't working with views) "
},
{
"msg_contents": "> > So there are two issues here which I hope to clarify: scoping\n> > on joins, and NATURAL and USING join column sets.\n> Two issues? I thought we were only arguing about the latter one.\n\nWell, I prefer to consider it \"discussing\" ;)\n\nAnd there are two issues. I'll bet lunch and dinner that SQL99 did *not*\nmake radical changes in the scoping rules for join syntax vis a vis\nSQL92. Certainly something compatible with SQL92 should have a shot at\nbeing also compatible under SQL99, and scoping rules would fall into\nthat category.\n\nOn the second topic, NATURAL and USING join column sets, I believe that\nit *must* be true that the set of columns available in a natural join\nresult (e.g. the result of\n\n A NATURAL JOIN B\n\n) is the complete set of columns available to a SELECT target list, to a\nWHERE qualification, etc. D&D's description of the effects of this\n\"interpretation\" are consistant and clear (where the spec is not). I'm\nnot sure how we can allow our interpretation to be at odds with the\nSQL92 spec or with a reading of the SQL99 draft I have available. In\nparticular, the rules for forming join results seem to cover the cases\nwe are discussing, and I read them as being consistant with D&D's SQL92\ndiscussion. btw, their appendix on the upcoming \"SQL3\" does not bring up\njoin results or join scoping as among the changes in the upcoming\nstandard, though of course that is not a definitive point.\n\nDate and Darwen have imho a very clear description of the scoping\nallowed in join syntax. That scoping discussion says very clearly that a\n\"range variable\" (SQL9x \"correlation name\") becomes the only allowed\nqualification to a column name in SELECT target lists, WHERE clauses,\netc etc. They have very specific examples to clarify the point. And they\ndeem that necessary because the spec is a PITA to wade through. I'd\nrather leave it to them to do the wading ;)\n\nLet's look for counterexamples in our other texts if you are really\nuncomfortable with the SQL92 (and SQL99?) result in D&D. I have another\nbook or two, and will look through them tonight. Does anyone else want\nto jump in, esp. if you have experience with the SQL9x conventions or\nhave access to a db which already implements it?\n\n - Thomas\n",
"msg_date": "Mon, 18 Dec 2000 02:18:09 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table name scope (was Re: Outer joins aren't working with views)"
},
{
"msg_contents": "Tom Lane writes:\n\n> SELECT * FROM (A NATURAL JOIN B) J\t\tproduces ID, CA, CB\n>\n> SELECT J.* FROM (A NATURAL JOIN B) J\t\tproduces ID, CA, CB\n>\n> SELECT A.* FROM (A NATURAL JOIN B) J\t\tproduces ID, CA\n>\n> SELECT B.* FROM (A NATURAL JOIN B) J\t\tproduces ID, CB\n\nISTM that correlation names aren't allowed after joined tables in the\nfirst place.\n\n <table reference> ::=\n <table name> [ [ AS ] <correlation name>\n [ <left paren> <derived column list> <right paren> ] ]\n | <derived table> [ AS ] <correlation name>\n [ <left paren> <derived column list> <right paren> ]\n | <joined table>\n\n <joined table> ::=\n <cross join>\n | <qualified join>\n | <left paren> <joined table> <right paren>\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 19 Dec 2000 23:51:35 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Table name scope (was Re: [BUGS] Outer joins aren't\n\tworking with views)"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> ISTM that correlation names aren't allowed after joined tables in the\n> first place.\n\n> <table reference> ::=\n> <table name> [ [ AS ] <correlation name>\n> [ <left paren> <derived column list> <right paren> ] ]\n> | <derived table> [ AS ] <correlation name>\n> [ <left paren> <derived column list> <right paren> ]\n> | <joined table>\n\n> <joined table> ::=\n> <cross join>\n> | <qualified join>\n> | <left paren> <joined table> <right paren>\n\nKeep looking:\n\n <derived table> ::= <table subquery>\n\n <table subquery> ::= <subquery>\n\n <subquery> ::= <left paren> <query expression> <right paren>\n\n <query expression> ::=\n <non-join query expression>\n | <joined table>\n\nSo you can write\n\tSELECT A.* FROM (A NATURAL JOIN B) J\nbut in\n\tSELECT A.* FROM A NATURAL JOIN B J\nthe J will be taken as an alias for B not for the join. If they allowed\nan alias clause on an unparenthesized <joined table>, the grammar would\nbe ambiguous...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 15:12:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Table name scope (was Re: [BUGS] Outer joins aren't working\n\twith views)"
},
{
"msg_contents": ">>>> So there are two issues here which I hope to clarify: scoping\n>>>> on joins, and NATURAL and USING join column sets.\n\nI've been looking some more at this business, and I have found one of\nthe reasons that I was confused. The SQL92 spec says (6.3 syntax rule\n2)\n\n 2) Case:\n\n a) If a <table reference> TR is contained in a <from clause> FC\n with no intervening <derived table>, then the scope clause\n SC of TR is the <select statement: single row> or innermost\n <query specification> that contains FC. The scope clause of\n the exposed <correlation name> or exposed <table name> of TR\n is the <select list>, <where clause>, <group by clause>, and\n <having clause> of SC, together with the <join condition> of\n all <joined table>s contained in SC that contains TR.\n\n b) Otherwise, the scope clause SC of TR is the outermost <joined\n table> that contains TR with no intervening <derived table>.\n The scope of the exposed <correlation name> or exposed <table\n name> of TR is the <join condition> of SC and of all <joined\n table>s contained in SC that contain TR.\n\nI mistakenly read this with the assumption that <derived table> means\na sub-SELECT. It does mean that, but it also means a <joined table>,\n*if and only if* that joined table is labeled with a <correlation name>.\nThe relevant productions are:\n\n <table reference> ::=\n <table name> [ [ AS ] <correlation name>\n [ <left paren> <derived column list> <right paren> ] ]\n | <derived table> [ AS ] <correlation name>\n [ <left paren> <derived column list> <right paren> ]\n | <joined table>\n\n <derived table> ::= <table subquery>\n\n <table subquery> ::= <subquery>\n\n <subquery> ::= <left paren> <query expression> <right paren>\n\n <query expression> ::=\n <non-join query expression>\n | <joined table>\n\nSo \"(<joined table>) AS foo\" has a <subquery> but \"<joined table>\" doesn't.\nAFAICT, this means that table references defined within the join are\ninvisible outside \"(<joined table>) AS foo\", but they are visible\noutside a plain \"<joined table>\". This is more than a tad bizarre\n... but it explains the examples you quoted from Date and Darwen.\n\nHowever, as long as a table reference is visible, I think that the\nset of qualified column names available from it should not depend on\nwhether it came from inside a JOIN expression or not. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 12 Feb 2001 12:55:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table name scope (was Re: Outer joins aren't working with views) "
}
]
|
[
{
"msg_contents": "> It seems that Tom has committed his fixups but we're still waiting\n> on Vadim?\n\nSorry guys, I'm busy with WAL issues currently.\nCRC will require initdb, so it's better to implement it\nfirst...\nAlso, is TOAST-table vacuuming fixed now?\n\nVadim\n",
"msg_date": "Fri, 15 Dec 2000 11:33:26 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Idea for reducing planning time"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> Also, is TOAST-table vacuuming fixed now?\n\nStill broken. Hiroshi had muttered something about fixing the internal\ncommit of VACUUM so that it's more like a real commit --- including\nadvancing the transaction ID --- but still doesn't release the exclusive\nlock held by VACUUM. Basically we need to propagate the locks forward\nto the new xact instead of releasing them. I think that would be a nice\nclean solution if we could do it. Do you have any ideas about how?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2000 14:56:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "TOAST-table vacuuming (was Re: Idea for reducing planning time)"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> > It seems that Tom has committed his fixups but we're still waiting\n> > on Vadim?\n> \n> Sorry guys, I'm busy with WAL issues currently.\n> CRC will require initdb, so it's better to implement it\n> first...\n> Also, is TOAST-table vacuuming fixed now?\n\nCan someone please remind me? CRC allows us to know of WAL log is\ncorrupt?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 15 Dec 2000 15:20:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Idea for reducing planning time"
},
{
"msg_contents": "On Fri, Dec 15, 2000 at 03:20:27PM -0500, Bruce Momjian wrote:\n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > > It seems that Tom has committed his fixups but we're still waiting\n> > > on Vadim?\n> > \n> > Sorry guys, I'm busy with WAL issues currently.\n> > CRC will require initdb, so it's better to implement it\n> > first...\n> \n> Can someone please remind me? CRC allows us to know of WAL log is\n> corrupt?\n\nThere are two uses for CRC in connection with the WAL log \n(\"write-ahead log log\"). First, it helps to reveal corruption \nin the log. Second, it makes it possible to identify the end of \nthe log when the log lives on a raw partition.\n\nNathan Myers\[email protected]\n",
"msg_date": "Fri, 15 Dec 2000 13:37:51 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Idea for reducing planning time"
}
]
|
[
{
"msg_contents": "> > Also, is TOAST-table vacuuming fixed now?\n> \n> Still broken. Hiroshi had muttered something about fixing \n> the internal commit of VACUUM so that it's more like a real\n> commit --- including advancing the transaction ID --- but\n> still doesn't release the exclusive lock held by VACUUM.\n> Basically we need to propagate the locks forward to the new\n> xact instead of releasing them. I think that would be a nice\n> clean solution if we could do it. Do you have any ideas about how?\n\nYes, it would be nice for cursors too - they should be able to cross\ntransaction boundaries...\n\nUse BackendID instead of XID in XIDTAG?\nAdd internal (ie per backend) hash of locks that should not be\nreleased at commit time?\nAnd couple additional funcs in lmgr API?\n\nVadim\n",
"msg_date": "Fri, 15 Dec 2000 12:13:59 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: TOAST-table vacuuming (was Re: Idea for reducing pl anning time)"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> Use BackendID instead of XID in XIDTAG?\n> Add internal (ie per backend) hash of locks that should not be\n> released at commit time?\n> And couple additional funcs in lmgr API?\n\nI think that would work. I'm inclined to use backend PID instead of\nBackendID in XIDTAG, because that way you could tell if a lock entry\nis stale even after the BackendID gets recycled to a new process.\n(Of course PIDs get recycled too, but not as fast as BackendIDs.)\nBesides, PID is already being used by the USERLOCK case, and we could\neliminate some #ifdefs by making the two cases the same.\n\nWe'd still want XID keys for the locks that are used to wait for a\nparticular transaction to complete (eg when waiting to update a tuple).\nI think that's OK since VACUUM doesn't need to hold any such locks,\nbut it'd probably mean making separate lmgr API entry points for those\nlocks as opposed to normal table-level locks.\n\nError cleanup should release all locks including those marked as\nsurviving the current xact. (We'd need to improve that when we\nstart to work on nested xacts, but it'll do for now.)\n\nOther than that, it seems like it'd work, and it'd allow us to do a\nnormal transaction commit internally in VACUUM, which is a lot cleaner\nthan what VACUUM does now.\n\nComments, better ideas, anyone?\n\nI can work on this if you don't have time to...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2000 17:45:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TOAST-table vacuuming (was Re: Idea for reducing pl anning time) "
},
{
"msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > > Also, is TOAST-table vacuuming fixed now?\n> >\n> > Still broken. Hiroshi had muttered something about fixing\n> > the internal commit of VACUUM so that it's more like a real\n> > commit --- including advancing the transaction ID --- but\n> > still doesn't release the exclusive lock held by VACUUM.\n> > Basically we need to propagate the locks forward to the new\n> > xact instead of releasing them. I think that would be a nice\n> > clean solution if we could do it. Do you have any ideas about how?\n> \n> Yes, it would be nice for cursors too - they should be able to cross\n> transaction boundaries...\n> \n\nCursors outside transactions is a nice feature I've\nlong wanted.\nIt would be nice if one of the requirement is solved.\n\nHow many factors should be solved to achieve it ?\nFor example,not release the buffers kept by such\ncursors ...\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Mon, 18 Dec 2000 09:41:39 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TOAST-table vacuuming (was Re: Idea for reducing planning time)"
},
{
"msg_contents": "I said:\n> Other than that, it seems like it'd work, and it'd allow us to do a\n> normal transaction commit internally in VACUUM, which is a lot cleaner\n> than what VACUUM does now.\n\nI punted on actually changing repair_frag's RecordTransactionCommit()\ncall into CommitTransactionCommand()/StartTransactionCommand(). That\nwould now work as far as holding an exclusive lock on the table goes,\nbut there's more code that would have to be added to close/reopen the\nrelation and its indexes, ensure that all of VACUUM's internal data\nstructures survive into the new transaction, etc etc. I didn't have time\nfor that right now, so I left it as-is. Perhaps someone will feel like\ncleaning it up for 7.2 or later.\n\nHowever, the important thing is fixed: a TOAST table is now vacuumed\nin a separate transaction from its master, while still holding a lock\nthat prevents anyone else from touching the master.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Dec 2000 20:10:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TOAST-table vacuuming (was Re: Idea for reducing pl anning time) "
}
]
|
[
{
"msg_contents": "> > Sorry guys, I'm busy with WAL issues currently.\n> > CRC will require initdb, so it's better to implement it\n> > first...\n> > Also, is TOAST-table vacuuming fixed now?\n> \n> Can someone please remind me? CRC allows us to know of WAL log is\n> corrupt?\n\nCurrently WAL has no means to know is log data consistent or not.\nEg - it may try to redo insertion of tuple which data were only\npartially writtent to log disk pages (when record crosses page\nboundary).\n\nVadim\n",
"msg_date": "Fri, 15 Dec 2000 12:21:03 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Idea for reducing planning time"
}
]
|
[
{
"msg_contents": "Jan pointed out that elog(FATAL) out of ReverifyMyDatabase() leaves\nshared memory in an un-cleaned-up state (buffer refcounts > 0).\nThere might also be other unaccounted-for problems, I suspect.\nI do not like his fix (heap_endscan before the elog(FATAL)) because\nit only plugs this specific hole in the dike, not the whole class of\nproblems.\n\nThe reason that the normal process-exit cleanup doesn't fix this is that\nReverifyMyDatabase() runs before we've entered the main loop in\nPostgresMain, and so elog() just goes straight to proc_exit() instead of\nlongjmp'ing to the error recovery code in PostgresMain.\n\nMy feeling is that proc_exit() ought to be prepared to do *everything*\nthat is needed to shut down a backend, and that in fact elog()'s\nbehavior for FATAL is bogus even in the post-startup case. It should\nbe sufficient to call proc_exit() to exit after a FATAL error. If it\nisn't, then the other half-dozen places that call proc_exit() are broken\ntoo.\n\nSimilarly, it's broken that PostgresMain needs to do several things\nbefore calling proc_exit. Those things ought to be done *by* proc_exit.\n\nAccordingly, I propose registering more cleanup routines with\non_proc_exit so that the right things happen in all exit cases, whether\nnormal, FATAL, or backend-startup; and removing the extra calls in\nelog() and PostgresMain().\n\nComments, objections?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2000 16:25:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cleaning up backend-exit cleanup"
}
]
|
[
{
"msg_contents": "In recent months there has been a great deal of speculation,\nconfusion and concern raised in the open source community over\nthe roles of commercial companies and the PostgreSQL project.\nThere has also been a lot of curiosity and noise around who \nand what PostgreSQL, Inc. (PgSQL) is... \n\nAs a response we have prepared a PDF document as a quick \noverview on who and what PgSQL is about, located at this URL:\nhttp://www.pgsql.com/OpenLetter12-00.pdf\n\nThe rest of this email is information provided to simply \naugment that one page summary ... \n\nYou know most of us already.\nCo-founders and steering committee members of the Global\nDevelopment Project -\nVadim Mikheev, Director: principal author of eRServer, WAL ...\nThomas Lockhart, Director: SQL, API, OO, Distributed \n computing, Analyst ...\nMarc Fournier, President: 'net apps, SSL, Apache, BSD...\nOur other Board Members -\nJeff MacDonald, VP Support: in Wolfville, NS and the person\n usually first in line to handle support inquiries sent to\n us from both our clients and the open source community.\nGeoff Davidson, CEO: avid business user (sales.org Inc.), \n personal & corporate Project $upporter, Advocate since '97.\n\nOur priority is on the best interests of PostgreSQL. Our \nbusiness focus is on supporting and promoting PostgreSQL \nimplementations. \nWe are also developing 'Five Levels of Learning' certified\ntraining programs and a bunch of other commercial training,\napplications, and support programs. These will be offered \nthrough a variety of channels, and should include \nUniversity/College credit standing within the next few months.\n\nOpen Source vs. Proprietary:\nWe advocate Open Source, BSD style :)\nWe will consider and develop short term (up to 24 month)\nproprietary applications and solutions where there is a strong\nbusiness and intellectual property case to be made. *all*\nproprietary developments that we are involved in *will* become\nopen source within two years of implementation, without\nexception.\n\nFunding:\nFriends, family and clients. We don't have $25million, or even\n$25,000 in funded reserves. We have paid our own way and\ncontributed freely to the community since Marc first provided\nhosting, support and other services to the Project, and we will\ncontinue to do so. To date the principals have invested several\nhundred thousand dollars in both hard cash and time, with\nalmost all of that work already residing in the current CVS \ntree. We are negotiating for substantial venture capital, \nand may accept this where it advances our ability to achieve\nour goals for PostgreSQL.\nWe aren't willing to consider becoming purveyers of\nvaporware products or vaporcare services. Nor are we\nlooking to spend $millions of other people's money on\nadvertising, webspeak and glitz. We understand the big\ndifferences between real value based dot-commerce and others\nwho may be cashing in on some dot-con offers out there - \nPgSQL will be value and Values based in all our efforts.\n\nReplication (initial functionality):\nSorry for the delay, it took longer to make it solid enough to\nput out to the community than we had intended, which means it\nalso cost us more than we'd originally expected... but it\nshould be available within the next few days... 
Open Source...\n\nHacker Relations:\nI don't really know how to do that, we're all hackers at\nPgSQL and the people who make our business decisions have\nworked with everyone in the community from the beginning.\n[you guessed it, hackers are still in charge here... and\nalways will be]\n\nTaking over the world (of Open Source DB solutions):\n*IS* our vision for PostgreSQL\n... but it isn't what PgSQL is about. We plan on creating\nemployment for tens of thousands of people supporting\nPostgreSQL. A few of those will probably even be working \ndirectly for us. The rest will find employment with our \nclients, our partners, or on their own as independent \nconsultants, software engineers and developers.\nWe aren't dragonslayers. Microsoft, Oracle, Informix, Sybase,\nSAP, MySQL, and many others already have strongly established \nand successful implementations that help to prove the value \nand use of difffent databases. While we do intend to ensure \nthat PostgreSQL apps will own the lion's share of the DBMS \nmarket within the next five years, it will be by growing the \nmarket... which means doubling or tripling the client base \nfor all our competitors at the same time. They will help to\nkeep us honest, and will always serve as a benchmark on value\nto ensure that we continue to provide better, stronger,\nfaster, and more cost-effective solutions, or recommend to\nclients and end-users the alternative that will...\n\nThe Future:\nWe have several projects in development that will extend the\nfunctions and capabilities of PostgreSQL well beyond anything\npresently available in databases or eCommerce as it is known\ntoday. Most of these will be open source from the beginning, a\nfew will be products that we offer to our clients and partners \nto recover our costs and support the people who work full time\n(well, some of them need 5 or 6 hours of sleep, but we \ndiscourage that while they are in the office), and to pay for\nour partner's software engineers who are committed to our \ncommercial or funded development efforts, like Replication.\n\nAccessibility:\nYou can contact any of us, anytime. If you haven't heard from us\nin a while it means we're working on something. If you're worried\nabout what that is, email us and we'll tell you as much as we\ncan - always the truth, and only the truth.\n\nParticipation:\nYou can start now if you want to work for free with us on any \nof our projects.\nYou can send us your resume if you want us to consider hiring\nyou, or referring you to one of our business partners that are\nhiring PostgreSQL talent.\nYou can join our group of partners and participate with us in\nthis exciting journey.\n\nOur role:\nProbably the most gratifying thing in any business is to see\npeople you admire, respect, or compete with adopting your\nbusiness, product or service models. Almost everyone in this\nsandbox now has similar service, support and partnership\nprograms to the ones we've been pioneering since becoming the\nfirst to provide commercial PostgreSQL support. I sure hope we \ngot those programs right to begin with... 
;)\nWe expect PostgreSQL to be the benchmark that DBMS apps are\ncompared to, and plan for PgSQL to be the benchmark that other\nopen source commercial companies are measured against.\n\nYour role:\nKeep us honest, challenge us to do more and do it better.\nRepresent the best interests of the open source communities in\neverything you do and ask for.\nPush the envelope, make PostgreSQL better, make the world a\nlittle better today because you made the effort.\nStart your own companies to support, promote and advance\nPostgreSQL.\nPostgreSQL has won Linux World and Linux Journal awards, \nbecause of what it has become through the efforts of hundreds\nof contributors in dozens of countries. It will only succeed \nif that passion, independence, and genius continues to drive \nthe application forward. That means, more than ever, that \n*you* are charged with the opportunity to make this a better,\nmore accessible and functional ORDBMS solution for the world.\n\nLet us know if you believe you can help us, or if you think \nwe can help you.\n\n-- \nGeoff Davidson, CFP, MBA | President\nCEO, PostgreSQL, Inc. | sales.org Inc.\nPhone: 416-410-4124 ext. 1 | 3982 Teakwood Drive\n Fax: 905-897-0409 | Mississauga, ON L5C 3T5\n--\nPostgreSQL, Inc. Canadian Office:\nPO Box 1648, 251 Main St, Suite 2\nWolfville, Nova Scotia, Canada, B0P 1X0\nPhone: 1-902-542-0713\n Fax : 1-902-542-5386\n",
"msg_date": "Fri, 15 Dec 2000 16:48:00 -0500",
"msg_from": "Geoff Davidson <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL, Inc. Open Letter to the PostgreSQL Global Development\n\tProject"
},
{
"msg_contents": "Hello,\n\nIs there a way to increment a record using an SQL command to make a\n\"ranking\" method\nfor values?\n\ni.e. When I select one result (one record) from an SQL query, I want to be\nable to alter an\ninteger record that is associated with that record so that I can order the\nresults\nfrom the next SQL query based on the integer value.\n\ngot that? :-) heh\n\nimage\ttitle\t| URL\t|keywords\t|101\nimage\ttitle\t| URL\t|keywords\t|101\nimage\ttitle\t| URL\t|keywords\t|101\nimage\ttitle\t| URL\t|keywords\t|101\nimage\ttitle\t| URL\t|keywords\t|101\nimage\ttitle\t| URL\t|keywords\t|101\nimage\ttitle\t| URL\t|keywords\t|101\nimage\ttitle\t| URL\t|keywords\t|101\nimage\ttitle\t| URL\t|keywords\t|101\n\n",
"msg_date": "Tue, 19 Dec 2000 19:14:47 -0700",
"msg_from": "\"Dan Harrington\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "incrementing values"
}
]
|
[
{
"msg_contents": "> We'd still want XID keys for the locks that are used to wait for a\n> particular transaction to complete (eg when waiting to update \n> a tuple). I think that's OK since VACUUM doesn't need to hold any\n> such locks, but it'd probably mean making separate lmgr API entry\n> points for those locks as opposed to normal table-level locks.\n\nIn this case XID is used as key in LOCKTAG, ie in lock identifier,\nbut we are going to change XIDTAG, ie just holder identifier.\nNo new entry will be required.\n\nVadim\n",
"msg_date": "Fri, 15 Dec 2000 14:44:33 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: TOAST-table vacuuming (was Re: Idea for reducing pl anning time) "
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>> We'd still want XID keys for the locks that are used to wait for a\n>> particular transaction to complete (eg when waiting to update \n>> a tuple). I think that's OK since VACUUM doesn't need to hold any\n>> such locks, but it'd probably mean making separate lmgr API entry\n>> points for those locks as opposed to normal table-level locks.\n\n> In this case XID is used as key in LOCKTAG, ie in lock identifier,\n> but we are going to change XIDTAG, ie just holder identifier.\n> No new entry will be required.\n\nOh, OK. What say I rename the data structure to HOLDERTAG or something\nlike that, so it's more clear what it's for? Any suggestions for a\nname?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2000 18:08:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TOAST-table vacuuming (was Re: Idea for reducing pl anning time) "
}
]
|
[
{
"msg_contents": "> I can work on this if you don't have time to...\n\nI have no -:)\n\nVadim\n",
"msg_date": "Fri, 15 Dec 2000 14:45:10 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: TOAST-table vacuuming (was Re: Idea for reducing pl anning time) "
}
]
|
[
{
"msg_contents": "peter=# select 1 as current;\n old\n-----\n 1\n\nThis is now the inverse of what it used to do, but it's still not what it\n*should* do. I see someone already tried to fix that (keywords.c 1.76 ->\n1.77, TODO list), but he should try again.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 15 Dec 2000 23:53:53 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "CURRENT/OLD keywords still broken"
},
{
"msg_contents": "> peter=# select 1 as current;\n> old\n> -----\n> 1\n> \n> This is now the inverse of what it used to do, but it's still not what it\n> *should* do. I see someone already tried to fix that (keywords.c 1.76 ->\n> 1.77, TODO list), but he should try again.\n\nThat was me. The old code did old -> current, so I changed it to do\ncurrent -> old. How else can I fix this? Attached is the old patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: keywords.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/keywords.c,v\nretrieving revision 1.76\nretrieving revision 1.77\ndiff -c -r1.76 -r1.77\n*** keywords.c\t2000/06/12 03:40:30\t1.76\n--- keywords.c\t2000/06/12 19:40:41\t1.77\n***************\n*** 9,17 ****\n *\n * IDENTIFICATION\n <<<<<<< keywords.c\n! *\t $Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/keywords.c,v 1.76 2000/06/12 03:40:30 momjian Exp $\n =======\n! *\t $Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/keywords.c,v 1.76 2000/06/12 03:40:30 momjian Exp $\n >>>>>>> 1.73\n *\n *-------------------------------------------------------------------------\n--- 9,17 ----\n *\n * IDENTIFICATION\n <<<<<<< keywords.c\n! *\t $Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/keywords.c,v 1.77 2000/06/12 19:40:41 momjian Exp $\n =======\n! *\t $Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/keywords.c,v 1.77 2000/06/12 19:40:41 momjian Exp $\n >>>>>>> 1.73\n *\n *-------------------------------------------------------------------------\n***************\n*** 77,82 ****\n--- 77,84 ----\n \t{\"createdb\", CREATEDB},\n \t{\"createuser\", CREATEUSER},\n \t{\"cross\", CROSS},\n+ \t/* for portability with old rules bjm 2000-06-12 */\n+ \t{\"current\", OLD},\n \t{\"current_date\", CURRENT_DATE},\n \t{\"current_time\", CURRENT_TIME},\n \t{\"current_timestamp\", CURRENT_TIMESTAMP},\n***************\n*** 183,189 ****\n \t{\"off\", OFF},\n \t{\"offset\", OFFSET},\n \t{\"oids\", OIDS},\n! \t{\"old\", CURRENT},\n \t{\"on\", ON},\n \t{\"only\", ONLY},\n \t{\"operator\", OPERATOR},\n--- 185,191 ----\n \t{\"off\", OFF},\n \t{\"offset\", OFFSET},\n \t{\"oids\", OIDS},\n! \t{\"old\", OLD},\n \t{\"on\", ON},\n \t{\"only\", ONLY},\n \t{\"operator\", OPERATOR},",
"msg_date": "Fri, 15 Dec 2000 17:58:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT/OLD keywords still broken"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> peter=# select 1 as current;\n> old\n> -----\n> 1\n\n> This is now the inverse of what it used to do, but it's still not what it\n> *should* do. I see someone already tried to fix that (keywords.c 1.76 ->\n> 1.77, TODO list), but he should try again.\n\nWe should rip out the whole current/old aliasing, IMHO. CURRENT has\nbeen unsupported for a version or two now, hasn't it?\n\nI had that on my personal TODO list, but I was going to leave it till\nafter 7.1 because I had mistakenly thought that CURRENT was still a\nkeyword in 7.0. But it wasn't, was it? Bruce, why did you put in\nthat mapping?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2000 18:05:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT/OLD keywords still broken "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> That was me. The old code did old -> current, so I changed it to do\n> current -> old. How else can I fix this? Attached is the old patch.\n\nBut CURRENT was strictly an internal token name, not a string the user\ncould actually see. So there's no need to have\n+ \t/* for portability with old rules bjm 2000-06-12 */\n+ \t{\"current\", OLD},\n\nThe only way that there would be a compatibility problem would be if\nruleutils.c had been set up to print CURRENT, but it wasn't:\n\n*** 1278,1284 ****\n quote_identifier(rte->relname));\n else if (!strcmp(rte->ref->relname, \"*NEW*\"))\n appendStringInfo(buf, \"new.\");\n! else if (!strcmp(rte->ref->relname, \"*CURRENT*\"))\n appendStringInfo(buf, \"old.\");\n else\n appendStringInfo(buf, \"%s.\",\n--- 1278,1284 ----\n quote_identifier(rte->relname));\n else if (!strcmp(rte->ref->relname, \"*NEW*\"))\n appendStringInfo(buf, \"new.\");\n! else if (!strcmp(rte->ref->relname, \"*OLD*\"))\n appendStringInfo(buf, \"old.\");\n else\n\nNEW and OLD are what the user see, and have been for awhile. So there's\nno compatibility issue here.\n\n regards, tom lane\n",
"msg_date": "Fri, 15 Dec 2000 18:15:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT/OLD keywords still broken "
},
{
"msg_contents": "OK, compatibility mapping removed.\n\n> Bruce Momjian <[email protected]> writes:\n> > That was me. The old code did old -> current, so I changed it to do\n> > current -> old. How else can I fix this? Attached is the old patch.\n> \n> But CURRENT was strictly an internal token name, not a string the user\n> could actually see. So there's no need to have\n> + \t/* for portability with old rules bjm 2000-06-12 */\n> + \t{\"current\", OLD},\n> \n> The only way that there would be a compatibility problem would be if\n> ruleutils.c had been set up to print CURRENT, but it wasn't:\n> \n> *** 1278,1284 ****\n> quote_identifier(rte->relname));\n> else if (!strcmp(rte->ref->relname, \"*NEW*\"))\n> appendStringInfo(buf, \"new.\");\n> ! else if (!strcmp(rte->ref->relname, \"*CURRENT*\"))\n> appendStringInfo(buf, \"old.\");\n> else\n> appendStringInfo(buf, \"%s.\",\n> --- 1278,1284 ----\n> quote_identifier(rte->relname));\n> else if (!strcmp(rte->ref->relname, \"*NEW*\"))\n> appendStringInfo(buf, \"new.\");\n> ! else if (!strcmp(rte->ref->relname, \"*OLD*\"))\n> appendStringInfo(buf, \"old.\");\n> else\n> \n> NEW and OLD are what the user see, and have been for awhile. So there's\n> no compatibility issue here.\n> \n> regards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 15 Dec 2000 18:36:32 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT/OLD keywords still broken"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > peter=# select 1 as current;\n> > old\n> > -----\n> > 1\n> >\n> > This is now the inverse of what it used to do, but it's still not what it\n> > *should* do. I see someone already tried to fix that (keywords.c 1.76 ->\n> > 1.77, TODO list), but he should try again.\n>\n> That was me. The old code did old -> current, so I changed it to do\n> current -> old. How else can I fix this? Attached is the old patch.\n\nMaybe add another branch into the SpecialRuleRelation grammar rule with\nCURRENT instead of OLD.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 16 Dec 2000 00:39:22 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CURRENT/OLD keywords still broken"
}
]
|
[
{
"msg_contents": "Tom, I see from the CVS logs you say:\n\n Subselects in FROM clause, per ISO syntax: FROM (SELECT ...) [AS] alias.\n (Don't forget that an alias is required.) Views reimplemented as expanding\n to subselect-in-FROM. Grouping, aggregates, DISTINCT in views actually\n work now (he says optimistically). No UNION support in subselects/views\n yet, but I have some ideas about that. Rule-related permissions checking\n moved out of rewriter and into executor.\n \nDoes this mean the rewrite system no longer handles views?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 15 Dec 2000 17:59:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Views as FROM subselects"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, I see from the CVS logs you say:\n> Subselects in FROM clause, per ISO syntax: FROM (SELECT ...) [AS] alias.\n> (Don't forget that an alias is required.) Views reimplemented as expanding\n> to subselect-in-FROM. Grouping, aggregates, DISTINCT in views actually\n> work now (he says optimistically). No UNION support in subselects/views\n> yet, but I have some ideas about that. Rule-related permissions checking\n> moved out of rewriter and into executor.\n \n> Does this mean the rewrite system no longer handles views?\n\nSure it does. It got a lot simpler though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2000 18:09:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views as FROM subselects "
}
]
|
[
{
"msg_contents": "I've been getting these for about 6 hours now:\n\ncvs server: [18:58:27] waiting for anoncvs's lock in /home/projects/pgsql/cvsroot/pgsql\n\nCan someone kill off this lock so I can commit?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 16 Dec 2000 01:07:00 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "anoncvs's lock"
}
]
|
[
{
"msg_contents": "Is ther any info on using toast as of yet, and if so where is it hidden to?\n\n\n\n\n\n\n\n\nIs ther any info on using toast as of yet, \nand if so where is it hidden \nto?",
"msg_date": "Fri, 15 Dec 2000 18:52:44 -0800",
"msg_from": "\"mike\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "TOAST documentation"
},
{
"msg_contents": "\"mike\" <[email protected]> writes:\n> Is ther any info on using toast as of yet, and if so where is it hidden to?\n\nThere isn't any yet, but you don't really need it. Just declare\nyour tables and away you go. No User-Serviceable Parts Inside ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2000 22:25:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TOAST documentation "
}
]
|
[
{
"msg_contents": "\nHow do I change to digest? The instructions on the website are wrong.\n\n=======================================================================\nPatrick Dunford, Christchurch, NZ - http://pdunford.godzone.net.nz/\n\n Rejoice with those who rejoice; mourn with those who mourn.\n -- Romans 12:15\nhttp://www.heartlight.org/cgi-shl/todaysverse.cgi?day=20001215\n=======================================================================\nCreated by Mail2Sig - http://pdunford.godzone.net.nz/software/mail2sig/\n\n",
"msg_date": "Sat, 16 Dec 2000 17:43:29 +1300",
"msg_from": "\"Patrick Dunford\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Digest subscription"
}
]
|
[
{
"msg_contents": "InQuent Technologies (www.inquent.com) has been using Postgresql for quite a while now on various projects. It also has space in several large data centres. After a brief discussion with my manager, it should be feasible to offer a server and bandwidth to accomplish things which are needed should they be needed.\n\nPlease reply with hardware and bandwidth (per month) required, as well as how it will help you out (thereby helping us out) so that I can make a proper business case out of it for upper management.\n\nIf you (collective core) feels that PostgreSQL has all the resources it requires at the moment, then please ignore this message.\n\nThanks,\n Rod Taylor\n Developer -- InQuent Technologies\n\n\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the truth, and what really happened.",
"msg_date": "Sat, 16 Dec 2000 11:47:27 -0500",
"msg_from": "\"Rod Taylor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Co-Location"
},
{
"msg_contents": "\"Rod Taylor\" <[email protected]> writes:\n> If you (collective core) feels that PostgreSQL has all the resources it req=\n> uires at the moment, then please ignore this message.\n\nRod, I don't think the project has any need for full-up hosting as such,\nbut access to unusual platforms is frequently useful for chasing\nportability problems. If your machines are running something other than\na run-of-the-mill setup (which I'd define as Linux or *BSD on Intel),\nit might be useful to have access to them for testing. An unprivileged\nuser account with adequate disk space and access to the usual\ndevelopment tools (compiler, debugger, etc) would be all we'd need.\nLet us know if you think you can contribute along that line.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Dec 2000 12:57:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Co-Location "
}
]
|
[
{
"msg_contents": "hi, there!\n\ntest=> create table foo(id int primary key);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'foo_pkey'\nfor table 'foo'\nCREATE\ntest=> insert into foo values(1);\nINSERT 88959 1\ntest=> create table bar(id int references foo);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nCREATE\ntest=> begin;\nBEGIN\ntest=> insert into bar values(1);\nINSERT 88976 1\ntest=>\n\nafter that (we did not issued commit or rollback on the connection)\nin another psql:\n\ntest=> begin;\nBEGIN\ntest=> insert into bar values(1);\n...and this transaction locks up until we finish first transaction\n\nif we insert different values no locking occur.\nthis happens on both postgresql 7.03 and 7.1-beta1\n\n/fjoe\n\n",
"msg_date": "Sat, 16 Dec 2000 23:03:50 +0600 (NS)",
"msg_from": "Max Khon <[email protected]>",
"msg_from_op": true,
"msg_subject": "locking bug?"
},
{
"msg_contents": "\nThis is because the first transaction has a lock on the\nrow in foo due to the references constrain, it currently\ngrabs the lock to prevent the pk row from being\nremoved after we tested its existance but before the transaction\nclosed. There's been talk about using some kind of dirty reads to drop\nthe need for the lock, but that hasn't been done yet.\n\nOn Sat, 16 Dec 2000, Max Khon wrote:\n\n> hi, there!\n> \n> test=> create table foo(id int primary key);\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'foo_pkey'\n> for table 'foo'\n> CREATE\n> test=> insert into foo values(1);\n> INSERT 88959 1\n> test=> create table bar(id int references foo);\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\n> check(s)\n> CREATE\n> test=> begin;\n> BEGIN\n> test=> insert into bar values(1);\n> INSERT 88976 1\n> test=>\n> \n> after that (we did not issued commit or rollback on the connection)\n> in another psql:\n> \n> test=> begin;\n> BEGIN\n> test=> insert into bar values(1);\n> ...and this transaction locks up until we finish first transaction\n> \n> if we insert different values no locking occur.\n> this happens on both postgresql 7.03 and 7.1-beta1\n> \n> /fjoe\n> \n\n",
"msg_date": "Sat, 16 Dec 2000 18:05:50 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locking bug?"
}
]
|
[
{
"msg_contents": "Hi.\n\nI've still got something I can't seem to get. In my test cases with simple\ntables the first uint16 of tuple data after the header contained the length\nof the tuple. In this case I can't seem to figure out what the value F24D\nstands for when I'd expect it's length to be 0800.\n\nThe first tuple in my table has:\nOID: 6155665\nt_cmin: 32494973\nt_cmax: 0\nt_xmin: 32494324\nt_xmax: 32495742\nt_ctid: 55181312:82\nt_infomask: A503\nBitmap: 3F00000000F2\nAttributes: 7\nData Offset: 36\n\nThe flags for this tuple say:\nHEAP_MOVED_IN\nHEAP_UPDATED\nHEAP_XMAX_COMMITTED\nHEAP_XMIN_INVALID\nHEAP_HASVARLENA\nHEAP_HASNULL\n\n\nTuple Data:\nF24D 0000 FFFF FFFF 1300 0000 4E65 7720 4D61 696C 2046 6F6C 6465 7200\n9F00 0000 9F00 0000 48A2 1800\n\nThe schema is:\n Attribute | Type | Modifier\n-------------+-------------+------------------------------------------------\n--------\n userid | integer | not null\n folderid | integer | not null default\nnextval('folders_folderid_seq'::text)\n foldername | varchar(25) |\n messages | integer |\n newmessages | integer |\n foldersize | integer |\n popinfo | integer |\nIndices: folder_folderid_idx,\n folders_pkey\n\nthanks\n\n-Michael\n\n",
"msg_date": "Sat, 16 Dec 2000 14:33:22 -0500",
"msg_from": "\"Michael Richards\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuple data"
},
{
"msg_contents": "\"Michael Richards\" <[email protected]> writes:\n> I've still got something I can't seem to get. In my test cases with simple\n> tables the first uint16 of tuple data after the header contained the length\n> of the tuple.\n\nThat's not right --- AFAIR there is no length in the tuple data. You\nmust use the length from the 'page item' pointer that points to this\ntuple if you want to know the total tuple length.\n\nIf you were testing with tables containing single varlena columns, then\nyou may have seen the varlena datum's length word and taken it for total\nlength of the tuple --- but it's only total length of that one column.\n\nYour example dump looks like F24D 0000 is userid, FFFF FFFF is folderid,\nand 1300 0000 is the varlena length word for foldername.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Dec 2000 14:57:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuple data "
},
{
"msg_contents": "Michael Richards wrote:\n> \n> Hi.\n> \n> I've still got something I can't seem to get. In my test cases with simple\n> tables the first uint16 of tuple data after the header contained the length\n> of the tuple. In this case I can't seem to figure out what the value F24D\n> stands for when I'd expect it's length to be 0800.\n\nI'm not sure, but you may see some part of the NULL bitmap. \nIIRC it started at a quite illogical place, is suspect it was at byte 31\nbut \nit still reserved 4bytes for each 32 fields after byte 32\n\n> The first tuple in my table has:\n...\n> Bitmap: 3F00 0000 00F2\n> Attributes: 7\n\nyou should have only 4 bytes of bitmap for 7 real attributes\n\n> Data Offset: 36\n\nthats' right 32+4\n\n----------\nHannu\n",
"msg_date": "Sat, 16 Dec 2000 22:09:27 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuple data"
},
{
"msg_contents": "> That's not right --- AFAIR there is no length in the tuple data. You\n> must use the length from the 'page item' pointer that points to this\n> tuple if you want to know the total tuple length.\n\nOops, I meant attribute length...\n\n> If you were testing with tables containing single varlena columns, then\n> you may have seen the varlena datum's length word and taken it for total\n> length of the tuple --- but it's only total length of that one column.\n\nYes, I obviously had assumed that this length was common to all types (I was\ntesting with varchars before).\n\nI presume then that I get the sizes based on some system tables. What query\nshould I run to give me the layout (in the order it's on disk) and the size\nof each non-varlen attribute?\n\n> Your example dump looks like F24D 0000 is userid, FFFF FFFF is folderid,\n> and 1300 0000 is the varlena length word for foldername.\n\nThis is correct.\n\nthanks\n-Michael\n\n",
"msg_date": "Sat, 16 Dec 2000 15:14:29 -0500",
"msg_from": "\"Michael Richards\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuple data "
},
{
"msg_contents": "Michael Richards wrote:\n> \n> > That's not right --- AFAIR there is no length in the tuple data. You\n> > must use the length from the 'page item' pointer that points to this\n> > tuple if you want to know the total tuple length.\n> \n> Oops, I meant attribute length...\n> \n> > If you were testing with tables containing single varlena columns, then\n> > you may have seen the varlena datum's length word and taken it for total\n> > length of the tuple --- but it's only total length of that one column.\n> \n> Yes, I obviously had assumed that this length was common to all types (I was\n> testing with varchars before).\n> \n> I presume then that I get the sizes based on some system tables. What query\n> should I run to give me the layout (in the order it's on disk) and the size\n> of each non-varlen attribute?\n\nselect * from pg_attribute\n where attrelid = (select oid from pg_class where relname = 'tablename')\n order by attnum;\n\nthen look up types by attypid to find the types or just look at attlen\n==-1 for varlena types\n\nselect * from pg_type where oid = 23; -- gives info for int type\nselect * from pg_type where oid = 1043; -- varchar\n\n\n\n--------\nHannu\n",
"msg_date": "Sat, 16 Dec 2000 22:20:50 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuple data"
},
{
"msg_contents": "pg_attribute tells you the types and ordering of the attributes\n(columns) of a table. Then see pg_type for the size and alignment\nof each type.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Dec 2000 15:29:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuple data "
},
{
"msg_contents": "> I'm not sure, but you may see some part of the NULL bitmap.\n> IIRC it started at a quite illogical place, is suspect it was at byte 31\n> but\n> it still reserved 4bytes for each 32 fields after byte 32\nSometimes the t_hoff value in the tuple header is 32 which seems to indicate\nno NULL bitmap. This really makes me wonder what happens when you ALTER\nTABLE ADD COLUMN on a table since it doesn't seem to take more than O(1)\ntime. Perhaps it is assumed if the attribute count is less than the actual\nnumber of attributes then the last ones are NULL and no NULL map is\nrequired.\n\n> > The first tuple in my table has:\n> ...\n> > Bitmap: 3F00 0000 00F2\n> > Attributes: 7\n>\n> you should have only 4 bytes of bitmap for 7 real attributes\n\nYes you are correct, my error. To find the bitmap length I was doing:\nfor (int i=0;i<header->t_hoff-30;i++)\nWhere if I were able to count it should have been:\nfor (int i=0;i<header->t_hoff-32;i++)\n\n-Michael\n\n",
"msg_date": "Sat, 16 Dec 2000 15:30:16 -0500",
"msg_from": "\"Michael Richards\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuple data"
},
{
"msg_contents": "\"Michael Richards\" <[email protected]> writes:\n> Sometimes the t_hoff value in the tuple header is 32 which seems to indicate\n> no NULL bitmap.\n\nThere's no null bitmap unless the HASNULLS infomask bit is set.\n\n> This really makes me wonder what happens when you ALTER\n> TABLE ADD COLUMN on a table since it doesn't seem to take more than O(1)\n> time. Perhaps it is assumed if the attribute count is less than the actual\n> number of attributes then the last ones are NULL and no NULL map is\n> required.\n\nALTER ADD COLUMN doesn't touch any tuples, and you're right that it's\ncritically dependent on heap_getattr returning NULL when an attribute\nbeyond the number of attributes actually present in a tuple is accessed.\nThat's a fragile and unclean implementation IMHO --- see past traffic\non this list.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Dec 2000 21:23:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuple data "
},
{
"msg_contents": "Tom Lane wrote:\n> \n>> ALTER ADD COLUMN doesn't touch any tuples, and you're right that it's\n> critically dependent on heap_getattr returning NULL when an attribute\n> beyond the number of attributes actually present in a tuple is accessed.\n> That's a fragile and unclean implementation IMHO --- see past traffic\n> on this list.\n\nShort of redesigning the whole storage format I can see no better way to\nallow \nALTER ADD COLUMN in any reasonable time. And I cna see no place where\nthis is \nmore \"fragile and unclean implementation\" than any other in postgres -- \nOTOH it is quite hard for me to \"see the past traffic on this list\" as\nmy \n\"PgSQL HACKERS\" mail folder is too big for anything else then grep ;)\n\nThe notion that anything not stored is NULL seems so natural to me that\nit \nis very hard to find any substantial flaw or fragility with it.\n\n--------------\nHannu\n",
"msg_date": "Mon, 18 Dec 2000 00:27:05 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuple data"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> Tom Lane wrote:\n> >\n> >> ALTER ADD COLUMN doesn't touch any tuples, and you're right that it's\n> > critically dependent on heap_getattr returning NULL when an attribute\n> > beyond the number of attributes actually present in a tuple is accessed.\n> > That's a fragile and unclean implementation IMHO --- see past traffic\n> > on this list.\n> \n> Short of redesigning the whole storage format I can see no better way to\n> allow\n> ALTER ADD COLUMN in any reasonable time. And I cna see no place where\n> this is\n> more \"fragile and unclean implementation\" than any other in postgres --\n> OTOH it is quite hard for me to \"see the past traffic on this list\" as\n> my\n> \"PgSQL HACKERS\" mail folder is too big for anything else then grep ;)\n>\n\nI don't remember the traffic either.\nIIRC,I objected to Tom at this point in pgsql-bugs recently.\nI think it's very important for dbms that ALTER ADD COLUMN\ntouches tuples as less as possible.\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Mon, 18 Dec 2000 09:10:21 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuple data"
},
{
"msg_contents": "Hiroshi Inoue <[email protected]> writes:\n>> Tom Lane wrote:\n>>>>> ALTER ADD COLUMN doesn't touch any tuples, and you're right that it's\n>>>> critically dependent on heap_getattr returning NULL when an attribute\n>>>> beyond the number of attributes actually present in a tuple is accessed.\n>>>> That's a fragile and unclean implementation IMHO --- see past traffic\n>>>> on this list.\n\n> I don't remember the traffic either.\n> IIRC,I objected to Tom at this point in pgsql-bugs recently.\n\nThat was the traffic I was recalling ;-)\n\n> I think it's very important for dbms that ALTER ADD COLUMN\n> touches tuples as less as possible.\n\nI disagree. The existing ADD COLUMN implementation only works for\nappending columns at the end of tuples; it can't handle inserting\na column. To make it usable for inherited tables requires truly\nhorrendous kluges (as you well know). IMHO we'd be far better off\nto rewrite ADD COLUMN so that it does go through and change all the\ntuples, and then we could get rid of the hackery that tries --- not\nvery successfully --- to deal with inconsistent column orders between\nparent and child tables.\n\nI have a similar opinion about DROP COLUMN ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Dec 2000 20:05:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuple data "
},
{
"msg_contents": "Considering how often you actually change the structure of a database, I\ndon't mind waiting for such a reorganisation to take place, however it would\nstill be nice if it could be done in O(1) time because it would minimise the\namount of downtime required for structure changes.\n\nWhat are the cases where the current implementation does not handle it\nproperly?\n\nRestructuring all the tables (inherited too) would require either 2x the\nspace or lots of hackery to take care of situations where there isn't enough\nroom for a larger null bitmap. This hackery seems more complicated than just\nhaving alter look for inherited tables and add the column to those as well.\n\nYou could define a flag or something so a deleted column could be so flagged\nand\nALTER TABLE DELETE COLUMN\nwould run just as fast. Vacuum could then take care of cleaning out these\ncolumns. If you wanted to make it really exciting, how about searching for a\ndeleted column for the ADD column. Touch all the tuples by zeroing that\ncolumn and finally update pg_attribute. Nothing would be more fun than 2 way\nfragmentation :)\n\n-Michael\n\n----- Original Message -----\nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Hiroshi Inoue\" <[email protected]>\nCc: \"Hannu Krosing\" <[email protected]>; \"Michael Richards\"\n<[email protected]>; <[email protected]>\nSent: Sunday, December 17, 2000 8:05 PM\nSubject: Re: [HACKERS] Tuple data\n\n\n> Hiroshi Inoue <[email protected]> writes:\n> >> Tom Lane wrote:\n> >>>>> ALTER ADD COLUMN doesn't touch any tuples, and you're right that\nit's\n> >>>> critically dependent on heap_getattr returning NULL when an attribute\n> >>>> beyond the number of attributes actually present in a tuple is\naccessed.\n> >>>> That's a fragile and unclean implementation IMHO --- see past traffic\n> >>>> on this list.\n>\n> > I don't remember the traffic either.\n> > IIRC,I objected to Tom at this point in pgsql-bugs recently.\n>\n> That was the traffic I was recalling ;-)\n>\n> > I think it's very important for dbms that ALTER ADD COLUMN\n> > touches tuples as less as possible.\n>\n> I disagree. The existing ADD COLUMN implementation only works for\n> appending columns at the end of tuples; it can't handle inserting\n> a column. To make it usable for inherited tables requires truly\n> horrendous kluges (as you well know). IMHO we'd be far better off\n> to rewrite ADD COLUMN so that it does go through and change all the\n> tuples, and then we could get rid of the hackery that tries --- not\n> very successfully --- to deal with inconsistent column orders between\n> parent and child tables.\n>\n> I have a similar opinion about DROP COLUMN ...\n>\n> regards, tom lane\n\n",
"msg_date": "Sun, 17 Dec 2000 20:37:03 -0500",
"msg_from": "\"Michael Richards\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuple data "
},
{
"msg_contents": "\"Michael Richards\" <[email protected]> writes:\n> What are the cases where the current implementation does not handle it\n> properly?\n\nInheritance.\n\nCREATE TABLE parent (a, b, c);\n\nCREATE TABLE child (z) INHERITS (parent);\n\nALTER TABLE parent ADD COLUMN (d);\n\nWith the current implementation you now have column order a,b,c,d in the\nparent, and a,b,c,z,d in the child. This is seriously broken for a\nnumber of reasons, not least being that pg_dump can't realistically be\nexpected to reproduce that state.\n\nI don't really buy the complaint about \"it'll take 2x the space\". So\nwhat? You'll likely expend that anyway trying to load reasonable data\ninto the new column. If we implemented ADD COLUMN in a less klugy\nfashion, we could at least support loading a DEFAULT value into the\ncolumn (not to mention allowing it to be NOT NULL). More to the point,\nI don't think that using 2x space is a sufficient justification for the\ncomplexity and fragility that are imposed *throughout* the system in\norder to make ADD COLUMN's life easy. You pay those hidden costs every\nday you use Postgres, even if you've never done an ADD COLUMN in your\nlife.\n\n> You could define a flag or something so a deleted column could be so flagged\n> and ALTER TABLE DELETE COLUMN would run just as fast.\n\nHiroshi already tried that; you can find the vestiges of his attempt in\ncurrent sources (look for _DROP_COLUMN_HACK__). Again, the cost to the\nrest of the system strikes me as far more than I care to pay.\n\nIn the end it's a judgment call --- my judgment is that making these\nfeatures fast is not worth the implementation effort and\nunderstandability/reliability penalties that ensue. I think we would\nbe better off spending our effort on other things.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Dec 2000 20:58:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuple data "
},
{
"msg_contents": "This is what I assumed the problem to be but I wasn't sure if there would be\nmore to it or not.\n\nMy question now is: Should the order in which columns are physically stored\nmatter?\n\nSince the details of where to find the columns in the tuple data are stored\nin pg_attribute, I'd think this is a place where the storage layer should be\nfree to store it as it likes. Consider as a performance enhancement\nshuffling all the variable length columns to the end of the table. This\nwould save having to look at the size of all the variable length columns in\norder to examine a fixed length column.\n\nObviously since I only have a brief understanding of how stuff works I'm\nrelying on you to point out whether this is even a valid suggestion.\n\n-Michael\n\n> Inheritance.\n>\n> CREATE TABLE parent (a, b, c);\n>\n> CREATE TABLE child (z) INHERITS (parent);\n>\n> ALTER TABLE parent ADD COLUMN (d);\n>\n> With the current implementation you now have column order a,b,c,d in the\n> parent, and a,b,c,z,d in the child. This is seriously broken for a\n> number of reasons, not least being that pg_dump can't realistically be\n> expected to reproduce that state.\n\n\n",
"msg_date": "Sun, 17 Dec 2000 21:22:32 -0500",
"msg_from": "\"Michael Richards\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuple data "
},
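The per-column layout metadata Michael mentions can be inspected directly in the system catalogs; a quick sketch, reusing the child table from Tom's example:

    SELECT attname, attnum, attlen, attalign
      FROM pg_attribute
     WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'child')
       AND attnum > 0
     ORDER BY attnum;
    -- attnum is the column's position; attlen is -1 for variable length
    -- types, which is what makes the proposed shuffling attractive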
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <[email protected]> writes:\n> >> Tom Lane wrote:\n> >>>>> ALTER ADD COLUMN doesn't touch any tuples, and you're right that it's\n> >>>> critically dependent on heap_getattr returning NULL when an attribute\n> >>>> beyond the number of attributes actually present in a tuple is accessed.\n> >>>> That's a fragile and unclean implementation IMHO --- see past traffic\n> >>>> on this list.\n> \n> > I don't remember the traffic either.\n> > IIRC,I objected to Tom at this point in pgsql-bugs recently.\n> \n> That was the traffic I was recalling ;-)\n> \n> > I think it's very important for dbms that ALTER ADD COLUMN\n> > touches tuples as less as possible.\n> \n> I disagree. The existing ADD COLUMN implementation only works for\n> appending columns at the end of tuples; it can't handle inserting\n> a column.\n\nColumn order isn't essential in rdbms.\nIsn't it well known that it's not preferable to use\n'select *','insert' without column list etc.. in production\napplications ?\n\n> To make it usable for inherited tables requires truly\n> horrendous kluges (as you well know). \n\nLogical/physical attribute numbers solves it naturally.\n\n> IMHO we'd be far better off\n> to rewrite ADD COLUMN so that it does go through and change all the\n> tuples, and then we could get rid of the hackery that tries --- not\n> very successfully --- to deal with inconsistent column orders between\n> parent and child tables.\n> \n\nWe couldn't live without ALTER ADD COLUMN and it's\nvery critical for me to be able to ADD COLUMN even\nwhen the target table is at full work.\nIt has been one of my criteria how cool the dbms is.\nFortunately PostgreSQL has been cool but ....\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Mon, 18 Dec 2000 11:22:55 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuple data"
},
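The practice Hiroshi alludes to, spelled out against Tom's example table (a trivial sketch; the values are placeholders):

    -- robust: columns are matched by name, so physical order is irrelevant
    INSERT INTO parent (a, b, c) VALUES (1, 2, 3);
    SELECT a, b, c FROM parent;

    -- fragile: both statements silently depend on the physical column order
    INSERT INTO parent VALUES (1, 2, 3);
    SELECT * FROM parent;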
{
"msg_contents": "On Sunday 17 December 2000 19:10, Hiroshi Inoue wrote:\n> Hannu Krosing wrote:\n> > Tom Lane wrote:\n> > >> ALTER ADD COLUMN doesn't touch any tuples, and you're right that it's\n> > >\n> > > critically dependent on heap_getattr returning NULL when an attribute\n> > > beyond the number of attributes actually present in a tuple is\n> > > accessed. That's a fragile and unclean implementation IMHO --- see past\n> > > traffic on this list.\n> >\n> > Short of redesigning the whole storage format I can see no better way to\n> > allow\n> > ALTER ADD COLUMN in any reasonable time. And I cna see no place where\n> > this is\n> > more \"fragile and unclean implementation\" than any other in postgres --\n> > OTOH it is quite hard for me to \"see the past traffic on this list\" as\n> > my\n> > \"PgSQL HACKERS\" mail folder is too big for anything else then grep ;)\n>\n> I don't remember the traffic either.\n\nThis is kind of a lame comment, but the pgsql- mail lists are archived at \nwww.mail-archive.com. You can search lots of archived mail lists there.\n\n\n> IIRC,I objected to Tom at this point in pgsql-bugs recently.\n> I think it's very important for dbms that ALTER ADD COLUMN\n> touches tuples as less as possible.\n>\n> Regards.\n> Hiroshi Inoue\n\n-- \n-------- Robert B. Easter [email protected] ---------\n- CompTechNews Message Board http://www.comptechnews.com/ -\n- CompTechServ Tech Services http://www.comptechserv.com/ -\n---------- http://www.comptechnews.com/~reaster/ ------------\n",
"msg_date": "Sun, 17 Dec 2000 21:46:38 -0500",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuple data"
},
{
"msg_contents": "Hiroshi Inoue wrote :\n\n[ ... ]\n\n> Column order isn't essential in rdbms.\n\n<Nitpicking>\n\nA relation (a table) is a subset of the Cartesain cross-product of the \ndefinition domains of the attributes (columns). Cartesian product being \na commutative operation, \"order of columns\" does not really exists. Period.\n\nIf you impose an order relationship, you *add* inforation to the \nstructure. That may be OK, but you can't rely on relational algebra to \nguarantee your results. You'll have to manage it yourself. (And, yes, \nthere is relevant algebra for this, too ...).\n\n</Nitpicking>\n\n> Isn't it well known that it's not preferable to use\n> 'select *','insert' without column list etc.. in production\n> applications ?\n\n100% agreed. Such a notation is an abbreviation. Handy, but dangerous. \nIMHO, such checking can (should ?) be done by an algorithm checking for \ncolumn *names* before sending the \"insert\" command.\n\nA partial workaround : inserting in a view containing only the relevant \ncolumns, in a suitable (and known) order.\n\n[ Back to lurking ... ]\n\n",
"msg_date": "Mon, 18 Dec 2000 09:13:01 +0100",
"msg_from": "\"Emmanuel Charpentier,,,\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuple data"
},
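Emmanuel's nitpick in notation, as a brief sketch with D_a, D_b, D_c standing for the attribute domains:

    R \subseteq D_a \times D_b \times D_c

Since the Cartesian product commutes (up to isomorphism), a column order is information layered on top of the relation rather than part of it.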
{
"msg_contents": "Hiroshi Inoue <[email protected]> writes:\n>> To make it usable for inherited tables requires truly\n>> horrendous kluges (as you well know). \n\n> Logical/physical attribute numbers solves it naturally.\n\nMaybe. At this point that's a theory without experimental evidence\nto back it up ;-). I'm still concerned about how widespread/intrusive\nthe changes will need to be.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Dec 2000 10:30:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuple data "
}
]
|
[
{
"msg_contents": "Here is the list of features in 7.1.\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nRelease 7.1\n\nThis release focuses on removing limitations that have existed in the\nPostgreSQL code for many years.\n\nMajor changes in this release:\n\n\tWrite-ahead Log(WAL) - To maintain database consistency in case\nof an operating system crash, previous releases of PostgreSQL have\nforced all data modifications to disk before each transaction commit. \nWith WAL, only one log file must be flushed to disk, greatly improving\nperformance. If you have been using -F in previous releases to disable\ndisk flushes, you may want to consider discontinuing its use.\n\n\tTOAST - Previous releases had an 8k (or 32k) row length limit.\nThis limit made storage of long text fields difficult. With TOAST, long\nrows of any length can be stored with good performance.\n\n\tOuter Joins - We now support outer joins. The UNION/NOT IN\nworkaround for outer joins is no longer required. We use the SQL92\nouter join syntax.\n\n\tFunction Manager - The previous C function manager did not\nhandle NULLs properly, nor did it support 64-bit CPU's. The new\nfunction manager does. You can continue using your old custom\nfunctions, but you may want to rewrite them in the future to use the new\nfunction manager call interface.\n\n\tComplex Queries - A large number of complex queries that were\nunsupported in previous releases now work. Many combinations of views,\naggregates, UNION, LIMIT, cursors, subqueries, and inherited tables\nnow work properly. Inherited tables are now accessed by default. \nSubqueries in FROM are now supported.\n\nMigration to v7.0\n\n A dump/restore using pg_dump is required for those wishing to migrate\n data from any previous release.\n\n\n\nLast updated from CVS logs: 2000-12-11\n\nBug Fixes\n---------\nMany multi-byte/Unicode/locale fixes (Tatsuo and others)\nMore reliable ALTER TABLE RENAME (Tom)\nKerberos V fixes (David Wragg)\nFix for INSERT INTO...SELECT where targetlist has subqueries (Tom)\nPrompt username/password on standard error (Bruce)\nLarge objects inv_read/inv_write fixes (Tom)\nFixes for to_char(), to_date(), to_ascii(), and to_timestamp() (Karel, \n Daniel Baldoni)\nPrevent query expressions from leaking memory (Tom)\nAllow UPDATE of arrays elements (Tom)\nWake up lock waiters during cancel (Hiroshi)\nFix rare cursor crash when using hash join (Tom)\nFix for DROP TABLE/INDEX in rolled-back transaction (Hiroshi)\nFix psql crash from \\l+ if MULTIBYTE enabled (Peter E)\nFix truncation of rule names during CREATE VIEW (Ross Reedstrom)\nFix PL/perl (Alex Kapranoff)\nDisallow LOCK on views (Mark Holloman)\nDisallow INSERT/UPDATE/DELETE on views (Mark Holloman)\nDisallow DROP RULE, CREATE INDEX, TRUNCATE on views (Mark Holloman)\nAllow PL/pgSQL accept non-ASCII identifiers (Tatsuo)\nAllow views to proper handle GROUP BY, aggregates, DISTINCT (Tom)\nFix rare failure with TRUNCATE command (Tom)\nAllow UNION/INTERSECT/EXCEPT to be used with ALL, subqueries, views, \n DISTINCT, ORDER BY, SELECT...INTO (Tom)\nFix parser failures during aborted transactions (Tom)\nAllow temporary relations to properly clean up indexes (Bruce)\nFix VACUUM problem with moving rows in same page (Tom)\nModify pg_dump so it dumps only user-defined items, not system-defined (Philip)\nAllow LIMIT in VIEW (Tom)\nRequire cursor FETCH to honor LIMIT 
(Tom)\nAllow PRIMARY/FOREIGN Key definitions on inherited columns (Stephan)\nAllow ORDER BY, LIMIT in sub-selects (Tom)\nAllow UNION in CREATE RULE (Tom)\nMake DROP TABLE rollback-able (Tom)\nStore initdb collation in pg_control so collation cannot be changed (Tom)\nFix INSERT...SELECT with rules (Tom)\nFix FOR UPDATE inside views and subselects (Tom)\nFix OVERLAPS operators conform to SQL92 spec regarding NULLs (Tom)\nFix lpad() and rpad() to handle length less than input string (Tom)\n\nEnhancements\n------------\nAdd OUTER JOINs (Tom)\nFunction manager overhaul (Tom)\nAllow ALTER TABLE RENAME on indexes(Tom)\nImprove CLUSTER(Tom)\nImprove ps status display for more platforms(Marc)\nImprove CREATE FUNCTION failure message(Ross)\nJDBC improvements (Peter, Travis Bauer, Christopher Cain, William Webber,\n Gunnar)\nGrand Unified Configuration scheme/GUC. Many options can now be set in \n data/postgresql.conf, postmaster/postgres flags, or SET commands (Peter E)\nImproved handling of file descriptor cache (Tom)\nNew warning code about auto-created table alias entries (Bruce)\nOverhaul initdb process (Tom, Peter E)\nOverhaul of inherited tables; inherited tables now accessed by default;\n new ONLY keyword prevents it (Chris Bitmead, Tom)\nODBC cleanups/improvements (Nick Gorham, Stephan Szabo, Zoltan Kovacs, \n Michael Fork)\nAllow renaming of temp tables (Tom)\nOverhaul memory manager contexts (Tom)\npg_dump uses CREATE USER or CREATE GROUP rather using COPY (Peter E)\nOverhaul pg_dump (Philip Warner)\nAllow pg_hba.conf secondary password file to specify username (Peter E)\nAllow TEMPORARY or TEMP keyword when creating temporary tables (Bruce)\nNew memory leak checker (Karel)\nNew SET SESSION CHARACTERISTICS and SET DefaultXactIsoLevel (Thomas, Peter E)\nAllow nested block comments (Thomas)\nAdd WITHOUT TIME ZONE type qualifier (Thomas)\nNew ALTER TABLE ADD CONSTRAINT (Stephan)\nUse NUMERIC accumulators for INTEGER aggregates (Tom)\nOverhaul aggregate code (Tom)\nNew VARIANCE and STDDEV() aggregates\nImprove dependency ordering of pg_dump (Philip)\nNew pg_restore command (Philip)\nNew pg_dump tar output option (Philip)\nNew pg_dump of large objects (Philip)\nNew ESCAPE option to LIKE (Thomas)\nNew case-insensitive LIKE - ILIKE (Thomas)\nAllow functional indexes to use binary-compatible type (Tom)\nAllow SQL functions to be used in more contexts (Tom)\nNew pg_config utility (Peter E)\nNew PL/pgSQL EXECUTE command which allows dynamic SQL and utility statements\n (Jan)\nNew PL/pgSQL GET DIAGNOSTICS statement for SPI value access (Jan)\nNew quote_identifiers() and quote_literal() functions (Jan)\nNew ALTER TABLE table OWNER TO user command (Mark Holloman)\nAllow subselects in FROM, i.e. FROM (SELECT ...) [AS] alias (Tom)\nUpdate PyGreSQL to version 3.1 (D'Arcy)\nStore tables as files named by OID (Vadim)\nNew SQL function setval(seq,val,bool) for use in pg_dump (Philip)\nNew pg_service.conf file (Mario Weilguni)\nRequire DROP VIEW to remove views, no DROP TABLE (Mark)\nAllow DROP VIEW view1, view2 (Mark)\nAllow multiple objects in DROP INDEX, DROP RULE, and DROP TYPE (Tom)\nAllow automatic conversion to Unicode (Tatsuo)\nNew /contrib/pgcrypto hashing functions (Marko Kreen)\nNew pg_dumpall --accounts-only option (Peter E)\nNew CHECKPOINT command for WAL which creates new WAL log file (Vadim)\nNew AT TIME ZONE syntax (Thomas)\nAllow location of Unix domain socket to be configurable (David J. MacKenzie)\nAllow postmaster to listen on a specific IP address (David J. 
MacKenzie)\nAllow socket path name to be specified in hostname by using leading slash\n (David J. MacKenzie)\nAllow CREATE DATABASE to specify template database (Tom)\nNew template0 database that contains no user additions(Tom)\n\nTypes\n-----\nFix INET/CIDR type ordering and add new functions (Tom)\nMake OID behave as an unsigned type (Tom)\nAllow BIGINT as synonym for INT8 (Peter E)\nNew int2 and int8 comparison operators (Tom)\nNew BIT and BIT VARYING types (Adriaan Joubert, Tom)\nCHAR() no longer faster than VARCHAR() because of TOAST (Tom)\n\nPerformance\n-----------\nWrite-Ahead Log (WAL) to provide crash recovery with less performance \n overhead (Vadim)\nANALYZE stage of VACUUM no longer exclusively locks table (Bruce)\nReduced file seeks (Denis Perchine)\nImprove BTREE code for duplicate keys (Tom)\nStore all large objects in a single operating system file (Denis Perchine, Tom)\nImprove memory allocation performance (Karel, Tom)\n\nSource Code\n-----------\nNew function manager call conventions (Tom)\nSGI portability fixes (David Kaelbling)\nNew configure --enable-syslog option (Marc)\nNew BSDI README (Bruce)\nconfigure script moved to top level, not /src (Peter E)\nMakefile/configuration/compilation cleanups (Peter E)\nNew configure --with-python option (Peter E)\nSolaris cleanups (Peter E)\nOverhaul /contrib Makefiles (Karel)\nNew OpenSSL configuration option (Magnus, Peter E)\nAIX fixes (Andreas)\nNew heap_open(), heap_openr() API (Tom)\nRemove colon and semi-colon operators (Thomas)\nNew pg_class.relkind value for views (Mark Holloman)\nRename ichar() to chr() (Karel)\nNew documentation for btrim(), ascii(), chr(), repeat() (Karel)\nFixes for NT/Cygwin (Pete Forman)\nAIX port fixes (Andreas)\nNew BeOS port (David Reid, Cyril Velter)\nAdd proofreader's changes to docs (Addison-Wesley, Bruce)\nNew Alpha spinlock code (Adriaan Joubert, Compaq)\nUnixware port overhaul (Peter E)\nNew Darwin/Mac OSX port (Bruce Hartzler)\nNew FreeBSD Alpha port (Alfred)\nOverhaul shared memory segments (Tom)\nAdd IBM S/390 support (Neale Ferguson)",
"msg_date": "Sat, 16 Dec 2000 15:16:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.1 features list"
},
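A sketch of the SQL92 outer join syntax the notes refer to; the table and column names are invented for illustration:

    SELECT e.name, d.name
      FROM employee e LEFT OUTER JOIN department d ON (e.dept_id = d.id);
    -- employees with no matching department come back with d.name as NULL;
    -- before 7.1 this result required the UNION/NOT IN workaround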
{
"msg_contents": "We're working hardly on bugfixes for GiST (I've posted patch for 7.0.3)\nand probably could finish in 1-2 weeks.\n\n\tregards,\n\t\tOleg\nOn Sat, 16 Dec 2000, Bruce Momjian wrote:\n\n> Date: Sat, 16 Dec 2000 15:16:22 -0500 (EST)\n> From: Bruce Momjian <[email protected]>\n> To: PostgreSQL-development <[email protected]>,\n> PostgreSQL-documentation <[email protected]>\n> Subject: [HACKERS] 7.1 features list\n> \n> Here is the list of features in 7.1.\n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 16 Dec 2000 23:34:35 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 7.1 features list"
},
{
"msg_contents": "At 3:16 PM -0500 12/16/00, Bruce Momjian wrote:\n>Here is the list of features in 7.1.\n>New Darwin/Mac OSX port (Bruce Hartzler)\n\nNot to be a snob, but I probably did 80% of this.\n\n(BTW- tons of stuff at www.postgresql.org is busted. Searching mailing list archives for example.)\n\n-pmb\n\n--\n\"Every time you provide an option, you're asking the user to make a decision.\n That means they will have to think about something and decide about it.\n It's not necessarily a bad thing, but, in general, you should always try to\n minimize the number of decisions that people have to make.\"\n http://joel.editthispage.com/stories/storyReader$51\n\n\n",
"msg_date": "Sat, 16 Dec 2000 17:14:42 -0800",
"msg_from": "Peter Bierman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 7.1 features list"
},
{
"msg_contents": "Peter Bierman <[email protected]> writes:\n> At 3:16 PM -0500 12/16/00, Bruce Momjian wrote:\n>> Here is the list of features in 7.1.\n>> New Darwin/Mac OSX port (Bruce Hartzler)\n\n> Not to be a snob, but I probably did 80% of this.\n\nBruce had submitted an earlier patch, but IIRC Peter's version was the\none that got applied. (Or was Peter doing mopup work on Bruce's first\ncut? I forget.) At the very least Peter should get 50% credit...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Dec 2000 21:46:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 7.1 features list "
},
{
"msg_contents": "Hi,\n\nI checked 7.1 feature list and didn't find any mention about GiST\nbut there are changes in GiST code. Who is a maintainer of GiST code ?\nWe have several problems with GiSt and would like to communicate\nwith somebody who understand these code.\n\n\tRegards,\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 18 Dec 2000 00:08:10 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Who is a maintainer of GiST code ?"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> I checked 7.1 feature list and didn't find any mention about GiST\n> but there are changes in GiST code. Who is a maintainer of GiST code ?\n\nYou are ;-). If you expect to find someone who understands GiST better\nthan you, you're probably out of luck.\n\nI recall having made a number of changes that applied to all of the\nindex access methods, including GiST --- but I was just changing\nsimilar code in all the methods. I don't claim to know anything\nabout GiST in particular.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Dec 2000 17:53:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ? "
},
{
"msg_contents": "On Sat, 16 Dec 2000, Peter Bierman wrote:\n\n> At 3:16 PM -0500 12/16/00, Bruce Momjian wrote:\n> >Here is the list of features in 7.1.\n> >New Darwin/Mac OSX port (Bruce Hartzler)\n> \n> Not to be a snob, but I probably did 80% of this.\n> \n> (BTW- tons of stuff at www.postgresql.org is busted. Searching mailing\n> list archives for example.)\n\nPlease provide URLs where you are trying to search ... we did extensive\nwork over the past few weeks to speed up searching, and I tend randomly to\nmake sure things are still running fine, and haven't had any problems with\neither speed or broken links ...\n\nits possible its one of the mirror sites?\n\n",
"msg_date": "Sun, 17 Dec 2000 21:43:15 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 7.1 features list"
},
{
"msg_contents": "At 9:43 PM -0400 12/17/00, The Hermit Hacker wrote:\n>On Sat, 16 Dec 2000, Peter Bierman wrote:\n>\n>> (BTW- tons of stuff at www.postgresql.org is busted. Searching mailing\n>> list archives for example.)\n>\n>Please provide URLs where you are trying to search ... we did extensive\n>work over the past few weeks to speed up searching, and I tend randomly to\n>make sure things are still running fine, and haven't had any problems with\n>either speed or broken links ...\n\nClicking on the \"search\" pic/link at http://www.postgresql.org/ always brought up a dialog (IE5-Mac) that said \"the attempt to load http://www.postgresql.org/search.mpl\" failed.\n\nBut just now I pasted that URL into the location, and it loads fine. And now the pic/link works fine too. I have no idea what's with that.\n\n\nJust now I went to http://www.postgresql.org/mhonarc/pgsql-hackers/\n\ntyped 'foo' in the search field, and I get a dialog a few seconds later:\n\n\"The attempt to load:\"Accessing URL: http://www.postgresql.org/mhonarc/pgsql-hackers/search.mpl?<stuff>\" (runs offscreen).\n\nMaybe it's some javascript that's trying to load a \"still loading\" page, and has a bogus URL with some explanatory text prepended? (Note the URL:\"Accessing URL:http...\")\n\nI loaded \"http://www.postgresql.org/mhonarc/pgsql-hackers/search.mpl?\" by hand, and then went back and tried a search again, and now it works.\n\nDunno what's going on here. Since it never worked for me, I never tried loading the URL by hand. Obviously it's more complicated than the outright broken link that I thought it was, sorry about that.\n\n-pmb\n\n--\n\"Every time you provide an option, you're asking the user to make a decision.\n That means they will have to think about something and decide about it.\n It's not necessarily a bad thing, but, in general, you should always try to\n minimize the number of decisions that people have to make.\"\n http://joel.editthispage.com/stories/storyReader$51\n\n\n",
"msg_date": "Sun, 17 Dec 2000 18:11:09 -0800",
"msg_from": "Peter Bierman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 7.1 features list"
},
{
"msg_contents": "Yes, I will take a pass over the logs before final. As noted in the\nfile, the list is accurate as of December 11.\n\n> We're working hardly on bugfixes for GiST (I've posted patch for 7.0.3)\n> and probably could finish in 1-2 weeks.\n> \n> \tregards,\n> \t\tOleg\n> On Sat, 16 Dec 2000, Bruce Momjian wrote:\n> \n> > Date: Sat, 16 Dec 2000 15:16:22 -0500 (EST)\n> > From: Bruce Momjian <[email protected]>\n> > To: PostgreSQL-development <[email protected]>,\n> > PostgreSQL-documentation <[email protected]>\n> > Subject: [HACKERS] 7.1 features list\n> > \n> > Here is the list of features in 7.1.\n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 17 Dec 2000 23:08:01 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 7.1 features list"
},
{
"msg_contents": "> Peter Bierman <[email protected]> writes:\n> > At 3:16 PM -0500 12/16/00, Bruce Momjian wrote:\n> >> Here is the list of features in 7.1.\n> >> New Darwin/Mac OSX port (Bruce Hartzler)\n> \n> > Not to be a snob, but I probably did 80% of this.\n> \n> Bruce had submitted an earlier patch, but IIRC Peter's version was the\n> one that got applied. (Or was Peter doing mopup work on Bruce's first\n> cut? I forget.) At the very least Peter should get 50% credit...\n\nModified entry. Thanks for the correction:\n\n\tNew Darwin/Mac OSX port (Peter Bierman, Bruce Hartzler) \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 17 Dec 2000 23:09:18 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 7.1 features list"
},
{
"msg_contents": "> Oleg Bartunov <[email protected]> writes:\n> > I checked 7.1 feature list and didn't find any mention about GiST\n> > but there are changes in GiST code. Who is a maintainer of GiST code ?\n> \n> You are ;-). If you expect to find someone who understands GiST better\n> than you, you're probably out of luck.\n> \n> I recall having made a number of changes that applied to all of the\n> index access methods, including GiST --- but I was just changing\n> similar code in all the methods. I don't claim to know anything\n> about GiST in particular.\n\nI know I met someone who said they invented Gist. Tom, was that at the\nDatabase Summit? I can't think of that person's name now. I think\nthere are some papers at Berkeley or a web site that goes into it in\ndetail.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 17 Dec 2000 23:23:32 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Who is a maintainer of GiST code ?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I think\n> there are some papers at Berkeley or a web site that goes into it in\n> detail.\n\nI imagine there's some GiST stuff at the Berkeley papers repository\nhttp://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/\nbut I'd be surprised if it's more than an overview...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Dec 2000 23:30:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ? "
},
{
"msg_contents": "Peter Bierman <[email protected]> writes:\n> Just now I went to http://www.postgresql.org/mhonarc/pgsql-hackers/\n\n> typed 'foo' in the search field, and I get a dialog a few seconds later:\n\n> \"The attempt to load:\"Accessing URL: http://www.postgresql.org/mhonarc/pgsql-hackers/search.mpl?<stuff>\" (runs offscreen).\n\nOdd, the same experiment seems to work fine for me. Maybe a browser\ndependency? I'm using Netscape 4.75 on HPUX ...\n\n> Maybe it's some javascript\n\nI don't see any javascript on the loaded page.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Dec 2000 13:03:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] 7.1 features list "
},
{
"msg_contents": "On Mon, 18 Dec 2000, Tom Lane wrote:\n\n> Peter Bierman <[email protected]> writes:\n> > Just now I went to http://www.postgresql.org/mhonarc/pgsql-hackers/\n> \n> > typed 'foo' in the search field, and I get a dialog a few seconds later:\n> \n> > \"The attempt to load:\"Accessing URL: http://www.postgresql.org/mhonarc/pgsql-hackers/search.mpl?<stuff>\" (runs offscreen).\n> \n> Odd, the same experiment seems to work fine for me. Maybe a browser\n> dependency? I'm using Netscape 4.75 on HPUX ...\n\nJust went to the above URL using IE 5.5, types in 'foo' and it came back\nwith 909 matches found ...\n\n> > Maybe it's some javascript\n> \n> I don't see any javascript on the loaded page.\n\nnone used that I'm aware of either ...\n\n\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 18 Dec 2000 15:11:52 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] 7.1 features list "
},
{
"msg_contents": "OK, now I understand the situation. Another question:\nWho is a maintainer of Rtree code ? We have a problem with\nhandling NULL values in GiST. Any thought how NULL values\nare handle in Rtree.\n\n\tregards,\n\t\tOleg\nOn Sun, 17 Dec 2000, Bruce Momjian wrote:\n\n> Date: Sun, 17 Dec 2000 23:23:32 -0500 (EST)\n> From: Bruce Momjian <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>,\n> PostgreSQL-development <[email protected]>\n> Subject: Re: [HACKERS] Who is a maintainer of GiST code ?\n> \n> > Oleg Bartunov <[email protected]> writes:\n> > > I checked 7.1 feature list and didn't find any mention about GiST\n> > > but there are changes in GiST code. Who is a maintainer of GiST code ?\n> > \n> > You are ;-). If you expect to find someone who understands GiST better\n> > than you, you're probably out of luck.\n> > \n> > I recall having made a number of changes that applied to all of the\n> > index access methods, including GiST --- but I was just changing\n> > similar code in all the methods. I don't claim to know anything\n> > about GiST in particular.\n> \n> I know I met someone who said they invented Gist. Tom, was that at the\n> Database Summit? I can't think of that person's name now. I think\n> there are some papers at Berkeley or a web site that goes into it in\n> detail.\n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 18 Dec 2000 23:49:51 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ?"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> We have a problem with\n> handling NULL values in GiST. Any thought how NULL values\n> are handle in Rtree.\n\nAFAIR, none of the index access methods except btree handle NULLs at\nall --- they just ignore NULL values and don't store them in the index.\nFeel free to improve on that ;-). The physical representation of index\ntuples can handle NULLs, the problem is teaching the index logic where\nthey should go in the index.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Dec 2000 16:10:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Oleg Bartunov <[email protected]> writes:\n> > We have a problem with\n> > handling NULL values in GiST. Any thought how NULL values\n> > are handle in Rtree.\n> \n> AFAIR, none of the index access methods except btree handle NULLs at\n> all --- they just ignore NULL values and don't store them in the index.\n> Feel free to improve on that ;-). The physical representation of index\n> tuples can handle NULLs, the problem is teaching the index logic where\n> they should go in the index.\n> \n> regards, tom lane\n\n\nand I can't see why btree stores them (as it seems to do judging by the \nindex file size) - at least it does not use it for searching for \"IS\nNULL\"\n\n--8<--------8<--------8<--------8<--------8<--------8<--------8<--------8<------\n\nhannu=# explain select * from nulltest where i is null;\nNOTICE: QUERY PLAN:\n\nSeq Scan on nulltest (cost=0.00..293.80 rows=5461 width=8)\n\nEXPLAIN\nhannu=# explain select * from nulltest where i =1;\nNOTICE: QUERY PLAN:\n\nIndex Scan using nulltest_i_ndx on nulltest (cost=0.00..96.95 rows=164\nwidth=8)\n\n--8<--------8<--------8<--------8<--------8<--------8<--------8<--------8<------\n\nnulltest is a 16k record table with numbers 1 to 16384 in field i\n\nIf it just ignored them we would have a nice way to fake partial indexes\n- \njust define a function that returns field value or null and then index\non that ;)\n\n-----------\nHannu\n",
"msg_date": "Tue, 19 Dec 2000 02:04:02 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ?"
},
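A sketch of the trick Hannu describes at the end of his message, adding an invented flag column to his nulltest table. The function name is made up, and the WITH (iscachable) marking (which functional indexes of this era generally require) is our assumption, not something from the thread:

    CREATE FUNCTION only_flagged(int4, bool) RETURNS int4 AS
      'SELECT CASE WHEN $2 THEN $1 ELSE NULL END'
      LANGUAGE 'sql' WITH (iscachable);
    -- if the access method ignored NULLs, only rows with flag = true
    -- would actually be stored in the index
    CREATE INDEX nulltest_flagged_idx ON nulltest (only_flagged(i, flag));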
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> and I can't see why btree stores them (as it seems to do judging by the \n> index file size) - at least it does not use it for searching for \"IS\n> NULL\"\n\nThat's another thing that needs improvement ;-). Seems to me it should\nbe able to do that.\n\nThe reason why btree *has* to be able to deal with null entries is to\ncope with multi-column indexes; you don't want it refusing to index a\nrow at all just because some of the columns are null. The others don't\ncurrently handle multi-column indexes, so they're not really forced\nto deal with that issue.\n\n From a purely semantic point of view I'm not sure why Oleg is worried\nabout being able to store nulls in a GiST index ... seems like leaving\nthem out is OK, modulo the occasional complaint from VACUUM's\ninsufficiently intelligent tuple-count comparison ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Dec 2000 19:25:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ? "
},
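The multi-column case Tom describes, made concrete (names invented):

    CREATE TABLE person (last_name text, first_name text);
    CREATE INDEX person_name_idx ON person (last_name, first_name);
    -- btree must still index this row even though one key column is NULL;
    -- refusing to would leave the index inconsistent with the heap
    INSERT INTO person VALUES ('Lane', NULL);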
{
"msg_contents": "On Sat, 16 Dec 2000, Bruce Momjian wrote:\n\n> Here is the list of features in 7.1.\n\n\tOne thing that I think ought to be added is that with 7.1,\nPostgreSQL will compile out of the box (i.e. without any extra patches)\nfor Linux/Alpha. This might not be a big deal for most people, but for\nthose of who run pgsql on Linux/Alpha, it is, and I feel it at least\ndeserves a mention in the 7.1 feature list.\n\tI looked for it (i.e. grep -i alpha) in the list, but did not see\nit. Your choice which heading it goes under.\n\tAlso, I have not tested any recent snapshots or betas on\nLinux/Alpha lately, but I plan to shortly and will let the hackers list\nknow of any problems. I have every intention of making sure the 7.1\nrelease does indeed work out of box on Linux/Alpha. Thanks, TTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n",
"msg_date": "Mon, 18 Dec 2000 18:59:34 -0700 (MST)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.1 features list"
},
{
"msg_contents": "On Sun, Dec 17, 2000 at 11:30:23PM -0500, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > I think\n> > there are some papers at Berkeley or a web site that goes into it in\n> > detail.\n> \n> I imagine there's some GiST stuff at the Berkeley papers repository\n> http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/\n> but I'd be surprised if it's more than an overview...\n\nWell, there's this: http://gist.cs.berkeley.edu/\nand this: http://gist.cs.berkeley.edu/pggist/\n-- \nChristopher Masto Senior Network Monkey NetMonger Communications\[email protected] [email protected] http://www.netmonger.net\n\nFree yourself, free your machine, free the daemon -- http://www.freebsd.org/\n",
"msg_date": "Tue, 19 Dec 2000 13:33:58 -0500",
"msg_from": "Christopher Masto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ?"
},
{
"msg_contents": "On Tue, 19 Dec 2000, Hannu Krosing wrote:\n\n> Date: Tue, 19 Dec 2000 02:04:02 +0200\n> From: Hannu Krosing <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>,\n> Bruce Momjian <[email protected]>,\n> PostgreSQL-development <[email protected]>\n> Subject: Re: [HACKERS] Who is a maintainer of GiST code ?\n> \n> Tom Lane wrote:\n> > \n> > Oleg Bartunov <[email protected]> writes:\n> > > We have a problem with\n> > > handling NULL values in GiST. Any thought how NULL values\n> > > are handle in Rtree.\n> > \n> > AFAIR, none of the index access methods except btree handle NULLs at\n> > all --- they just ignore NULL values and don't store them in the index.\n> > Feel free to improve on that ;-). The physical representation of index\n> > tuples can handle NULLs, the problem is teaching the index logic where\n> > they should go in the index.\n> > \n> > regards, tom lane\n> \n> \n> and I can't see why btree stores them (as it seems to do judging by the \n> index file size) - at least it does not use it for searching for \"IS\n> NULL\"\n\n\nand what does this error means ?\n\ncreate table rtree_test ( r box );\ncopy rtree_test from stdin;\n\\N\n\\N\n\\N\n\\N\n........ total 10,000 NULLS\n\\.\n\ncreate index rtree_test_idx on rtree_test using rtree ( r );\n--ERROR: floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero\n\nseems rtree doesn't ignore NULL ?\n\n\n\tRegards,\n\t\tOleg\n\n\n> \n> --8<--------8<--------8<--------8<--------8<--------8<--------8<--------8<------\n> \n> hannu=# explain select * from nulltest where i is null;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on nulltest (cost=0.00..293.80 rows=5461 width=8)\n> \n> EXPLAIN\n> hannu=# explain select * from nulltest where i =1;\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using nulltest_i_ndx on nulltest (cost=0.00..96.95 rows=164\n> width=8)\n> \n> --8<--------8<--------8<--------8<--------8<--------8<--------8<--------8<------\n> \n> nulltest is a 16k record table with numbers 1 to 16384 in field i\n> \n> If it just ignored them we would have a nice way to fake partial indexes\n> - \n> just define a function that returns field value or null and then index\n> on that ;)\n> \n> -----------\n> Hannu\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 19 Dec 2000 22:25:21 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ?"
},
{
"msg_contents": "On Tue, 19 Dec 2000, Christopher Masto wrote:\n\n> Date: Tue, 19 Dec 2000 13:33:58 -0500\n> From: Christopher Masto <[email protected]>\n> To: Tom Lane <[email protected]>, Bruce Momjian <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>,\n> PostgreSQL-development <[email protected]>\n> Subject: Re: [HACKERS] Who is a maintainer of GiST code ?\n> \n> On Sun, Dec 17, 2000 at 11:30:23PM -0500, Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > > I think\n> > > there are some papers at Berkeley or a web site that goes into it in\n> > > detail.\n> > \n> > I imagine there's some GiST stuff at the Berkeley papers repository\n> > http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/\n> > but I'd be surprised if it's more than an overview...\n> \n> Well, there's this: http://gist.cs.berkeley.edu/\n> and this: http://gist.cs.berkeley.edu/pggist/\n\nThanks,\n\nwe do know this sites. We're working on implementation of\nRD (Russian Doll) Tree using GiST interface. Current GiST sources\nhave some bugs, some of them we already fixed and currently we're a \nworking with handling of NULL values. We're getting broken index for\ndata with NULLs. btw, how many people use GiST ? It would be nice\nto test our changes after we solve our problems.\n\n\tRegards,\n\n\t\tOleg\n\n> -- \n> Christopher Masto Senior Network Monkey NetMonger Communications\n> [email protected] [email protected] http://www.netmonger.net\n> \n> Free yourself, free your machine, free the daemon -- http://www.freebsd.org/\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 19 Dec 2000 23:26:25 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ?"
},
{
"msg_contents": "I added (Alpha) next to the mention of 64-bit CPUs on the Function\nManager section at the top.\n\n> On Sat, 16 Dec 2000, Bruce Momjian wrote:\n> \n> > Here is the list of features in 7.1.\n> \n> \tOne thing that I think ought to be added is that with 7.1,\n> PostgreSQL will compile out of the box (i.e. without any extra patches)\n> for Linux/Alpha. This might not be a big deal for most people, but for\n> those of who run pgsql on Linux/Alpha, it is, and I feel it at least\n> deserves a mention in the 7.1 feature list.\n> \tI looked for it (i.e. grep -i alpha) in the list, but did not see\n> it. Your choice which heading it goes under.\n> \tAlso, I have not tested any recent snapshots or betas on\n> Linux/Alpha lately, but I plan to shortly and will let the hackers list\n> know of any problems. I have every intention of making sure the 7.1\n> release does indeed work out of box on Linux/Alpha. Thanks, TTYL.\n> \n> ---------------------------------------------------------------------------\n> | \"For to me to live is Christ, and to die is gain.\" |\n> | --- Philippians 1:21 (KJV) |\n> ---------------------------------------------------------------------------\n> | Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n> ---------------------------------------------------------------------------\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Dec 2000 22:03:15 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: 7.1 features list"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> seems rtree doesn't ignore NULL ?\n\nHm, maybe not. There are explicit tests to ignore null inputs in hash\nindexes (hash/hash.c), and I'd just sort of assumed that rtree and gist\ndo the same.\n\nFWIW, your example doesn't seem to provoke an error in current sources;\nbut it does take quite a long time (far longer than building a btree\nindex on 10000 nulls). That makes me think that indexing nulls in rtree\nmight be a bad idea even if it works.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 15:00:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ? "
},
{
"msg_contents": "We finished (cross fingers) our changes in GiST code for 7.0.3\nand in next 2 days we plan to send patches for upcoming 7.1 release.\nIsn't this too late ? What else we need to submit ?\nWe have int4array contribution package which added index support for\ninteger arrays and it's probably should come to contrib directory.\nWe got about 3 orders of magnitude speedup using RD-Tree.\nProbably we need to add regression test for GiST. \n\nWe have following fixes for GiST (7.0.3):\n1. indices now store on disk in compressed form - was decompressed which\n causes broken index when using compression function\n (now compression of indices is really works)\n2. NULLs now don't broken index\n3. added support for keys with variable length - was fixed-length\n\nPatches and contribution package could be prepared in several days,\ndocumentation with some benchmarks and blurbs require more time.\nWe plan to use these last changes to speedup our full-text-search\n( killer application ) and write article to popularize GiST+Postgres.\nAny thoughts ?\n\n\tRegards,\n\t\tOleg\n\n\nOn Wed, 20 Dec 2000, Tom Lane wrote:\n\n> Date: Wed, 20 Dec 2000 15:00:21 -0500\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Hannu Krosing <[email protected]>, Bruce Momjian <[email protected]>,\n> PostgreSQL-development <[email protected]>\n> Subject: Re: [HACKERS] Who is a maintainer of GiST code ? \n> \n> Oleg Bartunov <[email protected]> writes:\n> > seems rtree doesn't ignore NULL ?\n> \n> Hm, maybe not. There are explicit tests to ignore null inputs in hash\n> indexes (hash/hash.c), and I'd just sort of assumed that rtree and gist\n> do the same.\n> \n> FWIW, your example doesn't seem to provoke an error in current sources;\n> but it does take quite a long time (far longer than building a btree\n> index on 10000 nulls). That makes me think that indexing nulls in rtree\n> might be a bad idea even if it works.\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 20 Dec 2000 23:42:36 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ? "
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> Probably we need to add regression test for GiST. \n\nThat would be nice, but let's not hold up the patches for it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 15:51:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ? "
},
{
"msg_contents": "> We finished (cross fingers) our changes in GiST code for 7.0.3\n> and in next 2 days we plan to send patches for upcoming 7.1 release.\n> Isn't this too late ? What else we need to submit ?\n\nNot too late, especially if it is fixing bugs. And even if not, anything\nwe can do to encourage active development and use of GiST is worth the\neffort imho. I'll guess that unless we are talking about some extreme,\nfar-reaching changes in the backend others will agree.\n\n> We have int4array contribution package which added index support for\n> integer arrays and it's probably should come to contrib directory.\n> We got about 3 orders of magnitude speedup using RD-Tree.\n> Probably we need to add regression test for GiST.\n> We have following fixes for GiST (7.0.3):\n> 1. indices now store on disk in compressed form - was decompressed which\n> causes broken index when using compression function\n> (now compression of indices is really works)\n> 2. NULLs now don't broken index\n> 3. added support for keys with variable length - was fixed-length\n> Patches and contribution package could be prepared in several days,\n> documentation with some benchmarks and blurbs require more time.\n> We plan to use these last changes to speedup our full-text-search\n> ( killer application ) and write article to popularize GiST+Postgres.\n> Any thoughts ?\n\nGo go go !!! :)\n\n - Thomas\n",
"msg_date": "Thu, 21 Dec 2000 06:00:57 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Oleg Bartunov <[email protected]> writes:\n> > seems rtree doesn't ignore NULL ?\n> \n> Hm, maybe not. There are explicit tests to ignore null inputs in hash\n> indexes (hash/hash.c), and I'd just sort of assumed that rtree and gist\n> do the same.\n> \n> FWIW, your example doesn't seem to provoke an error in current sources;\n> but it does take quite a long time (far longer than building a btree\n> index on 10000 nulls). That makes me think that indexing nulls in rtree\n> might be a bad idea even if it works.\n\nOr maybe just some optimisations done for large number of similar keys (\nprobabilistic page-splitting or some such ;) in btree are not done in\nrtree ?\n\n----------\nHannu\n",
"msg_date": "Thu, 21 Dec 2000 11:05:47 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ?"
},
{
"msg_contents": "Oleg Bartunov wrote:\n> \n> We finished (cross fingers) our changes in GiST code for 7.0.3\n\nAre tha patches for 7.0.3 already available ?\n\n> and in next 2 days we plan to send patches for upcoming 7.1 release.\n> Isn't this too late ? What else we need to submit ?\n> We have int4array contribution package which added index support for\n> integer arrays \n\nwhat exactly does it mean ?\n\ncan i now query for an int4array which contains number 42 or just \nan int array that = '{1,2,3}' with index support ?\n\n> and it's probably should come to contrib directory.\n> We got about 3 orders of magnitude speedup using RD-Tree.\n> Probably we need to add regression test for GiST.\n\nSome samples will do fine for start ;)\n\n> We have following fixes for GiST (7.0.3):\n> 1. indices now store on disk in compressed form - was decompressed which\n> causes broken index when using compression function\n> (now compression of indices is really works)\n> 2. NULLs now don't broken index\n> 3. added support for keys with variable length - was fixed-length\n> \n> Patches and contribution package could be prepared in several days,\n> documentation with some benchmarks and blurbs require more time.\n> We plan to use these last changes to speedup our full-text-search\n> ( killer application ) and write article to popularize GiST+Postgres.\n> Any thoughts ?\n\nGreat!\n\n-------\nHannu\n",
"msg_date": "Thu, 21 Dec 2000 11:12:41 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ?"
},
{
"msg_contents": "On Thu, 21 Dec 2000, Hannu Krosing wrote:\n\n> Date: Thu, 21 Dec 2000 11:12:41 +0200\n> From: Hannu Krosing <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Tom Lane <[email protected]>, Bruce Momjian <[email protected]>,\n> PostgreSQL-development <[email protected]>\n> Subject: Re: [HACKERS] Who is a maintainer of GiST code ?\n> \n> Oleg Bartunov wrote:\n> > \n> > We finished (cross fingers) our changes in GiST code for 7.0.3\n> \n> Are tha patches for 7.0.3 already available ?\n> \n\nyes, we could post it right now\n\n> > and in next 2 days we plan to send patches for upcoming 7.1 release.\n> > Isn't this too late ? What else we need to submit ?\n> > We have int4array contribution package which added index support for\n> > integer arrays \n> \n> what exactly does it mean ?\n> \n> can i now query for an int4array which contains number 42 or just \n> an int array that = '{1,2,3}' with index support ?\n> \n\nyes, you need to apply out patch to GiST and functions for\nRD-tree. I'll send you in private mail to conserve bandwidth.\nIt would be great if you test our code in your application.\n\n\n\tRegards,\n\n\t\tOleg\n> -------\n> Hannu\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 21 Dec 2000 14:45:10 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ?"
},
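A sketch of the kind of query Hannu is asking about, using the operator and opclass names we believe the intarray contrib package defines — treat the exact names as assumptions:

    CREATE TABLE docs (id int4, tags int4[]);
    CREATE INDEX docs_tags_idx ON docs USING gist (tags gist__int_ops);
    -- does the array contain the value 42?
    SELECT id FROM docs WHERE tags @ '{42}';
    -- does the array share any element with {1,2,3}?
    SELECT id FROM docs WHERE tags && '{1,2,3}';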
{
"msg_contents": "Oleg Bartunov wrote:\n> \n> On Thu, 21 Dec 2000, Hannu Krosing wrote:\n> \n> > Date: Thu, 21 Dec 2000 11:12:41 +0200\n> > From: Hannu Krosing <[email protected]>\n> > To: Oleg Bartunov <[email protected]>\n> > Cc: Tom Lane <[email protected]>, Bruce Momjian <[email protected]>,\n> > PostgreSQL-development <[email protected]>\n> > Subject: Re: [HACKERS] Who is a maintainer of GiST code ?\n> >\n> > Oleg Bartunov wrote:\n> > >\n> > > We finished (cross fingers) our changes in GiST code for 7.0.3\n> >\n> > Are tha patches for 7.0.3 already available ?\n> \n> yes, we could post it right now\n\nGreat!\n\n> > > and in next 2 days we plan to send patches for upcoming 7.1 release.\n> > > Isn't this too late ? What else we need to submit ?\n> > > We have int4array contribution package which added index support for\n> > > integer arrays\n> >\n> > what exactly does it mean ?\n> >\n> > can i now query for an int4array which contains number 42 or just\n> > an int array that = '{1,2,3}' with index support ?\n> >\n> \n> yes, you need to apply out patch to GiST and functions for\n> RD-tree. I'll send you in private mail to conserve bandwidth.\n> It would be great if you test our code in your application.\n\nThanks, I'm looking forwad to testing it.\n\n---------------\nHannu\n",
"msg_date": "Thu, 21 Dec 2000 20:05:40 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Who is a maintainer of GiST code ?"
},
{
"msg_contents": "Hi,\n\nwe've almost totally rewrite gist.c because old code and algorithm\nwere not suitable for variable size keys. I think it might be\nsubmitted into 7.1 beta source tree. We have fixed several bugs and\nmemory leaks. Version for 7.0.3 is also available.\nSampe application for contrib area - implementation RD-Tree and index support\nfor int arrays I'll submit later (Need some documentation).\n\n\tRegards,\n\n\t\tOleg\n\n\nHere is a README.gist\n--------------------------------\nNew version of gist.c for PostgreSQL 7.1\n\nNew feature:\n 1. Support of variable size keys - new algorithm of insertion to tree\n (GLI - gist layrered insertion). Previous algorithm was implemented\n as described in paper by Joseph M. Hellerstein et.al\n \"Generalized Search Trees for Database Systems\". This (old)\n algorithm was not suitable for variable size keys and could be\n not effective ( walking up-down ) in case of multiple levels split\nBug fixed:\n 1. fixed bug in gistPageAddItem - key values were written to disk\n uncompressed. This caused failure if decompression function\n does real job.\n 2. NULLs handling - we keep NULLs in tree. Right way is to remove them,\n but we don't know how to inform vacuum about index statistics. This is\n just cosmetic warning message (like in case with R-Tree),\n but I'm not sure how to recognize real problem if we remove NULLs\n and suppress this warning as Tom suggested.\n 3. various memory leaks\n\nAll our tests and Gene Selcov's regression tests passed ok.\nWe have version also for 7.0.3\nSample application which utilize RD-Tree for index support of\nint arrays is in contrib/intarray (will be submitted separately).\n\nTODO:\n\n1. Description of GLI algorithm\n2. regression test for GiST (based on RD-Tree)\n\nThis work was done by Teodor Sigaev ([email protected]) and\nOleg Bartunov ([email protected]).\n\n\n\nOn Sun, 17 Dec 2000, Tom Lane wrote:\n\n> Oleg Bartunov <[email protected]> writes:\n> > I checked 7.1 feature list and didn't find any mention about GiST\n> > but there are changes in GiST code. Who is a maintainer of GiST code ?\n>\n> You are ;-). If you expect to find someone who understands GiST better\n> than you, you're probably out of luck.\n>\n> I recall having made a number of changes that applied to all of the\n> index access methods, including GiST --- but I was just changing\n> similar code in all the methods. I don't claim to know anything\n> about GiST in particular.\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83",
"msg_date": "Wed, 10 Jan 2001 15:06:00 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "GiST for 7.1 !! "
},
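For context, once an operator class is installed, declaring a GiST index looks like this; the gist_box_ops opclass name is an assumption about companion contrib code, not something stated in the README:

    CREATE TABLE boxes (b box);
    CREATE INDEX boxes_b_idx ON boxes USING gist (b gist_box_ops);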
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> we've almost totally rewrite gist.c because old code and algorithm\n> were not suitable for variable size keys. I think it might be\n> submitted into 7.1 beta source tree.\n\nUrgh. Dropping in a total rewrite when we're already past beta3 doesn't\nstrike me as good project management practice --- especially if the\nrewrite was done to add features (ie variable-size keys) not merely\nfix bugs. I think it might be more prudent to hold this for 7.2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Jan 2001 21:49:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GiST for 7.1 !! "
},
{
"msg_contents": "On Wed, 10 Jan 2001, Tom Lane wrote:\n\n> Oleg Bartunov <[email protected]> writes:\n> > we've almost totally rewrite gist.c because old code and algorithm\n> > were not suitable for variable size keys. I think it might be\n> > submitted into 7.1 beta source tree.\n>\n> Urgh. Dropping in a total rewrite when we're already past beta3 doesn't\n> strike me as good project management practice --- especially if the\n> rewrite was done to add features (ie variable-size keys) not merely\n> fix bugs. I think it might be more prudent to hold this for 7.2.\n\nOK. If our changes will not go to 7.1, is't possible to create\nfeature archive and announce it somewhere. It would be nice if\npeople could test it. Anyway, I'll create web page with all\ndocs and patches. I'm afraid one more year to 7.2 is enough for\nGiST to die :-)\n\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 11 Jan 2001 13:09:19 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: GiST for 7.1 !! "
},
{
"msg_contents": "\nOleg ... how about a contrib/patches directory that we put this into for\nv7.1 release, so that ppl have access to it, and then we apply the patch\nfirst thing for part of v7.2?\n\n On Thu, 11 Jan 2001, Oleg Bartunov wrote:\n\n> On Wed, 10 Jan 2001, Tom Lane wrote:\n>\n> > Oleg Bartunov <[email protected]> writes:\n> > > we've almost totally rewrite gist.c because old code and algorithm\n> > > were not suitable for variable size keys. I think it might be\n> > > submitted into 7.1 beta source tree.\n> >\n> > Urgh. Dropping in a total rewrite when we're already past beta3 doesn't\n> > strike me as good project management practice --- especially if the\n> > rewrite was done to add features (ie variable-size keys) not merely\n> > fix bugs. I think it might be more prudent to hold this for 7.2.\n>\n> OK. If our changes will not go to 7.1, is't possible to create\n> feature archive and announce it somewhere. It would be nice if\n> people could test it. Anyway, I'll create web page with all\n> docs and patches. I'm afraid one more year to 7.2 is enough for\n> GiST to die :-)\n>\n> >\n> > \t\t\tregards, tom lane\n> >\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Thu, 11 Jan 2001 09:50:16 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: GiST for 7.1 !! "
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> Oleg ... how about a contrib/patches directory that we put this into for\n> v7.1 release, so that ppl have access to it, and then we apply the patch\n> first thing for part of v7.2?\n\nAnd have Mandrake ship postgresql-v7.1-GiST-1mdk.rpm by default ;)\n\nI would even vote for including the ability to index int(4) arrays in\nthe \nmain distribution and not in contrib, similar to the current state of\nplpgsq \nand other pl* - ie they are compiled by default but not \"activated\".\n\nAssumption that arrays are indexable seems to come up once or twice a \nmonth on the mailing lists. \n\n-------------------\nHannu\n",
"msg_date": "Thu, 11 Jan 2001 16:26:52 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: GiST for 7.1 !!"
}
]
|
[
{
"msg_contents": "I have just noticed that oper_select_candidate()'s first try at\nresolving unknown-type inputs (parse_oper.c lines 372-410 in current\nsources) is entirely redundant, because the case it is looking for\nhas already been tried by oper_exact(). I propose removing that\ncode, to make it more like func_select_candidate. Have I missed\nanything?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Dec 2000 16:59:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Redundant code in oper_select_candidate()"
}
]
|
[
{
"msg_contents": "Ok, just a small issue here:\nI'm running:\nselect attname,attlen,attalign from pg_attribute where attnum>0 and attrelid\n= (select oid from pg_class where relname = 'users') order by attnum;\n\nThis is to get the name, length of that attribute and the alignment.\nThe alignment seems to be wrong for type CHAR(1):\n postalcode | -1 | i\n gender | -1 | i\n yearofbirth | 2 | s\n\nGender is what I'm looking at and I think it's supposed to be 16 bit\naligned.\n\nHere is the data from that area of the tuple:\n0B00 0000 5435 5420 3257 3600 0500 0000 4600 5000\nThe postal code, 32 bit aligned extracts fine:\n'T5T 2W6'\nThis one is a female, the size is listed as 0500 or 5 (header plus the 1\nchar in it) It extracts as an 'F', but you can see the year of birth 80\ncomes only 6 bytes after the gender. Perhaps my original query for the\nalignments is wrong...\n\n-Michael\n\n",
"msg_date": "Sun, 17 Dec 2000 12:41:16 -0500",
"msg_from": "\"Michael Richards\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "More Tuple Madness"
},
{
"msg_contents": "\"Michael Richards\" <[email protected]> writes:\n> The alignment seems to be wrong for type CHAR(1):\n\nNo, the alignment is fine. A field's align constraint says where it has\nto start, not where the next field has to start. gender starts on a\n4-byte boundary and is 5 bytes long, then we have one byte wasted for\nalignment of yearofbirth, then yearofbirth starts on a 2-byte boundary.\nEveryone's happy.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Dec 2000 13:14:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More Tuple Madness "
},
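To make the rule above concrete, here is a minimal sketch (not PostgreSQL source; the ALIGN_UP macro and the attribute table are illustrative, and the lengths assume the 4-byte varlena header of the example tuple): each attribute starts at the next multiple of its own alignment, regardless of where the previous attribute ended.

#include <stdio.h>

/* Round "off" up to the next multiple of "align" (a power of two). */
#define ALIGN_UP(off, align) (((off) + (align) - 1) & ~((align) - 1))

int
main(void)
{
	/* name, total length in bytes, required alignment ('i' = 4, 's' = 2) */
	struct { const char *name; int len; int align; } atts[] = {
		{ "postalcode",  4 + 7, 4 },	/* varlena header + "T5T 2W6" -> 0x0B */
		{ "gender",      4 + 1, 4 },	/* varlena header + 'F' -> 0x05 */
		{ "yearofbirth", 2,     2 },
	};
	int i, off = 0;

	for (i = 0; i < 3; i++) {
		off = ALIGN_UP(off, atts[i].align);	/* where this field starts */
		printf("%-11s starts at offset %2d\n", atts[i].name, off);
		off += atts[i].len;	/* the next field may need padding first */
	}
	return 0;	/* prints offsets 0, 12 and 18, matching the hex dump */
}

Offsets 0, 12 and 18 line up exactly with the page bytes quoted in the original message: one pad byte after the 11-byte postalcode, one after the 5-byte gender.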
{
"msg_contents": "Oops, I guess I assumed that the alignment part was directly related to the\nnumber of bytes until the next attribute rather than the actual alignment.\n\nIs there any need for documentation on how this whole storage thing works?\nI'd be more than willing to write it up.\n\n-Michael\n\n----- Original Message -----\nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Michael Richards\" <[email protected]>\nCc: <[email protected]>\nSent: Sunday, December 17, 2000 1:14 PM\nSubject: Re: [HACKERS] More Tuple Madness\n\n\n> \"Michael Richards\" <[email protected]> writes:\n> > The alignment seems to be wrong for type CHAR(1):\n>\n> No, the alignment is fine. A field's align constraint says where it has\n> to start, not where the next field has to start. gender starts on a\n> 4-byte boundary and is 5 bytes long, then we have one byte wasted for\n> alignment of yearofbirth, then yearofbirth starts on a 2-byte boundary.\n> Everyone's happy.\n>\n> regards, tom lane\n\n",
"msg_date": "Sun, 17 Dec 2000 13:34:35 -0500",
"msg_from": "\"Michael Richards\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More Tuple Madness "
},
{
"msg_contents": "Write away ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Dec 2000 13:36:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More Tuple Madness "
},
{
"msg_contents": "Is this alignment relative to the beginning of the page or tuple, or even\nthe tuple data area (after the tuple header)?\n\n-Michael\n\n> \"Michael Richards\" <[email protected]> writes:\n> > The alignment seems to be wrong for type CHAR(1):\n>\n> No, the alignment is fine. A field's align constraint says where it has\n> to start, not where the next field has to start. gender starts on a\n> 4-byte boundary and is 5 bytes long, then we have one byte wasted for\n> alignment of yearofbirth, then yearofbirth starts on a 2-byte boundary.\n> Everyone's happy.\n>\n> regards, tom lane\n\n",
"msg_date": "Sun, 17 Dec 2000 15:49:40 -0500",
"msg_from": "\"Michael Richards\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Attribute Alignment"
},
{
"msg_contents": "\"Michael Richards\" <[email protected]> writes:\n> Is this alignment relative to the beginning of the page or tuple, or even\n> the tuple data area (after the tuple header)?\n\nWell, it's absolute: on machines that care about such things, you *will*\ncoredump if you try to fetch an int at a non-4-byte-aligned address.\n\nOur implementation forces tuple data area (also tuple header) to start\nat a MAXALIGN'd offset in the page, and we ensure that page buffers\nstart at MAXALIGN'd addresses. So the attribute access routines only\nhave to worry about suitable alignment of the field's offset within the\ntuple data. You'd get the same answer however you measured it, but\nI think the code usually does the alignment computation on the offset\nwithin the tuple data.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Dec 2000 17:45:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Attribute Alignment "
},
{
"msg_contents": "Okay. On this particular machine, the way I was loading in the page it\nproperly aligns the data to the page beginning, so I used that. I now have\nit extracting meaningful data from a variety of tables. The very last thing\nI need to do now is figure out where the code in the source is that stores\nand retrieves timestamp and datetime fields (as I'm unsure how they are\nencoded).\n\nHaving written this tool which is at least the basis for a complete table\ndata verification program (it's written in c++) I'm wondering if there is\nany chance of having it pointed to, linked to or otherwise made available?\nTime-permitting I may add to it in the future, although I hope I'll never\nhave to use it again :)\n\n-Michael\n\n> Well, it's absolute: on machines that care about such things, you *will*\n> coredump if you try to fetch an int at a non-4-byte-aligned address.\n>\n> Our implementation forces tuple data area (also tuple header) to start\n> at a MAXALIGN'd offset in the page, and we ensure that page buffers\n> start at MAXALIGN'd addresses. So the attribute access routines only\n> have to worry about suitable alignment of the field's offset within the\n> tuple data. You'd get the same answer however you measured it, but\n> I think the code usually does the alignment computation on the offset\n> within the tuple data.\n>\n> regards, tom lane\n\n",
"msg_date": "Sun, 17 Dec 2000 18:03:34 -0500",
"msg_from": "\"Michael Richards\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Attribute Alignment "
},
{
"msg_contents": "On Sun, Dec 17, 2000 at 06:03:34PM -0500, Michael Richards wrote:\n> \n> Having written this tool which is at least the basis for a complete table\n> data verification program (it's written in c++) I'm wondering if there is\n> any chance of having it pointed to, linked to or otherwise made available?\n> Time-permitting I may add to it in the future, although I hope I'll never\n> have to use it again :)\n\n\nAnyone have this code/tool Michael is talking about? I tried to contact\nhim directly, but one of his emails bounces, and the other has not\nresponded.\n\nRoss\n",
"msg_date": "Fri, 2 Mar 2001 10:38:55 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Attribute Alignment"
}
]
|
[
{
"msg_contents": "I've been sneaking a peek at the Great Bridge documentation regarding the\npg_dumpaccounts utility. You may recall that I added the pg_dumpall\n--accounts-only option to provide the same functionality. But it occurred\nto me that the name is wrong.\n\nThe way it reads in the GB docs is that pg_dumpaccounts will dump the\nglobal structures, whereas pg_dump dumps the local structures, and\npg_dumpall is sort of a wrapper that does both. (Not a bad idea really;\nif they would have discussed it we might even have known about it.)\n\nBut equating \"accounts\" and \"global\" is wrong (leaving aside the fact that\nthere's no such thing as an \"account\" is SQL or Postgres). If/when table\nspaces are implemented, a table space would probably be a global object.\nIf/when roles are implemented, then groups will cease to be a global\nobject.\n\nTo make a long story short, a more correct name for this option would be\n\"--globals-only\". Is it okay to change this?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 17 Dec 2000 19:24:46 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dumpall --accounts-only"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> To make a long story short, a more correct name for this option would be\n> \"--globals-only\". Is it okay to change this?\n\nOkay with me...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Dec 2000 13:39:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dumpall --accounts-only "
}
]
|
[
{
"msg_contents": "We need additions to alter_table.sgml for the new OWNER option mention\nin the features list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 17 Dec 2000 15:07:59 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Manual changes for ALTER TABLE OWNER"
},
{
"msg_contents": "On Sunday 17 December 2000 15:07, Bruce Momjian wrote:\n> We need additions to alter_table.sgml for the new OWNER option mention\n> in the features list.\n\nHere it is.\n\n-- \nMark Hollomon",
"msg_date": "Tue, 19 Dec 2000 17:41:59 -0500",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Manual changes for ALTER TABLE OWNER"
},
{
"msg_contents": "Thanks. Applied.\n\n> On Sunday 17 December 2000 15:07, Bruce Momjian wrote:\n> > We need additions to alter_table.sgml for the new OWNER option mention\n> > in the features list.\n> \n> Here it is.\n> \n> -- \n> Mark Hollomon\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Dec 2000 22:19:25 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Manual changes for ALTER TABLE OWNER"
}
]
|
[
{
"msg_contents": "Hi all,\n\nI've been running pgsql database since 6.3 with sizes ranging from a few\nmegabytes to a few hundred megabytes. And ever since 6.5 came out I've\nhad almost no crashes. Until now.\n\nWe recently installed a small server for an external party to develop\nwebsites on. This machine, a K6-233 with 256 MB, is running FreeBSD 3.3\nand PostgreSQL 7.0.2 (maybe I'll upgrade to 7.0.3 tonight). The database\nit's running is about 2 MB in size and gets to process an estimated 10000\nto 25000 queries per day. Nothing special, I'd say.\n\nHowever, pgsql keeps crashing. It can take days, but pgsql will crash.\nIt spits out the following error:\n\nServerLoop: select failed: No child processes\n\nI've done some searching with Google and I've searched the mailinglist\narchives on the PostgreSQL site, but I couldn't turn up anything useful.\nI did find a post from someone with the same problem, but appearently\nnobody ever replyed to that post.\n\nDoes anyone have any idea what might be going on here? Should I be\nasking this question on another list? If so, which one?\n\nAny information would be very welcome,\n\nMathijs\n-- \n\"A book is a fragile creature. It suffers the wear of time,\n it fears rodents, the elements, clumsy hands.\" \n Umberto Eco \n",
"msg_date": "Sun, 17 Dec 2000 21:51:18 +0100",
"msg_from": "Mathijs Brands <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL crashes on me :("
},
{
"msg_contents": "Mathijs Brands <[email protected]> writes:\n> We recently installed a small server for an external party to develop\n> websites on. This machine, a K6-233 with 256 MB, is running FreeBSD 3.3\n> and PostgreSQL 7.0.2 (maybe I'll upgrade to 7.0.3 tonight). The database\n> it's running is about 2 MB in size and gets to process an estimated 10000\n> to 25000 queries per day. Nothing special, I'd say.\n\n> However, pgsql keeps crashing. It can take days, but pgsql will crash.\n> It spits out the following error:\n\n> ServerLoop: select failed: No child processes\n\nHm. It seems fairly unlikely that select() would return error ECHILD,\nwhich is what this message *appears* to imply. The code is\n\n if (select(nSockets, &rmask, &wmask, (fd_set *) NULL,\n (struct timeval *) NULL) < 0)\n {\n if (errno == EINTR)\n continue;\n fprintf(stderr, \"%s: ServerLoop: select failed: %s\\n\",\n progname, strerror(errno));\n return STATUS_ERROR;\n }\n\nwhich seems pretty straightforward. BUT: I think there's a race\ncondition here, at least on systems where errno is not saved and\nrestored around a signal handler. Consider the following scenario:\n\n\tPostmaster is waiting at the select() --- its normal state.\n\n\tPostmaster receives a SIGCHLD signal due to backend exit, so\n\tit goes off and does the reaper() thing. On return from\n\treaper() the system arranges to return EINTR error from\n\tthe select().\n\n\tBefore control can reach the \"if (errno...\" test, another\n\tSIGCHLD comes in. reaper() is invoked again and does its\n\tthing.\n\nThe normal exit condition from reaper() will be errno == ECHILD,\nbecause that's what the waitpid() or wait3() call will return after\nall children are dealt with. If the signal-handling mechanism allows\nthat to be returned to the mainline code, we have a failure.\n\nCan any FreeBSD hackers comment on the plausibility of this theory?\n\nA quick-and-dirty workaround would be to save and restore errno in\nreaper() and the other postmaster signal handlers. It might be\na better idea in the long run to avoid doing system calls in the\nsignal handlers --- but that would take a more substantial rewrite.\n\nI seem to recall previous pghackers discussions in which\nsaving/restoring errno looked like a good idea. Not sure why\nit hasn't been done already.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Dec 2000 22:47:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] PostgreSQL crashes on me :( "
},
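A minimal sketch (hypothetical code, not the actual postmaster source) of the quick-and-dirty workaround described above: the handler saves errno on entry and restores it on exit, so a select() interrupted by back-to-back SIGCHLDs still sees EINTR rather than the ECHILD left behind by the final waitpid().

#include <errno.h>
#include <sys/types.h>
#include <sys/wait.h>

static void
reaper(int signum)
{
	int	save_errno = errno;	/* protect the mainline's errno */
	int	status;

	(void) signum;

	/* Reap every exited child; the final call fails with ECHILD. */
	while (waitpid(-1, &status, WNOHANG) > 0)
	{
		/* ... clean up per-backend bookkeeping here ... */
	}

	errno = save_errno;	/* the interrupted select() now sees EINTR */
}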
{
"msg_contents": "On Sun, Dec 17, 2000 at 10:47:55PM -0500, Tom Lane wrote:\n> \n> A quick-and-dirty workaround would be to save and restore errno in\n> reaper() and the other postmaster signal handlers. It might be\n> a better idea in the long run to avoid doing system calls in the\n> signal handlers --- but that would take a more substantial rewrite.\n> \n> I seem to recall previous pghackers discussions in which\n> saving/restoring errno looked like a good idea. Not sure why\n> it hasn't been done already.\n\nSyscalls in signal handlers open doors for lots of mischief, not least \nbecause it's impractical to test all the combinations of half-run mainline\nsyscalls with all the other syscalls that may occur in signal handlers. \n(select() and wait(), by themselves, get exercised enough, but bugs in\nother pairs may be too hard to reproduce ever to be identified.) Maybe \nsomebody noted that patching errno handling would only close one hole, \nand make it less likely the real fix would be done.\n\nThat the signal handler got an error on its syscall might be cause for \nconcern as well. \n\nSounds like a TODO list item: eliminate syscalls from signal handlers.\n\nNathan Myers\[email protected]\n\n",
"msg_date": "Sun, 17 Dec 2000 23:01:41 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] PostgreSQL crashes on me :("
},
{
"msg_contents": "[email protected] (Nathan Myers) writes:\n> Sounds like a TODO list item: eliminate syscalls from signal handlers.\n\nAfter looking at it some more, I think that's a lot easier said than\ndone. We could try writing the postmaster's SIGCHLD routine in the \nsame style currently used for SIGHUP --- ie, signal handler just sets\na flag that's examined by the main loop in ServerLoop --- but I don't\nsee any way to guarantee timely response if we do that. If the SIGCHLD\nhappens just before we reach the select() then the select() will block,\nand we won't respond to the dying child until the next connection\nrequest arrives or some other signal happens. That's OK under normal\nscenarios, but highly not OK when a backend has crashed.\n\nAny thoughts on a cleaner solution?\n\nBTW, we do block signals except at the select(), so the fear of random\nsyscalls being interrupted by random other syscalls is overstated.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Dec 2000 10:41:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] PostgreSQL crashes on me :( "
},
{
"msg_contents": " Date: Sun, 17 Dec 2000 22:47:55 -0500\n From: Tom Lane <[email protected]>\n\n BUT: I think there's a race\n condition here, at least on systems where errno is not saved and\n restored around a signal handler. Consider the following scenario:\n\n\t Postmaster is waiting at the select() --- its normal state.\n\n\t Postmaster receives a SIGCHLD signal due to backend exit, so\n\t it goes off and does the reaper() thing. On return from\n\t reaper() the system arranges to return EINTR error from\n\t the select().\n\n\t Before control can reach the \"if (errno...\" test, another\n\t SIGCHLD comes in. reaper() is invoked again and does its\n\t thing.\n\n The normal exit condition from reaper() will be errno == ECHILD,\n because that's what the waitpid() or wait3() call will return after\n all children are dealt with. If the signal-handling mechanism allows\n that to be returned to the mainline code, we have a failure.\n\n Can any FreeBSD hackers comment on the plausibility of this theory?\n\nI'm not a FreeBSD hacker, but I do know how the BSD kernel works\nunless FreeBSD has changed things. The important facts are:\n\n1) The kernel only delivers signals when a process moves from kernel\n mode to user mode, after a system call or an interrupt (including a\n timer interrupt).\n\n2) The errno variable is set in user space after the process has\n returned to user mode.\n\nTherefore, the scenario you describe is possible, but only if there\nhappens to be both a timer interrupt and a SIGCHLD signal within a\ncouple of instructions after the select returns.\n\n(I suppose that a page fault instead of a timer interrupt could have\nthe same effect as well, although a page fault here seems quite\nunlikely unless the system is extremely overloaded.)\n\n A quick-and-dirty workaround would be to save and restore errno in\n reaper() and the other postmaster signal handlers. It might be\n a better idea in the long run to avoid doing system calls in the\n signal handlers --- but that would take a more substantial rewrite.\n\nIdeally, signal handlers should not make system calls. However, if\nthis is impossible, then signal handlers must save and restore errno.\n\nIan\n",
"msg_date": "18 Dec 2000 09:33:40 -0800",
"msg_from": "Ian Lance Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] PostgreSQL crashes on me :("
},
{
"msg_contents": "Ian Lance Taylor <[email protected]> writes:\n> Therefore, the scenario you describe is possible, but only if there\n> happens to be both a timer interrupt and a SIGCHLD signal within a\n> couple of instructions after the select returns.\n\nRight. A small failure window would explain the infrequency of the\nsymptom.\n\n> Ideally, signal handlers should not make system calls. However, if\n> this is impossible, then signal handlers must save and restore errno.\n\nI have just committed fixes to do that in current sources (7.1 branch).\nI doubt we're going to make a 7.0.4 release, but if we do then this\nought to be back-patched.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Dec 2000 12:40:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] PostgreSQL crashes on me :( "
},
{
"msg_contents": " Date: Mon, 18 Dec 2000 10:41:47 -0500\n From: Tom Lane <[email protected]>\n\n [email protected] (Nathan Myers) writes:\n > Sounds like a TODO list item: eliminate syscalls from signal handlers.\n\n After looking at it some more, I think that's a lot easier said than\n done. We could try writing the postmaster's SIGCHLD routine in the \n same style currently used for SIGHUP --- ie, signal handler just sets\n a flag that's examined by the main loop in ServerLoop --- but I don't\n see any way to guarantee timely response if we do that. If the SIGCHLD\n happens just before we reach the select() then the select() will block,\n and we won't respond to the dying child until the next connection\n request arrives or some other signal happens. That's OK under normal\n scenarios, but highly not OK when a backend has crashed.\n\n Any thoughts on a cleaner solution?\n\nOne way to avoid this race condition is to set a timeout on the\nselect. What is the maximum acceptable time for a timely response?\nWould executing a few instructions at that time interval impose an\nunacceptable system load? (Naturally no timeout should be used if\nthere are no child processes.)\n\nIan\n",
"msg_date": "18 Dec 2000 09:58:02 -0800",
"msg_from": "Ian Lance Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] PostgreSQL crashes on me :("
},
{
"msg_contents": "Ian Lance Taylor <[email protected]> writes:\n> Any thoughts on a cleaner solution?\n\n> One way to avoid this race condition is to set a timeout on the\n> select. What is the maximum acceptable time for a timely response?\n\nI thought about that, but it doesn't seem like a cleaner solution.\nBasically you'd have to figure a tradeoff between wasted cycles in\nthe postmaster and time delay to respond to a crashed backend.\nAnd there's no good tradeoff there. If you have a backend crash,\nyou want to shut down the other backends ASAP, before they have a\nchance to propagate any shared-memory corruption that the failed\nbackend might've created. The entire exercise is probably pointless\nif the postmaster twiddles its thumbs for awhile before killing the\nother backends.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Dec 2000 13:18:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] PostgreSQL crashes on me :( "
},
{
"msg_contents": " Date: Mon, 18 Dec 2000 13:18:26 -0500\n From: Tom Lane <[email protected]>\n\n Ian Lance Taylor <[email protected]> writes:\n > Any thoughts on a cleaner solution?\n\n > One way to avoid this race condition is to set a timeout on the\n > select. What is the maximum acceptable time for a timely response?\n\n I thought about that, but it doesn't seem like a cleaner solution.\n Basically you'd have to figure a tradeoff between wasted cycles in\n the postmaster and time delay to respond to a crashed backend.\n And there's no good tradeoff there. If you have a backend crash,\n you want to shut down the other backends ASAP, before they have a\n chance to propagate any shared-memory corruption that the failed\n backend might've created. The entire exercise is probably pointless\n if the postmaster twiddles its thumbs for awhile before killing the\n other backends.\n\nThe timeout is only to catch the case where the child dies between\nchecking the flag and calling select.\n\nWhat you really want, of course, is a version of select which lets you\natomically control the signal blocking mask. That function is\nactually specified by the most recent version of POSIX; it's called\npselect; it's just like select, but the last argument is a const\nsigset_t * which is atomically passed to sigprocmask(SIG_SETMASK)\naround the select (the original signal mask is restored before pselect\nreturns). But I don't know which kernels implement it. There is an\nimplementation on GNU/Linux, but if the kernel doesn't support it then\nit is not truly atomic.\n\nIan\n",
"msg_date": "18 Dec 2000 10:40:19 -0800",
"msg_from": "Ian Lance Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] PostgreSQL crashes on me :("
},
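A sketch of the pselect() idiom Ian describes, assuming a libc/kernel that really provides the POSIX pselect(); got_sigchld and the loop structure are hypothetical names for illustration, not the real ServerLoop. The point is that SIGCHLD is unblocked only inside the wait itself, so a child dying between the flag check and the wait cannot be lost.

#include <errno.h>
#include <signal.h>
#include <sys/select.h>

extern volatile sig_atomic_t got_sigchld;	/* set by the SIGCHLD handler */

void
server_loop(int nSockets, fd_set *rmask, fd_set *wmask)
{
	sigset_t blocked, waitmask;

	/* Keep SIGCHLD blocked everywhere except during the wait. */
	sigemptyset(&blocked);
	sigaddset(&blocked, SIGCHLD);
	sigprocmask(SIG_BLOCK, &blocked, &waitmask);	/* waitmask = old mask */

	for (;;)
	{
		if (got_sigchld)
		{
			got_sigchld = 0;
			/* ... reap children with waitpid(..., WNOHANG) ... */
		}
		/* SIGCHLD is atomically unblocked only while waiting. */
		if (pselect(nSockets, rmask, wmask, NULL, NULL, &waitmask) < 0)
		{
			if (errno == EINTR)
				continue;	/* go back and examine the flag */
			break;		/* ... report the real error ... */
		}
		/* ... service whichever sockets are ready ... */
	}
}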
{
"msg_contents": "Ian Lance Taylor <[email protected]> writes:\n> What you really want, of course, is a version of select which lets you\n> atomically control the signal blocking mask.\n\nYeah, that would be a much cleaner solution. Someday pselect() may even\nbe portable enough ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Dec 2000 14:08:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] PostgreSQL crashes on me :( "
},
{
"msg_contents": "Hello all,\n\nI would like to thank Tom, Ian and the other pgsql wizards for their prompt\nresponse. This must surely be open source at it's best :)\n\nI've worked around the situation by running a small script that continually\nmonitors postgres and takes appropriate action if postgres shuts down. I'm\nassuming this problem won't lead to any data corruption.\n\nMathijs\n-- \n\"Borrowers of books -- those mutilators of collections, spoilers of the\n symmetry of shelves, and creators of odd volumes.\" \n Charles Lamb (1775-1834) \n",
"msg_date": "Mon, 18 Dec 2000 20:28:07 +0100",
"msg_from": "Mathijs Brands <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: [SQL] PostgreSQL crashes on me :("
},
{
"msg_contents": "Mathijs Brands <[email protected]> writes:\n> I've worked around the situation by running a small script that continually\n> monitors postgres and takes appropriate action if postgres shuts down. I'm\n> assuming this problem won't lead to any data corruption.\n\nHm. The problem here is that when the postmaster crashes, it probably\ndoesn't take down the old backends with it. So if you auto-restart\nthe postmaster, you may have two sets of backends that don't know about\neach other. That would be A Bad Thing.\n\nIf your script kills off all the backends it can find before starting\nthe new postmaster, that should work OK.\n\nA better solution would be to back-patch my errno fix into 7.0.3.\nYou can see the diff as it stands for current sources at\nhttp://www.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/postmaster/postmaster.c.diff?r1=1.198&r2=1.199&f=c\n\nThe additions to reaper() are probably the only part you really need.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Dec 2000 15:16:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] PostgreSQL crashes on me :( "
}
]
|
[
{
"msg_contents": "Hi\nOn LM 7.2-devel pgsql7.03 needs libc.so.6 GLIB2_2.\nWould anybody reply what packages do I need to download with \nglibc-2.2.21.rpm ?\nThanks\nLucian\n\n",
"msg_date": "Mon, 18 Dec 2000 06:43:23 +0200",
"msg_from": "vs <[email protected]>",
"msg_from_op": true,
"msg_subject": "LM devel GLIB2_2"
}
]
|
[
{
"msg_contents": "\nA heap page corruption is not very likely in PostgreSQL because of the\nunderlying page design. Not even on flakey hardware/ossoftware.\n(I once read a page design note from pg 4 but don't exactly remember \nwere or when)\n\nThe point is, that the heap page is only modified in places that were\npreviously empty (except header). All previous row data stays exactly \nin the same place. Thus if a page is only partly written \n(any order of page segments) only a new row is affected. But those rows\nwill be fixed during redo anyway. The only source of serious problems is \nthus a bogus write of a page segment (100 bytes ok 412 bytes chunk \nactually written to disk), but this case is imho sufficiently guarded or at least \ndetected by disk hardware. \n(I assume that the page header fits into one atomic block and has no problem \nwith beeing one step behind or ahead of redo).\n\nI thus doubt that we really need \"physical log\" for heap pages in PostgreSQL\nwith the current non-overwrite smgr. If we could detect corruption in index pages\nwe would not need physical log at all, since an index can always be recreated.\n\nWhat do you think ? I ask because \"physical log\" is a substantial amount of \nadditional IO that we imho only want if it is absolutely necessary.\n\nAndreas\n\nPS: reposted, did this not make it to the list ?\n",
"msg_date": "Mon, 18 Dec 2000 11:00:00 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "heap page corruption not easy"
}
]
|
[
{
"msg_contents": "Hi folks,\n\n\n Reading the documentation, I see that OIDs are unique through the\nwhole database.\n But since OIDs are int4, does that limit the number of rows I can\nhave in a database to 2^32 = 4 billion ?\n\nBest Regards,\nHowe\n\n\n\n",
"msg_date": "Mon, 18 Dec 2000 10:43:51 -0200",
"msg_from": "\"Steve Howe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "OID Implicit limit"
}
]
|
[
{
"msg_contents": "Thanks. Applied.\n\n\n> \n> Ok, sorry but now I use cvs diff instead of difforig (which use -c by \n> default). Here the same with -c.\n> \n> \n> thanks\n> \n> cyril\n> \n> >Sorry, I need a context diff, diff -c. It makes sure that the lines are\n> >added to the proper places. Thanks.\n> >\n> >\n> >[ Charset ISO-8859-1 unsupported, converting... ]\n> >> \n> >> \n> >> Hi,\n> >> \n> >> \n> >> Here is a patch for the beos port (All regression tests are OK).\n> >> \n> >> Updated files are :\n> >> \n> >> xlog.c : special case for beos to avoid 'link' which does not work yet\n> >> beos/sem.c : implementation of new sem_ctl call (GETPID) and a new \n> >sem_op \n> >> flag (IPCNOWAIT)\n> >> dynloader/beos.c : add a verification of symbol validity (seem that \n> the \n> >> loader sometime return OK with an invalid symbol)\n> >> postmaster.c : add beos forking support for the new checkpoint \n> process\n> >> postgres.c : remove beos special case for getrusage\n> >> beos.h : Correction of a bas definition of AF_UNIX, misc defnitions\n> >> \n> >> \n> >> thanks \n> >> \n> >> \n> >> cyril\n> >> \n> >> \n> >> \n> >> \n> >> \n> >> \n> >> \n> >> \n> >\n> >[ Attachment, skipping... ]\n> >\n> >\n> >-- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> \n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 18 Dec 2000 13:42:43 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Beos update"
},
{
"msg_contents": "> Thanks. Applied.\n> > >> Here is a patch for the beos port (All regression tests are OK).\n\nCyril, what version(s) of BeOS should be listed for our \"ports list\"?\n\n - Thomas\n",
"msg_date": "Tue, 19 Dec 2000 07:10:03 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Beos update"
}
]
|
[
{
"msg_contents": "> The point is, that the heap page is only modified in places that were\n> previously empty (except header). All previous row data stays exactly \n> in the same place. Thus if a page is only partly written \n> (any order of page segments) only a new row is affected.\n\nException: PageRepairFragmentation() and PageIndexTupleDelete() are\ncalled during vacuum - they change layout of tuples.\n\n> But those rows will be fixed during redo anyway.\n\nWe can't count on this for non-atomic 8K page writes: each page keeps\nLSN (log sequence number - offset of end of log record for last page\nmodification) - if page LSN >= LSN of redo record then recoverer\nassumes that changes already applied and doesn't try to redo op.\n\nWe could change this - ie force applying changes. This requires\nnew format of log records (we couldn't use PageAddItem in redo\nanymore):\n\n- for heap we would set pd_lower, pd_upper and line pointer (LP)\n and copy tuple data from record into page space;\n- for indices: set pd_lower, pd_upper, copy LPs from newly inserted\n index tuple LP till last one and copy tuple data from record into\n page space (in split case it seems better to log contents of both \n left and right siblings).\n\nWe would also have to log entire page for two ops above (which change\npage layout) if op occures first time after checkpoint or\ninsert/update/delete ops (because of redo for insert/update/delete\nmay be forced for improper page layout).\n\nWell, this probably will decrease required full page logging but\nI would think more about this way. For example, I didn't consider\nupcoming undo op...\n\n> The only source of serious problems is thus a bogus write of a page\n> segment (100 bytes ok 412 bytes chunk actually written to disk),\n> but this case is imho sufficiently guarded or at least detected\n> by disk hardware. \n\nWith full page logging after checkpoint we would be safe from this\ncase...\n\n> (I assume that the page header fits into one atomic block and \n> has no problem with beeing one step behind or ahead of redo).\n> \n> I thus doubt that we really need \"physical log\" for heap \n> pages in PostgreSQL with the current non-overwrite smgr.\n\nAs you see we still need in full page backup when we want to reuse\nspace and change page layout for this, so it's mostly issue not of\nsmgr type. I don't know about Informix page design, but overwriting\nsmgr itself doesn't require physical removing tuple from page (ie\nchanging page layout) - something like turning LP_USED off would be\nenough and page layout could be changed on first insertion of new\ntuple. Nevertheless they do full page backup.\n\n> If we could detect corruption in index pages we would not need\n> physical log at all, since an index can always be recreated.\n\nI don't like to follow this way on long term - reindex is not option\nfor 24x7x365 usage when index creation takes several minutes.\n\nComments?\n\n- full page backup on first after checkpoint modification\n\nor\n\n- forcing redo and full page backup when changing page layout first time\n after checkpoint/insert/update/delete\n\nVadim\n",
"msg_date": "Mon, 18 Dec 2000 14:05:45 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: heap page corruption not easy"
},
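A schematic sketch (invented type and function names, not PostgreSQL source) of the page-LSN test Vadim describes, which is what makes plain redo unsafe for torn 8K writes: if the header block carrying the LSN reached disk but a data block did not, redo is skipped even though the page still needs it.

typedef unsigned long long XLogRecPtr;	/* stand-in for the real LSN type */

typedef struct Page
{
	XLogRecPtr	pd_lsn;	/* end of the last log record that touched this page */
	/* ... pd_lower, pd_upper, line pointers, tuple data ... */
} Page;

static void
redo_on_page(Page *page, XLogRecPtr record_lsn)
{
	if (page->pd_lsn >= record_lsn)
		return;	/* assumed already applied -- wrong if only the
			 * header block of a torn write reached the disk */

	/* ... re-apply the logged change ... */
	page->pd_lsn = record_lsn;	/* mark the page as caught up */
}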
{
"msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > The point is, that the heap page is only modified in places that were\n> > previously empty (except header). All previous row data stays exactly\n> > in the same place. Thus if a page is only partly written\n> > (any order of page segments) only a new row is affected.\n> \n> Exception: PageRepairFragmentation() and PageIndexTupleDelete() are\n> called during vacuum - they change layout of tuples.\n>\n\nIs it guaranteed that the result of PageRepairFragmentation()\nhas already been written to disk when tuple movement is logged ?\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Tue, 19 Dec 2000 09:58:03 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: heap page corruption not easy"
}
]
|
[
{
"msg_contents": "(plz cc me on your replies, i'm not on pgsql-hackers for some reason.)\n\nhttp://www.vix.com/~vixie/results-psql.png shows a gnuplot of the wall time\nof 70K executions of \"pgcat\" (shown below) using a CIDR key and TEXT value.\n(this is for storing the MAPS RSS, which we presently have in flat files.)\n\ni've benchmarked this against a flat directory with IP addresses as filenames,\nand against a deep directory with squid/netnews style hashing (127/0/0/1.txt)\nand while it's way more predictable than either of those, there's nothing in\nmy test framework which explains the 1.5s mode shown in the above *.png file.\n\nanybody know what i could be doing wrong? (i'm also wondering why SELECT\ntakes ~250ms whereas INSERT takes ~70ms... seems counterintuitive, unless\nTOAST is doing a LOT better than i think.)\n\nfurthermore, are there any plans to offer a better libpq interface to INSERT?\nthe things i'm doing now to quote the text, and the extra copy i'm maintaining,\nare painful. arbitrary-sized \"text\" attributes are a huge boon -- we would\nnever have considered using postgres for MAPS RSS (or RBL) with \"large\nobjects\". (kudos to all who were involved, with both WAL and TOAST!)\n\nhere's the test jig -- please don't redistribute it yet since there's no man\npage and i want to try binary cursors and other things to try to speed it up\nor clean it up or both. but if someone can look at my code (which i'm running\nagainst the 7.1 bits at the head of the pgsql cvs tree) and at the *.png file\nand help me enumerate the sources of my stupidity, i will be forever grateful.\n\n# This is a shell archive. Save it in a file, remove anything before\n# this line, and then unpack it by entering \"sh file\". Note, it may\n# create directories; files and directories will be owned by you and\n# have default permissions.\n#\n# This archive contains:\n#\n#\tMakefile\n#\tpgcat.c\n#\necho x - Makefile\nsed 's/^X//' >Makefile << 'END-of-Makefile'\nX## Copyright (c) 2000 by Mail Abuse Prevention System LLC\nX##\nX## Permission to use, copy, modify, and distribute this software for any\nX## purpose with or without fee is hereby granted, provided that the above\nX## copyright notice and this permission notice appear in all copies.\nX##\nX## THE SOFTWARE IS PROVIDED \"AS IS\" AND INTERNET SOFTWARE CONSORTIUM DISCLAIMS\nX## ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES\nX## OF MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL INTERNET SOFTWARE\nX## CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL\nX## DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR\nX## PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS\nX## ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS\nX## SOFTWARE.\nX\nX# $Id: Makefile,v 1.1.1.1 2000/12/19 04:49:51 vixie Exp $\nX\nXCC= gcc -Wall\nXALL= pgcat\nX\nXLDFLAGS= -L/usr/local/pgsql/lib -L/usr/local/krb5/lib\nXCFLAGS= -I/usr/local/pgsql/include\nXLIBS= -lpq -lcom_err\nX\nXall: $(ALL)\nX\nXkit:; shar Makefile pgcat.c >kit\nX\nXclean:; rm -f $(ALL) kit; rm -f *.o\nX\nXpgcat: pgcat.o Makefile\nX\t$(CC) $(LDFLAGS) -o pgcat pgcat.o $(LIBS)\nX\nXpgcat.o: pgcat.c Makefile\nEND-of-Makefile\necho x - pgcat.c\nsed 's/^X//' >pgcat.c << 'END-of-pgcat.c'\nX/*\nX * Copyright (c) 2000 by Mail Abuse Prevention System LLC\nX *\nX * Permission to use, copy, modify, and distribute this software for any\nX * purpose with or without fee is hereby granted, provided that the above\nX * copyright notice and this permission notice appear in all copies.\nX *\nX * THE SOFTWARE IS PROVIDED \"AS IS\" AND INTERNET SOFTWARE CONSORTIUM DISCLAIMS\nX * ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES\nX * OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL INTERNET SOFTWARE\nX * CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL\nX * DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR\nX * PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS\nX * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS\nX * SOFTWARE.\nX */\nX\nX#ifndef LINT\nXstatic const char rcsid[] = \"$Id: pgcat.c,v 1.1.1.1 2000/12/19 04:49:50 vixie Exp $\";\nX#endif\nX\nX#include <sys/param.h>\nX#include <sys/types.h>\nX#include <sys/stat.h>\nX\nX#include <stdio.h>\nX#include <stdlib.h>\nX#include <string.h>\nX#include <unistd.h>\nX\nX#include <libpq-fe.h>\nX\nXstatic const char tmp_template[] = \"/tmp/pgcat.XXXXXX\";\nXstatic const char *progname = \"amnesia\";\nX\nXstatic int get(PGconn *, const char *, const char *, const char *,\nX\t const char *, const char *);\nXstatic int put(PGconn *, const char *, const char *, const char *,\nX\t const char *, const char *);\nX\nXstatic void\nXusage(const char *msg) {\nX\tfprintf(stderr, \"%s: usage error (%s)\\n\", progname, msg);\nX\tfprintf(stderr,\nX\t \"usage: %s get|put <dbname> <table> <key> <value> <text> [<file>]\\n\",\nX\t\tprogname);\nX\texit(1);\nX}\nX\nXint\nXmain(int argc, char *argv[]) {\nX\tconst char *pghost = NULL, *pgport = NULL, *pgoptions = NULL,\nX\t\t*pgtty = NULL;\nX\tconst char *op, *dbname, *table, *key, *value, *text, *file;\nX\tPGconn *conn;\nX\tint status;\nX\nX\tif ((progname = strrchr(argv[0], '/')) != NULL)\nX\t\tprogname++;\nX\telse\nX\t\tprogname = argv[0];\nX\tif (argc < 7)\nX\t\tusage(\"too few arguments\");\nX\top = argv[1];\nX\tdbname = argv[2];\nX\ttable = argv[3];\nX\tkey = argv[4];\nX\tvalue = argv[5];\nX\ttext = argv[6];\nX\tif (argc > 8)\nX\t\tusage(\"too many arguments\");\nX\telse if (argc == 8)\nX\t\tfile = argv[7];\nX\telse\nX\t\tfile = NULL;\nX\tif (strcmp(op, \"get\") != 0 && strcmp(op, \"put\") != 0)\nX\t\tusage(\"operation must be 'get' or 'put'\");\nX\tconn = PQsetdb(pghost, pgport, pgoptions, pgtty, dbname);\nX\tif (PQstatus(conn) == CONNECTION_BAD) {\nX\t\tfprintf(stderr, \"%s: \\\"%s\\\": %s\", progname, 
dbname,\nX\t\t\tPQerrorMessage(conn));\nX\t\tstatus = 1;\nX\t} else if (strcmp(op, \"get\") == 0) {\nX\t\tstatus = get(conn, table, key, value, text, file);\nX\t} else {\nX\t\tstatus = put(conn, table, key, value, text, file);\nX\t}\nX\tPQfinish(conn);\nX\treturn (status);\nX}\nX\nXstatic int\nXget(PGconn *conn, const char *table, const char *key, const char *value,\nX const char *text, const char *file)\nX{\nX\tchar cmd[999], ch, pch;\nX\tconst char *p;\nX\tPGresult *res = NULL;\nX\tint status = 0;\nX\tFILE *fp = stdout;\nX\nX\t/* Open the output file if there is one. */\nX\tif (file != NULL) {\nX\t\tfp = fopen(file, \"w\");\nX\t\tif (fp == NULL) {\nX\t\t\tperror(file);\nX\t\t\tstatus = 1;\nX\t\t\tgoto done;\nX\t\t}\nX\t}\nX\nX\t/* Quote the lookup value if nec'y. */\nX\tif (strchr(value, '\\'') != NULL || strchr(value, ':') != NULL)\nX\t\tp = \"\";\nX\telse\nX\t\tp = \"'\";\nX\t\t\nX\t/* Send the query. */\nX\tif (snprintf(cmd, sizeof cmd, \"SELECT %s FROM %s WHERE %s = %s%s%s\",\nX\t\t text, table, key, p, value, p) >= sizeof cmd) {\nX\t\tfprintf(stderr, \"%s: snprintf overflow\\n\", progname);\nX\t\tstatus = 1;\nX\t\tgoto done;\nX\t}\nX\tres = PQexec(conn, cmd);\nX\tif (PQresultStatus(res) != PGRES_TUPLES_OK) {\nX\t\tfprintf(stderr, \"%s: \\\"%s\\\": %s\", progname, cmd,\nX\t\t\tPQresultErrorMessage(res));\nX\t\tstatus = 1;\nX\t\tgoto done;\nX\t}\nX\tif (PQnfields(res) != 1) {\nX\t\tfprintf(stderr, \"%s: \\\"%s\\\": %d fields?\\n\",\nX\t\t\tprogname, cmd, PQnfields(res));\nX\t\tstatus = 1;\nX\t\tgoto done;\nX\t}\nX\tif (PQntuples(res) != 1) {\nX\t\tfprintf(stderr, \"%s: \\\"%s\\\": %d tuples?\\n\",\nX\t\t\tprogname, cmd, PQntuples(res));\nX\t\tstatus = 1;\nX\t\tgoto done;\nX\t}\nX\nX\t/* Output the result. */\nX\tpch = '\\0';\nX\tfor (p = PQgetvalue(res, 0, 0), ch = '\\0'; (ch = *p) != '\\0'; p++) {\nX\t\tputc(ch, fp);\nX\t\tpch = ch;\nX\t}\nX\tif (pch != '\\n')\nX\t\tputc('\\n', fp);\nX done:\nX\tif (fp != NULL && fp != stdout)\nX\t\tfclose(fp);\nX\tif (res != NULL)\nX\t\tPQclear(res);\nX\treturn (status);\nX}\nX\nXstatic int\nXput(PGconn *conn, const char *table, const char *key, const char *value,\nX const char *text, const char *file)\nX{\nX\tchar *t, *tp, cmd[999];\nX\tconst char *p;\nX\tPGresult *res = NULL;\nX\tint status = 0, ch, n;\nX\tFILE *fp = stdin, *copy = NULL;\nX\tstruct stat sb;\nX\tsize_t size;\nX\nX\t/* Open the file if there is one. */\nX\tif (file != NULL) {\nX\t\tfp = fopen(file, \"r\");\nX\t\tif (fp == NULL) {\nX\t\t\tperror(file);\nX\t\t\tstatus = 1;\nX\t\t\tgoto done;\nX\t\t}\nX\t}\nX\nX\t/*\nX\t * Read the file to find out how large it will be when quoted.\nX\t * If it's not a regular file, make a copy while reading, then switch.\nX\t */\nX\tif (fstat(fileno(fp), &sb) < 0) {\nX\t\tperror(\"stat\");\nX\t\tstatus = 1;\nX\t\tgoto done;\nX\t}\nX\tif ((sb.st_mode & S_IFMT) != S_IFREG) {\nX\t\tchar tmpname[MAXPATHLEN];\nX\t\tint fd;\nX\nX\t\tstrcpy(tmpname, tmp_template);\nX\t\tfd = mkstemp(tmpname);\nX\t\tif (fd < 0) {\nX\t\t\tperror(\"mkstemp\");\nX\t\t\tstatus = 1;\nX\t\t\tgoto done;\nX\t\t}\nX\t\tcopy = fdopen(fd, \"r+\");\nX\t\tunlink(tmpname);\nX\t}\nX\tsize = 0;\nX\twhile ((ch = getc(fp)) != EOF) {\nX\t\tif (ch == '\\\\' || ch == '\\'')\nX\t\t\tsize++;\nX\t\tsize++;\nX\t\tif (copy)\nX\t\t\tputc(ch, copy);\nX\t}\nX\tif (ferror(fp)) {\nX\t\tperror(\"fread\");\nX\t\tstatus = 1;\nX\t\tgoto done;\nX\t}\nX\tif (copy) {\nX\t\tif (fp != stdin)\nX\t\t\tfclose(fp);\nX\t\tfp = copy;\nX\t\tcopy = NULL;\nX\t}\nX\trewind(fp);\nX\nX\t/* Quote the lookup value if nec'y. 
*/\nX\tif (strchr(value, '\\'') != NULL || strchr(value, ':') != NULL)\nX\t\tp = \"\";\nX\telse\nX\t\tp = \"'\";\nX\t\t\nX\t/* Construct the INSERT command. */\nX\tn = snprintf(cmd, sizeof cmd,\nX\t\t \"INSERT INTO %s ( %s, %s ) VALUES ( %s%s%s, '\",\nX\t\t table, key, text, p, value, p);\nX\tif (n >= sizeof cmd) {\nX\t\tfprintf(stderr, \"%s: snprintf overflow\\n\", progname);\nX\t\tstatus = 1;\nX\t\tgoto done;\nX\t}\nX\tt = malloc(n + size + sizeof \"');\");\nX\tif (t == NULL) {\nX\t\tperror(\"malloc\");\nX\t\tstatus = 1;\nX\t\tgoto done;\nX\t}\nX\tstrcpy(t, cmd);\nX\ttp = t + n;\nX\twhile ((ch = getc(fp)) != EOF) {\nX\t\tif (ch == '\\\\' || ch == '\\'')\nX\t\t\t*tp++ = '\\\\';\nX\t\t*tp++ = ch;\nX\t}\nX\t*tp++ = '\\'';\nX\t*tp++ = ')';\nX\t*tp++ = ';';\nX\t*tp++ = '\\0';\nX\nX\t/* Send the command. */\nX\tres = PQexec(conn, t);\nX\tif (PQresultStatus(res) != PGRES_COMMAND_OK) {\nX\t\tfprintf(stderr, \"%s: \\\"%s\\\": %s\", progname, t,\nX\t\t\tPQresultErrorMessage(res));\nX\t\tstatus = 1;\nX\t\tgoto done;\nX\t}\nX\tif (strcmp(PQcmdTuples(res), \"1\") != 0) {\nX\t\tfprintf(stderr, \"%s: \\\"%s...\\\": '%s' tuples? (%s)\\n\",\nX\t\t\tprogname, cmd, PQcmdTuples(res), PQcmdStatus(res));\nX\t\tstatus = 1;\nX\t\tgoto done;\nX\t}\nX\nX done:\nX\tif (fp != NULL && fp != stdin)\nX\t\tfclose(fp);\nX\tif (res != NULL)\nX\t\tPQclear(res);\nX\treturn (status);\nX}\nEND-of-pgcat.c\nexit\n\n",
"msg_date": "Mon, 18 Dec 2000 20:55:49 -0800",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance modality in 7.1 for large text attributes?"
},
{
"msg_contents": "Paul,\n\n1) Have you ran vacuum analyze after all these inserts to update database\nstatistics? :) Without vacuum, pgsql will opt to table scan even when\nthere's an index.\n\n2) I'm not sure if you are executing pgcat 70k times or executing inner\nloop in pgcat 70k times. Postgres connection establishment is expensive.\n\n3) Postgres INSERT is not very efficient if you are doing a bulk load of\ndata (it has to reparse the statement every time). If you want to delete\neverything and load new data, use \"COPY\", which is about 5 times faster.\nAlso, there's a patch by someone to do following: INSERT INTO (fields...)\nVALUES (...), (...), (...), which results in parsing the statement only\nonce.\n\nOh...And since I have your attention, could you please resolve\nlong-standing discussion between me and Tom Lane? :) \n\nQuestion is whether proper (standard/most-commonly-used) format for\nprinting CIDR network address is 10/8 or 10.0.0.0/8 (i.e. should all\noctets be printed even if they are 0). After search of RFCs, there's\nnothing that specifies the standard, but 10.0.0.0/8 is used more often in\nexamples than 10/8 form.\n\nPostgres uses 10/8 form, and I'm saying that 10.0.0.0/8 is more accepted\nby everyone else. (I.E. all software can deal with that, but not all\nsoftware accepts 10/8).\n\n-alex\n\nOn Mon, 18 Dec 2000, Paul A Vixie wrote:\n\n> (plz cc me on your replies, i'm not on pgsql-hackers for some reason.)\n> \n> http://www.vix.com/~vixie/results-psql.png shows a gnuplot of the wall time\n> of 70K executions of \"pgcat\" (shown below) using a CIDR key and TEXT value.\n> (this is for storing the MAPS RSS, which we presently have in flat files.)\n> \n> i've benchmarked this against a flat directory with IP addresses as filenames,\n> and against a deep directory with squid/netnews style hashing (127/0/0/1.txt)\n> and while it's way more predictable than either of those, there's nothing in\n> my test framework which explains the 1.5s mode shown in the above *.png file.\n> \n> anybody know what i could be doing wrong? (i'm also wondering why SELECT\n> takes ~250ms whereas INSERT takes ~70ms... seems counterintuitive, unless\n> TOAST is doing a LOT better than i think.)\n> \n> furthermore, are there any plans to offer a better libpq interface to INSERT?\n> the things i'm doing now to quote the text, and the extra copy i'm maintaining,\n> are painful. arbitrary-sized \"text\" attributes are a huge boon -- we would\n> never have considered using postgres for MAPS RSS (or RBL) with \"large\n> objects\". (kudos to all who were involved, with both WAL and TOAST!)\n> \n> here's the test jig -- please don't redistribute it yet since there's no man\n> page and i want to try binary cursors and other things to try to speed it up\n> or clean it up or both. but if someone can look at my code (which i'm running\n> against the 7.1 bits at the head of the pgsql cvs tree) and at the *.png file\n> and help me enumerate the sources of my stupidity, i will be forever grateful.\n\n\n\n\n\n\n",
"msg_date": "Tue, 19 Dec 2000 09:46:20 -0500 (EST)",
"msg_from": "Alex Pilosov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance modality in 7.1 for large text attributes?"
},
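A sketch of the COPY alternative mentioned in point 3, using libpq's old-style COPY entry points (PQputline/PQendcopy, present in 7.x libpq); the table name and rows are illustrative only, and error handling is kept minimal.

#include <stdio.h>
#include <libpq-fe.h>

int
bulk_load(PGconn *conn)
{
	PGresult *res = PQexec(conn, "COPY rss FROM stdin");

	if (PQresultStatus(res) != PGRES_COPY_IN)
	{
		fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
		PQclear(res);
		return 1;
	}
	PQclear(res);

	/* One line per row, columns separated by tabs. */
	PQputline(conn, "127.0.0.2\tfirst text value\n");
	PQputline(conn, "127.0.0.3\tsecond text value\n");
	PQputline(conn, "\\.\n");	/* end-of-data marker */

	return PQendcopy(conn) != 0;	/* PQendcopy() returns 0 on success */
}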
{
"msg_contents": "> anybody know what i could be doing wrong? (i'm also wondering why SELECT\n> takes ~250ms whereas INSERT takes ~70ms... seems counterintuitive, unless\n> TOAST is doing a LOT better than i think.)\n\nI would think that this is entirely due to planning the query. An INSERT\nhas no decisions to make, whereas a SELECT must decide among a variety\nof possible plans. To hand-optimize selects, you can set some parameters\nto force only some kinds of plans (such as index scan) but in general\nyou will need to remember to unset them afterwards or you run the risk\nof bizarrely inappropriate plans for other queries in the same session.\n\n> furthermore, are there any plans to offer a better libpq interface to INSERT?\n> the things i'm doing now to quote the text, and the extra copy i'm maintaining,\n> are painful.\n\nWhat exactly are you looking for in \"better\"? Is it just the quoting\nissue (a longstanding problem which persists for historical reasons :(\n\n> ... but if someone can look at my code (which i'm running\n> against the 7.1 bits at the head of the pgsql cvs tree) and at the *.png file\n> and help me enumerate the sources of my stupidity, i will be forever grateful.\n\nPossible causes of the 1.5s \"mode\" (at least as a starting point):\n\no task scheduling on your test machine (not likely??)\n\no swapping/thrashing on your test machine (not likely??)\n\no WAL fsync() log commits and cleanup (aggregate throughput is great,\nbut every once in a while someone waits while the paperwork gets done.\nWaiting may be due to processor resource competition)\n\no Underlying file system bookkeeping from the kernel. e.g. flushing\nbuffers to disk etc etc.\n\n - Thomas\n",
"msg_date": "Tue, 19 Dec 2000 15:03:43 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance modality in 7.1 for large text attributes?"
},
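A sketch of the hand-tuning Thomas describes, wrapped in libpq like the rest of pgcat; enable_seqscan is the 7.x run-time setting that biases the planner away from sequential scans, and the table/column names come from the pgcat test above. Note how the default is restored afterwards, per the warning about leaving such settings in effect for the rest of the session.

#include <libpq-fe.h>

PGresult *
indexed_lookup(PGconn *conn)
{
	PGresult *res;

	/* Discourage sequential scans for just this one query ... */
	PQclear(PQexec(conn, "SET enable_seqscan TO off"));

	res = PQexec(conn, "SELECT file FROM rss WHERE addr = '127.0.0.2'");

	/* ... and restore the default so later queries plan normally. */
	PQclear(PQexec(conn, "SET enable_seqscan TO on"));

	return res;	/* caller checks PGRES_TUPLES_OK and frees with PQclear() */
}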
{
"msg_contents": "On Tue, Dec 19, 2000 at 03:03:43PM +0000, Thomas Lockhart wrote:\n> o WAL fsync() log commits and cleanup (aggregate throughput is great,\n> but every once in a while someone waits while the paperwork gets done.\n> Waiting may be due to processor resource competition)\n> \n> o Underlying file system bookkeeping from the kernel. e.g. flushing\n> buffers to disk etc etc.\n\nI was going to suggest the same, but it's interesting that it happens\non reads as well. I can't tell for sure from the graph, but it looks\nlike it happens fairly consistently - every Nth time. I'd be curious\nto see how this changes if you artificially slow down your loop, or\nadjust your OS's filesystem parameters. It may give some more clues.\n-- \nChristopher Masto Senior Network Monkey NetMonger Communications\[email protected] [email protected] http://www.netmonger.net\n\nFree yourself, free your machine, free the daemon -- http://www.freebsd.org/\n",
"msg_date": "Tue, 19 Dec 2000 13:01:56 -0500",
"msg_from": "Christopher Masto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance modality in 7.1 for large text attributes?"
},
{
"msg_contents": "> 1) Have you ran vacuum analyze after all these inserts to update database\n> statistics? :) Without vacuum, pgsql will opt to table scan even when\n> there's an index.\n\ni hadn't, but i did, and it didn't make that particular difference:\n\n\tvixie=# explain select file from rss where addr = '127.0.0.2';\n\tNOTICE: QUERY PLAN:\n\n\tSeq Scan on rss (cost=0.00..0.00 rows=1 width=12)\n\n\tEXPLAIN\n\nthat sounded bad, so i\n\n\tvixie=# vacuum analyze rss;\n\tVACUUM\n\nbut when i reran the explain, it still said it was doing it sequentially:\n\n\tvixie=# explain select file from rss where addr = '127.0.0.2';\n\tNOTICE: QUERY PLAN:\n\n\tSeq Scan on rss (cost=0.00..1685.10 rows=1 width=12)\n\n\tEXPLAIN\n\ni'll try remaking the table with \"addr\" as a unique key and see if that helps.\n\n> 2) I'm not sure if you are executing pgcat 70k times or executing inner\n> loop in pgcat 70k times. Postgres connection establishment is expensive.\n\nit was 70K invocations, but connection establishment ought to be the same\nfor both \"pgcat get\" and \"pgcat put\" so this doesn't explain the difference\nin the graphs.\n\n> 3) Postgres INSERT is not very efficient if you are doing a bulk load of\n> data (it has to reparse the statement every time). If you want to delete\n> everything and load new data, use \"COPY\", which is about 5 times faster.\n\nwell, that doesn't help in my application. i'm trying to find out whether\npgsql can be used as the generic backend for MAPS RSS, and the only time i\nexpect to be doing bulk loads is during benchmarking and during transition.\nso, the speed of a \"pgcat get\" really matters if i want the web server to\ngo fast when it gets hit by a lot of simultaneous lookups. so, even though\nthere are faster ways to do bulk loading, the current benchmark is accurate\nfor the real application's workload, which isn't about bulk loading.\n\n> Oh...And since I have your attention, could you please resolve\n> long-standing discussion between me and Tom Lane? :) \n> \n> Question is whether proper (standard/most-commonly-used) format for\n> printing CIDR network address is 10/8 or 10.0.0.0/8 (i.e. should all\n> octets be printed even if they are 0). After search of RFCs, there's\n> nothing that specifies the standard, but 10.0.0.0/8 is used more often in\n> examples than 10/8 form.\n> \n> Postgres uses 10/8 form, and I'm saying that 10.0.0.0/8 is more accepted\n> by everyone else. (I.E. all software can deal with that, but not all\n> software accepts 10/8).\n\ncisco IOS just won't take 10/8 and insists on 10.0.0.0/8. you will never,\never go wrong if you try to use 10.0.0.0/8, since everything that understands\nCIDR understands that. 10/8 is a pleasant-appearing alternative format, but\nit is not universally accepted and i recommend against it. (i'm not sure if\nmy original CIDR type implementation for pgsql output the shorthand or not;\nif it did, then i apologize to one and all.)\n",
"msg_date": "Tue, 19 Dec 2000 16:06:35 -0800",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance modality in 7.1 for large text attributes? "
},
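A sketch of the remake described above; the column types are assumptions, but declaring the key column as PRIMARY KEY gives the planner a unique index to consider:

    CREATE TABLE rss (addr cidr PRIMARY KEY, file text);
    VACUUM ANALYZE rss;
    EXPLAIN SELECT file FROM rss WHERE addr = '127.0.0.2';
    -- hoped-for plan: Index Scan using rss_pkey on rss ...
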
{
"msg_contents": "> > anybody know what i could be doing wrong? (i'm also wondering why SELECT\n> > takes ~250ms whereas INSERT takes ~70ms... seems counterintuitive, unless\n> > TOAST is doing a LOT better than i think.)\n> \n> I would think that this is entirely due to planning the query. An INSERT\n> has no decisions to make, whereas a SELECT must decide among a variety\n> of possible plans. To hand-optimize selects, you can set some parameters\n> to force only some kinds of plans (such as index scan) but in general\n> you will need to remember to unset them afterwards or you run the risk\n> of bizarrely inappropriate plans for other queries in the same session.\n\nsince every \"pgcat\" invocation is its own sql session, i have no worries\nabout that. what i don't know, is how to set these options. i'm rerunning\nmy test with PRIMARY KEY on the thing i'm searching on, and will report\nresults here soon. it appears that 60ms is still the average INSERT time\n(which is fine, btw) but that PRIMARY KEY just about doubles the amount of\ndisk I/O per INSERT, and the postgres server process is using 4% of the CPU\nrather than the 0.5% it had used without PRIMARY KEY.\n\n> > furthermore, are there any plans to offer a better libpq interface to\n> > INSERT? the things i'm doing now to quote the text, and the extra copy\n> > i'm maintaining, are painful.\n> \n> What exactly are you looking for in \"better\"? Is it just the quoting\n> issue (a longstanding problem which persists for historical reasons :(\n\nwell, the programmatic interface to SELECT is just about perfect. i can\nconstruct the command and send it over, then check the result to see how\nmany tuples and fields i got back, and then i can get the value in its\nnative form as a big block of goo.\n\n /* Send the query. */\n if (snprintf(cmd, sizeof cmd, \"SELECT %s FROM %s WHERE %s = %s%s%s\",\n text, table, key, p, value, p) >= sizeof cmd) {\n fprintf(stderr, \"%s: snprintf overflow\\n\", progname);\n goto done;\n }\n res = PQexec(conn, cmd);\n if (PQresultStatus(res) != PGRES_TUPLES_OK) {\n fprintf(stderr, \"%s: \\\"%s\\\": %s\", progname, cmd,\n PQresultErrorMessage(res));\n goto done;\n }\n if (PQnfields(res) != 1) {\n fprintf(stderr, \"%s: \\\"%s\\\": %d fields?\\n\",\n progname, cmd, PQnfields(res));\n goto done;\n }\n if (PQntuples(res) != 1) {\n fprintf(stderr, \"%s: \\\"%s\\\": %d tuples?\\n\",\n progname, cmd, PQntuples(res));\n goto done;\n }\n\n /* Output the result. */\n pch = '\\0';\n for (p = PQgetvalue(res, 0, 0), ch = '\\0'; (ch = *p) != '\\0'; p++) {\n putc(ch, fp);\n pch = ch;\n }\n if (pch != '\\n')\n putc('\\n', fp);\n status = 0;\n\nfor INSERT, though, there is no analogue. i guess i'm looking for functions\nwhich might have names like PQinsert() and PQaddtuple(). instead, i've got\n\n /*\n * Read the file to find out how large it will be when quoted.\n * If it's not a regular file, make a copy while reading, then switch.\n */\n\t...\n /* Construct the INSERT command. */\n n = snprintf(cmd, sizeof cmd,\n \"INSERT INTO %s ( %s, %s ) VALUES ( %s%s%s, '\",\n table, key, text, p, value, p);\n if (n >= sizeof cmd) {\n fprintf(stderr, \"%s: snprintf overflow\\n\", progname);\n goto done;\n }\n t = malloc(n + size + sizeof \"');\");\n if (t == NULL) {\n perror(\"malloc\");\n goto done;\n }\n strcpy(t, cmd);\n tp = t + n;\n while ((ch = getc(fp)) != EOF) {\n if (ch == '\\\\' || ch == '\\'')\n *tp++ = '\\\\';\n *tp++ = ch;\n }\n *tp++ = '\\'';\n *tp++ = ')';\n *tp++ = ';';\n *tp++ = '\\0';\n\n /* Send the command. 
*/\n res = PQexec(conn, t);\n if (PQresultStatus(res) != PGRES_COMMAND_OK) {\n fprintf(stderr, \"%s: \\\"%s\\\": %s\", progname, t,\n PQresultErrorMessage(res));\n goto done;\n }\n if (strcmp(PQcmdTuples(res), \"1\") != 0) {\n fprintf(stderr, \"%s: \\\"%s...\\\": '%s' tuples? (%s)\\n\",\n progname, cmd, PQcmdTuples(res), PQcmdStatus(res));\n goto done;\n }\n status = 0;\n\nwhich is really, really painful. the large \"text\" is a great idea, but the\nold \"lo_\" API actually had some things going for it.\n\n> Possible causes of the 1.5s \"mode\" (at least as a starting point):\n> \n> o task scheduling on your test machine (not likely??)\n> \n> o swapping/thrashing on your test machine (not likely??)\n\nthe machine was idle other than for this test. it's a two processor\nfreebsd machine.\n\n> o WAL fsync() log commits and cleanup (aggregate throughput is great,\n> but every once in a while someone waits while the paperwork gets done.\n> Waiting may be due to processor resource competition)\n> \n> o Underlying file system bookkeeping from the kernel. e.g. flushing\n> buffers to disk etc etc.\n\ni'm going to make a 500MB MFS partition for /usr/local/pgsql/data/base if\nthe PRIMARY KEY idea doesn't work out, just to rule out actuator noise.\n",
"msg_date": "Tue, 19 Dec 2000 16:43:05 -0800",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance modality in 7.1 for large text attributes? "
},
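Until libpq grows such an interface, the quoting loop above could at least be factored into a helper; a minimal sketch (the name pq_quote is made up here, not a libpq function):

    #include <stdlib.h>
    #include <string.h>

    /* Return a malloc'd copy of src with backslash and single quote
     * backslash-escaped, suitable for embedding inside '...'.
     * Caller frees the result; returns NULL on allocation failure. */
    static char *
    pq_quote(const char *src)
    {
        char *dst = malloc(2 * strlen(src) + 1);   /* worst case: all escaped */
        char *p = dst;

        if (dst == NULL)
            return NULL;
        for (; *src != '\0'; src++) {
            if (*src == '\\' || *src == '\'')
                *p++ = '\\';
            *p++ = *src;
        }
        *p = '\0';
        return dst;
    }
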
{
"msg_contents": "> furthermore, are there any plans to offer a better libpq interface to INSERT?\n> the things i'm doing now to quote the text, and the extra copy i'm maintaining,\n> are painful. arbitrary-sized \"text\" attributes are a huge boon -- we would\n> never have considered using postgres for MAPS RSS (or RBL) with \"large\n> objects\". (kudos to all who were involved, with both WAL and TOAST!)\n\nIf you are asking for a binary interface to TOAST values, I really wish\nwe had that in 7.1. It never got finished for 7.1.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Dec 2000 22:07:58 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance modality in 7.1 for large text attributes?"
},
{
"msg_contents": "Paul A Vixie <[email protected]> writes:\n> cisco IOS just won't take 10/8 and insists on 10.0.0.0/8. you will never,\n> ever go wrong if you try to use 10.0.0.0/8, since everything that understands\n> CIDR understands that. 10/8 is a pleasant-appearing alternative format, but\n> it is not universally accepted and i recommend against it. (i'm not sure if\n> my original CIDR type implementation for pgsql output the shorthand or not;\n> if it did, then i apologize to one and all.)\n\nWell, that's an earful. Faced with this authoritative opinion, I\nwithdraw my previous objections to changing the output format for CIDR.\n\nIt would seem that the appropriate behavior would be to make the default\ndisplay format for CIDR be like \"10.0.0.0/8\". Now the text() conversion\nfunction already produces this same format. I'd be inclined to leave\ntext() as-is and add a new conversion function with some other name\n(suggestions anyone?) that produces the shorthand form \"10/8\" as text,\nfor those who prefer it.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 13:06:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "CIDR output format"
},
{
"msg_contents": "Paul A Vixie <[email protected]> writes:\n> http://www.vix.com/~vixie/results-psql.png shows a gnuplot of the wall time\n> of 70K executions of \"pgcat\" (shown below) using a CIDR key and TEXT value.\n\nI get a 404 on that URL :-(\n\n> anybody know what i could be doing wrong? (i'm also wondering why SELECT\n> takes ~250ms whereas INSERT takes ~70ms... seems counterintuitive, unless\n> TOAST is doing a LOT better than i think.)\n\nGiven your later post, the problem is evidently that the thing is\nfailing to use the index for the SELECT. I am not sure why, especially\nsince it clearly does know (after vacuuming) that the index would\nretrieve just a single row. May we see the exact declaration of the\ntable --- preferably via \"pg_dump -s -t TABLENAME DBNAME\" ?\n\n> furthermore, are there any plans to offer a better libpq interface to INSERT?\n\nConsider using COPY if you don't want to quote the data.\n\n\tCOPY rss FROM stdin;\n\tvalues here\n\tmore values here\n\t\\.\n\n(If you don't like tab as column delimiter, you can specify another in\nthe copy command.) The libpq interface to this is relatively\nstraightforward IIRC.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 13:17:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance modality in 7.1 for large text attributes? "
},
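For reference, the libpq side of COPY FROM stdin is small; a sketch assuming an open PGconn *conn and the rss table from earlier in the thread, with error handling elided:

    PGresult *res = PQexec(conn, "COPY rss FROM STDIN");

    if (PQresultStatus(res) == PGRES_COPY_IN) {
        /* one row per line, columns separated by tabs */
        PQputline(conn, "10.0.0.1\tsome text value\n");
        PQputline(conn, "10.0.0.2\tanother value\n");
        PQputline(conn, "\\.\n");   /* end-of-data marker */
        PQendcopy(conn);            /* returns 0 on success */
    }
    PQclear(res);
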
{
"msg_contents": "* Paul A Vixie <[email protected]> [001220 10:28]:\n> > Question is whether proper (standard/most-commonly-used) format for\n> > printing CIDR network address is 10/8 or 10.0.0.0/8 (i.e. should all\n> > octets be printed even if they are 0). After search of RFCs, there's\n> > nothing that specifies the standard, but 10.0.0.0/8 is used more often in\n> > examples than 10/8 form.\n> > \n> > Postgres uses 10/8 form, and I'm saying that 10.0.0.0/8 is more accepted\n> > by everyone else. (I.E. all software can deal with that, but not all\n> > software accepts 10/8).\n> \n> cisco IOS just won't take 10/8 and insists on 10.0.0.0/8. you will never,\n> ever go wrong if you try to use 10.0.0.0/8, since everything that understands\n> CIDR understands that. 10/8 is a pleasant-appearing alternative format, but\n> it is not universally accepted and i recommend against it. (i'm not sure if\n> my original CIDR type implementation for pgsql output the shorthand or not;\n> if it did, then i apologize to one and all.)\nThere was no way, prior to 7.1, to get all 4 octets printed using the\noriginal code. \n\nThanks for clearing up the info. \n\nLarry Rosenman\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Wed, 20 Dec 2000 12:44:35 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance modality in 7.1 for large text attributes?"
},
{
"msg_contents": "* Tom Lane <[email protected]> [001220 13:02]:\n> Paul A Vixie <[email protected]> writes:\n> > cisco IOS just won't take 10/8 and insists on 10.0.0.0/8. you will never,\n> > ever go wrong if you try to use 10.0.0.0/8, since everything that understands\n> > CIDR understands that. 10/8 is a pleasant-appearing alternative format, but\n> > it is not universally accepted and i recommend against it. (i'm not sure if\n> > my original CIDR type implementation for pgsql output the shorthand or not;\n> > if it did, then i apologize to one and all.)\n> \n> Well, that's an earful. Faced with this authoritative opinion, I\n> withdraw my previous objections to changing the output format for CIDR.\n> \n> It would seem that the appropriate behavior would be to make the default\n> display format for CIDR be like \"10.0.0.0/8\". Now the text() conversion\n> function already produces this same format. I'd be inclined to leave\n> text() as-is and add a new conversion function with some other name\n> (suggestions anyone?) that produces the shorthand form \"10/8\" as text,\n> for those who prefer it.\nI would call it cidrshort(). \n\nI assume this also is true for INET? \n\nThanks!\n\nLER\n> \n> Comments?\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Thu, 21 Dec 2000 05:58:58 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CIDR output format"
},
{
"msg_contents": "Larry Rosenman <[email protected]> writes:\n>> It would seem that the appropriate behavior would be to make the default\n>> display format for CIDR be like \"10.0.0.0/8\". Now the text() conversion\n>> function already produces this same format. I'd be inclined to leave\n>> text() as-is and add a new conversion function with some other name\n>> (suggestions anyone?) that produces the shorthand form \"10/8\" as text,\n>> for those who prefer it.\n\n> I would call it cidrshort(). \n\nI was thinking something like abbrev(). There is no need to put the\ntype name in the function; that's what function overloading is for.\n\n> I assume this also is true for INET? \n\nINET doesn't use abbreviation of the address part anyway. The only\ndisplay shortcut it has is to suppress \"/32\" when the netmask is 32.\nI figured that text() could produce an un-abbreviated result for an\nINET input (as it does now), and abbrev() could produce one with\n/32 suppression. In short:\n\nValue\t\t\tDefault output\ttext()\t\tabbrev()\n\n'127.0.0.1/32'::inet\t127.0.0.1\t127.0.0.1/32\t127.0.0.1\n'127.0.0.1/32'::cidr\t127.0.0.1/32\t127.0.0.1/32\t127.0.0.1/32\n'127/8'::cidr\t\t127.0.0.0/8\t127.0.0.0/8\t127/8\n\nThis would be a little bit inconsistent, because the default output\nformat would match text() for CIDR values but abbrev() for INET values.\nBut that seems like the most useful behavior to me. Possibly others\nwill disagree ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Dec 2000 10:48:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CIDR output format "
},
{
"msg_contents": "* Tom Lane <[email protected]> [001221 09:49]:\n> Larry Rosenman <[email protected]> writes:\n> >> It would seem that the appropriate behavior would be to make the default\n> >> display format for CIDR be like \"10.0.0.0/8\". Now the text() conversion\n> >> function already produces this same format. I'd be inclined to leave\n> >> text() as-is and add a new conversion function with some other name\n> >> (suggestions anyone?) that produces the shorthand form \"10/8\" as text,\n> >> for those who prefer it.\n> \n> > I would call it cidrshort(). \n> \n> I was thinking something like abbrev(). There is no need to put the\n> type name in the function; that's what function overloading is for.\n> \n> > I assume this also is true for INET? \n> \n> INET doesn't use abbreviation of the address part anyway. The only\n> display shortcut it has is to suppress \"/32\" when the netmask is 32.\n> I figured that text() could produce an un-abbreviated result for an\n> INET input (as it does now), and abbrev() could produce one with\n> /32 suppression. In short:\n> \n> Value\t\t\tDefault output\ttext()\t\tabbrev()\n> \n> '127.0.0.1/32'::inet\t127.0.0.1\t127.0.0.1/32\t127.0.0.1\n> '127.0.0.1/32'::cidr\t127.0.0.1/32\t127.0.0.1/32\t127.0.0.1/32\n> '127/8'::cidr\t\t127.0.0.0/8\t127.0.0.0/8\t127/8\n> \n> This would be a little bit inconsistent, because the default output\n> format would match text() for CIDR values but abbrev() for INET values.\n> But that seems like the most useful behavior to me. Possibly others\n> will disagree ;-)\nI'm fine with it. IIRC, you fixed it so we can cast from INET to CIDR\nand back? \n\nThanks!\n\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Thu, 21 Dec 2000 10:16:02 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CIDR output format"
},
{
"msg_contents": "On Thu, 21 Dec 2000, Tom Lane wrote:\n\n> Value\t\t\tDefault output\ttext()\t\tabbrev()\n> \n> '127.0.0.1/32'::inet\t127.0.0.1\t127.0.0.1/32\t127.0.0.1\n> '127.0.0.1/32'::cidr\t127.0.0.1/32\t127.0.0.1/32\t127.0.0.1/32\n> '127/8'::cidr\t\t127.0.0.0/8\t127.0.0.0/8\t127/8\n> \n> This would be a little bit inconsistent, because the default output\n> format would match text() for CIDR values but abbrev() for INET values.\n> But that seems like the most useful behavior to me. Possibly others\n> will disagree ;-)\nI think it makes sense.\n\n",
"msg_date": "Thu, 21 Dec 2000 12:10:58 -0500 (EST)",
"msg_from": "Alex Pilosov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CIDR output format "
}
]
|
[
{
"msg_contents": "\n> > The only source of serious problems is thus a bogus write of a page\n> > segment (100 bytes ok 412 bytes chunk actually written to disk),\n> > but this case is imho sufficiently guarded or at least detected\n> > by disk hardware. \n> \n> With full page logging after checkpoint we would be safe from this\n> case...\n\n> Comments?\n> \n> - full page backup on first after checkpoint modification\n\nI guess you are right, especially since it solves above and index.\nThe \"physical log\" solution sounds a lot simpler and more robust\n(I didn't know you use PageAddItem, sounds genially simple :-)\n\nBut we should probably try to do checkpoints less frequently by default, \nlike every 20 min to avoid too much phys log.\n\nAndreas\n",
"msg_date": "Tue, 19 Dec 2000 09:15:18 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: heap page corruption not easy"
}
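For reference, the 20-minute default suggested above would presumably be expressed through the checkpoint settings; a postgresql.conf sketch, assuming the 7.1-era parameter names:

    checkpoint_timeout = 1200    # seconds between automatic checkpoints
    checkpoint_segments = 3      # a checkpoint is also forced every N log segments
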
]
|
[
{
"msg_contents": "Hi all, and mainly postresql developpers,\n\nI've been reading old posts about the libpq interface related to multi-process\napplication. The main problem being that after a fork, each process has a DB\nconnexion, actually the same. If one closes it, the other one remains in a\nunknown or not stable state.\n\nThis is a real problem when writing a highly loaded daemon. Let's consider my\nexample : a main daemon is receiving network requests, and makes heavy use of\nthe DB. It thus have a permanent connexion to the DB. Sometimes, this main\ndaemon forks, just to serve a couple of request. This child process doesn't\nneed a permanent connexion. But closing it would destroy his parent's one.\n\nThere is actually one very easy, but awful, solution : closing the database\nconnexion before forking, and reopening when needed in each process. But\nthat's really awful, cause the main daemon will always close, fork, and then\njust after reopen. What a waste !\n\nA second solution would be a clone of the PQfinish function which does NOT\nsend the disconnexion sequence to the backend but just does everything else\n(release memory, close the socket, and so on).\n\nThe big frustration being that this clone actually exists in the library, but\nis a private function. It's named freePGconn, and is called from PQfinish\nbesides closePGconn (which sends the disconnexion sequence to the backend).\n\nSo I guess you've understood my request. Great folks from postresql, would it\nbe possible to kinda export a nice version of freePGconn ? It would really,\nreally, help people writing multi-process application without having to manage\na single connexion with shared memory and other tricks, as suggested a few\nmonths ago.\n\nIn the meantime, I use the ugly solution : freeing the pointer returned by\nPQconnectdb in the child process, knowing some memory hasn't been released.\nHopefully, these child processes don't last long, and the garbage collector is\nworking fine !\n\n\nComments / other solutions are welcome !\n\nRegards,\n\n-- \nS�bastien Bonnet\n [email protected] http://bonseb.free.fr/\n",
"msg_date": "Tue, 19 Dec 2000 15:08:57 +0100",
"msg_from": "=?iso-8859-1?Q?S=E9bastien?= Bonnet <[email protected]>",
"msg_from_op": true,
"msg_subject": "libpq enhancement for multi-process application"
},
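For what it's worth, a sketch of the child-side workaround being asked for, assuming it is acceptable to leak the PGconn in a short-lived child. Closing the child's copy of the descriptor does not tear down the parent's TCP connection, since the kernel reference-counts the underlying socket:

    #include <unistd.h>
    #include <libpq-fe.h>

    /* fragment: conn was opened by the parent before fork() */
    pid_t pid = fork();
    if (pid == 0) {
        /* child: drop our descriptor without sending the terminate
         * message, so the parent's connection stays usable */
        close(PQsocket(conn));
        conn = NULL;    /* deliberately leak the PGconn memory */
        /* ... open a fresh connection later if this child needs one ... */
    }
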
{
"msg_contents": "> Uhm... I always thought that sharing the same socket between\n> processes is wrong.\n\nWell, I've never thought about it before this problem, but it definitely\nappears to me like something not to do. Sharing remote object doesn't sound\nright :-(\n\n> My multi-process daemon works like apache with a pool of processes\n> everyone with its own connection to the DB. The connection is only\n> opened AFTER the fork and remains open as long as the process lives just\n> to avoid a new connection for each accept.\n\nWhen you can do it this way, that's nice'n'easy. In my case, I have to have a\nconnection before the fork, and keep it after in both parent and child,\neventhough it will be closed a few seconds later in the child. So, for now on,\nthe only-almost-clean-solution is to free the pgconn structure in the child\nand reconnect when needed. This way, the parent process keeps its own\nconnexion. No other process is using it. Sounds safe, but kinda\n\"do-it-yourself\" :-(\n\n-- \nS�bastien Bonnet\n [email protected] http://bonseb.free.fr/\n",
"msg_date": "Tue, 19 Dec 2000 17:23:46 +0100",
"msg_from": "=?iso-8859-1?Q?S=E9bastien?= Bonnet <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq enhancement for multi-process application"
},
{
"msg_contents": "S�bastien Bonnet wrote:\n> \n> Hi all, and mainly postresql developpers,\n> \n> I've been reading old posts about the libpq interface related to multi-process\n> application. The main problem being that after a fork, each process has a DB\n> connexion, actually the same. If one closes it, the other one remains in a\n> unknown or not stable state.\n\nUhm... I always thought that sharing the same socket between processes\nis wrong.\n\nMy multi-process daemon works like apache with a pool of processes\neveryone with its own connection to the DB. The connection is only\nopened AFTER the fork and remains open as long as the process lives just\nto avoid a new connection for each accept.\n\nBye!\n",
"msg_date": "Tue, 19 Dec 2000 18:10:10 +0100",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq enhancement for multi-process application"
}
]
|
[
{
"msg_contents": "HI all,\n\nI've encountered a database freeze and found it's due\nto the reset of connection after abort.\n \nThe following is a part of postmaster log.\nA new backend(pid=395) started immedaitely after\na backend(pid=394) abort. OTOH postmaster tries\nto kill all backends to cleanup shared memory.\nHowever the process 394 ignored SIGUSR1 signal\nand is waiting for some lock which would never be\nreleased.\n\nFATAL 2: elog: error during error recovery, giving up!\nDEBUG: proc_exit(2)\nDEBUG: shmem_exit(2)\npostmaster: ServerLoop:\t\thandling reading 5\npostmaster: ServerLoop:\t\thandling reading 5\npostmaster: ServerLoop:\t\thandling writing 5\npostmaster: BackendStartup: pid 395 user reindex db reindex socket 5\nDEBUG: exit(2)\npostmaster: reaping dead processes...\npostmaster: CleanupProc: pid 394 exited with status 512\nServer process (pid 394) exited with status 512 at Tue Dec 19 20:12:41 2000\nTerminating any active server processes...\npostmaster: CleanupProc: sending SIGUSR1 to process 395\npostmaster child[395]: starting with (postgres -d2 -v131072 -p reindex )\nFindExec: searching PATH ...\nValidateBinary: can't stat \"/bin/postgres\"\nValidateBinary: can't stat \"/usr/bin/postgres\"\nValidateBinary: can't stat \"/usr/local/bin/postgres\"\nValidateBinary: can't stat \"/usr/bin/X11/postgres\"\nValidateBinary: can't stat \"/usr/lib/jdk1.2/bin/postgres\"\nValidateBinary: can't stat \"/home/freetools/bin/postgres\"\nFindExec: found \"/home/freetools/reindex/bin/postgres\" using PATH\nDEBUG: connection: host=[local] user=reindex database=reindex\nDEBUG: InitPostgres\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Wed, 20 Dec 2000 00:40:50 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is PQreset() proper ?"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> postmaster: BackendStartup: pid 395 user reindex db reindex socket 5\n> DEBUG: exit(2)\n> postmaster: reaping dead processes...\n> postmaster: CleanupProc: pid 394 exited with status 512\n> Server process (pid 394) exited with status 512 at Tue Dec 19 20:12:41 2000\n> Terminating any active server processes...\n> postmaster: CleanupProc: sending SIGUSR1 to process 395\n> postmaster child[395]: starting with (postgres -d2 -v131072 -p reindex )\n\nThis isn't PQreset()'s fault that I can see. This is a race condition\ncaused by bogosity in PostgresMain --- it enables SIGUSR1 before it's\nset up the correct signal handler for same. The postmaster should have\nstarted the child process with all signals blocked, so SIGUSR1 will be\nheld off until the child explicitly enables it; but it does so a few\nlines too soon. Will fix.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 14:28:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is PQreset() proper ? "
},
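The general pattern being described — hold the signal off until the real handler is installed — looks roughly like the following generic sketch (not the actual postmaster code; quickdie_handler is a made-up name):

    #include <signal.h>

    sigset_t mask, omask;

    sigemptyset(&mask);
    sigaddset(&mask, SIGUSR1);
    sigprocmask(SIG_BLOCK, &mask, &omask);   /* hold SIGUSR1 off across fork() */

    /* ... fork(); then in the child: ... */
    signal(SIGUSR1, quickdie_handler);       /* install the real handler first */
    sigprocmask(SIG_SETMASK, &omask, NULL);  /* only now allow delivery */
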
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > postmaster: BackendStartup: pid 395 user reindex db reindex socket 5\n> > DEBUG: exit(2)\n> > postmaster: reaping dead processes...\n> > postmaster: CleanupProc: pid 394 exited with status 512\n> > Server process (pid 394) exited with status 512 at Tue Dec 19 20:12:41 2000\n> > Terminating any active server processes...\n> > postmaster: CleanupProc: sending SIGUSR1 to process 395\n> > postmaster child[395]: starting with (postgres -d2 -v131072 -p reindex )\n> \n> This isn't PQreset()'s fault that I can see. This is a race condition\n> caused by bogosity in PostgresMain --- it enables SIGUSR1 before it's\n> set up the correct signal handler for same. The postmaster should have\n> started the child process with all signals blocked, so SIGUSR1 will be\n> held off until the child explicitly enables it; but it does so a few\n> lines too soon. Will fix.\n> \n\nI once observed another case,the hang of CheckPoint process\nwhile postmaster was in a backend crash recovery. I changed\npostmaster.c to not invoke CheckPoint process while postmaster\nis in a backend crash recovery but it doesn't seem sufficient.\nSIGUSR1 signal seems to be blocked all the way in CheckPoint\nprocess.\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Thu, 21 Dec 2000 12:42:35 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is PQreset() proper ?"
},
{
"msg_contents": "> Tom Lane wrote:\n>> This isn't PQreset()'s fault that I can see. This is a race condition\n>> caused by bogosity in PostgresMain --- it enables SIGUSR1 before it's\n>> set up the correct signal handler for same. The postmaster should have\n>> started the child process with all signals blocked, so SIGUSR1 will be\n>> held off until the child explicitly enables it; but it does so a few\n>> lines too soon. Will fix.\n\nActually, it turns out the real problem is that backends were inheriting\na SIG_IGN setting for SIGUSR1 from the postmaster. So a SIGUSR1\ndelivered before they got as far as setting up their own signal handling\nwould get lost. Fixed now.\n\nHiroshi Inoue <[email protected]> writes:\n> I once observed another case,the hang of CheckPoint process\n> while postmaster was in a backend crash recovery. I changed\n> postmaster.c to not invoke CheckPoint process while postmaster\n> is in a backend crash recovery but it doesn't seem sufficient.\n> SIGUSR1 signal seems to be blocked all the way in CheckPoint\n> process.\n\nHm. Vadim, do you think it's safe to let CheckPoint be killed by\nSIGUSR1? If not, what will we do about this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Dec 2000 01:17:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is PQreset() proper ? "
}
]
|
[
{
"msg_contents": "Hi all,\n\nIn InitPostgres()(postinit.c) I see the following code.\n\n RelationCacheInitialize(); /* pre-allocated reldescs created here\n*/\n InitializeTransactionSystem(); /* pg_log,etc init/crash recovery\nhere */\n\ninit_irels() is at the end of RelationCacheInitialize() and\naccesses system tables to build some system index\nrelations. However InitializeTransactionSystem() isn't\ncalled at this point and so TransactionIdDidCommit()\nalways returns true. Time qualification doesn't work\nproperly under such a situation.\nIt seems that init_irels() should be called after\nInitializeTransactionSystem() was called.\n\nComments ?\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Wed, 20 Dec 2000 00:41:15 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Isn't init_irels() dangerous ?"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> It seems that init_irels() should be called after\n> InitializeTransactionSystem() was called.\n\nCan we just swap the order of the RelationCacheInitialize() and\nInitializeTransactionSystem() calls in InitPostgres? If that\nworks, I'd have no objection.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 14:13:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Isn't init_irels() dangerous ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > It seems that init_irels() should be called after\n> > InitializeTransactionSystem() was called.\n> \n> Can we just swap the order of the RelationCacheInitialize() and\n> InitializeTransactionSystem() calls in InitPostgres? If that\n> works, I'd have no objection.\n>\n\nIt doesn't work. InitializeTransactionSystem() requires\npg_log/pg_variable relations which are already built in \nRelationCacheInitialize(). A few critical relations\nincluding pg_log/pg_variable are built in RelationCache\nInitialize() without touching database. It's OK but\ninit_irels() touches system tables to build a few\ncritical index relations. IMHO init_irels() should\nbe separated from RelationCacheInitialize().\n\nIn the meantime,I have another anxiety. init_irels()\n(RelationCacheInitialize()) seems to be called while \nLocking is disabled. This seems to mean that init_irels()\ncould access to system tables even when they are in\nvacuum. HeapTupleSatisfiesXXXX() doesn't seem to take\nsuch cases into account except HeapTupleSatisfiesDirty().\nHeapTupleSatisfiesXXXX() sets HEAP_XMIN_COMMITTED or\nHEAP_XMIN_INVALID mask for HEAP_MOVED_IN(OFF) tuples.\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Thu, 21 Dec 2000 09:26:58 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Isn't init_irels() dangerous ?"
},
{
"msg_contents": "Hiroshi Inoue <[email protected]> writes:\n> Tom Lane wrote:\n>> \"Hiroshi Inoue\" <[email protected]> writes:\n>>>> It seems that init_irels() should be called after\n>>>> InitializeTransactionSystem() was called.\n>> \n>> Can we just swap the order of the RelationCacheInitialize() and\n>> InitializeTransactionSystem() calls in InitPostgres? If that\n>> works, I'd have no objection.\n\n> It doesn't work. InitializeTransactionSystem() requires\n> pg_log/pg_variable relations which are already built in \n> RelationCacheInitialize().\n\nOK. Second proposal: do the init_irels() call in\nRelationCacheInitializePhase2(). I've just looked through the\nother stuff that's done in between, and I don't think any of it\nneeds valid relcache entries.\n\n> In the meantime,I have another anxiety. init_irels()\n> (RelationCacheInitialize()) seems to be called while \n> Locking is disabled.\n\nThis should fix that problem, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Dec 2000 19:37:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Isn't init_irels() dangerous ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <[email protected]> writes:\n> > Tom Lane wrote:\n> >> \"Hiroshi Inoue\" <[email protected]> writes:\n> >>>> It seems that init_irels() should be called after\n> >>>> InitializeTransactionSystem() was called.\n> >>\n> >> Can we just swap the order of the RelationCacheInitialize() and\n> >> InitializeTransactionSystem() calls in InitPostgres? If that\n> >> works, I'd have no objection.\n> \n> > It doesn't work. InitializeTransactionSystem() requires\n> > pg_log/pg_variable relations which are already built in\n> > RelationCacheInitialize().\n> \n> OK. Second proposal: do the init_irels() call in\n> RelationCacheInitializePhase2(). I've just looked through the\n> other stuff that's done in between, and I don't think any of it\n> needs valid relcache entries.\n> \n\nOops, I neglected to reply \"agreed\", sorry.\nIt would be much safer for init_irels() to be called\nin a proper transaction than the current implementation.\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Sat, 06 Jan 2001 09:33:34 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Isn't init_irels() dangerous ?"
},
{
"msg_contents": "Hiroshi Inoue <[email protected]> writes:\n> Tom Lane wrote:\n>> OK. Second proposal: do the init_irels() call in\n>> RelationCacheInitializePhase2(). I've just looked through the\n>> other stuff that's done in between, and I don't think any of it\n>> needs valid relcache entries.\n\n> Oops, I neglected to reply \"agreed\", sorry.\n> It would be much safer for init_irels() to be called\n> in a proper transaction than the current implementation.\n\nFine. Were you going to do it, or do you want me to?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Jan 2001 19:47:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Isn't init_irels() dangerous ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <[email protected]> writes:\n> > Tom Lane wrote:\n> >> OK. Second proposal: do the init_irels() call in\n> >> RelationCacheInitializePhase2(). I've just looked through the\n> >> other stuff that's done in between, and I don't think any of it\n> >> needs valid relcache entries.\n> \n> > Oops, I neglected to reply \"agreed\", sorry.\n> > It would be much safer for init_irels() to be called\n> > in a proper transaction than the current implementation.\n> \n> Fine. Were you going to do it, or do you want me to?\n>\n\nIt would only need to change a few lines.\nOK, I will do it. \n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Sat, 06 Jan 2001 09:58:28 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Isn't init_irels() dangerous ?"
}
]
|
[
{
"msg_contents": "I am doing some testing and development on Postgres.\n\nIs there, by chance, a good source of data which can be used as a test\ndatabase? I have been using a music database, but it is proprietary, and\nmakes me uncomfortable to post public tests.\n\nWhat do you guys use?\n\nPerhaps we can create a substantial test database? (Millions of records,\nmany tables, and a number of relations.) So when we see a problem, we\ncan all see it right away. I like \"real world\" data, because it is often\nmore organic than randomized test data, and brings out more issues. Take\nindex selection during a select, for instance.\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Tue, 19 Dec 2000 13:19:44 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sample databases?"
},
{
"msg_contents": "> What do you guys use?\n\nThe regression database, which you can augment with some \"insert into x\nselect * from x;\" commands. It would also be useful to have a \"database\ngeneration\" script, but of course this would be cooked data.\n\n> Perhaps we can create a substantial test database? (Millions of records,\n> many tables, and a number of relations.) So when we see a problem, we\n> can all see it right away. I like \"real world\" data, because it is often\n> more organic than randomized test data, and brings out more issues. Take\n> index selection during a select, for instance.\n\nThe regression database is such a beast, but is not large enough for the\nmillions of records kinds of tests.\n\nSuggestions?\n\n - Thomas\n",
"msg_date": "Tue, 19 Dec 2000 19:31:52 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sample databases?"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > Perhaps we can create a substantial test database? (Millions of records,\n> > many tables, and a number of relations.) So when we see a problem, we\n> > can all see it right away. I like \"real world\" data, because it is often\n> > more organic than randomized test data, and brings out more issues. Take\n> > index selection during a select, for instance.\n> \n> The regression database is such a beast, but is not large enough for the\n> millions of records kinds of tests.\n> \n> Suggestions?\n> \n\nmaybe the Tiger database. it's certainly big enough & freely\navailable. if you're not familiar with tiger, it's a street database\nfrom the census department. you can find it at\nftp://ftp.linuxvc.com/pub/US-map. it's in plain text format, but\ntrivial to import. it's set up in several (at least a dozen tables)\nwhich are heavily interrelated & sometimes in fairly complex ways.\n\n-- \n\nJeff Hoffmann\nPropertyKey.com\n",
"msg_date": "Tue, 19 Dec 2000 13:33:38 -0600",
"msg_from": "Jeff Hoffmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sample databases?"
},
{
"msg_contents": "The NIMA web site has tab-delimited version of the\nAirfield Information database files. Lots of data,\nmany tables to relate. Some elements are geographic,\nothers are text and numeric feature attributes.\n\nmlw wrote:\n\n> I am doing some testing and development on Postgres.\n> \n> Is there, by chance, a good source of data which can be used as a test\n> database? I have been using a music database, but it is proprietary, and\n> makes me uncomfortable to post public tests.\n>\n\n",
"msg_date": "Wed, 20 Dec 2000 00:41:01 GMT",
"msg_from": "Josh Rovero <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sample databases?"
},
{
"msg_contents": "mlw <[email protected]> writes:\n> Perhaps we can create a substantial test database? (Millions of records,\n> many tables, and a number of relations.) So when we see a problem, we\n> can all see it right away. I like \"real world\" data, because it is often\n> more organic than randomized test data, and brings out more issues.\n\nThat's true, but a single test database strikes me as the wrong way\nto go. The real-life examples that people throw at Postgres are so\nvaried that a test database could never hope to be an adequate\nsubstitute. I think a test database would likely be subject to\n\"benchmark syndrome\", ie it'd encourage us to optimize with blinders on.\n\nThe regression database is actually sufficient to reproduce most simpler\nsorts of performance problems, once you know what to look for.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 14:42:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sample databases? "
},
{
"msg_contents": "On Wed, Dec 20, 2000 at 12:41:01AM +0000, Josh Rovero wrote:\n> mlw wrote:\n> > I am doing some testing and development on Postgres.\n> > \n> > Is there, by chance, a good source of data which can be used as a test\n> > database? I have been using a music database, but it is proprietary, and\n> > makes me uncomfortable to post public tests.\n> \n> The NIMA web site has tab-delimited version of the\n> Airfield Information database files. Lots of data,\n> many tables to relate. Some elements are geographic,\n> others are text and numeric feature attributes.\n\nIt would be no bad thing to include benchmarks against large, real\nsample databases. However, it would be very bad indeed to include\nthose large databases in the distribution.\n\nI suggest that each such benchmark script include code to check if \nthe sample database is present and, if not, download it from its \ncanonical site, massage it into shape and import it. Then there \nwould be no need to limit the number and variety of large sample \ndatabases that a build may be tried against.\n\nI gather that it takes two weeks to run the regression tests for\nIBM's DB2 for a single target platform.\n\nNathan Myers\[email protected]\n",
"msg_date": "Wed, 20 Dec 2000 14:55:17 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Sample databases?"
},
{
"msg_contents": "> I suggest that each such benchmark script include code to check if\n> the sample database is present and, if not, download it from its\n> canonical site, massage it into shape and import it. Then there\n> would be no need to limit the number and variety of large sample\n> databases that a build may be tried against.\n\nThe contrib/mac directory does something similar wrt fetching data,\nusing wget to get a file of manufacturer mac addresses to populate the\ndatabase as necessary.\n\nAlthough not everyone will use such a large test database, it certainly\ncannot hurt to have someone pursue assembling this as a toolkit, and\nrunning tests if they find that interesting and helpful. It may be hard\nto predict if, when, and how we will find this useful until it exists ;)\n\n - Thomas\n",
"msg_date": "Thu, 21 Dec 2000 06:11:44 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sample databases?"
}
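A fetch-and-load script in that style might look like the following sketch; the URL, table schema, and database name are placeholders:

    #!/bin/sh
    # fetch the dataset once (placeholder URL)
    test -f sample.tab || wget -O sample.tab http://example.org/sample.tab
    echo 'CREATE TABLE sample (id int4, name text);' | psql testdb
    psql -c "COPY sample FROM stdin" testdb < sample.tab
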
]
|
[
{
"msg_contents": "\nGiven this basic SQL statement:\n\nselect * from table where col = function() ;\n\nThere are three basic types of SQL behaviors that should be able to be\nperformed.\n\n(1) \"function()\" returns a single value. Postgres should be able to\nunderstand how to optimize this to be: \"select * from table where col =\nvalue\" where value is the datum returned by function.\n\n(2) \"function()\" returns a number of values that are independent of the\nquery. Postgres should be able to optimize this to be: \"select * from\ntable where col in (val1, val2, val3, ..valn).\" I guess Postgres can\nloop until done, using the isDone flag?\n\n(3) \"function()\" returns a value based on the query. (This seems to be\nhow it currently functions.) where \"select * from table where col =\nfunction()\" will end up doing a full table scan. \n\n\n(1) and (2) are related, and could probably be implemented using the\nsame code. \n(3) Seems to be how Postgres is currently optimized.\n\nIt seems like Tom Lane laid the foundation for this behavior in 7.1\nnewC. (Does it now work this way?)\n\nDoes anyone see a problem with this thinking, and does it make sense to\nattempt this for 7.2? I am looking into the function manager stuff to\nsee what would be involved.\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Tue, 19 Dec 2000 13:41:10 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Three types of functions, ala function redux."
},
{
"msg_contents": "\n[I was having trouble with the direct address so i'm only sending to\nthe list]\n\n> select * from table where col = function() ;\n\n> (2) \"function()\" returns a number of values that are independent of the\n> query. Postgres should be able to optimize this to be: \"select * from\n> table where col in (val1, val2, val3, ..valn).\" I guess Postgres can\n> loop until done, using the isDone flag?\n\nI disagree here. I really don't think that changing = to mean \"in\"\nin the system is a good idea. If the user wants an in they should \nspecify it.\nI think \"select * from table where col in (select function());\" or\n\"select * from table where col in (select * from function());\" or\neven \"select * from table where col in function();\"\nare better ways of specifying this sort of behavior.\n\nIf we do that (col = <function returning set>) meaning in, then does\ncol = (select statement that returns multiple rows) mean in and what\nabout col = <array>? I think doing it only for the function case is\na mistake.\n\n",
"msg_date": "Tue, 19 Dec 2000 11:19:16 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Three types of functions, ala function redux."
},
{
"msg_contents": "mlw <[email protected]> writes:\n> There are three basic types of SQL behaviors that should be able to be\n> performed.\n\n> (1) \"function()\" returns a single value. Postgres should be able to\n> understand how to optimize this to be: \"select * from table where col =\n> value\" where value is the datum returned by function.\n\nYou get this now if the function is marked proiscachable.\n\n> (2) \"function()\" returns a number of values that are independent of the\n> query. Postgres should be able to optimize this to be: \"select * from\n> table where col in (val1, val2, val3, ..valn).\" I guess Postgres can\n> loop until done, using the isDone flag?\n\nI object to the notion that \"scalar = set\" should be automatically\ntransformed into \"scalar IN set\". It would be nice to be smarter about\noptimizing IN operations where the subselect only returns a few rows\ninto multiple indexscans, but how should the planner know that in advance?\n\n> (3) \"function()\" returns a value based on the query. (This seems to be\n> how it currently functions.) where \"select * from table where col =\n> function()\" will end up doing a full table scan. \n\nYou get this now if the function is not marked proiscachable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 14:50:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Three types of functions, ala function redux. "
},
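A sketch of the flag in question, using the 7.x CREATE FUNCTION syntax; the table, column, and function names here are illustrative:

    -- marked cachable: the planner may fold the call to a constant,
    -- which lets "col = f()" be considered for an index scan
    CREATE FUNCTION lookup_const() RETURNS int4
        AS 'SELECT 42' LANGUAGE 'sql' WITH (iscachable);

    EXPLAIN SELECT * FROM t WHERE col = lookup_const();
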
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <[email protected]> writes:\n> > There are three basic types of SQL behaviors that should be able to be\n> > performed.\n> \n> > (1) \"function()\" returns a single value. Postgres should be able to\n> > understand how to optimize this to be: \"select * from table where col =\n> > value\" where value is the datum returned by function.\n> \n> You get this now if the function is marked proiscachable.\n\nDoh! RTFM!\n\n> \n> > (2) \"function()\" returns a number of values that are independent of the\n> > query. Postgres should be able to optimize this to be: \"select * from\n> > table where col in (val1, val2, val3, ..valn).\" I guess Postgres can\n> > loop until done, using the isDone flag?\n> \n> I object to the notion that \"scalar = set\" should be automatically\n> transformed into \"scalar IN set\". It would be nice to be smarter about\n> optimizing IN operations where the subselect only returns a few rows\n> into multiple indexscans, but how should the planner know that in advance?\n\nThat is sort of my point. If one marks a function as \"Iscachable\" and\nreturns an isDone as false, will postgres keep calling until all values\nhave been returned, and then use an index scan with the finite (cached?)\nset of results?\n\nIf so, this is exactly what I need.\n\n> \n> > (3) \"function()\" returns a value based on the query. (This seems to be\n> > how it currently functions.) where \"select * from table where col =\n> > function()\" will end up doing a full table scan.\n> \n> You get this now if the function is not marked proiscachable.\n\nA lot of my confusion has cleared, the \"iscachable\" flag is an\nenlightenment. Boy am I schmuck. ;-}\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Thu, 21 Dec 2000 13:44:59 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Three types of functions, ala function redux."
}
]
|
[
{
"msg_contents": "my cgi program is test.cgi:\n#######################\nrequire \"./connectdb.pl\";\n&connectdatabase();\n$query=\"select count(*) from messages\";\n$sth=$dbh->prepare($query);\n$sth->execute();\n$count=$sth->fetchrow_array();\nprint \"Content-type: text/html\\n\\n\";\nprint <<\"TAG\";\n<html>\n<body>\n<h2> The count is $count. </h2>\n</body>\n</html>\nTAG\nexit 0;\n#############\nmy connectdb.pl :\nsub connectdatabase {\n# my ($dbusername,$dbpassword)=@_;\n $dbusername=\"postgres\";\n $dbpassword=\"lokicom\";\n $dbname=\"mboardsony\";\n use DBI;\n $dbh=DBI->connect(\"dbi:Pg:dbname=$dbname\",$dbusername,$dbpassword) or die \"can\nnot connect to $dbname\\n\";\n}\n1;\n#######################\nmy os is Redhat 6.2,and perl 5.005,and web server is Apache.\nThe problem is:when I run test.cgi,it can work properly.But when I press F5\nto refresh the web page for sever minutes,the Apache will have error message:\n\"DBI->connect failed: Sorry, too many clients already.\"\n Who can help me?\n Thank you ahead.\n \n My email: [email protected]\n\n\n\n\n\n\n\nmy cgi program is \ntest.cgi:#######################require \n\"./connectdb.pl\";&connectdatabase();$query=\"select count(*) from \nmessages\";$sth=$dbh->prepare($query);$sth->execute();$count=$sth->fetchrow_array();print \n\"Content-type: text/html\\n\\n\";print \n<<\"TAG\";<html><body><h2> The count is \n$count. </h2></body></html>TAGexit \n0;#############my connectdb.pl :sub connectdatabase {# my \n($dbusername,$dbpassword)=@_; $dbusername=\"postgres\"; \n$dbpassword=\"lokicom\"; $dbname=\"mboardsony\"; use \nDBI; \n$dbh=DBI->connect(\"dbi:Pg:dbname=$dbname\",$dbusername,$dbpassword) or die \n\"cannot connect to $dbname\\n\";}1;#######################my \nos is Redhat 6.2,and perl 5.005,and web server is Apache.The problem \nis:when I run test.cgi,it can work properly.But when I press F5to refresh \nthe web page for sever minutes,the Apache will have error \nmessage:\"DBI->connect failed: Sorry, too many clients already.\" \nWho can help me? Thank you ahead. My email: [email protected]",
"msg_date": "Tue, 19 Dec 2000 13:56:33 -0500",
"msg_from": "\"Joseph\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help me for \"DBI->connect failed: Sorry, too many clients already.\""
},
{
"msg_contents": "On Tue, 19 Dec 2000, Joseph wrote:\n\n> $dbh=DBI->connect(\"dbi:Pg:dbname=$dbname\",$dbusername,$dbpassword) or die \"can\n\nI would assume that if you never disconnect and are running under\nmod_perl, you will have problems.\n\n",
"msg_date": "Tue, 19 Dec 2000 21:02:47 +0100 (MET)",
"msg_from": "Marc SCHAEFER <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help me for \"DBI->connect failed: Sorry,\n\ttoo many clients already.\""
},
{
"msg_contents": "Hi,there,\n\nI hope it helps;\n\n1. postgres by default allows 16 sessiones(if I don't remember wrong)\n at same time. You can change the setting according to the doccument.\n open too many sessiones at same time will more or less affect the\n performance.\n2. I believe that using Pg module will be easier than DBI.\n\n\nJie LIANG\n\nInternet Products Inc.\n\n10350 Science Center Drive\nSuite 100, San Diego, CA 92121\nOffice:(858)320-4873\n\[email protected]\nwww.ipinc.com\n\nOn Tue, 19 Dec 2000, Joseph wrote:\n\n> my cgi program is test.cgi:\n> #######################\n> require \"./connectdb.pl\";\n> &connectdatabase();\n> $query=\"select count(*) from messages\";\n> $sth=$dbh->prepare($query);\n> $sth->execute();\n> $count=$sth->fetchrow_array();\n> print \"Content-type: text/html\\n\\n\";\n> print <<\"TAG\";\n> <html>\n> <body>\n> <h2> The count is $count. </h2>\n> </body>\n> </html>\n> TAG\n> exit 0;\n> #############\n> my connectdb.pl :\n> sub connectdatabase {\n> # my ($dbusername,$dbpassword)=@_;\n> $dbusername=\"postgres\";\n> $dbpassword=\"lokicom\";\n> $dbname=\"mboardsony\";\n> use DBI;\n> $dbh=DBI->connect(\"dbi:Pg:dbname=$dbname\",$dbusername,$dbpassword) or die \"can\n> not connect to $dbname\\n\";\n> }\n> 1;\n> #######################\n> my os is Redhat 6.2,and perl 5.005,and web server is Apache.\n> The problem is:when I run test.cgi,it can work properly.But when I press F5\n> to refresh the web page for sever minutes,the Apache will have error message:\n> \"DBI->connect failed: Sorry, too many clients already.\"\n> Who can help me?\n> Thank you ahead.\n> \n> My email: [email protected]\n> \n\n",
"msg_date": "Tue, 19 Dec 2000 14:22:34 -0800 (PST)",
"msg_from": "Jie Liang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help me for \"DBI->connect failed: Sorry, too many clients\n\talready.\""
},
{
"msg_contents": "On Tue, 19 Dec 2000, Jie Liang wrote:\n\n> 2. I believe that using Pg module will be easier than DBI.\n\nDBD::Pg is used by DBI, and DBI is more generic (read: portable, better,\netc).\n\n",
"msg_date": "Wed, 20 Dec 2000 20:04:50 +0100 (MET)",
"msg_from": "Marc SCHAEFER <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [ADMIN] Help me for \"DBI->connect failed: Sorry,\n\ttoo many clients already.\""
}
]
|
[
{
"msg_contents": "Hello:\n\nwhile compiling in windows 2000 got this error:\n\nYS.o hash/SUBSYS.o init/SUBSYS.o misc/SUBSYS.o mmgr/SUBSYS.o \nsort/SUBSYS.o time/\nSUBSYS.o\nmake[2]: Leaving directory `/usr/src/postgresql-7.0.\n3/src/backend/utils'\nmake -C ../utils dllinit.o\nmake[2]: Entering directory `/usr/src/postgresql-7.0.3/src/utils'\ngcc -I../include -I../backend -I/usr/local/include -O2 -\nI/usr/local/include -W\nall -Wmissing-prototypes -Wmissing-declarations -c -o dllinit.o \ndllinit.c\ndllinit.c:47: conflicting types for \n`_cygwin_dll_entry'\ndllinit.c:47: previous declaration of `_cygwin_dll_entry'\ndllinit.c:79: conflicting types for \n`DllMain'\ndllinit.c:47: previous declaration of `DllMain'\nmake[2]: *** [dllinit.o] Error 1\nmake[2]: Leaving directory `/usr/src/postgresql-7.0.\n3/src/utils'\nmake[1]: *** [../utils/dllinit.o] Error 2\nmake[1]: Leaving directory `/usr/src/postgresql-7.0.3/src/backend'\nmake: *** [all] Error 2\n\ncan anyone tell me what's wrong ?, I've followed this instructions:\n\nhttp://people.freebsd.org/~kevlo/postgres/portNT.html\n\nand that's my error, any help would be apprecciated.\n\nThank you.\n\n\n--\nIng. Luis Maga�a.\nGnovus Networks & Software\nwww.gnovus.com\n\n\n",
"msg_date": "Tue, 19 Dec 2000 14:16:18 -0600",
"msg_from": "Luis =?UNKNOWN?Q?Maga=F1a?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Compiling on Win32"
},
{
"msg_contents": "\n\ntry to go to http://208.160.255.143 there's a compiled postgresql\nserver...\n\n\n\nOn Tue, 19 Dec 2000, Luis [UNKNOWN] Maga���a wrote:\n\n> Hello:\n> \n> while compiling in windows 2000 got this error:\n> \n> YS.o hash/SUBSYS.o init/SUBSYS.o misc/SUBSYS.o mmgr/SUBSYS.o \n> sort/SUBSYS.o time/\n> SUBSYS.o\n> make[2]: Leaving directory `/usr/src/postgresql-7.0.\n> 3/src/backend/utils'\n> make -C ../utils dllinit.o\n> make[2]: Entering directory `/usr/src/postgresql-7.0.3/src/utils'\n> gcc -I../include -I../backend -I/usr/local/include -O2 -\n> I/usr/local/include -W\n> all -Wmissing-prototypes -Wmissing-declarations -c -o dllinit.o \n> dllinit.c\n> dllinit.c:47: conflicting types for \n> `_cygwin_dll_entry'\n> dllinit.c:47: previous declaration of `_cygwin_dll_entry'\n> dllinit.c:79: conflicting types for \n> `DllMain'\n> dllinit.c:47: previous declaration of `DllMain'\n> make[2]: *** [dllinit.o] Error 1\n> make[2]: Leaving directory `/usr/src/postgresql-7.0.\n> 3/src/utils'\n> make[1]: *** [../utils/dllinit.o] Error 2\n> make[1]: Leaving directory `/usr/src/postgresql-7.0.3/src/backend'\n> make: *** [all] Error 2\n> \n> can anyone tell me what's wrong ?, I've followed this instructions:\n> \n> http://people.freebsd.org/~kevlo/postgres/portNT.html\n> \n> and that's my error, any help would be apprecciated.\n> \n> Thank you.\n> \n> \n> --\n> Ing. Luis Maga���a.\n> Gnovus Networks & Software\n> www.gnovus.com\n> \n> \n\n",
"msg_date": "Thu, 21 Dec 2000 11:55:44 +0800 (PST)",
"msg_from": "Chris Ian Capon Fiel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compiling on Win32"
}
]
|
[
{
"msg_contents": "> > > The point is, that the heap page is only modified in \n> > > places that were previously empty (except header).\n> > > All previous row data stays exactly in the same place.\n> > > Thus if a page is only partly written\n> > > (any order of page segments) only a new row is affected.\n> > \n> > Exception: PageRepairFragmentation() and PageIndexTupleDelete() are\n> > called during vacuum - they change layout of tuples.\n> >\n> \n> Is it guaranteed that the result of PageRepairFragmentation()\n> has already been written to disk when tuple movement is logged ?\n\nNo.\n\nVadim\n",
"msg_date": "Tue, 19 Dec 2000 15:48:14 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: heap page corruption not easy"
}
]
|
[
{
"msg_contents": "> > > AFAIR, none of the index access methods except btree \n> > > handle NULLs at all --- they just ignore NULL values\n> > > and don't store them in the index.\n...\n> \n> and what does this error means ?\n> \n> create table rtree_test ( r box );\n> copy rtree_test from stdin;\n> \\N\n> ........ total 10,000 NULLS\n> \\.\n> \n> create index rtree_test_idx on rtree_test using rtree ( r );\n> --ERROR: floating point exception! The last floating point \n> operation either exceeded legal ranges or was a divide by zero\n> \n> seems rtree doesn't ignore NULL ?\n\nNo, it doesn't. As well as GiST. Only hash ignores them.\nAnd there is no code in GiST & rtree that take care about NULL\nkeys. It's probably ok for GiST which is \"meta-index\" - \nindex/type methods implementator should decide how to handle NULLs.\nAs for rtree - seems it's better to ignore NULLs as we did before\nfor single key btree: rtree is just variation of it.\n\nVadim\n",
"msg_date": "Tue, 19 Dec 2000 16:08:00 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Who is a maintainer of GiST code ?"
}
]
|
[
{
"msg_contents": "> > Reading the documentation, I see that OIDs are unique through\nthe\n> > whole database.\n> > But since OIDs are int4, does that limit the number of rows I\ncan\n> > have in a database to 2^32 = 4 billion ?\n>\n> Yep.\n>\n> Thanks for the answer - although that concerns me a bit.\n> Maybe I could recompile it setting oid to int64 type...\n\nIf that really concerns you, then the rest of the hackers list I think would\nbe very interested in hearing of a real-world database with more than 4\nbillion rows/inserts/deletes.\n\nApparently it is somewhat more complicated than just 'recompiling as an\nint64' to change this. I believe that patches are currently being made to\nfacilitate a future move towards 64bit OIDs, but I am not certain of the\nstatus.\n\nChris\n\n",
"msg_date": "Wed, 20 Dec 2000 09:22:17 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: OID Implicit limit"
},
{
"msg_contents": "\n\"\"Christopher Kings-Lynne\"\" <[email protected]> wrote in message\nnews:[email protected]...\n> > > Reading the documentation, I see that OIDs are unique through\n> the\n> > > whole database.\n> > > But since OIDs are int4, does that limit the number of rows I\n> can\n> > > have in a database to 2^32 = 4 billion ?\n> >\n> > Yep.\n> >\n> > Thanks for the answer - although that concerns me a bit.\n> > Maybe I could recompile it setting oid to int64 type...\n>\n> If that really concerns you, then the rest of the hackers list I think\nwould\n> be very interested in hearing of a real-world database with more than 4\n> billion rows/inserts/deletes.\n>\n> Apparently it is somewhat more complicated than just 'recompiling as an\n> int64' to change this. I believe that patches are currently being made to\n> facilitate a future move towards 64bit OIDs, but I am not certain of the\n> status.\n>\n> Chris\n\nIt won't for sure have 4 billion records *at a time*. But will easyly\nprocess (insert/small calculations/summarize/delete) up to 10 or\n20 million records per day.\nBut not all records will be deleted; I'll run into key conflicts\nif the oid sequence generator cross the 2^32 boundary. That is my problem.\nBut sure, that will take some time to happen (215 days for 20 millions\nrows/day).\nThat means I'll have to be able to do 231,48148148 inserts per second plus\nthe time to delete and aggregate the data.\n\n> Apparently it is somewhat more complicated than just 'recompiling as an\n>int64' to change this. I believe that patches are currently\n>being made to\n>facilitate a future move towards 64bit OIDs, but I am not certain of the\n>status.\nWell I hope until there I don't have the key conflicts I mentioned. It will\nbe hard to, anyway - but possible.\n\nBest regards,\nHowe\n\n\n",
"msg_date": "Wed, 20 Dec 2000 00:02:15 -0200",
"msg_from": "\"Steve Howe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OID Implicit limit"
},
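As a quick sanity check of Steve's figures, a hypothetical throwaway program (not part of the thread) reproduces the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        double oid_space = 4294967296.0;    /* 2^32 possible OIDs */
        double per_day = 20000000.0;        /* 20 million inserts/day */

        printf("days until the OID space wraps: %.1f\n",
               oid_space / per_day);        /* ~214.7, i.e. ~215 days */
        printf("sustained inserts per second:  %.2f\n",
               per_day / 86400.0);          /* ~231.48 */
        return 0;
    }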
{
"msg_contents": "We have an FAQ item on this now under OID's.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> > > Reading the documentation, I see that OIDs are unique through\n> the\n> > > whole database.\n> > > But since OIDs are int4, does that limit the number of rows I\n> can\n> > > have in a database to 2^32 = 4 billion ?\n> >\n> > Yep.\n> >\n> > Thanks for the answer - although that concerns me a bit.\n> > Maybe I could recompile it setting oid to int64 type...\n> \n> If that really concerns you, then the rest of the hackers list I think would\n> be very interested in hearing of a real-world database with more than 4\n> billion rows/inserts/deletes.\n> \n> Apparently it is somewhat more complicated than just 'recompiling as an\n> int64' to change this. I believe that patches are currently being made to\n> facilitate a future move towards 64bit OIDs, but I am not certain of the\n> status.\n> \n> Chris\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Dec 2000 21:49:12 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OID Implicit limit"
}
]
|
[
{
"msg_contents": "\n\tI have had the time to test today's (12/19) snapshot on my\nLinux/Alpha and the good news is that only two regression tests are\nfailing. The bad news is that these regression tests do not fail on\nLinux/Intel. :( [1]\n\tSpecifically, the oid and misc regression tests failed. Here are\nthe gory details:\n\n\noid: Inserting a negative oid should wrap that oid around to an unsigned\n value, but instead pgsql just spits it back out with an error\n message. i.e.:\n\nCREATE TABLE OID_TBL(f1 oid);\n...\nINSERT INTO OID_TBL(f1) VALUES ('-1040');\nERROR: oidin: error reading \"-1040\": value too large\n\n Probably not a major problem (who inserts negative oids?), but I\n could be wrong. Hopefully it has an easy fix.\n\n\nmisc: This one is nasty... Any attempts to use the '*' operator in the\n context of inheritance causes pgsql to lose its mind and wander off\n into the weeds never to be seen again. Example from 'misc' tests:\n\nSELECT p.name, p.hobbies.name FROM person* p;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nconnection to server was lost\n\n Definitely needs to be fixed, but I have a feeling it will not be\n easy.\n\nOther than those two issues, everything seems to run great. I would go\ndigging into the source to find the source of these problems, but I\nthought I would throw it out to the list first. [2]\n\tTherefore, if anyone has any ideas as to what is failing, how to\nfix it, or at least a general direction to head in (i.e. look in these\nsource files...), please speak up. If you want more information on the\nabove problems, feel free to ask. Just tell me what you want, and if it is\nnot obvious, how to get it.\n\tLooking forward to a new version pgsql that compiles out of the\nbox on Linux/Alpha! TTYL.\n\n\n[1] For those who missed my poor attempt at a joke... I mean that the\nLinux/Alpha regression failures are specific to that platform, and\ntherefore my problem to solve, not a more general problem I could leave to\nthe pg-hackers to solve....\n\n[2] That, and I am definitely not familiar with the pgsql source, so it\nwould probably take me a while to make any headway if I just started\ndigging with out any direction...\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n\n",
"msg_date": "Tue, 19 Dec 2000 21:08:00 -0700 (MST)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL pre-7.1 Linux/Alpha Status..."
},
{
"msg_contents": "Ryan Kirkpatrick <[email protected]> writes:\n> INSERT INTO OID_TBL(f1) VALUES ('-1040');\n> ERROR: oidin: error reading \"-1040\": value too large\n\nThat's coming from a possibly-misguided error check that I put into\noidin():\n\n\tunsigned long cvt;\n\tchar\t *endptr;\n\n\tcvt = strtoul(s, &endptr, 10);\n\n\t...\n\n\t/*\n\t * Cope with possibility that unsigned long is wider than Oid.\n\t */\n\tresult = (Oid) cvt;\n\tif ((unsigned long) result != cvt)\n\t\telog(ERROR, \"oidin: error reading \\\"%s\\\": value too large\", s);\n\nOn a 32-bit machine, -1040 converts to 4294966256, but on a 64-bit\nmachine it converts to 2^64-1040, and the test is accordingly deciding\nthat that value won't fit in an Oid.\n\nNot sure what to do about this. If you had actually typed 2^64-1040,\nit would be appropriate for the code to reject it. But I hadn't\nrealized that the extra check would introduce a discrepancy between\n32- and 64-bit machines for negative inputs. Maybe it'd be better just\nto delete the check. Comments anyone?\n\n> SELECT p.name, p.hobbies.name FROM person* p;\n> pqReadData() -- backend closed the channel unexpectedly.\n\nBacktrace please?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 11:41:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pre-7.1 Linux/Alpha Status... "
},
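For readers without both machine types handy, a small hypothetical program illustrates the width difference Tom describes:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned long cvt = strtoul("-1040", NULL, 10);

        /* With a 32-bit unsigned long this prints 4294966256
         * (2^32 - 1040); with a 64-bit unsigned long it prints
         * 18446744073709550576 (2^64 - 1040), which no longer fits
         * in a 32-bit Oid and so trips the check quoted above. */
        printf("%lu\n", cvt);
        return 0;
    }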
{
"msg_contents": "From: \"Ryan Kirkpatrick\" <[email protected]>\n> On Sat, 16 Dec 2000, Bruce Momjian wrote:\n> \n> > Here is the list of features in 7.1.\n> \n> One thing that I think ought to be added is that with 7.1,\n> PostgreSQL will compile out of the box (i.e. without any extra patches)\n> for Linux/Alpha.\n\nWhat patches do one need for compiling say 7.0.3 with alpha linux?\nIs it some kind of semaphore patch (i get those evil errors).\n\nMagnus Naeslund\n\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n PGP Key: http://www.genline.nu/mag_pgp.txt\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n\n",
"msg_date": "Wed, 20 Dec 2000 17:58:27 +0100",
"msg_from": "\"Magnus Naeslund\\(f\\)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pre-7.1 Linux/Alpha Status... "
},
{
"msg_contents": "> Ryan Kirkpatrick <[email protected]> writes:\n> > INSERT INTO OID_TBL(f1) VALUES ('-1040');\n> > ERROR: oidin: error reading \"-1040\": value too large\n> \n> That's coming from a possibly-misguided error check that I put into\n> oidin():\n> \n> \tunsigned long cvt;\n> \tchar\t *endptr;\n> \n> \tcvt = strtoul(s, &endptr, 10);\n> \n> \t...\n> \n> \t/*\n> \t * Cope with possibility that unsigned long is wider than Oid.\n> \t */\n> \tresult = (Oid) cvt;\n> \tif ((unsigned long) result != cvt)\n> \t\telog(ERROR, \"oidin: error reading \\\"%s\\\": value too large\", s);\n> \n> On a 32-bit machine, -1040 converts to 4294966256, but on a 64-bit\n> machine it converts to 2^64-1040, and the test is accordingly deciding\n> that that value won't fit in an Oid.\n> \n> Not sure what to do about this. If you had actually typed 2^64-1040,\n> it would be appropriate for the code to reject it. But I hadn't\n> realized that the extra check would introduce a discrepancy between\n> 32- and 64-bit machines for negative inputs. Maybe it'd be better just\n> to delete the check. Comments anyone?\n\nCan't we just say out of range, rather than too large?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 20 Dec 2000 12:25:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL pre-7.1 Linux/Alpha Status..."
},
{
"msg_contents": "On Wed, 20 Dec 2000, Tom Lane wrote:\n\n> Ryan Kirkpatrick <[email protected]> writes:\n> > INSERT INTO OID_TBL(f1) VALUES ('-1040');\n> > ERROR: oidin: error reading \"-1040\": value too large\n> \n> That's coming from a possibly-misguided error check that I put into\n> oidin():\n...\n> On a 32-bit machine, -1040 converts to 4294966256, but on a 64-bit\n> machine it converts to 2^64-1040, and the test is accordingly deciding\n> that that value won't fit in an Oid.\n> \n> Not sure what to do about this. If you had actually typed 2^64-1040,\n> it would be appropriate for the code to reject it. But I hadn't\n> realized that the extra check would introduce a discrepancy between\n> 32- and 64-bit machines for negative inputs. Maybe it'd be better just\n> to delete the check. Comments anyone?\n\n\tI will leave it up to the decision of the list. Though in my\nopinion attempting to insert a negative oid should be met with an error\noutright. Otherwise you end up with a phantom error (i.e. the result is\nnot what you expected, but you are given any warning or notice to that\neffect). \n\n> > SELECT p.name, p.hobbies.name FROM person* p;\n> > pqReadData() -- backend closed the channel unexpectedly.\n> \n> Backtrace please?\n\n\tIt appears that one has to run a decent amount of the regression\ntests to get the backend to crash. There is heavy dependency between the\ntests, so it is not trival to come up with a simple test case. Though I\ncan say simple inheritance (i.e. the inheritance example in the docs)\nworks fine. Apparently the more exotic and complex variety of inheritance\nstarts causing problems. I simply ran the regression tests in serial mode\nand gathered the below data. \n\tHere is the relevant from the postmaster logs:\n\n...\nDEBUG: Deadlock risk: raising lock level from ShareLock to ExclusiveLock\non object 401236/280863/8\nDEBUG: Deadlock risk: raising lock level from ShareLock to ExclusiveLock\non object 401242/280863/1\nDEBUG: Deadlock risk: raising lock level from ShareLock to ExclusiveLock\non object 401245/280863/7\nServer process (pid 8468) exited with status 11 at Wed Dec 20 21:01:56\n2000\nTerminating any active server processes...\nServer processes were terminated at Wed Dec 20 21:01:56 2000\nReinitializing shared memory and semaphores\n...\n\n\tAnd from gdb, here is the post-mortem (not for the faint of\nheart):\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x1200bc160 in ExecEvalFieldSelect ()\n\n#0 0x1200bc160 in ExecEvalFieldSelect ()\n#1 0x1200bc6dc in ExecEvalExpr ()\n#2 0x1200bb868 in ExecEvalFuncArgs ()\n#3 0x1200bb964 in ExecMakeFunctionResult ()\n#4 0x1200bbd18 in ExecEvalOper ()\n#5 0x1200bc63c in ExecEvalExpr ()\n#6 0x1200bc85c in ExecQual ()\n#7 0x1200bcf7c in ExecScan ()\n#8 0x1200c677c in ExecSeqScan ()\n#9 0x1200b9e2c in ExecProcNode ()\n#10 0x1200b8550 in ExecutePlan ()\n#11 0x1200b72bc in ExecutorRun ()\n#12 0x1200bee2c in postquel_getnext ()\n#13 0x1200bf080 in postquel_execute ()\n#14 0x1200bf298 in fmgr_sql ()\n#15 0x1200bbab0 in ExecMakeFunctionResult ()\n#16 0x1200bbdd8 in ExecEvalFunc ()\n#17 0x1200bc64c in ExecEvalExpr ()\n#18 0x1200bc14c in ExecEvalFieldSelect ()\n#19 0x1200bc6dc in ExecEvalExpr ()\n#20 0x1200b64dc in ExecEvalIter ()\n#21 0x1200bc5d0 in ExecEvalExpr ()\n#22 0x1200bcae0 in ExecTargetList ()\n#23 0x1200bce20 in ExecProject ()\n#24 0x1200c6330 in ExecResult ()\n#25 0x1200b9dec in ExecProcNode ()\n#26 0x1200b8550 in ExecutePlan ()\n#27 0x1200b72bc in ExecutorRun ()\n#28 0x1201319c8 in ProcessQuery ()\n#29 
0x12012f77c in pg_exec_query_string ()\n#30 0x120131120 in PostgresMain ()\n#31 0x12010ced4 in DoBackend ()\n#32 0x12010c818 in BackendStartup ()\n#33 0x12010b228 in ServerLoop ()\n#34 0x12010a9a0 in PostmasterMain ()\n#35 0x1200d48a8 in main ()\n\n\tTTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n\n\n",
"msg_date": "Wed, 20 Dec 2000 21:11:10 -0700 (MST)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL pre-7.1 Linux/Alpha Status... "
},
{
"msg_contents": "Tom Lane writes:\n\n> Ryan Kirkpatrick <[email protected]> writes:\n> > INSERT INTO OID_TBL(f1) VALUES ('-1040');\n> > ERROR: oidin: error reading \"-1040\": value too large\n>\n> That's coming from a possibly-misguided error check that I put into\n> oidin():\n>\n> \tunsigned long cvt;\n> \tchar\t *endptr;\n>\n> \tcvt = strtoul(s, &endptr, 10);\n>\n> \t...\n>\n> \t/*\n> \t * Cope with possibility that unsigned long is wider than Oid.\n> \t */\n> \tresult = (Oid) cvt;\n\n if (sizeof(unsigned long) > sizeof(Oid) && cvt > UINT_MAX)\n\n> \tif ((unsigned long) result != cvt)\n> \t\telog(ERROR, \"oidin: error reading \\\"%s\\\": value too large\", s);\n>\n> On a 32-bit machine, -1040 converts to 4294966256, but on a 64-bit\n> machine it converts to 2^64-1040, and the test is accordingly deciding\n> that that value won't fit in an Oid.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 21 Dec 2000 18:29:20 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL pre-7.1 Linux/Alpha Status... "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> if (sizeof(unsigned long) > sizeof(Oid) && cvt > UINT_MAX)\n\nHm. Each part of that will generate \"expression is always false\"\nwarnings from certain overprotective compilers. A more serious problem\nis that using UINT_MAX assumes that Oid is unsigned int, which will\ncertainly not be true forever --- but the required change will be easily\nmissed when Oid changes.\n\nPerhaps postgres_ext.h could define\n\n\t#define OID_MAX UINT_MAX\n\nright below the typedef for Oid, and then we could do this in oidin():\n\n\t#if OID_MAX < ULONG_MAX\n\t\tif (cvt > OID_MAX)\n\t\t\telog();\n\t#endif\n\nI think this #if expression will work --- anyone see any portability\nrisk there?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Dec 2000 12:56:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pre-7.1 Linux/Alpha Status... "
},
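A standalone sketch of how the proposed guard might read; this is my reading of the exchange above, not committed code, and elog(ERROR, ...) is replaced with stderr output so the snippet compiles outside the backend:

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef unsigned int Oid;       /* as in postgres_ext.h */
    #define OID_MAX UINT_MAX        /* proposed companion define */

    static Oid
    oid_from_string(const char *s)
    {
        char *endptr;
        unsigned long cvt = strtoul(s, &endptr, 10);

        /* The #if drops the test entirely on machines where unsigned
         * long and Oid have the same width, avoiding an always-false
         * comparison and the compiler warnings it provokes. */
    #if OID_MAX < ULONG_MAX
        if (cvt > OID_MAX)
        {
            /* the backend would use elog(ERROR, ...) here */
            fprintf(stderr, "oidin: \"%s\": value out of range\n", s);
            exit(1);
        }
    #endif
        return (Oid) cvt;
    }

    int main(void)
    {
        printf("%u\n", oid_from_string("4294966256"));
        return 0;
    }

Note that under such a guard a negative literal like '-1040' would still wrap silently on 32-bit machines while being rejected on 64-bit ones; making the two agree would require handling the sign explicitly.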
{
"msg_contents": "Tom Lane writes:\n\n> > if (sizeof(unsigned long) > sizeof(Oid) && cvt > UINT_MAX)\n>\n> Hm. Each part of that will generate \"expression is always false\"\n> warnings from certain overprotective compilers.\n\nAny compiler that does this will certainly issue a boatload of these all\nover the tree. More generally, I have given up on worrying too much about\nthe warning count on non-GCC compilers. All the ones I've seen lately\ngenerate tons already.\n\n> A more serious problem is that using UINT_MAX assumes that Oid is\n> unsigned int, which will certainly not be true forever --- but the\n> required change will be easily missed when Oid changes.\n>\n> Perhaps postgres_ext.h could define\n>\n> \t#define OID_MAX UINT_MAX\n>\n> right below the typedef for Oid, and then we could do this in oidin():\n>\n> \t#if OID_MAX < ULONG_MAX\n> \t\tif (cvt > OID_MAX)\n> \t\t\telog();\n> \t#endif\n\nThat looks fine as well.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 21 Dec 2000 19:41:41 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL pre-7.1 Linux/Alpha Status... "
},
{
"msg_contents": "\nHow are you on Alpha now?\n\n> \n> \tI have had the time to test today's (12/19) snapshot on my\n> Linux/Alpha and the good news is that only two regression tests are\n> failing. The bad news is that these regression tests do not fail on\n> Linux/Intel. :( [1]\n> \tSpecifically, the oid and misc regression tests failed. Here are\n> the gory details:\n> \n> \n> oid: Inserting a negative oid should wrap that oid around to an unsigned\n> value, but instead pgsql just spits it back out with an error\n> message. i.e.:\n> \n> CREATE TABLE OID_TBL(f1 oid);\n> ...\n> INSERT INTO OID_TBL(f1) VALUES ('-1040');\n> ERROR: oidin: error reading \"-1040\": value too large\n> \n> Probably not a major problem (who inserts negative oids?), but I\n> could be wrong. Hopefully it has an easy fix.\n> \n> \n> misc: This one is nasty... Any attempts to use the '*' operator in the\n> context of inheritance causes pgsql to lose its mind and wander off\n> into the weeds never to be seen again. Example from 'misc' tests:\n> \n> SELECT p.name, p.hobbies.name FROM person* p;\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> connection to server was lost\n> \n> Definitely needs to be fixed, but I have a feeling it will not be\n> easy.\n> \n> Other than those two issues, everything seems to run great. I would go\n> digging into the source to find the source of these problems, but I\n> thought I would throw it out to the list first. [2]\n> \tTherefore, if anyone has any ideas as to what is failing, how to\n> fix it, or at least a general direction to head in (i.e. look in these\n> source files...), please speak up. If you want more information on the\n> above problems, feel free to ask. Just tell me what you want, and if it is\n> not obvious, how to get it.\n> \tLooking forward to a new version pgsql that compiles out of the\n> box on Linux/Alpha! TTYL.\n> \n> \n> [1] For those who missed my poor attempt at a joke... I mean that the\n> Linux/Alpha regression failures are specific to that platform, and\n> therefore my problem to solve, not a more general problem I could leave to\n> the pg-hackers to solve....\n> \n> [2] That, and I am definitely not familiar with the pgsql source, so it\n> would probably take me a while to make any headway if I just started\n> digging with out any direction...\n> \n> ---------------------------------------------------------------------------\n> | \"For to me to live is Christ, and to die is gain.\" |\n> | --- Philippians 1:21 (KJV) |\n> ---------------------------------------------------------------------------\n> | Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n> ---------------------------------------------------------------------------\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Jan 2001 23:46:56 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] PostgreSQL pre-7.1 Linux/Alpha Status..."
},
{
"msg_contents": "On Mon, 22 Jan 2001, Bruce Momjian wrote:\n\n> How are you on Alpha now?\n\n\tGreat! Downloaded 7.1beta3, and it works fine right out of the\nbox. Built it in the standard way (./configure; make all), and then ran\nthe regression tests. All 76 of 76 tests passed on my Alpha XLT 366\nw/Debian 2.2. I think we can finally say that Linux/Alpha is a viable\nplatform for PostgreSQL. :)\n\tTTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n\n",
"msg_date": "Tue, 23 Jan 2001 07:49:48 -0700 (MST)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] PostgreSQL pre-7.1 Linux/Alpha Status..."
}
]
|
[
{
"msg_contents": "Hi,\n\nIs it correct behaviour that unnamed table-level check constraints get the\nnames '$1', '$2', '$3', etc. in Postgres 7.0.3???\n\nEg, using table constraints:\n----------------------------\n\ntest=# create table test (temp char(1) NOT NULL, CHECK (temp IN ('M',\n'F')));\nCREATE\ntest=# select rcname from pg_relcheck;\nrcname\n$1\n(1 row)\n\nAnd, even worse - I think this has got to be a bug:\n---------------------------------------------------\n\ntest=# create table test (temp char(1) NOT NULL, CHECK (temp IN ('M',\n'F')));\nCREATE\ntest=# create table test2 (temp char(1) NOT NULL, CHECK (temp IN ('M',\n'F')));\nCREATE\ntest=# select rcname from pg_relcheck;\n rcname\n--------\n $1\n $1\n(2 rows)\n\nTwo constraints with the same name!!!!\n\nAnd if you use column constraints:\n----------------------------------\n\ntest=# create table test (temp char(1) NOT NULL CHECK (temp IN ('M', 'F')));\nCREATE\ntest=# select rcname from pg_relcheck;\n rcname\n-----------\n test_temp\n(1 row)\n\n--\nChristopher Kings-Lynne\nFamily Health Network (ACN 089 639 243)\n\n",
"msg_date": "Wed, 20 Dec 2000 12:18:11 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "CHECK constraint names"
},
{
"msg_contents": "\nThe constraint naming isn't really terribly sensible right now. The\nnames generated should be unique within I think schema according to\nthe spec and I think that should be true even if users name a constraint\nsuch that it would cause a collision (so, if i name a constraint what\nan automatic constraint would normally be named, it should be picking\na different automatic name rather than erroring:\ncreate table test( a int constraint test_b check (a>3), b int check\n(b<3));\n)\n\nUntil there's a a good way to look at the defined constraints (a catalog\nor something) this probably isn't a big deal, since these should also\nbe unique against the other constraints too (pk, unique, fk).\n\n\nStephan Szabo\[email protected]\n\nOn Wed, 20 Dec 2000, Christopher Kings-Lynne wrote:\n\n> Hi,\n> \n> Is it correct behaviour that unnamed table-level check constraints get the\n> names '$1', '$2', '$3', etc. in Postgres 7.0.3???\n> \n> Eg, using table constraints:\n> ----------------------------\n> \n> test=# create table test (temp char(1) NOT NULL, CHECK (temp IN ('M',\n> 'F')));\n> CREATE\n> test=# select rcname from pg_relcheck;\n> rcname\n> $1\n> (1 row)\n> \n> And, even worse - I think this has got to be a bug:\n> ---------------------------------------------------\n> \n> test=# create table test (temp char(1) NOT NULL, CHECK (temp IN ('M',\n> 'F')));\n> CREATE\n> test=# create table test2 (temp char(1) NOT NULL, CHECK (temp IN ('M',\n> 'F')));\n> CREATE\n> test=# select rcname from pg_relcheck;\n> rcname\n> --------\n> $1\n> $1\n> (2 rows)\n> \n> Two constraints with the same name!!!!\n> \n> And if you use column constraints:\n> ----------------------------------\n> \n> test=# create table test (temp char(1) NOT NULL CHECK (temp IN ('M', 'F')));\n> CREATE\n> test=# select rcname from pg_relcheck;\n> rcname\n> -----------\n> test_temp\n> (1 row)\n> \n> --\n> Christopher Kings-Lynne\n> Family Health Network (ACN 089 639 243)\n> \n\n",
"msg_date": "Wed, 20 Dec 2000 09:36:49 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK constraint names"
}
]
|
[
{
"msg_contents": "Is this bad, or are there expected to be known problems like this for\nOBSD?\n\n7.1beta1 had roughly the same errors..\n\n\n----------------- BEGIN -----------------------\n\nbpalmer@mizer:~/PG7.1/postgresql-snapshot>uname -a\nOpenBSD mizer 2.8 GENERIC#399 i386\n\n\nbpalmer@mizer:~/PG7.1/postgresql-snapshot>gmake check\ngmake -C doc all\ngmake[1]: Entering directory `/home/bpalmer/PG7.1/postgresql-snapshot/doc'\ngmake[1]: Nothing to be done for `all'.\ngmake[1]: Leaving directory `/home/bpalmer/PG7.1/postgresql-snapshot/doc'\n...\n... (no errors)\n...\ngmake[2]: Entering directory `/home/bpalmer/PG7.1/postgresql-snapshot/src/test/regress'\ngmake -C ../../../contrib/spi REFINT_VERBOSE=1 refint.so autoinc.so\ngmake[3]: Entering directory `/home/bpalmer/PG7.1/postgresql-snapshot/contrib/spi'\ngmake[3]: `refint.so' is up to date.\ngmake[3]: `autoinc.so' is up to date.\ngmake[3]: Leaving directory `/home/bpalmer/PG7.1/postgresql-snapshot/contrib/spi'\n/bin/sh ./pg_regress --temp-install --top-builddir=../../.. --schedule=./parallel_schedule --multibyte=\n============== removing existing temp installation ==============\n============== creating temporary installation ==============\n============== initializing database system ==============\n============== starting postmaster ==============\nrunning on port 65432 with pid 29043\n============== creating database \"regression\" ==============\nCREATE DATABASE\n============== installing PL/pgSQL ==============\n============== running regression test queries ==============\nparallel group (13 tests): boolean varchar int8 numeric text int4 char oid int2 float4 name float8 bit\n boolean ... FAILED\n char ... ok\n name ... ok\n varchar ... FAILED\n text ... ok\n int2 ... FAILED\n int4 ... FAILED\n int8 ... FAILED\n oid ... ok\n float4 ... ok\n float8 ... FAILED\n bit ... ok\n numeric ... FAILED\ntest strings ... FAILED\ntest numerology ... ok\nparallel group (18 tests): box type_sanity point abstime tinterval interval reltime inet oidjoins path comments timestamp date circle time lseg polygon opr_sanity\n point ... ok\n lseg ... ok\n box ... FAILED\n path ... FAILED\n polygon ... ok\n circle ... FAILED\n date ... FAILED\n time ... FAILED\n timestamp ... FAILED\n interval ... FAILED\n abstime ... FAILED\n reltime ... ok\n tinterval ... FAILED\n inet ... ok\n comments ... FAILED\n oidjoins ... FAILED\n type_sanity ... FAILED\n opr_sanity ... ok\ntest geometry ... FAILED\ntest horology ... FAILED\ntest create_function_1 ... ok\ntest create_type ... ok\ntest create_table ... ok\ntest create_function_2 ... ok\ntest copy ... ok\nparallel group (7 tests): inherit create_aggregate create_operator triggers create_misc constraints create_index\n constraints ... ok\n triggers ... ok\n create_misc ... ok\n create_aggregate ... ok\n create_operator ... ok\n create_index ... ok\n inherit ... FAILED\ntest create_view ... ok\ntest sanity_check ... FAILED\ntest errors ... ok\ntest select ... ok\nparallel group (16 tests): random union select_distinct select_into arrays portals transactions select_distinct_on select_having subselect select_implicit aggregates case join btree_index hash_index\n select_into ... ok\n select_distinct ... FAILED\n select_distinct_on ... ok\n select_implicit ... ok\n select_having ... ok\n subselect ... FAILED\n union ... FAILED\n case ... ok\n join ... ok\n aggregates ... ok\n transactions ... FAILED\n random ... failed (ignored)\n portals ... FAILED\n arrays ... FAILED\n btree_index ... ok\n hash_index ... ok\ntest misc ... 
FAILED\nparallel group (5 tests): portals_p2 select_views alter_table foreign_key rules\n select_views ... ok\n alter_table ... ok\n portals_p2 ... ok\n rules ... ok\n foreign_key ... ok\nparallel group (3 tests): temp limit plpgsql\n limit ... ok\n plpgsql ... ok\n temp ... ok\n============== shutting down postmaster ==============\n\n==================================================\n 31 of 76 tests failed, 1 failed test(s) ignored.\n==================================================\n\nThe differences that caused some tests to fail can be viewed in the\nfile `./regression.diffs'. A copy of the test summary that you see\nabove is saved in the file `./regression.out'.\n\ngmake[2]: *** [check] Error 1\ngmake[2]: Leaving directory `/home/bpalmer/PG7.1/postgresql-snapshot/src/test/regress'\ngmake[1]: *** [check] Error 2\ngmake[1]: Leaving directory `/home/bpalmer/PG7.1/postgresql-snapshot/src/test'\ngmake: *** [check] Error 2\n\n\n\n---------------- END -------------------\n\n\nb. palmer, [email protected]\npgp: www.crimelabs.net/bpalmer.pgp5\n\n\n\n",
"msg_date": "Wed, 20 Dec 2000 00:46:33 -0500 (EST)",
"msg_from": "bpalmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.1 snapshot on i386 BSD MAJOR failure"
},
{
"msg_contents": "bpalmer <[email protected]> writes:\n> [ lots of regress failures ]\n\nDoesn't look good, but it's impossible to tell what's wrong from that\namount of info...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 12:42:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.1 snapshot on i386 BSD MAJOR failure "
}
]
|
[
{
"msg_contents": "As part of our development, one of our programmers created a new database\ntype, plus operators and functions to go with it. One of the things he did\nwas this:\n\nCREATE OPERATOR testbit (\n leftarg = bitset,\n rightarg = int4,\n procedure = testbit,\n commutator = testbit\n);\n\nNotice that this is an ILLEGAL type - the name of the type (from docs) must\nonly contain these characters:\n\n+ - * / < > = ~ ! @ # % ^ & | ` ? $\n\nHowever, PostgreSQL 7.0.3 went right ahead and created the operator anyway!!\n\nNow we have a big problem, as the DROP OPERATOR command cannot delete the\nillegally named operator.\n\neg:\n\nusa=# drop operator testbit (bitset, int4);\nERROR: parser: parse error at or near \"testbit \"\nusa=# drop operator 'testbit ' (bitset, int4);\nERROR: parser: parse error at or near \"'\"\nusa=# drop operator \"testbit \" (bitset, int4);\nERROR: parser: parse error at or near \"\"\"\n\nWe can't delete it!!! I also assume that it was a bug that it could even be\ncreated in the first place...\n\nChris\n\n--\nChristopher Kings-Lynne\nFamily Health Network (ACN 089 639 243)\n\n",
"msg_date": "Wed, 20 Dec 2000 16:04:17 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug in CREATE OPERATOR"
},
{
"msg_contents": ">\n> CREATE OPERATOR testbit (\n> leftarg = bitset,\n> rightarg = int4,\n> procedure = testbit,\n> commutator = testbit\n> );\n>\n> Now we have a big problem, as the DROP OPERATOR command\n> cannot delete the illegally named operator.\n\nHave you tried deleting it directly from pg_operator instead of using\nDROP OPERATOR?\n\nDarren\n\n",
"msg_date": "Wed, 20 Dec 2000 12:40:18 -0500",
"msg_from": "\"Darren King\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Bug in CREATE OPERATOR"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> [ \"CREATE OPERATOR testbit\" is accepted ]\n\nNot only that, but it looks like you can create aggregate functions and\ntypes that have operator-like names :-(. Someone was way too eager to\nsave a production or two, I think:\n\nDefineStmt: CREATE def_type def_name definition\n {\n\t\t\t...\n }\n ;\n\ndef_type: OPERATOR { $$ = OPERATOR; }\n | TYPE_P { $$ = TYPE_P; }\n | AGGREGATE { $$ = AGGREGATE; }\n ;\n\ndef_name: PROCEDURE { $$ = \"procedure\"; }\n | JOIN { $$ = \"join\"; }\n | all_Op { $$ = $1; }\n | ColId { $$ = $1; }\n ;\n\nSeems to me that this should be simplified down to\n\n\tCREATE OPERATOR all_Op ...\n\n\tCREATE TYPE ColId ...\n\n\tCREATE AGGREGATE ColId ...\n\nAny objections? Has anyone got an idea why PROCEDURE and JOIN are\nspecial-cased here? PROCEDURE, at least, could be promoted from\nColLabel to ColId were it not offered as an alternative to ColId here.\n\n\n> Now we have a big problem, as the DROP OPERATOR command cannot delete the\n> illegally named operator.\n\nJust remove it by DELETEing the row from pg_operator.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 12:51:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in CREATE OPERATOR "
}
]
|
[
{
"msg_contents": "Hi,\n\nI think it would be convenient to add a sequence number on each mail\non the list. I have been always annoyed by identifying a mail in the\narchive, \"the mail is from foo around 2000/12/20...\" Rather than\nthat, it would be nice to say \"the mail is 1234567th mail on the\nhackers list.\" How do you think?\n--\nTatsuo Ishii\n\n",
"msg_date": "Wed, 20 Dec 2000 17:37:44 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "sequence numbering on mails"
}
]
|
[
{
"msg_contents": "In our test environment, we performed the following set of commands (given\nthat the in/out functions already exist):\n\nCREATE TYPE testtype (\n internallength = 4,\n input = test_in,\n output = test_out\n);\n\nCREATE FUNCTION test_function(testtype, testtype)\n RETURNS bool\n AS '/usr/local/pgsql/types/bitset.so'\n LANGUAGE 'c';\n\n\nCREATE OPERATOR ^^^ (\n leftarg = testtype,\n rightarg = testtype,\n procedure = test_function\n);\n\nDROP TYPE testtype;\n\nHere, we've just dropped the testttype that the operator ^^^ uses. This\ndoes not cause the operator to be deleted, it simply reverts its left and\nright args to 'NONE'.\n\nSo, we're left with an operator with two NONE arguments (and returns NONE),\nso we now try to drop this operator:\n\nDROP OPERATOR ^^^ (NONE, NONE);\n\nThis fails with:\n\nERROR: parser: parse error at or near \"NONE\"\n\nDROP OPERATOR ^^^ (none, none);\n\nAlso fails.\n\nHow do we go about dropping the now invalid operator???\n\nChris\n\n",
"msg_date": "Wed, 20 Dec 2000 16:55:00 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Another OPERATOR bug??"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> Here, we've just dropped the testttype that the operator ^^^ uses. This\n> does not cause the operator to be deleted, it simply reverts its left and\n> right args to 'NONE'.\n\nNo, it doesn't \"revert the types to NONE\", because there isn't any such\ntype as NONE. What you've got is an operator whose argument types refer\nto a nonexistent type OID. You won't be able to drop it with DROP\nOPERATOR because you can't name the now-gone type.\n\nThere's been talk of having interlocks that prevent you from dropping\na still-referenced type, function, etc, but they're not there now.\nI'd suggest cleaning up by executing commands like\n\n\tdelete from pg_operator where oprleft = oid-of-vanished-type\n\n(if you don't know what the type's oid was, a little crosschecking of\npg_operator against pg_type will tell you).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 13:00:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Another OPERATOR bug?? "
}
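One way to script the cleanup Tom suggests is a small libpq helper; this is hypothetical and assumes the damaged rows are exactly those whose left argument type OID no longer exists in pg_type. The oprleft <> 0 guard matters because a zero oprleft legitimately marks a prefix (unary) operator rather than a dangling reference.

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=test");
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        /* Delete operators whose left argument type has vanished.
         * oprleft = 0 is a unary operator, not a dangling reference,
         * so it must be excluded; a symmetric pass over oprright
         * would catch vanished right argument types. */
        res = PQexec(conn,
                     "DELETE FROM pg_operator "
                     " WHERE oprleft <> 0 "
                     "   AND oprleft NOT IN (SELECT oid FROM pg_type)");

        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }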
]
|
[
{
"msg_contents": "Ok, after being a little quiet recently things are picking up (well at least\nfor now).\n\nAnyhow, here's whats happening on the JDBC front:\n\n1) ANT vs Make\n\nAs someone (sorry forgotten your name) suggested using ANT instead of Make\nfor building the driver. Well yesterday was pretty quiet here so I finally\ntook the plunge and rewrote both the Makefile and its supporting class into\nANT's xml build file.\n\nLets just say this is definitely the way to go. The size of the Makefile &\nCheckVersions.java came to about 19K. build.xml is just over 2K and does\nexactly the same thing (and is faster).\n\nThe real beauty about ANT is that it's platform independent as it's written\nin Java itself. ANT is part of the Apache Jakarta Project, and more details\nare available from http://jakarta.apache.org/ant/\n\nSo as of last night we now have two methods of compiling the driver:\n\nMethod 1: Make\n make\t\t\tThis builds and runs a test to see which version to\nbuild. If this fails then run one of:\n make jdbc1\t\tThis builds the JDBC1 driver\n make jdbc2\t\tBuild the JDBC2 standard driver\n make enterprise\tBuild the JDBC2 Enterprise driver\n make examples\tBuild the JDBC1 & 2 Examples\n make examples2\tBuild the JDBC2 only examples\n make corba\t\tBuild the corba example\n\nMethod 2: ANT\n ant\t\t\tBuilds the correct driver dependent on what JDK you\nhave installed.\n ant clean\t\tRemoves any built classes and jar files\n ant examples\t\tBuilds the examples correct for the JDK version in\nuse\n\nI suggest we keep supporting both methods for now to see how people get on.\nCVS has an early version of build.xml, but I'm currently refining it today.\n\n2) Versioning\n\nOk, one problem in the past was keeping the driver uptodate with the correct\nversion numbers. Before it was stored in three locations. Now back in\nOctober I moved this all into the org.postgresql.Driver class, so there was\none location. Also as suggested on the Hackers list Make now extracts the\nversion from Makefile.global. This works fine for Make, but there are two\nproblems.\n\nFirst, ANT can't read this easily. This isn't that major, but the second one\nis. I've had reports that some people checkout just the interfaces, and not\nthe entire source, so Makefile.global is not available.\n\nHackers: Any ideas on how to solve this problem? If necessary I'll manually\nset it before release so it's not broken, but it needs sorting.\n\n3) Web site\n\nwww.retep.org.uk has now moved from Demon to Hub. However I'm having some\nproblems still, mainly with my linux box at home. It's probably not going to\nbe before next Wednesday before I get everything backup to speed. I'll try\nto get the updates on there, but can't guarantee it at the moment.\n\n4) Email addresses\n\nAs of this Friday (22nd December) the [email protected] address\nwill cease to exist. The [email protected] one should be dead by now\nso obviously don't use that one either.\n\nPlease only use the [email protected] one for contacting me from now on.\n\nThanks, Peter\n\n-- \nPeter Mount\nEnterprise Support Officer, Maidstone Borough Council\nEmail: [email protected]\nWWW: http://www.maidstone.gov.uk\nAll views expressed within this email are not the views of Maidstone Borough\nCouncil\n\n",
"msg_date": "Wed, 20 Dec 2000 11:42:49 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "Status of JDBC Interface"
},
{
"msg_contents": "Peter Mount writes:\n\n> 1) ANT vs Make\n\n> I suggest we keep supporting both methods for now to see how people get on.\n\nIf you're confident about ANT is suggest that you dump the make interface\nbecause otherwise you increase the possible failure scenarios at the\ninstall level alone in combinatorial ways.\n\nWhat's a bit sad about the JDBC subtree is that it doesn't follow the\nbuild system conventions in the rest of the tree. For example, I would\nreally like to be able to do this:\n\n./configure ... --with-java[=/usr/local/j2sdk1.3.0]\nmake\nmake install\n\nThis wouldn't only make it easier on users, but many more people would\nperhaps be exposed to the fact that there's a JDBC driver in the tree at\nall.\n\nI have on and off had some ideas about autoconfiscating the JDBC build but\nI'm glad I didn't do it since this Ant thing seems to be much better.\nBut it would still be desirable to have a make wrapper since that is what\npeople are accustomed to.\n\nBtw., how does Ant choose the JDK it uses if I have three or four\ninstalled at random places? (This is perhaps yet another source of\nproblems, if make and ant use different JDKs by default.)\n\n> 2) Versioning\n\n> one location. Also as suggested on the Hackers list Make now extracts the\n> version from Makefile.global. This works fine for Make, but there are two\n> problems.\n>\n> First, ANT can't read this easily. This isn't that major, but the second one\n> is. I've had reports that some people checkout just the interfaces, and not\n> the entire source, so Makefile.global is not available.\n\nJust checking out interfaces is not advertised AFAIK and it definitely\ndoesn't work for anything but the JDBC driver.\n\nOTOH, nothing is stopping you from inventing your own versioning scheme\nfor the driver only. Several other things in the tree do this as well\n(PyGreSql, PgAccess).\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 21 Dec 2000 19:30:14 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of JDBC Interface"
}
]
|
[
{
"msg_contents": "I've been experimenting with the SSL connection support. Unfortunately I can't\nget the postmaster to start because the instructions in the documentation for\nsetting up a certificate don't work.\n\nThey say:\n=============================================================================\nFor details on how to create your server private key and certificate, refer\nto the OpenSSL documentation... To create a quick self-signed certificate, use\nthe CA.pl script included in OpenSSL:\n\nCA.pl -newcert\n\nFill out the information the script asks for. Make sure to enter the local\nhost name as Common Name. The script will generate a key that is passphrase\nprotected. To remove the passphrase (required if you want automatic\nstart-up of the postmaster), run the command\n\nopenssl x509 -inform PEM -outform PEM -in newreq.pem \\\n -out newkey_no_passphrase.pem\n\nEnter the old passphrase to unlock the existing key. Copy the file newreq.pem\nto PGDATA/server.crt and newkey_no_passphrase.pem to PGDATA/server.key.\nRemove the PRIVATE KEY part from the server.crt using any text editor.\n=============================================================================\n\nThe openssl x509 command runs with no interaction; this documentation seems\nto indicate that it will ask for a password.\n\nI can't find anything in the SSL documentation about removing or\nchanging the passphrase.\n\nHas anyone successfully done this? and if so, how is the documentation\nquoted above inforrect?\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"And she shall bring forth a son, and thou shall call \n his name JESUS; for he shall save his people from \n their sins.\" Matthew 1:21 \n\n\n",
"msg_date": "Wed, 20 Dec 2000 16:04:27 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "SSL Connections"
},
{
"msg_contents": "On Wed, 20 Dec 2000, Oliver Elphick wrote:\n\n> Has anyone successfully done this? and if so, how is the documentation\n> quoted above inforrect?\n\nWhen I did my testing, I just took some cert's that I had generated\nthrough Apache's make certificate command - just don't enter a passphrase,\nthen copy the certificate and key. Works great.\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Wed, 20 Dec 2000 10:12:12 -0600 (CST)",
"msg_from": "\"Dominic J. Eidson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSL Connections"
},
{
"msg_contents": "On Wed, 20 Dec 2000, Oliver Elphick wrote:\n\n> To create a quick self-signed certificate, use the CA.pl script\n> included in OpenSSL:\n> \n> CA.pl -newcert\n\nOr you can do it manually:\n\nopenssl req -new -text -out cert.req (you will have to enter a password)\nmv privkey.pem cert.pem.pw\nopenssl rsa -in cert.pem.pw -out cert.pem (this removes the password)\nopenssl req -x509 -in cert.req -text -key cert.pem -out cert.cert\n\nMatthew.\n\n",
"msg_date": "Thu, 21 Dec 2000 15:48:46 +0000 (GMT)",
"msg_from": "Matthew Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSL Connections"
},
{
"msg_contents": "Matthew Kirkwood wrote:\n >On Wed, 20 Dec 2000, Oliver Elphick wrote:\n >\n >> To create a quick self-signed certificate, use the CA.pl script\n >> included in OpenSSL:\n...\n >Or you can do it manually:\n >\n >openssl req -new -text -out cert.req (you will have to enter a password)\n >mv privkey.pem cert.pem.pw\n >openssl rsa -in cert.pem.pw -out cert.pem (this removes the password)\n >openssl req -x509 -in cert.req -text -key cert.pem -out cert.cert\n\nthen\n\n cp cert.pem $PGDATA/server.key\n cp cert.cert $PGDATA/server.crt\n\nThank you; this works.\n\nI attach a documentation patch.\n\n\n\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"For a child will be born to us, a son will be given to\n us; And the government will rest on His shoulders; And\n His name will be called Wonderful Counsellor, Mighty \n God, Eternal Father, Prince of Peace.\" \n Isaiah 9:6",
"msg_date": "Thu, 21 Dec 2000 16:49:29 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSL Connections "
},
{
"msg_contents": "Applied.\n\n> Matthew Kirkwood wrote:\n> >On Wed, 20 Dec 2000, Oliver Elphick wrote:\n> >\n> >> To create a quick self-signed certificate, use the CA.pl script\n> >> included in OpenSSL:\n> ...\n> >Or you can do it manually:\n> >\n> >openssl req -new -text -out cert.req (you will have to enter a password)\n> >mv privkey.pem cert.pem.pw\n> >openssl rsa -in cert.pem.pw -out cert.pem (this removes the password)\n> >openssl req -x509 -in cert.req -text -key cert.pem -out cert.cert\n> \n> then\n> \n> cp cert.pem $PGDATA/server.key\n> cp cert.cert $PGDATA/server.crt\n> \n> Thank you; this works.\n> \n> I attach a documentation patch.\n> \nContent-Description: ol\n\n[ Attachment, skipping... ]\n\n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"For a child will be born to us, a son will be given to\n> us; And the government will rest on His shoulders; And\n> His name will be called Wonderful Counsellor, Mighty \n> God, Eternal Father, Prince of Peace.\" \n> Isaiah 9:6 \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 21 Dec 2000 14:08:07 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSL Connections"
}
]
|
[
{
"msg_contents": "> > Has anyone successfully done this? and if so, how is the \n> documentation\n> > quoted above inforrect?\n> \n> When I did my testing, I just took some cert's that I had generated\n> through Apache's make certificate command - just don't enter \n> a passphrase,\n> then copy the certificate and key. Works great.\n\nHmm. Those instructions worked when I wrote them - must've had an old\nversion of OpenSSL, and they changed it. Any chance you could update the\ndocumentation to something that works? \n\n//Magnus\n",
"msg_date": "Wed, 20 Dec 2000 17:26:01 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: SSL Connections"
}
]
|
[
{
"msg_contents": "the prior results (http://www.vix.com/~vixie/pgsql-results.png) showed:\n\n\t~70ms usual INSERT time (~1.5sec -> ~1.25sec occasional)\n\t~250ms usual SELECT time (~1.5sec occasional)\n\nchanging the attribute i key by to be PRIMARY KEY improved things a lot;\nthe new results (http://www.vix.com/~vixie/pgsql-indexed.png) show:\n\n\t~80ms usual INSERT time (~1.28sec -> ~1.18sec occasional)\n\t~100ms usual SELECT time (~1.18sec occasional)\n\nVACUUM ANALYZE after the INSERTs made no performance difference at all,\nwhich is good since no other modern database requires anything to be done\nto improve performance after a large number of INSERTs. (i can understand\nwhy COPY would need it, but not INSERT.)\n\nthe occasional 1.2sec has got to be due to some kind of scheduling or I/O\nirregularity. i'm going to try it on a 500MB \"MFS partition\" next. it\nturns out that MAPS RSS could actually live with \"occasional 1.2sec\" but\ni want to make sure that its cause isn't trivial or my-stupidity-related.\n\njust to let everybody know where i'm at with this. and-- THANKS for pgsql!\n",
"msg_date": "Wed, 20 Dec 2000 08:53:09 -0800",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "day 2 results"
},
{
"msg_contents": "Paul A Vixie <[email protected]> writes:\n> the occasional 1.2sec has got to be due to some kind of scheduling or I/O\n> irregularity.\n\nHmm, could it just be delay when your syncer process runs? Under WAL,\nI believe we don't fsync anything except the WAL log file, so a bulk\ninsert operation would probably create lots and lots of dirty kernel\nbuffers that syncer would decide to shove out to disk every 30 sec or\nso. Is there any way to correlate the timing spikes against syncer\nactivity?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 13:28:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: day 2 results "
},
{
"msg_contents": "> VACUUM ANALYZE after the INSERTs made no performance difference at all,\n> which is good since no other modern database requires anything to be done\n> to improve performance after a large number of INSERTs. (i can understand\n> why COPY would need it, but not INSERT.)\n\nafaik every modern database requires something like this to update\noptimizer stats, since on-the-fly stats accumulation can be expensive\nand inaccurate. But most of my recent experience has been with\nPostgreSQL and perhaps some other DBs have added some hacks to get\naround this. Of course, some databases advertised as modern don't do\nmuch optimization, so don't need the stats.\n\nThe biggest effect seen is when growing from an empty database, when the\nstats would be changing the most. Once populated, the stats update\nusually has little effect, but shouldn't be ignored forever.\n\nGlad to see the tests are going well...\n\n - Thomas\n\nbtw, I'll guess that \"no other\" and \"every\" could both be\noverstatements, but it sure makes a better sentence, eh? ;)\n",
"msg_date": "Wed, 20 Dec 2000 19:02:42 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: day 2 results"
}
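A sketch of the statistics refresh Thomas is referring to, with a hypothetical table; the planner's row-count and selectivity estimates come from the last VACUUM [ANALYZE], so the chosen plan can change once real statistics exist:

CREATE TABLE big_tbl (id int4, payload text);
-- ... bulk INSERTs ...
EXPLAIN SELECT * FROM big_tbl WHERE id = 42;  -- plan based on default statistics
VACUUM ANALYZE big_tbl;                       -- refresh row counts and column statistics
EXPLAIN SELECT * FROM big_tbl WHERE id = 42;  -- plan based on measured statistics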
]
|
[
{
"msg_contents": "I've committed contrib/rserv/ which provides replication capabilities to\nPostgreSQL. The code was developed by Vadim, with some script and\nwrapper support from myself. The current version has been tested under\nLinux only, though I know of no reason why it won't work elsewhere.\nRequires PostgreSQL, perl5, and (for a demo gui) tcl/tk.\n\nThe current package will replicate one master to one slave, though there\nare some hooks already in place to support multiple slaves. I'll add\nthat as time permits, or others are welcome to dive in and send patches\n;)\n\nThe package as-committed requires PostgreSQL-7.1 (appropriate because it\nis in the 7.1 tree), but the code can be built on 7.0.x given a\ndifferent Makefile. The compiled code works with both through the magic\nof #ifdef. I've got a standalone Makefile for 7.0.x and earlier, and can\npost it if desired. Also, I could use some advice on how to build extra\nmodules such as this from an rpm-managed PostgreSQL installation. Can it\nactually work? If so, where do the makefiles hide?\n\nHave fun :)\n\n - Thomas\n",
"msg_date": "Wed, 20 Dec 2000 18:07:54 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Replication toolkit added to repository"
},
{
"msg_contents": "> I've committed contrib/rserv/ which provides replication capabilities to\n> PostgreSQL. The code was developed by Vadim, with some script and\n> wrapper support from myself.\n[snip]\n> - Thomas\n\nHow does this jive with www.erserver.com? Completely seperate projects?\nHave you already explored working together?\n\nMany, including me, are chomping at the bit for an opensource replication\nability in pgsql.\n\nThx, -Dan.\n\n",
"msg_date": "Wed, 20 Dec 2000 15:39:24 -0800",
"msg_from": "\"Dan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replication toolkit added to repository"
},
{
"msg_contents": "On Wed, 20 Dec 2000, Dan wrote:\n\n> > I've committed contrib/rserv/ which provides replication capabilities to\n> > PostgreSQL. The code was developed by Vadim, with some script and\n> > wrapper support from myself.\n> [snip]\n> > - Thomas\n> \n> How does this jive with www.erserver.com? Completely seperate projects?\n> Have you already explored working together?\n> \n> Many, including me, are chomping at the bit for an opensource replication\n> ability in pgsql.\n\n*rofl* Thomas/Vadim == www.erserver.com ... what Thomas just commit'd our\nfirst open source version of it ...\n\n\n",
"msg_date": "Wed, 20 Dec 2000 19:51:38 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replication toolkit added to repository"
},
{
"msg_contents": "> > I've committed contrib/rserv/ which provides replication capabilities to\n> > PostgreSQL. The code was developed by Vadim, with some script and\n> > wrapper support from myself.\n> How does this jive with www.erserver.com? Completely seperate projects?\n> Have you already explored working together?\n\nActually, it *is* erserver, or at least the parts that fit well into the\ncurrent PostgreSQL code base. As it evolves we will open source the\nother pieces we can, as soon as we can. Some folks -- probably not all\nin the current hacker community -- will have specific requirements that\nthey will be interested in funding or having addressed on spec, and we\nwill be developing the toolkit with those as a priority.\n\n> Many, including me, are chomping at the bit for an opensource replication\n> ability in pgsql.\n\nThis is the start of the toolkit. Hope it helps to meet your needs.\n\n - Thomas\n",
"msg_date": "Thu, 21 Dec 2000 06:28:35 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replication toolkit added to repository"
}
]
|
[
{
"msg_contents": "I've committed to contrib/mysql/ a utility called mysql2pgsql to help\nwith database conversions from MySQL. It requires perl5 and not much\nelse besides a MySQL schema dump file.\n\nThanks to Tim Perdue for testing and feedback.\n\n - Thomas\n",
"msg_date": "Wed, 20 Dec 2000 18:17:05 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "MySQL conversion utility"
},
{
"msg_contents": "where and is this already availiable?\n\nMike\n\n> I've committed to contrib/mysql/ a utility called mysql2pgsql to help\n> with database conversions from MySQL. It requires perl5 and not much\n> else besides a MySQL schema dump file.\n> \n> Thanks to Tim Perdue for testing and feedback.\n> \n> - Thomas\n> \n\n",
"msg_date": "Wed, 20 Dec 2000 14:57:42 -0500",
"msg_from": "\"Mike Sears\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL conversion utility"
},
{
"msg_contents": "> where and is this already availiable?\n\nIt is already in the current PostgreSQL code tree, to be included in the\n7.1 release. I can get it posted elsewhere if you would find that\nhelpful.\n\n - Thomas\n\n> > I've committed to contrib/mysql/ a utility called mysql2pgsql to help\n> > with database conversions from MySQL. It requires perl5 and not much\n> > else besides a MySQL schema dump file.\n",
"msg_date": "Thu, 21 Dec 2000 06:30:30 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MySQL conversion utility"
}
]
|
[
{
"msg_contents": "Hi, I have this table on postgres-7.0.3\n\nCREATE TABLE ciudad (\n id_ciudad SERIAL,\n ciudad\t VARCHAR(60)\n);\n\nGRANT ALL ON ciudad TO martin;\n\nCREATE INDEX ciudad_idx ON ciudad (ciudad);\n\nAnd I try to insert a value using this query:\n\nINSERT INTO ciudad (ciudad) VALUES (\"Villa Guillermina\")\n\ngetting as responce the message:\n\nERROR: Attribute 'Villa Guillermina' not found. \n\nThe field ciudad is of type varchar(60), so I don't get why it gives me this \nmessage.\n\nAny ideas?\n\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Wed, 20 Dec 2000 17:23:20 -0300",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "problems with query"
},
{
"msg_contents": ">INSERT INTO ciudad (ciudad) VALUES (\"Villa Guillermina\")\n\nUse single quotes instead of double quotes.\n\n\n-- \n- Thomas Swan\n- Graduate Student - Computer Science\n- The University of Mississippi\n-\n- \"People can be categorized into two fundamental\n- groups, those that divide people into two groups\n- and those that don't.\"\n\nINSERT INTO ciudad (ciudad) VALUES\n(\"Villa Guillermina\")\nUse single quotes instead of double quotes.\n\n\n\n-- \n- Thomas Swan\n \n- Graduate Student - Computer Science\n- The University of Mississippi\n- \n- \"People can be categorized into two fundamental \n- groups, those that divide people into two groups \n- and those that don't.\"",
"msg_date": "Wed, 20 Dec 2000 14:26:16 -0600",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with query"
},
{
"msg_contents": "hi\nTry to use single quotes rather than double quotes..\nINSERT INTO ciudad (ciudad) VALUES ('Villa Guillermina');\nrather than \nINSERT INTO ciudad (ciudad) VALUES (\"Villa Guillermina\");\n\nHope this helps\nAnand \n\n\nOn Wed, Dec 20, 2000 at 05:23:20PM -0300, Martin A. Marques wrote:\n>Hi, I have this table on postgres-7.0.3\n>\n>CREATE TABLE ciudad (\n> id_ciudad SERIAL,\n> ciudad\t VARCHAR(60)\n>);\n>\n>GRANT ALL ON ciudad TO martin;\n>\n>CREATE INDEX ciudad_idx ON ciudad (ciudad);\n>\n>And I try to insert a value using this query:\n>\n>INSERT INTO ciudad (ciudad) VALUES (\"Villa Guillermina\")\n>\n>getting as responce the message:\n>\n>ERROR: Attribute 'Villa Guillermina' not found. \n>\n>The field ciudad is of type varchar(60), so I don't get why it gives me this \n>message.\n>\n>Any ideas?\n>\n>\n>-- \n>System Administration: It's a dirty job, \n>but someone told I had to do it.\n>-----------------------------------------------------------------\n>Mart�n Marqu�s\t\t\temail: \[email protected]\n>Santa Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\n>Administrador de sistemas en math.unl.edu.ar\n>-----------------------------------------------------------------\n",
"msg_date": "Thu, 21 Dec 2000 10:24:09 +0530",
"msg_from": "Anand Raman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with query"
}
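For reference, a short sketch of the rule behind the error in this thread: single quotes delimit string literals while double quotes delimit identifiers, so the double-quoted value was parsed as a column name - hence "Attribute ... not found" rather than a complaint about the value:

INSERT INTO ciudad (ciudad) VALUES ('Villa Guillermina');  -- string literal: works
INSERT INTO ciudad (ciudad) VALUES ("Villa Guillermina");  -- identifier: attribute not found
SELECT "ciudad" FROM ciudad;                               -- double quotes name a column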
]
|
[
{
"msg_contents": "\nMorning ...\n\n\tJust to let ppl know what I'm using as milestones for beta\nreleases:\n\nbeta2 - vadim finishes WAL stuff he's currently working on\nbeta3 - vadim incorporates LAZY extension to VACUUM\nbeta4 - a week later to clean out any bugs as a result of beta3\nbeta5 - first release candidate, a week after beta4\n\nassuming no bugs with beta5, no more beta's, just release ...\n\nVadim hopes to have WAL completed by ~Friday, and the LAZY stuff is\nalready written, he just doesn't want to divert time from WAL right now as\nits going to need the most testing ... so, if we are lucky, beta3 will be\nJan 1st(ish) and release for the 1st of February ...\n\n... there abouts ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 20 Dec 2000 17:04:16 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Future beta releases ... "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> beta2 - vadim finishes WAL stuff he's currently working on\n\nI think the TOAST-table-vacuuming issue is a \"must fix for beta2\", also.\nI'm on it now...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2000 17:11:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Future beta releases ... "
},
{
"msg_contents": "On Wed, 20 Dec 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > beta2 - vadim finishes WAL stuff he's currently working on\n> \n> I think the TOAST-table-vacuuming issue is a \"must fix for beta2\", also.\n> I'm on it now...\n\nAgreed ... that list wasn't a \"beta2 happens as soon as Vadim commits\",\nmore a \"when Vadim commits, I'll ask the list if there is anything left\noutstanding\" :) Sort of a personal trigger ...\n\n\n",
"msg_date": "Wed, 20 Dec 2000 18:19:01 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Future beta releases ... "
},
{
"msg_contents": "I'd like to get into some beta release with GiST changes.\nI want people who use GiST (not so much expected) to test\nchanges. I've posted a list of changes.\n\nRegards,\n\tOleg\n\nOn Wed, 20 Dec 2000, The Hermit Hacker wrote:\n\n> Date: Wed, 20 Dec 2000 17:04:16 -0400 (AST)\n> From: The Hermit Hacker <[email protected]>\n> To: [email protected]\n> Subject: [HACKERS] Future beta releases ... \n> \n> \n> Morning ...\n> \n> \tJust to let ppl know what I'm using as milestones for beta\n> releases:\n> \n> beta2 - vadim finishes WAL stuff he's currently working on\n> beta3 - vadim incorporates LAZY extension to VACUUM\n> beta4 - a week later to clean out any bugs as a result of beta3\n> beta5 - first release candidate, a week after beta4\n> \n> assuming no bugs with beta5, no more beta's, just release ...\n> \n> Vadim hopes to have WAL completed by ~Friday, and the LAZY stuff is\n> already written, he just doesn't want to divert time from WAL right now as\n> its going to need the most testing ... so, if we are lucky, beta3 will be\n> Jan 1st(ish) and release for the 1st of February ...\n> \n> ... there abouts ...\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 21 Dec 2000 01:36:25 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Future beta releases ... "
},
{
"msg_contents": "\nthe GiST changes are more fixes ... no?\n\n\nOn Thu, 21 Dec 2000, Oleg Bartunov wrote:\n\n> I'd like to get into some beta release with GiST changes.\n> I want people who use GiST (not so much expected) to test\n> changes. I've posted a list of changes.\n> \n> Regards,\n> \tOleg\n> \n> On Wed, 20 Dec 2000, The Hermit Hacker wrote:\n> \n> > Date: Wed, 20 Dec 2000 17:04:16 -0400 (AST)\n> > From: The Hermit Hacker <[email protected]>\n> > To: [email protected]\n> > Subject: [HACKERS] Future beta releases ... \n> > \n> > \n> > Morning ...\n> > \n> > \tJust to let ppl know what I'm using as milestones for beta\n> > releases:\n> > \n> > beta2 - vadim finishes WAL stuff he's currently working on\n> > beta3 - vadim incorporates LAZY extension to VACUUM\n> > beta4 - a week later to clean out any bugs as a result of beta3\n> > beta5 - first release candidate, a week after beta4\n> > \n> > assuming no bugs with beta5, no more beta's, just release ...\n> > \n> > Vadim hopes to have WAL completed by ~Friday, and the LAZY stuff is\n> > already written, he just doesn't want to divert time from WAL right now as\n> > its going to need the most testing ... so, if we are lucky, beta3 will be\n> > Jan 1st(ish) and release for the 1st of February ...\n> > \n> > ... there abouts ...\n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > \n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 20 Dec 2000 18:48:03 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Future beta releases ... "
},
{
"msg_contents": "On Wed, 20 Dec 2000, The Hermit Hacker wrote:\n\n> Date: Wed, 20 Dec 2000 18:48:03 -0400 (AST)\n> From: The Hermit Hacker <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] Future beta releases ... \n> \n> \n> the GiST changes are more fixes ... no?\n\nin general yes, I couldn't say what is adding support variable size of \nindex keys\n\n\tRegards,\n\t\tOleg\n> \n> \n> On Thu, 21 Dec 2000, Oleg Bartunov wrote:\n> \n> > I'd like to get into some beta release with GiST changes.\n> > I want people who use GiST (not so much expected) to test\n> > changes. I've posted a list of changes.\n> > \n> > Regards,\n> > \tOleg\n> > \n> > On Wed, 20 Dec 2000, The Hermit Hacker wrote:\n> > \n> > > Date: Wed, 20 Dec 2000 17:04:16 -0400 (AST)\n> > > From: The Hermit Hacker <[email protected]>\n> > > To: [email protected]\n> > > Subject: [HACKERS] Future beta releases ... \n> > > \n> > > \n> > > Morning ...\n> > > \n> > > \tJust to let ppl know what I'm using as milestones for beta\n> > > releases:\n> > > \n> > > beta2 - vadim finishes WAL stuff he's currently working on\n> > > beta3 - vadim incorporates LAZY extension to VACUUM\n> > > beta4 - a week later to clean out any bugs as a result of beta3\n> > > beta5 - first release candidate, a week after beta4\n> > > \n> > > assuming no bugs with beta5, no more beta's, just release ...\n> > > \n> > > Vadim hopes to have WAL completed by ~Friday, and the LAZY stuff is\n> > > already written, he just doesn't want to divert time from WAL right now as\n> > > its going to need the most testing ... so, if we are lucky, beta3 will be\n> > > Jan 1st(ish) and release for the 1st of February ...\n> > > \n> > > ... there abouts ...\n> > > \n> > > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > > Systems Administrator @ hub.org \n> > > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > > \n> > \n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: [email protected], http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> > \n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 21 Dec 2000 08:15:45 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Future beta releases ... "
},
{
"msg_contents": "> in general yes, I couldn't say what is adding support variable size of\n> index keys\n\nOleg, the correct answer at this point is always \"yes Marc it is just\nbug fixes\" ;)\n\nThe GiST stuff is *really great*, but as you've noticed was also sorely\nneglected. It will be A Good Thing to get this stuff active again.\n\n - Thomas\n",
"msg_date": "Thu, 21 Dec 2000 06:38:54 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Future beta releases ..."
}
]
|
[
{
"msg_contents": "Hello,\n\nAnyone have any clues why such crap can happend?\n\nslygreetings=# vacuum verbose analyze statistic;\nNOTICE: --Relation statistic --\nNOTICE: Pages 498: Changed 1, reaped 490, Empty 0, New 0; Tup 1167: Vac 0, \nKeep/VTL 1110/1110, Crash 0, UnUsed 76888, MinLen 48, MaxLen 48; Re-using: \nFree/Avail. Space 3700424/3700424; EndEmpty/Avail. Pages 0/490. CPU \n0.05s/0.02u sec.\nNOTICE: Index statistic_date_vid_key: Pages 458; Tuples 1167: Deleted 0. CPU \n0.05s/0.00u sec.\nNOTICE: Too old parent tuple found - can't continue vc_repair_frag\nNOTICE: Rel statistic: Pages: 498 --> 498; Tuple(s) moved: 0. CPU \n0.00s/0.01u sec.\nVACUUM \n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Thu, 21 Dec 2000 10:14:01 +0600",
"msg_from": "Denis Perchine <[email protected]>",
"msg_from_op": true,
"msg_subject": "Another vacuum problem"
}
]
|
[
{
"msg_contents": "\tListAdmin: Ignore the stalled/delayed posts from me earlier.\nAccidently posted with the wrong from address.... :(\n\nOn Wed, 20 Dec 2000, Tom Lane wrote:\n\n> Ryan Kirkpatrick <[email protected]> writes:\n> > INSERT INTO OID_TBL(f1) VALUES ('-1040');\n> > ERROR: oidin: error reading \"-1040\": value too large\n> \n> That's coming from a possibly-misguided error check that I put into\n> oidin():\n...\n> On a 32-bit machine, -1040 converts to 4294966256, but on a 64-bit\n> machine it converts to 2^64-1040, and the test is accordingly deciding\n> that that value won't fit in an Oid.\n> \n> Not sure what to do about this. If you had actually typed 2^64-1040,\n> it would be appropriate for the code to reject it. But I hadn't\n> realized that the extra check would introduce a discrepancy between\n> 32- and 64-bit machines for negative inputs. Maybe it'd be better just\n> to delete the check. Comments anyone?\n\n\tI will leave it up to the decision of the list. Though in my\nopinion attempting to insert a negative oid should be met with an error\noutright. Otherwise you end up with a phantom error (i.e. the result is\nnot what you expected, but you are given any warning or notice to that\neffect). \n\n> > SELECT p.name, p.hobbies.name FROM person* p;\n> > pqReadData() -- backend closed the channel unexpectedly.\n> \n> Backtrace please?\n\n\tIt appears that one has to run a decent amount of the regression\ntests to get the backend to crash. There is heavy dependency between the\ntests, so it is not trival to come up with a simple test case. Though I\ncan say simple inheritance (i.e. the inheritance example in the docs)\nworks fine. Apparently the more exotic and complex variety of inheritance\nstarts causing problems. I simply ran the regression tests in serial mode\nand gathered the below data. 
\n\tHere is the relevant from the postmaster logs:\n\n...\nDEBUG: Deadlock risk: raising lock level from ShareLock to ExclusiveLock\non object 401236/280863/8\nDEBUG: Deadlock risk: raising lock level from ShareLock to ExclusiveLock\non object 401242/280863/1\nDEBUG: Deadlock risk: raising lock level from ShareLock to ExclusiveLock\non object 401245/280863/7\nServer process (pid 8468) exited with status 11 at Wed Dec 20 21:01:56\n2000\nTerminating any active server processes...\nServer processes were terminated at Wed Dec 20 21:01:56 2000\nReinitializing shared memory and semaphores\n...\n\n\tAnd from gdb, here is the post-mortem (not for the faint of\nheart):\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x1200bc160 in ExecEvalFieldSelect ()\n\n#0 0x1200bc160 in ExecEvalFieldSelect ()\n#1 0x1200bc6dc in ExecEvalExpr ()\n#2 0x1200bb868 in ExecEvalFuncArgs ()\n#3 0x1200bb964 in ExecMakeFunctionResult ()\n#4 0x1200bbd18 in ExecEvalOper ()\n#5 0x1200bc63c in ExecEvalExpr ()\n#6 0x1200bc85c in ExecQual ()\n#7 0x1200bcf7c in ExecScan ()\n#8 0x1200c677c in ExecSeqScan ()\n#9 0x1200b9e2c in ExecProcNode ()\n#10 0x1200b8550 in ExecutePlan ()\n#11 0x1200b72bc in ExecutorRun ()\n#12 0x1200bee2c in postquel_getnext ()\n#13 0x1200bf080 in postquel_execute ()\n#14 0x1200bf298 in fmgr_sql ()\n#15 0x1200bbab0 in ExecMakeFunctionResult ()\n#16 0x1200bbdd8 in ExecEvalFunc ()\n#17 0x1200bc64c in ExecEvalExpr ()\n#18 0x1200bc14c in ExecEvalFieldSelect ()\n#19 0x1200bc6dc in ExecEvalExpr ()\n#20 0x1200b64dc in ExecEvalIter ()\n#21 0x1200bc5d0 in ExecEvalExpr ()\n#22 0x1200bcae0 in ExecTargetList ()\n#23 0x1200bce20 in ExecProject ()\n#24 0x1200c6330 in ExecResult ()\n#25 0x1200b9dec in ExecProcNode ()\n#26 0x1200b8550 in ExecutePlan ()\n#27 0x1200b72bc in ExecutorRun ()\n#28 0x1201319c8 in ProcessQuery ()\n#29 0x12012f77c in pg_exec_query_string ()\n#30 0x120131120 in PostgresMain ()\n#31 0x12010ced4 in DoBackend ()\n#32 0x12010c818 in BackendStartup ()\n#33 0x12010b228 in ServerLoop ()\n#34 0x12010a9a0 in PostmasterMain ()\n#35 0x1200d48a8 in main ()\n\n\tTTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n\n\n\n",
"msg_date": "Wed, 20 Dec 2000 21:17:05 -0700 (MST)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL pre-7.1 Linux/Alpha Status... (fwd)"
}
]
|
[
{
"msg_contents": "\n> select * from table where col = function() ;\n\n> (2) \"function()\" returns a number of values that are independent of the\n> query. Postgres should be able to optimize this to be: \"select * from\n> table where col in (val1, val2, val3, ..valn).\" I guess Postgres can\n> loop until done, using the isDone flag?\n\nI think the above needs a different sql statement to begin with. \nThe \"= function()\" clearly states that function is only allowed to return one row.\n\nThe following syntax currently works, and is imho sufficient:\n\tselect * from table where col in (select function());\n\nAndreas\n",
"msg_date": "Thu, 21 Dec 2000 10:43:39 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Three types of functions, ala function redux."
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > select * from table where col = function() ;\n> \n> > (2) \"function()\" returns a number of values that are independent of the\n> > query. Postgres should be able to optimize this to be: \"select * from\n> > table where col in (val1, val2, val3, ..valn).\" I guess Postgres can\n> > loop until done, using the isDone flag?\n> \n> I think the above needs a different sql statement to begin with.\n> The \"= function()\" clearly states that function is only allowed to return one row.\n> \n> The following syntax currently works, and is imho sufficient:\n> select * from table where col in (select function());\n\nBoth syntaxes work, but always force a table scan. If you have an index\non 'col' it will not be used. If your table has millions of records,\nthis takes time.\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Thu, 21 Dec 2000 07:27:56 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Three types of functions, ala function redux."
},
{
"msg_contents": "Acutally, a function can use an index scan *if* it is marked as cacheable:\n(the \"test\" table has 1 field, col (type is int4), which is populated with\nnumbers 1 thru 5000)\n\ntestdb=# create function func_test_cache (int4) returns int4 as '\ntestdb'# select $1;\ntestdb'# ' LANGUAGE 'sql' with (iscachable);\nCREATE\ntestdb=# create function func_test (int4) returns int4 as '\ntestdb'# select $1;\ntestdb'# ' LANGUAGE 'sql';\nCREATE\ntestdb=# vacuum analyze;\nVACUUM\ntestdb=# explain select * from test where col = func_test_cache(1);\nNOTICE: QUERY PLAN:\nIndex Scan using idxtest on test (cost=0.00..2.01 rows=1 width=4)\nEXPLAIN\ntestdb=# explain select * from test where col = func_test(1);\nNOTICE: QUERY PLAN:\nSeq Scan on test (cost=0.00..100.00 rows=1 width=4)\nEXPLAIN\n\nMichael Fork - CCNA - MCP - A+\nNetwork Support - Toledo Internet Access - Toledo Ohio\n\nOn Thu, 21 Dec 2000, mlw wrote:\n\n> Zeugswetter Andreas SB wrote:\n> > \n> > > select * from table where col = function() ;\n> > \n> > > (2) \"function()\" returns a number of values that are independent of the\n> > > query. Postgres should be able to optimize this to be: \"select * from\n> > > table where col in (val1, val2, val3, ..valn).\" I guess Postgres can\n> > > loop until done, using the isDone flag?\n> > \n> > I think the above needs a different sql statement to begin with.\n> > The \"= function()\" clearly states that function is only allowed to return one row.\n> > \n> > The following syntax currently works, and is imho sufficient:\n> > select * from table where col in (select function());\n> \n> Both syntaxes work, but always force a table scan. If you have an index\n> on 'col' it will not be used. If your table has millions of records,\n> this takes time.\n> \n> -- \n> http://www.mohawksoft.com\n> \n\n",
"msg_date": "Thu, 21 Dec 2000 10:56:25 -0500 (EST)",
"msg_from": "Michael Fork <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Three types of functions, ala function redux."
}
]
|
[
{
"msg_contents": "\n> Not sure what to do about this. If you had actually typed 2^64-1040,\n> it would be appropriate for the code to reject it. But I hadn't\n> realized that the extra check would introduce a discrepancy between\n> 32- and 64-bit machines for negative inputs. Maybe it'd be \n> better just\n> to delete the check. Comments anyone?\n\nIIRC oid uses int4in/int4out and those should definitely be able to parse \n-1040 into a 4 byte signed long without platform dependency, no ?\n\npg_dump with OID's dumps those negative numbers if oid > 2^32,\nthus this input must imho be made to work correctly.\n\nAndreas\n",
"msg_date": "Thu, 21 Dec 2000 11:53:11 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: PostgreSQL pre-7.1 Linux/Alpha Status... "
},
{
"msg_contents": "> IIRC oid uses int4in/int4out and those should definitely be able to parse\n> -1040 into a 4 byte signed long without platform dependency, no ?\n\nTom Lane changed this recently to have OID use its own i/o routines.\n\n - Thomas\n",
"msg_date": "Thu, 21 Dec 2000 14:54:10 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] PostgreSQL pre-7.1 Linux/Alpha Status..."
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> IIRC oid uses int4in/int4out and those should definitely be able to parse \n> -1040 into a 4 byte signed long without platform dependency, no ?\n\nIt has done that in past releases. I changed it to use unsigned display\nfor 7.1. Because of the past behavior, I think oidin had better accept\nnegative inputs for backwards compatibility. Besides which, strtoul()\ndoes that naturally ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Dec 2000 10:24:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: PostgreSQL pre-7.1 Linux/Alpha Status... "
}
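A sketch of the behaviour settled on in this thread, using the regression-test table mentioned earlier; the displayed value is what the thread reports for a 32-bit build, where -1040 wraps to 2^32 - 1040:

CREATE TABLE oid_tbl (f1 oid);
INSERT INTO oid_tbl (f1) VALUES ('-1040');  -- accepted for backwards compatibility
SELECT f1 FROM oid_tbl;                     -- displayed unsigned: 4294966256 on a 32-bit build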
]
|
[
{
"msg_contents": "\n> > VACUUM ANALYZE after the INSERTs made no performance difference at all,\n> > which is good since no other modern database requires anything to be done\n> > to improve performance after a large number of INSERTs. (i can understand\n> > why COPY would need it, but not INSERT.)\n\nI know of no DB that keeps statistics on the fly for each insert/update (maybe Adabas D ?).\n\n> afaik every modern database requires something like this to update\n> optimizer stats, since on-the-fly stats accumulation can be expensive\n> and inaccurate. But most of my recent experience has been with\n> PostgreSQL and perhaps some other DBs have added some hacks to get\n> around this. Of course, some databases advertised as modern don't do\n> much optimization, so don't need the stats.\n\nTo add another 2 Cents, \"most :-)\" other DB's have two modes of operation,\none rule based when stats are missing alltogether (never ran analyze/update statistics)\nwhich is actually not bad given a pure OLTP access pattern with a high modification volume.\nThe other is the cost based optimizer that needs stats and typically improves performance\nfor query intensive applications that also do OLAP access.\n\nActually PostgreSQL also has this \"sort of\" rule based optimizer which works well\nbefore the first vacuum of a table. The down side is, that the first vacuum\ncreates statistics, and that is not avoidable. My suggestion would be to alter \nvacuum to have a mode of operation that does not create (or even drops) all statistics.\n\nAndreas\n",
"msg_date": "Thu, 21 Dec 2000 12:21:31 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: day 2 results"
}
]
|
[
{
"msg_contents": "\n> > IIRC oid uses int4in/int4out and those should definitely be able to parse\n> > -1040 into a 4 byte signed long without platform dependency, no ?\n> \n> Tom Lane changed this recently to have OID use its own i/o routines.\n\nReading the code, I don't understand it. Why would strtoul return an int in the \nfirst place ? The name seems to imply an unsigned long return type. \nSeems config should check the return type of strtoul.\n\nAndreas\n",
"msg_date": "Thu, 21 Dec 2000 16:18:31 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: PostgreSQL pre-7.1 Linux/Alpha Status..."
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> Reading the code, I don't understand it. Why would strtoul return an\n> int in the first place ? The name seems to imply an unsigned long\n> return type.\n\nWhat's your point?\n\n\tunsigned long cvt;\n\n\tcvt = strtoul(s, &endptr, 10);\n\nThe trick is to get from unsigned long to oid (which is unsigned int)\non machines where those are different sizes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Dec 2000 10:34:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: PostgreSQL pre-7.1 Linux/Alpha Status... "
}
]
|
[
{
"msg_contents": "Tom,\n\nwhile porting our patches for GiST from 7.0.3 to 7.1 we\ngot a problem with equal operator for _int4 - \nsrc/backend/access/gist.c:540\n\n /* did union leave decompressed version of oldud unchanged? */\n FunctionCall3(&giststate->equalFn,\n PointerGetDatum(ev0p->pred),\n PointerGetDatum(datum),\n PointerGetDatum(&result));\n\nthis call produces core when one of the PointerGetDatum(ev0p->pred)\nor PointerGetDatum(datum) is NULL\n\nWe use internal postgres function for array comparison -\n &giststate->equalFn is references to array_eq\n\nThere is no problem in 7.0.3\n\nDo you have any idea what could be a reason for such behaivour ?\n(bug or feature :-)\n\n regards,\n\n\t\tOleg\n\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 21 Dec 2000 19:20:44 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "equal operator for _int4 (array of int4)"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> this call produces core when one of the PointerGetDatum(ev0p->pred)\n> or PointerGetDatum(datum) is NULL\n> We use internal postgres function for array comparison -\n> &giststate->equalFn is references to array_eq\n\narray_eq is marked strict, so it's not expecting to get a NULL input.\n\nIt's impossible to pass a true SQL NULL through FunctionCall3() anyway\n--- no, a null pointer is not an SQL null. So if you want to use\na coding convention that equates null pointer with SQL null, you'll\nhave to implement that within your own code and avoid calling array_eq\nwhen you have a null.\n\nIIRC, the rtree and/or gist index types are fairly sloppy about this\npoint at the moment. I do not like that, because I do not think an\nindex type should depend on the assumption that all datatypes it can\nhandle are pass-by-reference. If you're going to support nulls then\nthere needs to be a separate isnull flag for each datum, *not* an\nassumption that all-zero-bits can't be a valid datum value. But I\ndidn't get around to changing the code yet.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Dec 2000 11:32:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: equal operator for _int4 (array of int4) "
},
{
"msg_contents": "On Thu, 21 Dec 2000, Tom Lane wrote:\n\n> Date: Thu, 21 Dec 2000 11:32:47 -0500\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: [HACKERS] Re: equal operator for _int4 (array of int4) \n> \n> Oleg Bartunov <[email protected]> writes:\n> > this call produces core when one of the PointerGetDatum(ev0p->pred)\n> > or PointerGetDatum(datum) is NULL\n> > We use internal postgres function for array comparison -\n> > &giststate->equalFn is references to array_eq\n> \n> array_eq is marked strict, so it's not expecting to get a NULL input.\n> \n> It's impossible to pass a true SQL NULL through FunctionCall3() anyway\n> --- no, a null pointer is not an SQL null. So if you want to use\n> a coding convention that equates null pointer with SQL null, you'll\n> have to implement that within your own code and avoid calling array_eq\n> when you have a null.\n\nok. one check isn't difficult to add :-)\n\n> \n> IIRC, the rtree and/or gist index types are fairly sloppy about this\n> point at the moment. I do not like that, because I do not think an\n> index type should depend on the assumption that all datatypes it can\n> handle are pass-by-reference. If you're going to support nulls then\n> there needs to be a separate isnull flag for each datum, *not* an\n> assumption that all-zero-bits can't be a valid datum value. But I\n> didn't get around to changing the code yet.\n> \n\nTom, this task is too complex for our current understanding of postgres \ninternals. What will happens if we ignore NULLs ? We need to provide\nvacuum some information about numbers of NULL values.\n\n\tOleg\n\n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 21 Dec 2000 19:48:50 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: equal operator for _int4 (array of int4) "
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> What will happens if we ignore NULLs ?\n\nSame thing that happens with hash:\n\nregression=# create table foo (f1 int);\nCREATE\nregression=# create index fooi on foo using hash (f1);\nCREATE\nregression=# insert into foo values(1);\nINSERT 292677 1\nregression=# insert into foo values(null);\nINSERT 292678 1\nregression=# vacuum foo;\nNOTICE: Index fooi: NUMBER OF INDEX' TUPLES (1) IS NOT THE SAME AS HEAP' (2).\n Recreate the index.\nVACUUM\n\n> We need to provide vacuum some information about numbers of NULL values.\n\nPreferably without hardwiring assumptions about the behavior of\ndifferent index types into VACUUM.\n\nThat cross-check in VACUUM has really caused way more grief than it's\nworth. I'm beginning to wonder if we should just take it out...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Dec 2000 13:09:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: equal operator for _int4 (array of int4) "
}
]
|
[
{
"msg_contents": "I've very roughly (first time I've tried anything but hello world c) hacked up inline comments.\n\npg_dump -I\n\nExports the comments generated through COMMENT ON in an appropriate manner (line above) the item with a -- in front. More or less a self documenting dump, or atleast an attempt at it.\n\nHowever, due to my poor programming in this language, I'm not sure of teh best way to handle the issues following:\n- Column comments mis-format the next row (Needs a \\t or something)\n- Database comments non-existent, wasn't sure how or where to pull them out.\n- I've only tested TABLE and COLUMN comments. Didn't have a database handy with the rest, and had a limited amount of time to fiddle.\n\nTake a look and see if it's worth anything or if it needs to be fixed up.\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the truth, and what really happened.",
"msg_date": "Thu, 21 Dec 2000 12:33:10 -0500",
"msg_from": "\"Rod Taylor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inline Comments for pg_dump"
},
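For context, a sketch of the input the patch is meant to carry through into the dump; COMMENT ON stores the text in pg_description, and with the proposed pg_dump -I flag it would reappear as a -- comment above the item. The table and wording are illustrative:

COMMENT ON TABLE ciudad IS 'lookup table of city names';
COMMENT ON COLUMN ciudad.ciudad IS 'city name';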
{
"msg_contents": "We are already in beta, so I don't think I can apply this. I will keep\nit and apply in our 7.2 development tree.\n\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> I've very roughly (first time I've tried anything but hello world c) hacked up inline comments.\n> \n> pg_dump -I\n> \n> Exports the comments generated through COMMENT ON in an appropriate manner (line above) the item with a -- in front. More or less a self documenting dump, or atleast an attempt at it.\n> \n> However, due to my poor programming in this language, I'm not sure of teh best way to handle the issues following:\n> - Column comments mis-format the next row (Needs a \\t or something)\n> - Database comments non-existent, wasn't sure how or where to pull them out.\n> - I've only tested TABLE and COLUMN comments. Didn't have a database handy with the rest, and had a limited amount of time to fiddle.\n> \n> Take a look and see if it's worth anything or if it needs to be fixed up.\n> --\n> Rod Taylor\n> \n> There are always four sides to every story: your side, their side, the truth, and what really happened.\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 21 Dec 2000 14:17:00 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inline Comments for pg_dump"
},
{
"msg_contents": "Believe it or not, I was just about to start working on comment support (by\npracticing in phpPgAdmin, so I'm happy to look over the code to see if I can\naddress the issues raised, and maybe to do it for all database objects...?\n\nChris\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n> Sent: Friday, December 22, 2000 3:17 AM\n> To: Rod Taylor\n> Cc: [email protected]\n> Subject: Re: [HACKERS] Inline Comments for pg_dump\n>\n>\n> We are already in beta, so I don't think I can apply this. I will keep\n> it and apply in our 7.2 development tree.\n>\n>\n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > I've very roughly (first time I've tried anything but hello\n> world c) hacked up inline comments.\n> >\n> > pg_dump -I\n> >\n> > Exports the comments generated through COMMENT ON in an\n> appropriate manner (line above) the item with a -- in front.\n> More or less a self documenting dump, or atleast an attempt at it.\n> >\n> > However, due to my poor programming in this language, I'm not\n> sure of teh best way to handle the issues following:\n> > - Column comments mis-format the next row (Needs a \\t or something)\n> > - Database comments non-existent, wasn't sure how or where to\n> pull them out.\n> > - I've only tested TABLE and COLUMN comments. Didn't have a\n> database handy with the rest, and had a limited amount of time to fiddle.\n> >\n> > Take a look and see if it's worth anything or if it needs to be\n> fixed up.\n> > --\n> > Rod Taylor\n> >\n> > There are always four sides to every story: your side, their\n> side, the truth, and what really happened.\n>\n> [ Attachment, skipping... ]\n>\n> [ Attachment, skipping... ]\n>\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Fri, 22 Dec 2000 09:14:49 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Inline Comments for pg_dump"
},
{
"msg_contents": "\nCan someone remind me what happened to this issue?\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> I've very roughly (first time I've tried anything but hello world c) hacked up inline comments.\n> \n> pg_dump -I\n> \n> Exports the comments generated through COMMENT ON in an appropriate manner (line above) the item with a -- in front. More or less a self documenting dump, or atleast an attempt at it.\n> \n> However, due to my poor programming in this language, I'm not sure of teh best way to handle the issues following:\n> - Column comments mis-format the next row (Needs a \\t or something)\n> - Database comments non-existent, wasn't sure how or where to pull them out.\n> - I've only tested TABLE and COLUMN comments. Didn't have a database handy with the rest, and had a limited amount of time to fiddle.\n> \n> Take a look and see if it's worth anything or if it needs to be fixed up.\n> --\n> Rod Taylor\n> \n> There are always four sides to every story: your side, their side, the truth, and what really happened.\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Jan 2001 23:52:51 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inline Comments for pg_dump"
},
{
"msg_contents": "\nDo we handle COMMENT properly in pg_dump now already?\n\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> I've very roughly (first time I've tried anything but hello world c) hacked up inline comments.\n> \n> pg_dump -I\n> \n> Exports the comments generated through COMMENT ON in an appropriate manner (line above) the item with a -- in front. More or less a self documenting dump, or atleast an attempt at it.\n> \n> However, due to my poor programming in this language, I'm not sure of teh best way to handle the issues following:\n> - Column comments mis-format the next row (Needs a \\t or something)\n> - Database comments non-existent, wasn't sure how or where to pull them out.\n> - I've only tested TABLE and COLUMN comments. Didn't have a database handy with the rest, and had a limited amount of time to fiddle.\n> \n> Take a look and see if it's worth anything or if it needs to be fixed up.\n> --\n> Rod Taylor\n> \n> There are always four sides to every story: your side, their side, the truth, and what really happened.\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 09:46:15 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inline Comments for pg_dump"
}
]
|
[
{
"msg_contents": "Is there anything for encoding a PGresult struct into something I\ncan pass between processes? Like turning it into a platform\nindependant stream that I can pass between machines?\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Thu, 21 Dec 2000 11:03:08 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "externalizing PGresult?"
}
]
|
[
{
"msg_contents": "\n\n-- \nPeter Mount\nEnterprise Support Officer, Maidstone Borough Council\nEmail: [email protected]\nWWW: http://www.maidstone.gov.uk\nAll views expressed within this email are not the views of Maidstone Borough\nCouncil\n\n\n> -----Original Message-----\n> From: Peter Eisentraut [mailto:[email protected]]\n> Sent: Thursday, December 21, 2000 6:30 PM\n> To: Peter Mount\n> Cc: PostgreSQL Interfaces (E-mail); PostgreSQL Developers \n> List (E-mail)\n> Subject: Re: [HACKERS] Status of JDBC Interface\n> \n> \n> Peter Mount writes:\n> \n> > 1) ANT vs Make\n> \n> > I suggest we keep supporting both methods for now to see \n> how people get on.\n> \n> If you're confident about ANT is suggest that you dump the \n> make interface because otherwise you increase the possible\n> failure scenarios at the install level alone in combinatorial ways.\n\nMy plan is to keep both for this release and then (assuming there's feedback\nabout how ANT is working) to remove the current Makefile for the next one.\n\n> What's a bit sad about the JDBC subtree is that it doesn't follow the\n> build system conventions in the rest of the tree. For example, I would\n> really like to be able to do this:\n> \n> ./configure ... --with-java[=/usr/local/j2sdk1.3.0]\n> make\n> make install\n\nI did think about that but ran out of time. What would be nice would be for\nconfigure to either detect the presence of the JDK & ANT (with --with-java\nand --with-ant pairs) and then automatically call a cut down Makefile. ie:\n\n-- Begin Makefile --\nall:\n\tant\n\nclean:\n\tant clean\n-- End Makefile --\n\nThe one big thing for ANT is that it makes the detection of the JDK/JVM\nversion simple, so detecting the JDK/JVM version isn't required anywhere in\nthe Makefiles.\n\n> This wouldn't only make it easier on users, but many more people would\n> perhaps be exposed to the fact that there's a JDBC driver in \n> the tree at all.\n\nAgreed. I still get emails from people asking for the source when it's\nincluded in the main source tree.\n\n> I have on and off had some ideas about autoconfiscating the \n> JDBC build but I'm glad I didn't do it since this Ant thing seems to be\nmuch better.\n> But it would still be desirable to have a make wrapper since \n> that is what people are accustomed to.\n\nYes. I could replace Makefile now, but I wanted to see what everyones\nopinion on ANT was first.\n\n> Btw., how does Ant choose the JDK it uses if I have three or four\n> installed at random places? (This is perhaps yet another source of\n> problems, if make and ant use different JDKs by default.)\n\nThere's the JAVA_HOME environment variable used by the JDK. Normally the JDK\ncan work it out without the user setting it. I use it to switch between\n1.1.8 & 1.2.2 (you also have to change PATH).\n\nAnyhow, you have set JAVA_HOME then ANT will use that for the compiler.\nThere's also ANT_HOME but I'm running ok without that one set.\n\nPS: ANT works with both Sun's javac, jikes (IBM's compiler) and jvc\n(Micro$oft's Java/VisualJ++)\n\nThe more I look into ANT's capabilities the more I like it. It's extensible\n(you can write your own class to implement new tasks) and it even has the\nability to use CVS and apply patches on the fly, so if someone has CVS\ninstalled they only need to run:\n\n\tant update\n\nand an update target (ANT's name for a rule in Make) then checks out the\ncurrent source before building.\n\n> > 2) Versioning\n> \n> > one location. 
Also as suggested on the Hackers list Make \n> now extracts the\n> > version from Makefile.global. This works fine for Make, but \n> there are two\n> > problems.\n> >\n> > First, ANT can't read this easily. This isn't that major, \n> but the second one\n> > is. I've had reports that some people checkout just the \n> interfaces, and not\n> > the entire source, so Makefile.global is not available.\n> \n> Just checking out interfaces is not advertised AFAIK and it definitely\n> doesn't work for anything but the JDBC driver.\n\nI had an email from someone who said they did this (I didn't know you could\nbefore then) because of space reasons. Before 7.0 yes JDBC would be\ncompileable but it has a link to Makefile.global now.\n\n> OTOH, nothing is stopping you from inventing your own \n> versioning scheme for the driver only. Several other things in the tree\ndo this as well\n> (PyGreSql, PgAccess).\n\nTrue, but historically the JDBC versioning has matched that of the backend,\nand I think it makes it easier to say JDBC Driver v7.1 is for 7.1.x\nbackends. This is especially as in this version DatabaseMetaData will not\nwork with earlier version backends (uses INNER & OUTER joins).\n\nIf I get chance today I'll see if I can get it to pull the versions out of\nMakefile.global.\n\nPeter\n\n",
"msg_date": "Fri, 22 Dec 2000 09:07:50 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Status of JDBC Interface"
}
]
|
[
{
"msg_contents": "It looks Ok, but it has one unnecessary step. There is no need to do the \"mv\nprivkey.pem cert.pem.pw\" if you just use \"privkey.pem\" in the following\nopenssl command (e.g. openssl rsa -in privkey.pem -out cert.pem\".\nBut there is nothing wrong with it as it is now, as far as I can see.\n\n\n//Magnus\n\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: den 21 december 2000 20:15\n> To: Magnus Hagander\n> Cc: 'Matthew Kirkwood'; '[email protected]'\n> Subject: Re: [PATCHES] RE: SSL Connections [doc PATCH]\n> \n> \n> I have applied an earlier patch to this file for SSL. Could you check\n> the current tree and see how you like it?\n> \n> \n> > Thanks for that one!\n> > \n> > Here is a patch to update the documentation based on this - \n> this should make\n> > it less dependant on the version of OpenSSL used.\n> > \n> > //Magnus\n> > \n> > \n> > \n> > > -----Original Message-----\n> > > From: Matthew Kirkwood [mailto:[email protected]]\n> > > Sent: den 21 december 2000 16:49\n> > > To: Oliver Elphick\n> > > Cc: [email protected]\n> > > Subject: Re: [HACKERS] SSL Connections\n> > > \n> > > \n> > > On Wed, 20 Dec 2000, Oliver Elphick wrote:\n> > > \n> > > > To create a quick self-signed certificate, use the CA.pl script\n> > > > included in OpenSSL:\n> > > > \n> > > > CA.pl -newcert\n> > > \n> > > Or you can do it manually:\n> > > \n> > > openssl req -new -text -out cert.req (you will have to enter \n> > > a password)\n> > > mv privkey.pem cert.pem.pw\n> > > openssl rsa -in cert.pem.pw -out cert.pem (this removes \n> the password)\n> > > openssl req -x509 -in cert.req -text -key cert.pem -out cert.cert\n> > > \n> > > Matthew.\n> > > \n> > \n> \n> [ Attachment, skipping... ]\n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, \n> Pennsylvania 19026\n> \n",
"msg_date": "Fri, 22 Dec 2000 12:09:54 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: RE: SSL Connections [doc PATCH]"
},
{
"msg_contents": "If this is a valid point, can someone send me a patch for it? Thanks.\n\n> It looks Ok, but it has one unnecessary step. There is no need to do the \"mv\n> privkey.pem cert.pem.pw\" if you just use \"privkey.pem\" in the following\n> openssl command (e.g. openssl rsa -in privkey.pem -out cert.pem\".\n> But there is nothing wrong with it as it is now, as far as I can see.\n> \n> \n> //Magnus\n> \n> \n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > Sent: den 21 december 2000 20:15\n> > To: Magnus Hagander\n> > Cc: 'Matthew Kirkwood'; '[email protected]'\n> > Subject: Re: [PATCHES] RE: SSL Connections [doc PATCH]\n> > \n> > \n> > I have applied an earlier patch to this file for SSL. Could you check\n> > the current tree and see how you like it?\n> > \n> > \n> > > Thanks for that one!\n> > > \n> > > Here is a patch to update the documentation based on this - \n> > this should make\n> > > it less dependant on the version of OpenSSL used.\n> > > \n> > > //Magnus\n> > > \n> > > \n> > > \n> > > > -----Original Message-----\n> > > > From: Matthew Kirkwood [mailto:[email protected]]\n> > > > Sent: den 21 december 2000 16:49\n> > > > To: Oliver Elphick\n> > > > Cc: [email protected]\n> > > > Subject: Re: [HACKERS] SSL Connections\n> > > > \n> > > > \n> > > > On Wed, 20 Dec 2000, Oliver Elphick wrote:\n> > > > \n> > > > > To create a quick self-signed certificate, use the CA.pl script\n> > > > > included in OpenSSL:\n> > > > > \n> > > > > CA.pl -newcert\n> > > > \n> > > > Or you can do it manually:\n> > > > \n> > > > openssl req -new -text -out cert.req (you will have to enter \n> > > > a password)\n> > > > mv privkey.pem cert.pem.pw\n> > > > openssl rsa -in cert.pem.pw -out cert.pem (this removes \n> > the password)\n> > > > openssl req -x509 -in cert.req -text -key cert.pem -out cert.cert\n> > > > \n> > > > Matthew.\n> > > > \n> > > \n> > \n> > [ Attachment, skipping... ]\n> > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, \n> > Pennsylvania 19026\n> > \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Jan 2001 21:19:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE: SSL Connections [doc PATCH]"
},
{
"msg_contents": "\nIs this resolved?\n\n> It looks Ok, but it has one unnecessary step. There is no need to do the \"mv\n> privkey.pem cert.pem.pw\" if you just use \"privkey.pem\" in the following\n> openssl command (e.g. openssl rsa -in privkey.pem -out cert.pem\".\n> But there is nothing wrong with it as it is now, as far as I can see.\n> \n> \n> //Magnus\n> \n> \n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > Sent: den 21 december 2000 20:15\n> > To: Magnus Hagander\n> > Cc: 'Matthew Kirkwood'; '[email protected]'\n> > Subject: Re: [PATCHES] RE: SSL Connections [doc PATCH]\n> > \n> > \n> > I have applied an earlier patch to this file for SSL. Could you check\n> > the current tree and see how you like it?\n> > \n> > \n> > > Thanks for that one!\n> > > \n> > > Here is a patch to update the documentation based on this - \n> > this should make\n> > > it less dependant on the version of OpenSSL used.\n> > > \n> > > //Magnus\n> > > \n> > > \n> > > \n> > > > -----Original Message-----\n> > > > From: Matthew Kirkwood [mailto:[email protected]]\n> > > > Sent: den 21 december 2000 16:49\n> > > > To: Oliver Elphick\n> > > > Cc: [email protected]\n> > > > Subject: Re: [HACKERS] SSL Connections\n> > > > \n> > > > \n> > > > On Wed, 20 Dec 2000, Oliver Elphick wrote:\n> > > > \n> > > > > To create a quick self-signed certificate, use the CA.pl script\n> > > > > included in OpenSSL:\n> > > > > \n> > > > > CA.pl -newcert\n> > > > \n> > > > Or you can do it manually:\n> > > > \n> > > > openssl req -new -text -out cert.req (you will have to enter \n> > > > a password)\n> > > > mv privkey.pem cert.pem.pw\n> > > > openssl rsa -in cert.pem.pw -out cert.pem (this removes \n> > the password)\n> > > > openssl req -x509 -in cert.req -text -key cert.pem -out cert.cert\n> > > > \n> > > > Matthew.\n> > > > \n> > > \n> > \n> > [ Attachment, skipping... ]\n> > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, \n> > Pennsylvania 19026\n> > \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 Jan 2001 23:53:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE: SSL Connections [doc PATCH]"
},
{
"msg_contents": "\nAgain, is this something that needs fixing? Just a YES or NO is all I\nneed.\n\n\n\n> It looks Ok, but it has one unnecessary step. There is no need to do the \"mv\n> privkey.pem cert.pem.pw\" if you just use \"privkey.pem\" in the following\n> openssl command (e.g. openssl rsa -in privkey.pem -out cert.pem\".\n> But there is nothing wrong with it as it is now, as far as I can see.\n> \n> \n> //Magnus\n> \n> \n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > Sent: den 21 december 2000 20:15\n> > To: Magnus Hagander\n> > Cc: 'Matthew Kirkwood'; '[email protected]'\n> > Subject: Re: [PATCHES] RE: SSL Connections [doc PATCH]\n> > \n> > \n> > I have applied an earlier patch to this file for SSL. Could you check\n> > the current tree and see how you like it?\n> > \n> > \n> > > Thanks for that one!\n> > > \n> > > Here is a patch to update the documentation based on this - \n> > this should make\n> > > it less dependant on the version of OpenSSL used.\n> > > \n> > > //Magnus\n> > > \n> > > \n> > > \n> > > > -----Original Message-----\n> > > > From: Matthew Kirkwood [mailto:[email protected]]\n> > > > Sent: den 21 december 2000 16:49\n> > > > To: Oliver Elphick\n> > > > Cc: [email protected]\n> > > > Subject: Re: [HACKERS] SSL Connections\n> > > > \n> > > > \n> > > > On Wed, 20 Dec 2000, Oliver Elphick wrote:\n> > > > \n> > > > > To create a quick self-signed certificate, use the CA.pl script\n> > > > > included in OpenSSL:\n> > > > > \n> > > > > CA.pl -newcert\n> > > > \n> > > > Or you can do it manually:\n> > > > \n> > > > openssl req -new -text -out cert.req (you will have to enter \n> > > > a password)\n> > > > mv privkey.pem cert.pem.pw\n> > > > openssl rsa -in cert.pem.pw -out cert.pem (this removes \n> > the password)\n> > > > openssl req -x509 -in cert.req -text -key cert.pem -out cert.cert\n> > > > \n> > > > Matthew.\n> > > > \n> > > \n> > \n> > [ Attachment, skipping... ]\n> > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, \n> > Pennsylvania 19026\n> > \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 09:46:48 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] RE: SSL Connections [doc PATCH]"
},
{
"msg_contents": "\nChange made.\n\n> It looks Ok, but it has one unnecessary step. There is no need to do the \"mv\n> privkey.pem cert.pem.pw\" if you just use \"privkey.pem\" in the following\n> openssl command (e.g. openssl rsa -in privkey.pem -out cert.pem\".\n> But there is nothing wrong with it as it is now, as far as I can see.\n> \n> \n> //Magnus\n> \n> \n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > Sent: den 21 december 2000 20:15\n> > To: Magnus Hagander\n> > Cc: 'Matthew Kirkwood'; '[email protected]'\n> > Subject: Re: [PATCHES] RE: SSL Connections [doc PATCH]\n> > \n> > \n> > I have applied an earlier patch to this file for SSL. Could you check\n> > the current tree and see how you like it?\n> > \n> > \n> > > Thanks for that one!\n> > > \n> > > Here is a patch to update the documentation based on this - \n> > this should make\n> > > it less dependant on the version of OpenSSL used.\n> > > \n> > > //Magnus\n> > > \n> > > \n> > > \n> > > > -----Original Message-----\n> > > > From: Matthew Kirkwood [mailto:[email protected]]\n> > > > Sent: den 21 december 2000 16:49\n> > > > To: Oliver Elphick\n> > > > Cc: [email protected]\n> > > > Subject: Re: [HACKERS] SSL Connections\n> > > > \n> > > > \n> > > > On Wed, 20 Dec 2000, Oliver Elphick wrote:\n> > > > \n> > > > > To create a quick self-signed certificate, use the CA.pl script\n> > > > > included in OpenSSL:\n> > > > > \n> > > > > CA.pl -newcert\n> > > > \n> > > > Or you can do it manually:\n> > > > \n> > > > openssl req -new -text -out cert.req (you will have to enter \n> > > > a password)\n> > > > mv privkey.pem cert.pem.pw\n> > > > openssl rsa -in cert.pem.pw -out cert.pem (this removes \n> > the password)\n> > > > openssl req -x509 -in cert.req -text -key cert.pem -out cert.cert\n> > > > \n> > > > Matthew.\n> > > > \n> > > \n> > \n> > [ Attachment, skipping... ]\n> > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, \n> > Pennsylvania 19026\n> > \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 10:19:18 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE: SSL Connections [doc PATCH]"
}
]
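The certificate recipe quoted above can be shortened along the lines Magnus suggests, since the key file never needs to be renamed. A sketch only, assuming a stock OpenSSL installation and that these file names suit your setup:

    # Generate a certificate request; OpenSSL writes the new key to
    # privkey.pem and prompts for a pass phrase.
    openssl req -new -text -out cert.req
    # Strip the pass phrase, reading privkey.pem directly -- no "mv" step.
    openssl rsa -in privkey.pem -out cert.pem
    # Turn the request into a self-signed certificate.
    openssl req -x509 -in cert.req -text -key cert.pem -out cert.cert

The resulting key and certificate then presumably get installed under the names the server expects (server.key and server.crt in the 7.1 data directory, per the patched documentation).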
|
[
{
"msg_contents": "Hello,\n\nMerry Christmass and Happy New Year 2001 ;)\n\nR. \"BoBsoN\" Partyka\n\n\n",
"msg_date": "Fri, 22 Dec 2000 14:41:59 +0100 (CET)",
"msg_from": "Partyka Robert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Merry X-Mass"
},
{
"msg_contents": "Little early aren't you?\n\nselect now()::date gives me 2000-12-22\n\nHmm.. only one digit is odd.\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the\ntruth, and what really happened.\n----- Original Message -----\nFrom: \"Partyka Robert\" <[email protected]>\nTo: <[email protected]>\nCc: <[email protected]>\nSent: Friday, December 22, 2000 8:41 AM\nSubject: [HACKERS] Merry X-Mass\n\n\n> Hello,\n>\n> Merry Christmass and Happy New Year 2001 ;)\n>\n> R. \"BoBsoN\" Partyka\n>\n>\n>\n\n",
"msg_date": "Fri, 22 Dec 2000 08:53:41 -0500",
"msg_from": "\"Rod Taylor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Merry X-Mass"
},
{
"msg_contents": "\nOn Fri, 22 Dec 2000, Rod Taylor wrote:\n>\n> Little early aren't you?\n> \n\nI live from town (and this meen no internet access) today \nand when I back will be the XXI century so its last\nchance to wish You all mery xmass and happy new year ;)\n\nSo have a good party at night 31.12.2000, dont drink to much ;))) if You\nwant to remember how the XX millennium ends ;))))\n\n\nBoBsoN\n\n\n",
"msg_date": "Fri, 22 Dec 2000 16:15:26 +0100 (CET)",
"msg_from": "Partyka Robert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Merry X-Mass"
},
{
"msg_contents": "And beware the Y2K + 1 bug. :)\n\nAdam Lang\nSystems Engineer\nRutgers Casualty Insurance Company\nhttp://www.rutgersinsurance.com\n----- Original Message ----- \nFrom: \"Partyka Robert\" <[email protected]>\nTo: \"Rod Taylor\" <[email protected]>\nCc: <[email protected]>; <[email protected]>\nSent: Friday, December 22, 2000 10:15 AM\nSubject: [GENERAL] Re: [HACKERS] Merry X-Mass\n\n\n> \n> On Fri, 22 Dec 2000, Rod Taylor wrote:\n> >\n> > Little early aren't you?\n> > \n> \n> I live from town (and this meen no internet access) today \n> and when I back will be the XXI century so its last\n> chance to wish You all mery xmass and happy new year ;)\n> \n> So have a good party at night 31.12.2000, dont drink to much ;))) if You\n> want to remember how the XX millennium ends ;))))\n> \n> \n> BoBsoN\n> \n\n",
"msg_date": "Fri, 22 Dec 2000 10:30:49 -0500",
"msg_from": "\"Adam Lang\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] Merry X-Mass"
}
]
|
[
{
"msg_contents": "It no longer seems to be possible to refer to a table, which is an\nancestor of any other, in a referential integrity constraint.\n\nIn this example, \"person\" is the ancestor of several other tables:\n\n\nbray=# create table junk (id char(10) constraint junk_id_person references \nperson*(id));\nERROR: parser: parse error at or near \"*\"\nbray=# create table junk (id char(10) constraint junk_id_person references \nonly person(id));\nERROR: parser: parse error at or near \"only\"\nbray=# create table junk (id char(10) constraint junk_id_person references \nperson(id));\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nCREATE\nbray=# insert into junk values ('aa');\nERROR: SELECT FOR UPDATE is not supported for inherit queries\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"And there were in the same country shepherds abiding \n in the field, keeping watch over their flock by night.\n And, lo, the angel of the Lord came upon them, and the\n glory of the Lord shone around them; and they were \n sore afraid. And the angel said unto them, \" Fear not;\n for behold I bring you good tidings of great joy which\n shall be to all people. For unto you is born this day \n in the city of David a Saviour, which is Christ the \n Lord.\" Luke 2:8-11 \n\n\n",
"msg_date": "Fri, 22 Dec 2000 17:20:19 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RI problem with inherited table"
},
{
"msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> It no longer seems to be possible to refer to a table, which is an\n> ancestor of any other, in a referential integrity constraint.\n\n> bray=# create table junk (id char(10) constraint junk_id_person references \n> person(id));\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> CREATE\n> bray=# insert into junk values ('aa');\n> ERROR: SELECT FOR UPDATE is not supported for inherit queries\n\nHm. The short-term answer seems to be to modify the queries generated\nby the RI triggers to say \"ONLY foo\". I am not sure whether we\nunderstand the semantics involved in allowing a REFERENCES target to be\ntaken as an inheritance tree rather than just one table, but certainly\nthe current implementation won't handle that correctly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Dec 2000 12:52:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RI problem with inherited table "
}
]
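The error arises because the RI trigger locks the candidate row in the referenced table, and row locks do not work across an inheritance tree. Roughly the two query shapes involved (the exact internal trigger query text here is an assumption):

    -- What the trigger's check effectively does today; person has
    -- children, so the inherited scan makes FOR UPDATE fail:
    SELECT 1 FROM person WHERE id = 'aa' FOR UPDATE;

    -- Tom's short-term fix restricts the check to the parent table itself:
    SELECT 1 FROM ONLY person WHERE id = 'aa' FOR UPDATE;

Note that with ONLY, a key that exists only in a child table would no longer satisfy the constraint -- which is exactly the unsettled semantic question about REFERENCES targets as inheritance trees.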
|
[
{
"msg_contents": "The lack of a permissions check for creating a child table means that\nin current sources, any user can inject data of his choosing into\nanother user's tables. Example:\n\nUser A:\n\nregression=> create table foo (f1 text);\nCREATE\nregression=> insert into foo values ('good data');\nINSERT 271570 1\n\nUser B:\n\nregression=> create table foohack () inherits (foo);\nCREATE\nregression=> insert into foohack values ('you have been hacked!');\nINSERT 271598 1\n\nNow User A sees:\n\nregression=> select * from foo;\n f1\n-----------------------\n good data\n you have been hacked!\n(2 rows)\n\nUser A can only avoid this trap by being very careful to specify ONLY\nin every query. If he *intends* to use foo as an inheritance tree\nmaster, then that cure doesn't work either.\n\nJust to add insult to injury, user A is now unable to drop table foo.\nHe'll also get permission failures from commands like \"UPDATE foo ...\"\n\nI suppose a proper fix would involve adding a new permission type \"can\nmake child tables\", but I don't want to mess with that at the moment.\nFor 7.1, I propose that we only allow creation of child tables to the\nowner of the parent table.\n\nComments?\n\n\t\t\tregards, tom lane\n\nPS: another interesting problem: create a temp table, then create a\nnon-temp table that inherits from it. Unhappiness ensues when you\nend your session. Need to prohibit this combination, I think.\n",
"msg_date": "Fri, 22 Dec 2000 14:00:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inheritance is a security loophole!"
},
{
"msg_contents": "> I suppose a proper fix would involve adding a new permission type \"can\n> make child tables\", but I don't want to mess with that at the moment.\n> For 7.1, I propose that we only allow creation of child tables to the\n> owner of the parent table.\n\nI see no reason people would be inheriting from other people's tables. \nLet's disable it.\n\n> PS: another interesting problem: create a temp table, then create a\n> non-temp table that inherits from it. Unhappiness ensues when you\n> end your session. Need to prohibit this combination, I think.\n\nClear example where mixing features causes strange behavour. Part of\nthe UNION/TEMPORARY/subquery/aggregate/inheritance/rule/view/array mix.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Dec 2000 15:01:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance is a security loophole!"
}
]
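Until child-table creation is restricted, ONLY is user A's only read-side defense, as the message notes. A sketch of the difference:

    SELECT * FROM foo;        -- scans foo plus children; 'you have been hacked!' shows up
    SELECT * FROM ONLY foo;   -- scans just foo itself; only 'good data' comes back

And, as the message also says, this is no help at all if foo is meant to be the root of a legitimate inheritance tree.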
|
[
{
"msg_contents": "Hi !!\nI'm using PostgreSQL 7.0.2 with Linux Mandrake 7.2, and not know as send\ngeneral data type or BLOB's to my win32 ODBC driver (PostODBC) from Visual\nFox Pro.\n\nIf yours have examples or source code of apps that sending/receive big\nbinaries data via ODBC, write me !!..\n\nHelp me friends !!\n\n\n",
"msg_date": "Fri, 22 Dec 2000 14:12:24 -0500",
"msg_from": "\"Wilderman Ceren\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "as i work BLOB's from PostODBC"
}
]
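No answer appears in this thread, but one plausible server-side route for binary data in that era is PostgreSQL's large-object interface; a sketch only, with a hypothetical table, and note that lo_import/lo_export read and write files on the *server*:

    -- Hypothetical table; the data column holds a large-object OID.
    CREATE TABLE images (name text, data oid);
    -- Import a server-side file as a large object.
    INSERT INTO images VALUES ('logo', lo_import('/tmp/logo.gif'));
    -- Export the stored object back to a server-side file.
    SELECT lo_export(data, '/tmp/logo_out.gif') FROM images WHERE name = 'logo';

How cleanly the Win32 ODBC driver of the time exposes large objects to Visual FoxPro is a separate question.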
|
[
{
"msg_contents": "PostgreSQL 7.0.3 on sparc-sun-solaris2.6, compiled by gcc 2.95.2\n\ndrop table tryam;\ncreate table tryam (x text);\ninsert into tryam values ('22');\ninsert into tryam values ('1');\n\ntest=# select x from tryam union select x from tryam;\n x \n----\n 22\n 1\n(2 rows)\n\ntest=# select x from tryam union select x from tryam order by x;\n x \n----\n 1\n 22\n(2 rows)\n\ntest=# select x from tryam union select x from tryam order by length(x);\n x \n----\n 1\n 22\n 1\n 22\n(4 rows)\n\n???\n\nVadim\n",
"msg_date": "Fri, 22 Dec 2000 11:36:16 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.0.3: order by func in union"
}
]
|
[
{
"msg_contents": "> PostgreSQL 7.0.3 on sparc-sun-solaris2.6, compiled by gcc 2.95.2\n...\n> test=# select x from tryam union select x from tryam order by \n> length(x);\n> x \n> ----\n> 1\n> 22\n> 1\n> 22\n> (4 rows)\n\n7.1current:\n\nvac=# select x from tryam union select x from tryam order by length(x);\nERROR: Attribute 'x' not found\n\nVadim\n",
"msg_date": "Fri, 22 Dec 2000 12:14:08 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.1current: order by func in union"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> vac=# select x from tryam union select x from tryam order by length(x);\n> ERROR: Attribute 'x' not found\n\nThe UNION code doesn't support ordering by resjunk attributes, and never\nhas (but 7.1 at least knows it can't do it ;-)). Something to fix when\nwe redesign querytrees.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Dec 2000 16:53:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.1current: order by func in union "
}
]
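Since neither 7.0 nor 7.1 can order a UNION by an expression that is not an output column, one workaround is to make the expression a real output column, which ORDER BY on a union can reference. A sketch:

    SELECT x, length(x) AS len FROM tryam
    UNION
    SELECT x, length(x) FROM tryam
    ORDER BY len;

The extra column changes the result shape, of course, so whether this is acceptable depends on the caller.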
|
[
{
"msg_contents": "What is the status of the genetic algorithm query optimizer? Is this\nsupposed to work well on many-table joins, or has it fallen out of favor\nor in disrepair? [I'm needing to optimize some large, many-table-join\nqueries and wondering time spent configuring/understanding geqo would be\nfruitful...]\n\nRegards,\nEd Loehr\n",
"msg_date": "Fri, 22 Dec 2000 16:46:26 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "GEQO status?"
},
{
"msg_contents": "Ed Loehr <[email protected]> writes:\n> What is the status of the genetic algorithm query optimizer? Is this\n> supposed to work well on many-table joins, or has it fallen out of favor\n> or in disrepair?\n\nIt's supposed to work ;-). I'm not sure that the default parameters are\noptimal, however. If you experiment with other settings, please post your\nresults.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Dec 2000 18:08:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GEQO status? "
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > What is the status of the genetic algorithm query optimizer? Is this\n> > supposed to work well on many-table joins, or has it fallen out of favor\n> > or in disrepair? [I'm needing to optimize some large, many-table-join\n> > queries and wondering time spent configuring/understanding geqo would be\n> > fruitful...]\n> \n> It is the only techique we have to achieve adequate performance on\n> many-table joins. It has received little work recently, but that may be\n> due to having received no complaints or discussions that I can recall.\n\nI'm having some trouble, not sure its related to GEQO. Is there a\nPGOPTIONS flag to turn it off to attempt isolate the problem?\n\nEd\n",
"msg_date": "Fri, 22 Dec 2000 17:10:47 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GEQO status?"
},
{
"msg_contents": "> What is the status of the genetic algorithm query optimizer? Is this\n> supposed to work well on many-table joins, or has it fallen out of favor\n> or in disrepair? [I'm needing to optimize some large, many-table-join\n> queries and wondering time spent configuring/understanding geqo would be\n> fruitful...]\n\nIt is the only techique we have to achieve adequate performance on\nmany-table joins. It has received little work recently, but that may be\ndue to having received no complaints or discussions that I can recall.\n\n - Thomas\n",
"msg_date": "Fri, 22 Dec 2000 23:14:25 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GEQO status?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Ed Loehr <[email protected]> writes:\n> > What is the status of the genetic algorithm query optimizer? Is this\n> > supposed to work well on many-table joins, or has it fallen out of favor\n> > or in disrepair?\n> \n> It's supposed to work ;-). I'm not sure that the default parameters are\n> optimal, however. If you experiment with other settings, please post your\n> results.\n\nQuery time dropped from many minutes to 13 seconds on a 12-table join\nwith a little tweaking from the default params:\n\nMy $PGDATA/pg_geqo:\n-------------------\nPool_Size 1024\n# Effort high\nGenerations 100\nRandom_Seed 330418761\nSelection_Bias 2.00\n\nSimilar performance with Generations setting of 800 derived from Effort.\n\nRegards,\nEd Loehr\n",
"msg_date": "Fri, 22 Dec 2000 17:39:49 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GEQO status?"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > What is the status of the genetic algorithm query optimizer? Is this\n> > supposed to work well on many-table joins, or has it fallen out of favor\n> > or in disrepair? [I'm needing to optimize some large, many-table-join\n> > queries and wondering time spent configuring/understanding geqo would be\n> > fruitful...]\n> \n> It is the only techique we have to achieve adequate performance on\n> many-table joins. It has received little work recently, but that may be\n> due to having received no complaints or discussions that I can recall.\n\nAt risk of being off-topic here, is there a reason why GEQO is off by\ndefault in the ODBC driver (postdrv.exe)? I vaguely recall something\nabout this from a year ago, but can't find it.\n\nRegards,\nEd Loehr\n",
"msg_date": "Fri, 22 Dec 2000 17:53:34 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GEQO status?"
},
{
"msg_contents": "Ed Loehr <[email protected]> writes:\n> is there a reason why GEQO is off by\n> default in the ODBC driver (postdrv.exe)?\n\nThere may once have been a good reason for that, but it sounds like a\nmighty bad idea nowadays.\n\nAFAICT ODBC's default setting has been that way for as long as ODBC has\nbeen in our CVS tree, so no way to know who chose to do that, when, or\nwhy.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Dec 2000 19:04:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GEQO status? "
},
{
"msg_contents": "Ed Loehr writes:\n\n> What is the status of the genetic algorithm query optimizer? Is this\n> supposed to work well on many-table joins, or has it fallen out of favor\n> or in disrepair? [I'm needing to optimize some large, many-table-join\n> queries and wondering time spent configuring/understanding geqo would be\n> fruitful...]\n\nI've seen a number of bug reports that would indicate to me the GEQO works\nless than perfectly. I vividly recall how, while working on my own code,\nmere additions of dummy clauses like '... AND 5=5' altered query results\nin seemingly random ways. That was admittedly quite a while ago, but the\nGEQO code hasn't changed since. I'd advise you to be *very* careful.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 23 Dec 2000 01:30:40 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GEQO status?"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I've seen a number of bug reports that would indicate to me the GEQO works\n> less than perfectly. I vividly recall how, while working on my own code,\n> mere additions of dummy clauses like '... AND 5=5' altered query results\n> in seemingly random ways.\n\nThe choices made by GEQO are intentionally random, so I would expect\nvariation in tuple output order even for repetitions of the identical\nquery. If you got a semantically different result, that would indeed\nbe a bug. But it would most likely be a bug in the core planner, since\nGEQO has essentially no influence over whether the produced plan is\ncorrect or not. GEQO merely forces specific choices of join order.\nAll else is in the core planner.\n\n> That was admittedly quite a while ago, but the\n> GEQO code hasn't changed since.\n\nThe planner has changed quite markedly over the past couple releases,\nso I don't put a lot of stock in old anecdotes. Let's see a test case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Dec 2000 19:51:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GEQO status? "
},
{
"msg_contents": "Ed Loehr <[email protected]> writes:\n> You can remove the randomness by setting the Seed configuration value,\n\nTrue, but that's not the default setup.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Dec 2000 20:00:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GEQO status? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> The choices made by GEQO are intentionally random, so I would expect\n> variation in tuple output order even for repetitions of the identical\n> query. If you got a semantically different result, that would indeed\n> be a bug. But it would most likely be a bug in the core planner, since\n> GEQO has essentially no influence over whether the produced plan is\n> correct or not. GEQO merely forces specific choices of join order.\n> All else is in the core planner.\n\nYou can remove the randomness by setting the Seed configuration value, if\nthe docs are correct.\n\nRegards,\nEd Loehr\n",
"msg_date": "Fri, 22 Dec 2000 19:04:39 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GEQO status?"
}
]
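For isolating GEQO as discussed in this thread, the optimizer can also be toggled per session. The syntax below follows my reading of the 7.0-era SET documentation, so treat it as a sketch:

    SET GEQO TO 'off';     -- force the exhaustive planner for this session
    EXPLAIN SELECT ...;    -- compare the resulting plan
    SET GEQO TO 'on=11';   -- re-enable, kicking in at 11 or more tables

Comparing EXPLAIN output with GEQO on and off for the same query is usually enough to tell whether the genetic search is choosing poor join orders.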
|
[
{
"msg_contents": "Hi, I am using the following command to check out the 7.1 branch of\nPostgreSQL.\ncvs -d :pserver:[email protected]:/home/projects/pgsql/cvsroot co -r REL7_1 pgsql\n\nThis is the error I am getting.\ncvs [server aborted]: cannot write /home/projects/pgsql/cvsroot/CVSROOT/val-tags: Permission denied\n\nI can check out HEAD perfectly alright\n\nAnybody else seeing similar results ?\n\n-- \nYusuf Goolamabbas\[email protected]\n\n",
"msg_date": "Sat, 23 Dec 2000 07:30:14 +0800",
"msg_from": "Yusuf Goolamabbas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unable to check out REL7_1 via cvs"
},
{
"msg_contents": "* Yusuf Goolamabbas <[email protected]> [001222 15:34] wrote:\n> Hi, I am using the following command to check out the 7.1 branch of\n> PostgreSQL.\n> cvs -d :pserver:[email protected]:/home/projects/pgsql/cvsroot co -r REL7_1 pgsql\n> \n> This is the error I am getting.\n> cvs [server aborted]: cannot write /home/projects/pgsql/cvsroot/CVSROOT/val-tags: Permission denied\n> \n> I can check out HEAD perfectly alright\n> \n> Anybody else seeing similar results ?\n\nTry using \"cvs -Rq ...\" or just use CVSup it's (cvsup) a lot quicker.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Fri, 22 Dec 2000 15:36:50 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unable to check out REL7_1 via cvs"
},
{
"msg_contents": "Nope, no luck with cvs -Rq also. Me thinks its some repository\npermission issue. Don't know if CVSup would help either. I don't have\ncvsup installed on this machine. \n\n> * Yusuf Goolamabbas <[email protected]> [001222 15:34] wrote:\n> > Hi, I am using the following command to check out the 7.1 branch of\n> > PostgreSQL.\n> > cvs -d :pserver:[email protected]:/home/projects/pgsql/cvsroot co -r REL7_1 pgsql\n> > \n> > This is the error I am getting.\n> > cvs [server aborted]: cannot write /home/projects/pgsql/cvsroot/CVSROOT/val-tags: Permission denied\n> > \n> > I can check out HEAD perfectly alright\n> > \n> > Anybody else seeing similar results ?\n> \n> Try using \"cvs -Rq ...\" or just use CVSup it's (cvsup) a lot quicker.\n> \n> -- \n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \"I have the heart of a child; I keep it in a jar on my desk.\"\n> \n\n-- \nYusuf Goolamabbas\[email protected]\n",
"msg_date": "Sat, 23 Dec 2000 07:46:53 +0800",
"msg_from": "Yusuf Goolamabbas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unable to check out REL7_1 via cvs"
},
{
"msg_contents": "* Yusuf Goolamabbas <[email protected]> [001222 15:47] wrote:\n> Nope, no luck with cvs -Rq also. Me thinks its some repository\n> permission issue. Don't know if CVSup would help either. I don't have\n> cvsup installed on this machine. \n\nCVSup would work, that's what I use.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Fri, 22 Dec 2000 15:48:34 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unable to check out REL7_1 via cvs"
},
{
"msg_contents": "Use HEAD. REL7_1 is a tag, not a branch (and a misplaced tag at that,\nIMHO, since it's not the formal release or even close...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Dec 2000 18:54:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unable to check out REL7_1 via cvs "
},
{
"msg_contents": "Yusuf Goolamabbas writes:\n\n> Hi, I am using the following command to check out the 7.1 branch of\n> PostgreSQL.\n> cvs -d :pserver:[email protected]:/home/projects/pgsql/cvsroot co -r REL7_1 pgsql\n\nI don't think there is a 7.1 branch yet, there being no 7.1 release yet\neither.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 23 Dec 2000 01:11:13 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unable to check out REL7_1 via cvs"
}
]
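For anyone taking Alfred's CVSup suggestion, a supfile along these lines was the usual starting point; the host name and local paths here are assumptions, so check the current developer documentation before using it:

    # Illustrative CVSup configuration for the pgsql collection.
    *default host=cvsup.postgresql.org
    *default compress
    *default release=cvs tag=.
    *default delete use-rel-suffix
    *default base=/usr/local/cvsup
    *default prefix=/home/cvs/pgsql
    pgsql

As written (tag=.) this retrieves a checked-out copy of the development tip; dropping the tag mirrors the raw repository instead, from which any tag -- including REL7_1, once it becomes a real branch -- can be checked out locally with a plain cvs co.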
|