threads |
---|
[
{
"msg_contents": "> > > * Instantaneous crash recovery.\n> > \n> > Because this never worked reliable, Vadim is working on WAL.\n> \n> Postgres recovery is not reliable?\n\nYes. It's possible to lose indices...\nThere is paper how to avoid this, somewhere in Berkeley' site.\nAs I remember, additional TID per index tuple is required and\nthis is complex.\n\nVadim\n",
"msg_date": "Tue, 11 Jul 2000 11:53:09 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Storage Manager (was postgres 7.2 features.)"
}
] |
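
Vadim's point that crash recovery can lose indices has a practical footnote: the heap data survives, so a suspect index can simply be dropped and rebuilt. A minimal SQL sketch of that remedy, using index and table names borrowed from the Geocrawler thread further down this page (illustrative assumptions, not the actual objects under discussion):

```sql
-- Sketch only: rebuilding an index suspected of damage after a crash.
-- Identifiers are borrowed from a later thread on this page.
DROP INDEX idx_mail_archive_list_yr_mo;

CREATE INDEX idx_mail_archive_list_yr_mo
    ON tbl_mail_archive (fld_mail_list, fld_mail_year, fld_mail_month);
```
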
[
{
"msg_contents": "> > > Do you mean way back when it was removed? How was it \n> getting in the way?\n> > \n> > Yes. Every tuple had this time-thing that had to be tested. Vadim\n> > wanted to revove it to clear up the coding, and we all agreed.\n> \n> And did that save a lot of code?\n\nThis removed one fsync per commit and saved a lot of space.\n\nVadim\n",
"msg_date": "Tue, 11 Jul 2000 11:54:10 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: postgres 7.2 features."
}
] |
[
{
"msg_contents": "> > * It's always faster than WAL in the presence of stable main memory.\n> > (Whether the stable caches in modern disk drives is an \n> > approximation I don't know).\n> \n> Yes, only if you want to be able to log changes to a txlog to \n> be able to rollforward changes after a restore, then that benefit\n> is not so large any more.\n\nI'm going to implement rollback as well... Without compensation records...\n\n> > * It's more scalable and has less logging contention. This allows\n> > greater scalablility in the presence of multiple processors.\n> \n> Same as above, if you want a txlog you lose. \n> But if we could switch the txlog off then at least in that mode\n> we would keep all benefits of the non-overwrite smgr.\n\nWAL allows \"batch commit\" (log commit record, then sleep for ~1/200 sec,\nnow write/fsync log... if no one else didn't fsync-ed it already...)\n\n> > * Time travel is available at no cost.\n> \n> Yes. Yes, it also solves MVCC.\n\nNot with 1sec time grain...\nBut 6+6 bytes per tuple would be enough.\n\nVadim\n",
"msg_date": "Tue, 11 Jul 2000 12:06:35 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Storage Manager (was postgres 7.2 features.)"
}
] |
[
{
"msg_contents": "Welcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntest=# create table pg_test (x int);\nERROR: Illegal class name 'pg_test'\n\tThe 'pg_' name prefix is reserved for system catalogs\ntest=# \n\nYuck! :-)\n\nPerhaps a new relkind could be added to pg_class; say \"R\" for system\ntables?\n\nT.\n\n-- \nTimothy H. Keitt\nNational Center for Ecological Analysis and Synthesis\n735 State Street, Suite 300, Santa Barbara, CA 93101\nPhone: 805-892-2519, FAX: 805-892-2510\nhttp://www.nceas.ucsb.edu/~keitt/\n",
"msg_date": "Tue, 11 Jul 2000 13:46:15 -0700",
"msg_from": "\"Timothy H. Keitt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "system tables"
}
] |
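
For context, the only thing currently marking a catalog as a system table is the reserved `pg_` name prefix that triggers the error above; a relkind such as "R" would move that distinction into pg_class itself. A small sketch of the existing behaviour (plain catalog queries, no new syntax):

```sql
-- System catalogs are currently identified only by the reserved 'pg_' prefix:
SELECT relname, relkind
  FROM pg_class
 WHERE relname LIKE 'pg_%'
 ORDER BY relname;

-- A user table simply has to avoid that prefix:
CREATE TABLE test_pg (x int);
```
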
[
{
"msg_contents": "Well, I took Thomas' advice and upgraded to 7.0.2 from source.tar.gz. \n\nFor some reason, I cannot create the following index:\n\ndb_geocrawler=# DROP INDEX idx_mail_archive_list_subject;\nERROR: index \"idx_mail_archive_list_subject\" nonexistent\n\ndb_geocrawler=# CREATE INDEX \"idx_mail_archive_list_subject\" on\n\"tbl_mail_archive\" using btree ( \"fld_mail_\nlist\" \"int4_ops\", \"fld_mail_subject\" \"text_ops\" );\nERROR: cannot create idx_mail_archive_list_subject\n\n[root@geocrawler db_geocrawler]# rm -f idx_mail_archive_list_subject\n\nThat removes the physical file on disk, so I can then try to create it\nagain. If I then issue the SQL command, postgres accepts it and it runs\nforever, never creating more than an 8192 byte file.\n\n\n\nIf you watch your process list:\n\n[root@geocrawler db_geocrawler]# ps ax\n PID TTY STAT TIME COMMAND\n 457 ? SW 0:00 [postmaster]\n 1419 ? R 1:34 [postmaster]\n\nEventually, the psql connection disappears from the process list and I\nget strange postmaster processes running (see above).\n\n\nAfter that, I get this error from psql:\n\nERROR: btree: index item size 2820 exceeds maximum 2717\n\nAny way to tell where that item is at?\n\n\n\n From the pgserver.log file:\n\nDEBUG: Data Base System is starting up at Tue Jul 11 16:59:33 2000\nDEBUG: Data Base System was interrupted being in production at Tue Jul\n11 15:47:04 2000\nDEBUG: Data Base System is in production state at Tue Jul 11 16:59:33\n2000\n\n...Doesn't give me much to go on.\n\n\n\nI'm really at wits end - I've spent over two days trying to rebuild\nGeocrawler.\n\nNext step is reformatting the hard disk and reinstalling postgres 6.4.2.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Tue, 11 Jul 2000 18:51:08 -0500",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.0.2 issues / Geocrawler"
},
{
"msg_contents": "Tim Perdue wrote:\n> ...\n> After that, I get this error from psql:\n> \n> ERROR: btree: index item size 2820 exceeds maximum 2717\n> \n> Any way to tell where that item is at?\n\nI've been wondering at the state of the problems you've been\nhaving with PostgreSQL and wondering why I haven't experienced\nthe same. I think this may very well be it. Earlier versions of\nPostgreSQL allowed for the creation of indexes on fields whose\nlength would not permit at least 2 entries per index page. 95% of\nthe time, things would work fine. But 5% you would get corrupted\ndata.\n\nBefore creating the index:\n\nSELECT * FROM tbl_main_archive WHERE Length(fld_mail_subject) >\n2700;\n\nwill get you the list of records which cannot be indexed. You're\nattempting to create a multi-key index so I would truncate (or\ndelete) any record whose fld_mail_subject is > 2700:\n\nUPDATE tbl_main_archive SET fld_mail_subject =\nSubStr(fld_mail_subject, 1, 2700);\n\nAt this point, your index creation should be relatively quick\n(and successful) depending upon how many rows you have. I have a\nfew tables with ~2 million rows that take about 5 - 10 minutes\n(with fsync off, naturally) to index. I would also recommend\nletting PostgreSQL determine the correct \"ops\":\n\nCREATE INDEX idx_mail_archive_list_subject \nON tbl_mail_archive (fld_mail_list, fld_mail_subject);\n\nWithout following the lists every day, most people wouldn't know\nabout this issue. I'm surprised it took so long for PostgreSQL\n7.0.2 to bail on the index creation though. Is this a\nparticularly large table? At any rate, this is an example of a\nbug which *would* allow for the kinds of corruption you've seen\nin the past that has been addressed in 7.0.2, as Tom Lane crushed\nthem by the hundreds. If you can:\n\npsql db_geocrawler < 6_4dump.txt\n\nand it never bails, then you know all your data is \"clean\". Until\nthat point, any index you have on a \"text\" datatype is subject to\nsimilar problems. \n\nHope that helps,\n\nMike Mascari\n",
"msg_date": "Wed, 12 Jul 2000 03:16:32 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "This is a *big* help.\n\nYes, the table is approx 10-12GB in size and running your length() and\nupdate queries is going to take a lifetime, since it will require a\ncalculation on 4 million rows.\n\nThis doesn't address the serious performance problem I'm finding in\n7.0.2 for a multi-key select/order by/limit/offset query, which I sent\nin a separate email.\n\nTim\n\n\n\n\nMike Mascari wrote:\n> \n> Tim Perdue wrote:\n> > ...\n> > After that, I get this error from psql:\n> >\n> > ERROR: btree: index item size 2820 exceeds maximum 2717\n> >\n> > Any way to tell where that item is at?\n> \n> I've been wondering at the state of the problems you've been\n> having with PostgreSQL and wondering why I haven't experienced\n> the same. I think this may very well be it. Earlier versions of\n> PostgreSQL allowed for the creation of indexes on fields whose\n> length would not permit at least 2 entries per index page. 95% of\n> the time, things would work fine. But 5% you would get corrupted\n> data.\n> \n> Before creating the index:\n> \n> SELECT * FROM tbl_main_archive WHERE Length(fld_mail_subject) >\n> 2700;\n> \n> will get you the list of records which cannot be indexed. You're\n> attempting to create a multi-key index so I would truncate (or\n> delete) any record whose fld_mail_subject is > 2700:\n> \n> UPDATE tbl_main_archive SET fld_mail_subject =\n> SubStr(fld_mail_subject, 1, 2700);\n> \n> At this point, your index creation should be relatively quick\n> (and successful) depending upon how many rows you have. I have a\n> few tables with ~2 million rows that take about 5 - 10 minutes\n> (with fsync off, naturally) to index. I would also recommend\n> letting PostgreSQL determine the correct \"ops\":\n> \n> CREATE INDEX idx_mail_archive_list_subject\n> ON tbl_mail_archive (fld_mail_list, fld_mail_subject);\n> \n> Without following the lists every day, most people wouldn't know\n> about this issue. I'm surprised it took so long for PostgreSQL\n> 7.0.2 to bail on the index creation though. Is this a\n> particularly large table? At any rate, this is an example of a\n> bug which *would* allow for the kinds of corruption you've seen\n> in the past that has been addressed in 7.0.2, as Tom Lane crushed\n> them by the hundreds. If you can:\n> \n> psql db_geocrawler < 6_4dump.txt\n> \n> and it never bails, then you know all your data is \"clean\". Until\n> that point, any index you have on a \"text\" datatype is subject to\n> similar problems.\n> \n> Hope that helps,\n> \n> Mike Mascari\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Wed, 12 Jul 2000 04:27:51 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "Tim Perdue wrote:\n> \n> This is a *big* help.\n> \n> Yes, the table is approx 10-12GB in size and running your length() and\n> update queries is going to take a lifetime, since it will require a\n> calculation on 4 million rows.\n> \n> This doesn't address the serious performance problem I'm finding in\n> 7.0.2 for a multi-key select/order by/limit/offset query, which I sent\n> in a separate email.\n> \n> Tim\n\nIf I recall correctly, Marc experienced similar performance\ndifferences with UDM search after upgrading. The optimizer was\nredesigned to be smarter about using indexes with both order by\nand limit. Tom Lane, of course, knows all there is to know on\nthis. All I can ask is standard issue precursor to optimizer\nquestions:\n\nHave you VACUUM ANALYZE'd the table(s) in question?\n\nIf so, hopefully Tom Lane can comment.\n\nSorry I couldn't be more help, \n\nMike Mascari\n",
"msg_date": "Wed, 12 Jul 2000 07:34:13 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "Mike Mascari wrote:\n> Have you VACUUM ANALYZE'd the table(s) in question?\n\nYes, they've been vacuum analyze'd and re-vaccum analyze'd to death.\nAlso added some extra indexes that I don't really need just to see if\nthat helps.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Wed, 12 Jul 2000 06:17:23 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "On Wed, Jul 12, 2000 at 06:17:23AM -0700, Tim Perdue wrote:\n> Mike Mascari wrote:\n> > Have you VACUUM ANALYZE'd the table(s) in question?\n> \n> Yes, they've been vacuum analyze'd and re-vaccum analyze'd to death.\n> Also added some extra indexes that I don't really need just to see if\n> that helps.\n\nTim, why are you building a multikey index, especially one containing a \nlarge text field? It's almost never a win to index a text field, unless\nall the WHERE clauses that use it are either anchored to the beginning\nof the field, or are equality tests (in which case, the field is really\nan enumerated type, masquerading as a text field)\n\nA multikey index is only useful for a very limited set of queries. Here's\na message from last August, where Tom Lane talks about that:\n\nhttp://www.postgresql.org/mhonarc/pgsql-sql/1999-08/msg00145.html\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Wed, 12 Jul 2000 10:00:08 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n> Tim, why are you building a multikey index, especially one containing a\n> large text field? It's almost never a win to index a text field, unless\n\nThis is not a key on a text field.\n\nThe keys are:\n\nmail_list (example, the PHP mailing list=1)\nmail_year (1999)\nmail_month (July=7)\n\nYes it is a multi-key index, and the matches are exact.\n\nSomeone else asked why I have separated these fields out from the\nmail_date.\n\nIf I didn't, and I wanted to see the messages for this month, I'd have\nto regex and that would overwhelm the database.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Wed, 12 Jul 2000 08:14:29 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n> \n> On Wed, Jul 12, 2000 at 06:17:23AM -0700, Tim Perdue wrote:\n> > Mike Mascari wrote:\n> > > Have you VACUUM ANALYZE'd the table(s) in question?\n> >\n> > Yes, they've been vacuum analyze'd and re-vaccum analyze'd to death.\n> > Also added some extra indexes that I don't really need just to see if\n> > that helps.\n> \n> Tim, why are you building a multikey index, especially one containing a\n> large text field? It's almost never a win to index a text field, unless\n> all the WHERE clauses that use it are either anchored to the beginning\n> of the field, or are equality tests (in which case, the field is really\n> an enumerated type, masquerading as a text field)\n> \n> A multikey index is only useful for a very limited set of queries. Here's\n> a message from last August, where Tom Lane talks about that:\n> \n> http://www.postgresql.org/mhonarc/pgsql-sql/1999-08/msg00145.html\n\nI think Tim had 2 problems. The first was tuples whose text\nattributes did not permit two on the same index page. The second,\nhowever, is that a query against the *same schema* under 6.x now\nruns slower by a factor of 15 under 7.x:\n\n\"The following query is at the very heart of the site and it\ntakes\nupwards of 15-20 seconds to run now. It used to be instantaneous.\n\nexplain SELECT mailid, mail_date, mail_is_followup, mail_from,\nmail_subject \n FROM mail_archive WHERE mail_list=35 AND mail_year=2000\n AND mail_month=1 ORDER BY mail_date DESC LIMIT 26 OFFSET 0;\n\nNOTICE: QUERY PLAN:\n\nSort (cost=138.41..138.41 rows=34 width=44)\n -> Index Scan using idx_mail_archive_list_yr_mo on\ntbl_mail_archive \n(cost=0.00..137.55 rows=34 width=44)\n\nEXPLAIN\"\n\nEven though he's using a mult-key index here, it is composed\nentirely of integer fields. Its reducing to a simple index scan +\nsort, so I don't see how the performance could drop off so\ndramatically. Perhaps if we could see the EXPLAIN output with the\nsame query against the 6.x database we could see what's going on.\n\n \nMike Mascari\n",
"msg_date": "Wed, 12 Jul 2000 11:20:34 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "On Wed, 12 Jul 2000, Mike Mascari wrote:\n\n> Tim Perdue wrote:\n> > \n> > This is a *big* help.\n> > \n> > Yes, the table is approx 10-12GB in size and running your length() and\n> > update queries is going to take a lifetime, since it will require a\n> > calculation on 4 million rows.\n> > \n> > This doesn't address the serious performance problem I'm finding in\n> > 7.0.2 for a multi-key select/order by/limit/offset query, which I sent\n> > in a separate email.\n> > \n> > Tim\n> \n> If I recall correctly, Marc experienced similar performance\n> differences with UDM search after upgrading. The optimizer was\n> redesigned to be smarter about using indexes with both order by\n> and limit. Tom Lane, of course, knows all there is to know on\n> this. All I can ask is standard issue precursor to optimizer\n> questions:\n\nit was a problem with v7.0 that Tom provided a work around for, but I'm\n99% certain that the work around was included in v7.0.1 ...\n\n\n",
"msg_date": "Wed, 12 Jul 2000 13:06:36 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "On Wed, 12 Jul 2000, Tim Perdue wrote:\n\n> Mike Mascari wrote:\n> > Have you VACUUM ANALYZE'd the table(s) in question?\n> \n> Yes, they've been vacuum analyze'd and re-vaccum analyze'd to death.\n> Also added some extra indexes that I don't really need just to see if\n> that helps.\n\nwhat does EXPLAIN <query>; show and what is the QUERY itself that is so\nslow?\n\n\n",
"msg_date": "Wed, 12 Jul 2000 13:07:10 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "On Wed, 12 Jul 2000, Tim Perdue wrote:\n\n> \"Ross J. Reedstrom\" wrote:\n> > Tim, why are you building a multikey index, especially one containing a\n> > large text field? It's almost never a win to index a text field, unless\n> \n> This is not a key on a text field.\n> \n> The keys are:\n> \n> mail_list (example, the PHP mailing list=1)\n> mail_year (1999)\n> mail_month (July=7)\n> \n> Yes it is a multi-key index, and the matches are exact.\n> \n> Someone else asked why I have separated these fields out from the\n> mail_date.\n> \n> If I didn't, and I wanted to see the messages for this month, I'd have\n> to regex and that would overwhelm the database.\n\nif you did it as a proper date field, you can use stuff like 'date_part'\nand 'date_trunc' to pull out a particular month, year, etc ...\n\n\n",
"msg_date": "Wed, 12 Jul 2000 13:08:31 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "On Wed, Jul 12, 2000 at 08:14:29AM -0700, Tim Perdue wrote:\n> \"Ross J. Reedstrom\" wrote:\n> > Tim, why are you building a multikey index, especially one containing a\n> > large text field? It's almost never a win to index a text field, unless\n> \n> This is not a key on a text field.\n> \n\nAh, I see, I had merged the two problems you reported together. I see now\nthat the 'can't create index' problem was on a different index. \n\nMike Mascari gave you a detailed answer to that, which you seemd to just blow\noff, based on you guesstimate that it would run too long:\n\n> This is a *big* help.\n> \n> Yes, the table is approx 10-12GB in size and running your length() and\n> update queries is going to take a lifetime, since it will require a\n> calculation on 4 million rows.\n\nMike mentioned that he's run similar index creations on 2 million rows,\nand it took 5-10 minutes. I reiterate: you've got a long subject that\ntripped a bug in index creation in postgresql versions < 7.0. Give his\nsolution a try. It's a 'clean it up once' sort of thing: I don't think\nanyone's going to complain about the subject getting trimmed at ~ 2k.\n\n> The keys are:\n> \n> mail_list (example, the PHP mailing list=1)\n> mail_year (1999)\n> mail_month (July=7)\n> \n> Yes it is a multi-key index, and the matches are exact.\n> \n\nRight, as your explain output showed: the planner is picking this index\nand using it. I'd guess that your time is getting lost in the sort step.\nI seem to recall that Tom reworked the sort code as well, to reduce the\nsize of temporary sort files: perhaps you've found a corner case that is\nmuch slower.\n\nDo you still have the 6.X install available? EXPLAIN output from that\nwould be useful.\n\n> Someone else asked why I have separated these fields out from the\n> mail_date.\n> \n> If I didn't, and I wanted to see the messages for this month, I'd have\n> to regex and that would overwhelm the database.\n\nThat's what the date_part function is for:\n\nreedstrm=# select now();\n now \n------------------------\n 2000-07-12 11:03:11-05\n(1 row)\n\nreedstrm=# select date_part('month', now());\n date_part \n-----------\n 7\n(1 row)\n\nreedstrm=# select date_part('year', now());\n date_part \n-----------\n 2000\n(1 row)\n\nSo your query would look like:\n\n\nSELECT mailid, mail_date, mail_is_followup, mail_from, mail_subject FROM\nmail_archive WHERE mail_list=35 AND date_part('year',mail_date)=2000 AND\ndate_part('month',mail_date)=1 ORDER BY mail_date DESC LIMIT 26 OFFSET 0;\n\nYou can even build functional indices. However, since you're selecting\nand sorting based on the same attribute, the time of the message, it\nshould be possible to build an index on mail_date, and construct a SELECT\nthat uses it for ordering as well as limiting the tuples returned.\n\n\nYou're generating the queries programmatically, from a scripting language,\nright? So, the best thing would be if you could create a query that\nlooks something like:\n\n SELECT mailid, mail_date, mail_is_followup, mail_from, mail_subject\n FROM mail_archive WHERE mail_list=35 AND mail_date >= 'January 1, 2000'\n AND mail_date < 'February 1, 2000' ORDER BY mail_date DESC LIMIT 26 OFFSET 0;\n\n\nWith an index on mail_date, that should do a single index scan, returning\nthe first 26, and stop. I'd bet a lot that it's the sort that's killing\nyou, since the backend has to retrieve the entire result set and sort\nit to be sure it returns the first 26.\n\nYou might be able to use a two key index, on mail_date, mailid. 
I think\nyou have to be careful to put key you want sorted output on first,\nto ensure that the index order is presorted, and the planner know it.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n",
"msg_date": "Wed, 12 Jul 2000 11:36:44 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "On Wed, Jul 12, 2000 at 11:36:44AM -0500, Ross J. Reedstrom wrote:\n> \n> You might be able to use a two key index, on mail_date, mailid. I think\n> you have to be careful to put key you want sorted output on first,\n> to ensure that the index order is presorted, and the planner know it.\n\nBah, I clearly need lunch: that last sentence, with better grammar:\n\n [...] be careful to put the key you want output sorted on first,\n to ensure that the index order is presorted, and that the planner\n knows it.\n \nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Wed, 12 Jul 2000 11:48:24 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n> Mike Mascari gave you a detailed answer to that, which you seemd to just blow\n> off, based on you guesstimate that it would run too long:\n\nThat is a separate issue - unrelated to this performance issue and it\nwas not \"blown\" off, I was merely making a comment.\n\n> Right, as your explain output showed: the planner is picking this index\n> and using it. I'd guess that your time is getting lost in the sort step.\n\nI think you're probably right. It's hard to imagine that sorting is that\nmuch slower, but it's hard to say.\n\nYour ideas for selecting based on the date are intriguing, however the\nschema of the db was not done with that in mind. Everyone thinks I'm a\nnut when I say this, but the date is stored in a char(14) field in\ngregorian format: 19990101125959\n\nSo perhaps sorting a char(14) field is somehow majorly slower now.\n\nNo I don't have 6.5.3 installed anymore - it was totally fubar and\nwasn't running anymore.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Wed, 12 Jul 2000 10:00:14 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "On Wed, 12 Jul 2000, Tim Perdue wrote:\n\n> \"Ross J. Reedstrom\" wrote:\n> > Mike Mascari gave you a detailed answer to that, which you seemd to just blow\n> > off, based on you guesstimate that it would run too long:\n> \n> That is a separate issue - unrelated to this performance issue and it\n> was not \"blown\" off, I was merely making a comment.\n> \n> > Right, as your explain output showed: the planner is picking this index\n> > and using it. I'd guess that your time is getting lost in the sort step.\n> \n> I think you're probably right. It's hard to imagine that sorting is that\n> much slower, but it's hard to say.\n\njust curious, but what if you remove the ORDER BY, just to test ... is it\nthat much faster without then with? Just want to narrow down *if* its a\nsorting issue or not, that's all ...\n\nIf it is a sorting issue, what if you raise the -S value? \n\n",
"msg_date": "Wed, 12 Jul 2000 14:01:07 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> just curious, but what if you remove the ORDER BY, just to test ... is it\n> that much faster without then with? Just want to narrow down *if* its a\n> sorting issue or not, that's all ...\n\nGood call - it was instantaneous as it used to be.\n\n> If it is a sorting issue, what if you raise the -S value?\n\n-S is 32768 right now\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Wed, 12 Jul 2000 10:54:09 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "On Wed, 12 Jul 2000, Tim Perdue wrote:\n\n> The Hermit Hacker wrote:\n> > just curious, but what if you remove the ORDER BY, just to test ... is it\n> > that much faster without then with? Just want to narrow down *if* its a\n> > sorting issue or not, that's all ...\n> \n> Good call - it was instantaneous as it used to be.\n\nIt takes us awhile sometimes, but we eventually clue in :)\n\n> > If it is a sorting issue, what if you raise the -S value?\n> \n> -S is 32768 right now\n\nhow many results come back from the query? ignoring the LIMIT, that is\n... see, it has to ORDER BY before the LIMIT, of course...\n\n\n",
"msg_date": "Wed, 12 Jul 2000 15:00:31 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "Tim Perdue <[email protected]> writes:\n> The Hermit Hacker wrote:\n>> just curious, but what if you remove the ORDER BY, just to test ... is it\n>> that much faster without then with? Just want to narrow down *if* its a\n>> sorting issue or not, that's all ...\n\n> Good call - it was instantaneous as it used to be.\n\nHow much data is getting passed through the sort step --- you might need\nto raise the query LIMIT to find out...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 14:21:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.2 issues / Geocrawler "
}
] |
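
Pulling the thread's advice together: a hedged sketch of the fix Mike and Ross converge on. Identifiers are copied from the messages above, which themselves mix the `mail_archive` shorthand with `tbl_`/`fld_`-prefixed names, so treat the exact names as assumptions about the real schema.

```sql
-- Sketch of the suggestions in this thread; exact identifiers are assumptions.

-- 1. Trim the handful of subjects too large for a btree index page
--    (the error above reports a maximum index item size of 2717 bytes):
UPDATE mail_archive
   SET mail_subject = SubStr(mail_subject, 1, 2700)
 WHERE Length(mail_subject) > 2700;

-- 2. Index the date column the query sorts on, per Ross's suggestion:
CREATE INDEX idx_mail_archive_date ON mail_archive (mail_date);

-- 3. Filter on a date range instead of separate year/month columns, so the
--    ORDER BY ... LIMIT 26 can in principle be satisfied from the index
--    instead of sorting a whole month of mail. mail_date is stored as a
--    char(14) like '20000101125959', so plain string comparisons give the
--    right range.
SELECT mailid, mail_date, mail_is_followup, mail_from, mail_subject
  FROM mail_archive
 WHERE mail_list = 35
   AND mail_date >= '20000101000000'
   AND mail_date <  '20000201000000'
 ORDER BY mail_date DESC
 LIMIT 26 OFFSET 0;
```

Whether the 7.0.2 planner actually satisfies the ORDER BY from a backward index scan here is worth confirming with EXPLAIN; the char(14) date format is taken from Tim's later message in this thread.
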
[
{
"msg_contents": "Okay, the crowd seemed to say that PostgreSQL is not really harder to\ninstall than MySQL, but that it isn't easy either. My personal goal then\nis to make the installation so easy that even people who don't even have\nany data will download and install PostgreSQL just to see how smooth it\nworks. :-)\n\nSeriously, the particular trouble points I could identify are both in\ndocumentation and in implementation.\n\n\n1) User account juggling\n\nIt is sometimes confusing to keep track when you need to log in as\nwho:\n\n > mkdir /usr/local/pgsql/data\n > chown postgres /usr/local/pgsql/data\n > su - postgres\n > /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data\n\nThis sort of stuff shouldn't be necessary. Perhaps it should just work\nlike this:\n\n root# adduser postgres\n root# /usr/local/pgsql/bin/initdb -u postgres -D /usr/local/pgsql/data\n root# /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data\n\nIf initdb is run as root it simply reexecutes itself as whatever -u said,\nafter having created the data directory and done the chown thing. It could\nfurthermore initialize the postgresql.conf file with a line `user =\n$user'. That way the postmaster could also be started as root and setuid\nitself after having read the configuration file, like Apache does.\n\nThis would require some code auditing, but it seems doable.\n\n\n2) System specific problems\n\nUsers can expect to be told of platform-specific exceptions up front. \"If\nyou have problems, look into doc/FAQ_xxx\" is not nice. If it is known that\nlibpq++ doesn't build on platform X then that should be mentioned before\npeople ever run configure. \"Might be different on your platform\" is not\ngood. We know pretty well what platforms we run on, so we can also provide\naccurate command lines.\n\nThere's a whole subculture of alternative installation instructions\nexisting in the various \"FAQs\", many of which are tendentially out of\ndate. Of course at the end it's also better to fix these problems rather\nthan documenting them.\n\n\n3) Error conditions not documented\n\nOkay, that simply needs to be done.\n\n\n4) Backup & Restore\n\nThere's not much to be done about this requirement, but it could be\ndocumented better and perhaps in different ways. For example, some people\nmight not want to install the new version over the old one, but rather run\nboth in parallel and transfer the data over then. (I'd do that in any\ncase.)\n\nOne premise is that people who upgrade already have an existing\ninstallation, by definition, so they can also read the existing\ndocumentation and already have existing knowledge about the backup\nprocedure. Therefore the Administrator's Guide should describe the\nimplicications of the possible backup and upgrade strategies before one\neven looks at the installation instructions. The latter could then simply\nsay \"by now you should have backed up your data\" in a slightly more\nverbose fashion rather than trying to cram the administrator's guide into\nthree sentences.\n\n\nSome people were mentioning that the installation instructions should also\ntell more about createuser and these things. I don't agree with that.\nThese sort of things should be covered in the tutorial or the\nadministrator's guide (\"Database user and permission management\", hard to\nmiss).\n\n\nDoes anyone have other ideas?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 12 Jul 2000 02:23:34 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Installation"
}
] |
[
{
"msg_contents": "Theoretically, the documentation should be installed automatically when\nyou do `make install'. Users shouldn't have to look for it separately.\n\nThat's of course a problem: the CVS source doesn't contain any\ndocumentation and most people are not set up to rebuild it. Ignoring\nmissing files isn't good either because that would pose problems to people\nthat do try to rebuild it.\n\nI could imagine some kludgery like not automatically installing the\ndocumentation if there's a subtle hint, like the existence of a \"CVS\"\nsubdirectory, or perhaps something more explicit, that wouldn't be in the\ndistribution. Still seems ugly though.\n\nAny better ideas?\n\nI would also like to renew my campaign for shipping the HTML documentation\nin non-compressed/tarred form. It seems to be no longer necessary since\nthese files are gone from CVS. Also, this way users could read them still\nin the install tree (if the flat text INSTALL doesn't suit them, for\nexample).\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 12 Jul 2000 02:24:07 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to handle documentation installation"
}
] |
[
{
"msg_contents": "I know you're all sick of hearing from me, but I'm passing this along\nanyway. Looks like I need to go back down to 6.5.3 for some reason.\n\nThe following query is at the very heart of the site and it takes\nupwards of 15-20 seconds to run now. It used to be instantaneous.\n\nexplain SELECT mailid, mail_date, mail_is_followup, mail_from,\nmail_subject \n FROM mail_archive WHERE mail_list=35 AND mail_year=2000\n AND mail_month=1 ORDER BY mail_date DESC LIMIT 26 OFFSET 0;\n\nNOTICE: QUERY PLAN:\n\nSort (cost=138.41..138.41 rows=34 width=44)\n -> Index Scan using idx_mail_archive_list_yr_mo on tbl_mail_archive \n(cost=0.00..137.55 rows=34 width=44)\n\nEXPLAIN\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Tue, 11 Jul 2000 19:35:52 -0500",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Serious Performance Loss in 7.0.2??"
},
{
"msg_contents": "Tim Perdue wrote:\n> \n> I know you're all sick of hearing from me.\n\nI hope there was really a <g> at the end of that because it is not true\nat all! When problems are seen and solved they offer opportunities for\nothers in the future, and it is also how things get better :-)\n\n\n> The following query is at the very heart of the site and it takes\n> upwards of 15-20 seconds to run now. It used to be instantaneous.\n> \n> explain SELECT mailid, mail_date, mail_is_followup, mail_from,\n> mail_subject\n> FROM mail_archive WHERE mail_list=35 AND mail_year=2000\n> AND mail_month=1 ORDER BY mail_date DESC LIMIT 26 OFFSET 0;\n> \n> NOTICE: QUERY PLAN:\n> \n> Sort (cost=138.41..138.41 rows=34 width=44)\n> -> Index Scan using idx_mail_archive_list_yr_mo on tbl_mail_archive\n> (cost=0.00..137.55 rows=34 width=44)\n> \n> EXPLAIN\n\nOK, I'll give it a go :-)\n\nFirst of all, I find it easiest to optimise these sort of queries in\npsql because you can go back and edit things and 'play' quite a bit to\nachieve the desired behaviour, then implement it back in the old PHP\ncode (or wherever :-).\n\nThe query optimiser changed quite a bit from 6.5.3 to 7.x and this seems\nto be one area that now works harder to do what you say. From the name\nof your index it seems that you have an index on mail_list, mail_year,\nmail_month, mail_date?\n\nPostgreSQL seems to not get the index choice right when you have index\nmatches that are like =, =, =, DESC so you actually need to specify the\nORDER BY clause in full like:\n\tORDER BY mail_list DESC, mail_year DESC, mail_month DESC, mail_date\nDESC\nand things will hopefully be all OK again.\n\nPersonally I consider this to be a 'bug', or at least a 'buglet', but I\nguess I'd bow to Tom's opinion on that :-)\n\nHope this is some help,\n\t\t\t\t\tAndrew.\n\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n",
"msg_date": "Thu, 13 Jul 2000 00:19:17 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Serious Performance Loss in 7.0.2??"
},
{
"msg_contents": "Andrew McMillan wrote:\n> PostgreSQL seems to not get the index choice right when you have index\n> matches that are like =, =, =, DESC so you actually need to specify the\n> ORDER BY clause in full like:\n> ORDER BY mail_list DESC, mail_year DESC, mail_month DESC, mail_date\n> DESC\n> and things will hopefully be all OK again.\n\nWow - that definitely does *not* work. That took 1:30 to return the\nresult for the linux-kernel list.\n\nMy server has a load of 19.25 right now - at 6 AM.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Wed, 12 Jul 2000 06:25:25 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Serious Performance Loss in 7.0.2??"
},
{
"msg_contents": "On Wed, 12 Jul 2000, Tim Perdue wrote:\n\n> Andrew McMillan wrote:\n> > PostgreSQL seems to not get the index choice right when you have index\n> > matches that are like =, =, =, DESC so you actually need to specify the\n> > ORDER BY clause in full like:\n> > ORDER BY mail_list DESC, mail_year DESC, mail_month DESC, mail_date\n> > DESC\n> > and things will hopefully be all OK again.\n> \n> Wow - that definitely does *not* work. That took 1:30 to return the\n> result for the linux-kernel list.\n> \n> My server has a load of 19.25 right now - at 6 AM.\n\nOuch! because of that query, or is this standard? what is the server,\nanyway?\n\n\n",
"msg_date": "Wed, 12 Jul 2000 13:11:32 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Serious Performance Loss in 7.0.2??"
},
{
"msg_contents": "Tim Perdue <[email protected]> writes:\n> explain SELECT mailid, mail_date, mail_is_followup, mail_from,\n> mail_subject \n> FROM mail_archive WHERE mail_list=35 AND mail_year=2000\n> AND mail_month=1 ORDER BY mail_date DESC LIMIT 26 OFFSET 0;\n\n> NOTICE: QUERY PLAN:\n\n> Sort (cost=138.41..138.41 rows=34 width=44)\n> -> Index Scan using idx_mail_archive_list_yr_mo on tbl_mail_archive \n> (cost=0.00..137.55 rows=34 width=44)\n\nHard to tell with this much info. How many rows are actually retrieved\nby the query (the planner is guessing 34, is that anywhere in the right\nballpark? How big is the table, anyway?)\n\nAlso, what's the definition of the index idx_mail_archive_list_yr_mo?\n\nIt might help to see the EXPLAIN VERBOSE output also --- I'm wondering\nif all the WHERE clauses are getting used as index keys or not...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 13:56:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Serious Performance Loss in 7.0.2?? "
},
{
"msg_contents": "Tom Lane wrote:\n> Hard to tell with this much info. How many rows are actually retrieved\n> by the query (the planner is guessing 34, is that anywhere in the right\n> ballpark? How big is the table, anyway?)\n> \n> Also, what's the definition of the index idx_mail_archive_list_yr_mo?\n> \n> It might help to see the EXPLAIN VERBOSE output also --- I'm wondering\n> if all the WHERE clauses are getting used as index keys or not...\n\nOK - there are 5851 rows in this query.\n\nidx_mail_archive_list_yr_mo is an index on \nmail_list (int)\nmail_year (int)\nmail_month(int)\n\nWith 5850 rows to sort, I wouldn't expect it to be lightning fast, but\nthere is a very definite difference from 6.5.3 (or 6.4.x).\n\nAs requested by \"The Hermit Hacker\", I took out the ORDER BY and it was\ninstantaneous.\n\nI'm sending the explain verbose separately to you as it's very big.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Wed, 12 Jul 2000 11:14:31 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Serious Performance Loss in 7.0.2??"
},
{
"msg_contents": "Tim Perdue <[email protected]> writes:\n> Tom Lane wrote:\n>> Hard to tell with this much info. How many rows are actually retrieved\n>> by the query (the planner is guessing 34, is that anywhere in the right\n>> ballpark? How big is the table, anyway?)\n>> \n>> Also, what's the definition of the index idx_mail_archive_list_yr_mo?\n>> \n>> It might help to see the EXPLAIN VERBOSE output also --- I'm wondering\n>> if all the WHERE clauses are getting used as index keys or not...\n\n> OK - there are 5851 rows in this query.\n\n> idx_mail_archive_list_yr_mo is an index on \n> mail_list (int)\n> mail_year (int)\n> mail_month(int)\n\n> With 5850 rows to sort, I wouldn't expect it to be lightning fast, but\n> there is a very definite difference from 6.5.3 (or 6.4.x).\n\n> As requested by \"The Hermit Hacker\", I took out the ORDER BY and it was\n> instantaneous.\n\n> I'm sending the explain verbose separately to you as it's very big.\n\nThe explain verbose looks just like it should: all three WHERE clauses\nare being used as indexkeys. So I'm mystified. 7.0 is not doing\nanything obviously wrong here, and I do not understand what 6.5 might\nhave done differently. Given the query as posed and the available\nindex, there is no other alternative but an indexscan followed by sort.\n\nI like Andreas' suggestion of rearranging things so that the indexscan\nwill produce already-sorted output (since that will allow the LIMIT to\nstop the indexscan without reading the whole month's traffic). But that\ndoesn't answer the question of why 7.0 is so much slower given the same\nquery as 6.5.\n\nHow long does it take to do\n\tSELECT count(*) FROM tbl_mail_archive WHERE fld_mail_list=35 AND\n\tfld_mail_year=2000 AND fld_mail_month=1\n? That should tell us how much time is being spent in the indexscan.\n\nAlso, when you are doing the complete query with sort, does a\npg_sorttemp file appear in the database directory? If so, how big does\nit get?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 14:46:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Serious Performance Loss in 7.0.2?? "
},
{
"msg_contents": "Tom Lane wrote:\n> > As requested by \"The Hermit Hacker\", I took out the ORDER BY and it was\n> > instantaneous.\n> \n> How long does it take to do\n> SELECT count(*) FROM tbl_mail_archive WHERE fld_mail_list=35 AND\n> fld_mail_year=2000 AND fld_mail_month=1\n> ? That should tell us how much time is being spent in the indexscan.\n\nIt's pretty much instantaneous.\n\n> Also, when you are doing the complete query with sort, does a\n> pg_sorttemp file appear in the database directory? If so, how big does\n> it get?\n\nDoesn't look like it. There is a (supposedly) dead pg_sorttemp file in\nthat directory that is 74MB. Probably unrelated.\n\nMy only remaining question is whether an index on 4,000,000 datestamps\nis going to be fast or not. Or how Long will my nightly vacuum run?\nForever?\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Wed, 12 Jul 2000 12:05:53 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Serious Performance Loss in 7.0.2??"
},
{
"msg_contents": "Tim Perdue <[email protected]> writes:\n>> Also, when you are doing the complete query with sort, does a\n>> pg_sorttemp file appear in the database directory? If so, how big does\n>> it get?\n\n> Doesn't look like it. There is a (supposedly) dead pg_sorttemp file in\n> that directory that is 74MB. Probably unrelated.\n\nThat's even odder. If the sort is being done entirely in memory (which\nI'd expected given the amount of data and your -S setting, but it's good\nto confirm) then it's basically a qsort() call, and there shouldn't be\nany measurable difference between 6.5 and 7.0.\n\nAll else being equal that is. Since this is a sort on a char() field,\nperhaps all else is not equal. In particular I'm suddenly wondering\nif your 7.0 installation was compiled with LOCALE or MULTIBYTE support\nand your 6.5 not. A few tens of thousands of strcoll() calls to do the\nsort comparisons might account for the slowdown...\n\n> My only remaining question is whether an index on 4,000,000 datestamps\n> is going to be fast or not.\n\nI'd expect no significant difference from your other indexes on that\ntable. If anything, it'll be faster than your existing index because\nof the lack of duplicate keys.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 15:11:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Serious Performance Loss in 7.0.2?? "
},
{
"msg_contents": "> All else being equal that is. Since this is a sort on a char() field,\n> perhaps all else is not equal. In particular I'm suddenly wondering\n> if your 7.0 installation was compiled with LOCALE or MULTIBYTE support\n> and your 6.5 not. A few tens of thousands of strcoll() calls to do the\n> sort comparisons might account for the slowdown...\n\nJust to clarify, MULTIBYTE never calls strcoll() as far as I know.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 13 Jul 2000 13:31:12 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Serious Performance Loss in 7.0.2?? "
}
] |
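
The thread leaves off with the sort step as the main suspect. A compact recap of the isolation steps actually used above, gathered into one sketch (identifiers are copied from the messages, which mix `mail_archive` and `tbl_mail_archive` naming, so treat them as assumptions about the schema):

```sql
-- The isolation steps used in this thread, in one place.

-- 1. Time the bare index scan: how many rows match the filter
--    (the planner guessed 34; the real count was 5851)?
SELECT count(*)
  FROM tbl_mail_archive
 WHERE fld_mail_list = 35 AND fld_mail_year = 2000 AND fld_mail_month = 1;

-- 2. Run the query without the ORDER BY; if this is instantaneous, the time
--    is going into the sort rather than the index scan:
SELECT mailid, mail_date, mail_is_followup, mail_from, mail_subject
  FROM mail_archive
 WHERE mail_list = 35 AND mail_year = 2000 AND mail_month = 1
 LIMIT 26 OFFSET 0;

-- 3. If the sort is the problem, Tom's remaining hypothesis applies: a build
--    with locale support pays for strcoll() on every comparison when sorting
--    the char(14) date column, which could account for the slowdown.
```
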
[
{
"msg_contents": "I suggest that we change vacuum to only move remove tuples if there is\nmore than 20% expired tuples.\n\nWhen we do vacuum, we drop all indexes and recreate them. \n\nThis fixes the complaint about vacuum slowness when there are many\nexpired rows in the table. We know this is causes by excessive index\nupdates. It allows indexes to shrink (Jan pointed this out to me.) And\nit fixes the TOAST problem with TOAST values in indexes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 Jul 2000 20:40:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuum only with 20% old tuples"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I suggest that we change vacuum to only move remove tuples if there is\n> more than 20% expired tuples.\n\n> When we do vacuum, we drop all indexes and recreate them. \n\n> This fixes the complaint about vacuum slowness when there are many\n> expired rows in the table. We know this is causes by excessive index\n> updates. It allows indexes to shrink (Jan pointed this out to me.) And\n> it fixes the TOAST problem with TOAST values in indexes.\n\nWe can't \"drop and recreate\" without a solution to the relation\nversioning issue (unless you are prepared to accept a nonfunctional\ndatabase after a failure partway through index rebuild on a system\ntable). I think we should do this, but it's not all that simple...\n\nI do not see what your 20% idea has to do with this, though, nor\nwhy it's a good idea. If I've told the thing to vacuum I think\nit should vacuum. 20% of a big table could be a lot of megabytes,\nand I don't want some arbitrary decision in the code about whether\nI can reclaim that space or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Jul 2000 22:06:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum only with 20% old tuples "
},
{
"msg_contents": "> We can't \"drop and recreate\" without a solution to the relation\n> versioning issue (unless you are prepared to accept a nonfunctional\n> database after a failure partway through index rebuild on a system\n> table). I think we should do this, but it's not all that simple...\n> \n> I do not see what your 20% idea has to do with this, though, nor\n> why it's a good idea. If I've told the thing to vacuum I think\n> it should vacuum. 20% of a big table could be a lot of megabytes,\n> and I don't want some arbitrary decision in the code about whether\n> I can reclaim that space or not.\n\nWell, I think we should do a sequential scan before starting vacuum to\nfind the number of expired rows.\n\nNow that we are removing indexes, doing that to remove a few tuples is a\nmajor waste. The user can not really know if the table is worth\nvacuuming in normal use. They are just going to use the default. Now,\nI think a FORCE option would be good, or the ability to change the 20%\ndefault.\n\nRemember, commercial db's don't even return unused space if you remove\nall the rows in a table. At least Informix doesn't and I am sure there\nare others. I like vacuum, but let's not make it do major hurtles for\nsmall gain.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 Jul 2000 22:49:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuum only with 20% old tuples"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n>\n> Bruce Momjian <[email protected]> writes:\n> > I suggest that we change vacuum to only move remove tuples if there is\n> > more than 20% expired tuples.\n>\n> > When we do vacuum, we drop all indexes and recreate them.\n>\n> > This fixes the complaint about vacuum slowness when there are many\n> > expired rows in the table. We know this is causes by excessive index\n> > updates. It allows indexes to shrink (Jan pointed this out to me.) And\n> > it fixes the TOAST problem with TOAST values in indexes.\n>\n> We can't \"drop and recreate\" without a solution to the relation\n> versioning issue (unless you are prepared to accept a nonfunctional\n> database after a failure partway through index rebuild on a system\n> table). I think we should do this, but it's not all that simple...\n>\n\nIs this topic independent of WAL in the first place ?\n\nRegards.\n\nHiroshi Inoue\n\n",
"msg_date": "Wed, 12 Jul 2000 12:10:09 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Vacuum only with 20% old tuples "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> We can't \"drop and recreate\" without a solution to the relation\n>> versioning issue (unless you are prepared to accept a nonfunctional\n>> database after a failure partway through index rebuild on a system\n>> table). I think we should do this, but it's not all that simple...\n\n> Is this topic independent of WAL in the first place ?\n\nSure, unless Vadim sees some clever way of using WAL to eliminate\nthe need for versioned relations. But as far as I've seen in the\ndiscussions, versioned relations are independent of WAL.\n\nBasically what I want here is to build the new index relation as\na new file (set of files, if large) and then atomically commit it\nas the new version of the index.\n\nIf we only want to solve the problem of rebuilding indexes, it's\nprobably not necessary to have true versions, because nothing outside\nof pg_index refers to an index. You could build a complete new index\n(new OID, new pg_class and pg_attribute entries, the whole nine yards)\nas a new set of files, and delete the old index, and your commit of\nthis transaction would atomically replace the index. (Vacuuming\npg_index's own indexes this way might be a tad tricky though...)\nBut that approach doesn't solve the problem of making a CLUSTER\noperation that really works the way it should. So I'd rather see us\nput the effort into doing relation versions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Jul 2000 23:16:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum only with 20% old tuples "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> We can't \"drop and recreate\" without a solution to the relation\n> >> versioning issue (unless you are prepared to accept a nonfunctional\n> >> database after a failure partway through index rebuild on a system\n> >> table). I think we should do this, but it's not all that simple...\n> \n> > Is this topic independent of WAL in the first place ?\n> \n> Sure, unless Vadim sees some clever way of using WAL to eliminate\n> the need for versioned relations. But as far as I've seen in the\n> discussions, versioned relations are independent of WAL.\n> \n> Basically what I want here is to build the new index relation as\n> a new file (set of files, if large) and then atomically commit it\n> as the new version of the index.\n>\n\nHmm,your plan seems to need WAL.\nWe must postpone to build indexes until the end of tuple moving\nin vacuum. Once tuple moving started,the consistency between\nheap and indexes would be broken. Currently(without WAL) this\ninconsistency could never be recovered in case of rollback.\n\n> If we only want to solve the problem of rebuilding indexes, it's\n> probably not necessary to have true versions, because nothing outside\n> of pg_index refers to an index. You could build a complete new index\n> (new OID, new pg_class and pg_attribute entries, the whole nine yards)\n> as a new set of files, and delete the old index, and your commit of\n> this transaction would atomically replace the index. (Vacuuming\n> pg_index's own indexes this way might be a tad tricky though...)\n\n??? Don't pg_class and pg_attribute needs tricky handling either ?\nSeems pg_class alone needs to be tricky when we use rel versioning.\n\nAnyway we couldn't rely on indexes of currently vacuuming table.\nI don't think it's easy to maintain indexes of pg_class,pg_indexes,\npg_atribute all together properly.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Wed, 12 Jul 2000 12:53:11 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Vacuum only with 20% old tuples "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> Basically what I want here is to build the new index relation as\n>> a new file (set of files, if large) and then atomically commit it\n>> as the new version of the index.\n\n> Hmm,your plan seems to need WAL.\n> We must postpone to build indexes until the end of tuple moving\n> in vacuum. Once tuple moving started,the consistency between\n> heap and indexes would be broken. Currently(without WAL) this\n> inconsistency could never be recovered in case of rollback.\n\nWhy? The same commit that makes the new index valid would make the\ntuple movements valid. Actually, the way VACUUM currently works,\nthe tuple movements have been committed before we start freeing index\nentries anyway. (One reason VACUUM is so inefficient with indexes\nis that there is a peak index usage where there are index entries for\n*both* old and new tuple positions. I don't feel a need to change\nthat, as long as the duplicate entries are in the old index that\nwe're hoping to get rid of.)\n\n>> this transaction would atomically replace the index. (Vacuuming\n>> pg_index's own indexes this way might be a tad tricky though...)\n\n> ??? Don't pg_class and pg_attribute needs tricky handling either ?\n> Seems pg_class alone needs to be tricky when we use rel versioning.\n\nCould be. I think working through how we handle system tables and\nindexes is the key stumbling block we've got to get past to have\nversioning. I don't know quite how to do it, yet.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 00:01:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum only with 20% old tuples "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> Basically what I want here is to build the new index relation as\n> >> a new file (set of files, if large) and then atomically commit it\n> >> as the new version of the index.\n> \n> > Hmm,your plan seems to need WAL.\n> > We must postpone to build indexes until the end of tuple moving\n> > in vacuum. Once tuple moving started,the consistency between\n> > heap and indexes would be broken. Currently(without WAL) this\n> > inconsistency could never be recovered in case of rollback.\n> \n> Why? The same commit that makes the new index valid would make the\n> tuple movements valid.\n\nOops,I rememered I wasn't correct. Certainly it's not so dangerous as\nI wrote. But there remains a possibilty that index tuples would point to\ncleaned heap blocks unless we delete index tuples for those heap blocks.\nCleaned blocks would be reused by UPDATE operation. \n\nRegards.\n\nHiroshi Inoue\n\n\n\n\n",
"msg_date": "Wed, 12 Jul 2000 13:58:17 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Vacuum only with 20% old tuples "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> We can't \"drop and recreate\" without a solution to the relation\n> >> versioning issue (unless you are prepared to accept a nonfunctional\n> >> database after a failure partway through index rebuild on a system\n> >> table). I think we should do this, but it's not all that simple...\n>\n> > Is this topic independent of WAL in the first place ?\n>\n> Sure, unless Vadim sees some clever way of using WAL to eliminate\n> the need for versioned relations. But as far as I've seen in the\n> discussions, versioned relations are independent of WAL.\n>\n> Basically what I want here is to build the new index relation as\n> a new file (set of files, if large) and then atomically commit it\n> as the new version of the index.\n\n What implicitly says we need to vacuum the toast relation\n AFTER beeing completely done with the indices - in contranst\n to what you said before. Otherwise, the old index (the\n active one) would still refer to entries that don't exist any\n more.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Wed, 12 Jul 2000 13:17:53 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Vacuum only with 20% old tuples"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> > I suggest that we change vacuum to only move remove tuples if there is\n> > more than 20% expired tuples.\n> \n> > When we do vacuum, we drop all indexes and recreate them.\n> \n> > This fixes the complaint about vacuum slowness when there are many\n> > expired rows in the table. We know this is causes by excessive index\n> > updates. It allows indexes to shrink (Jan pointed this out to me.) And\n> > it fixes the TOAST problem with TOAST values in indexes.\n> \n> We can't \"drop and recreate\" without a solution to the relation\n> versioning issue (unless you are prepared to accept a nonfunctional\n> database after a failure partway through index rebuild on a system\n> table). I think we should do this, but it's not all that simple...\n> \n> I do not see what your 20% idea has to do with this, though, nor\n> why it's a good idea. If I've told the thing to vacuum I think\n> it should vacuum. 20% of a big table could be a lot of megabytes,\n> and I don't want some arbitrary decision in the code about whether\n> I can reclaim that space or not.\n\nI can see some value in having a _configurable_ threshold %age of\ndeletes before vacuum kicked in and attempted to shrink table/index\non-disk file sizes. This would let the end-user decide, and 20% is\nprobably a reasonable default, but if it isn't then changing a default\nis easier to do down the track.\n\nI can also see that it could be done with (perhaps) a modification to\nVACUUM syntax, say:\n\tVACUUM [VERBOSE] [SHRINK] ...\n\nAnd I believe that the whole thing will go better if ANALYZE is taken\n_out_ of vacuum, as was discussed on this list a month or two ago.\n\nCheers,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n",
"msg_date": "Thu, 13 Jul 2000 00:30:37 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum only with 20% old tuples"
},
{
"msg_contents": "> I can see some value in having a _configurable_ threshold %age of\n> deletes before vacuum kicked in and attempted to shrink table/index\n> on-disk file sizes. This would let the end-user decide, and 20% is\n> probably a reasonable default, but if it isn't then changing a default\n> is easier to do down the track.\n> \n> I can also see that it could be done with (perhaps) a modification to\n> VACUUM syntax, say:\n> \tVACUUM [VERBOSE] [SHRINK] ...\n> \n> And I believe that the whole thing will go better if ANALYZE is taken\n> _out_ of vacuum, as was discussed on this list a month or two ago.\n\nThe analayze process no longer locks the table exclusively. It will be\nmade a separate command in 7.1, though an ANALYZE option will still be\navaiable in VACUUM.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jul 2000 09:37:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuum only with 20% old tuples"
},
{
"msg_contents": "On Tue, 11 Jul 2000, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > I suggest that we change vacuum to only move remove tuples if there is\n> > more than 20% expired tuples.\n> \n> > When we do vacuum, we drop all indexes and recreate them. \n> \n> > This fixes the complaint about vacuum slowness when there are many\n> > expired rows in the table. We know this is causes by excessive index\n> > updates. It allows indexes to shrink (Jan pointed this out to me.) And\n> > it fixes the TOAST problem with TOAST values in indexes.\n> \n> We can't \"drop and recreate\" without a solution to the relation\n> versioning issue (unless you are prepared to accept a nonfunctional\n> database after a failure partway through index rebuild on a system\n> table). I think we should do this, but it's not all that simple...\n> \n> I do not see what your 20% idea has to do with this, though, nor\n> why it's a good idea. If I've told the thing to vacuum I think\n> it should vacuum. 20% of a big table could be a lot of megabytes,\n> and I don't want some arbitrary decision in the code about whether\n> I can reclaim that space or not.\n\nI wouldn't mind seeing some automagic vacuum happen *if* >20% expired\n... but don't understand the limit when I tell it to vacuum either ...\n\n\n",
"msg_date": "Wed, 12 Jul 2000 11:55:16 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum only with 20% old tuples "
},
{
"msg_contents": "\nhow about leaving vacuum as is, but extend REINDEX so that it\ndrops/rebuilds all indices on a TABLE | DATABASE? Or does it do that\nnow? From reading \\h REINDEX, my thought is that it doesn't, but ...\n\n\n\nOn Tue, 11 Jul 2000, Bruce Momjian wrote:\n\n> > We can't \"drop and recreate\" without a solution to the relation\n> > versioning issue (unless you are prepared to accept a nonfunctional\n> > database after a failure partway through index rebuild on a system\n> > table). I think we should do this, but it's not all that simple...\n> > \n> > I do not see what your 20% idea has to do with this, though, nor\n> > why it's a good idea. If I've told the thing to vacuum I think\n> > it should vacuum. 20% of a big table could be a lot of megabytes,\n> > and I don't want some arbitrary decision in the code about whether\n> > I can reclaim that space or not.\n> \n> Well, I think we should do a sequential scan before starting vacuum to\n> find the number of expired rows.\n> \n> Now that we are removing indexes, doing that to remove a few tuples is a\n> major waste. The user can not really know if the table is worth\n> vacuuming in normal use. They are just going to use the default. Now,\n> I think a FORCE option would be good, or the ability to change the 20%\n> default.\n> \n> Remember, commercial db's don't even return unused space if you remove\n> all the rows in a table. At least Informix doesn't and I am sure there\n> are others. I like vacuum, but let's not make it do major hurtles for\n> small gain.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 12 Jul 2000 11:57:19 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum only with 20% old tuples"
},
{
"msg_contents": "> > I do not see what your 20% idea has to do with this, though, nor\n> > why it's a good idea. If I've told the thing to vacuum I think\n> > it should vacuum. 20% of a big table could be a lot of megabytes,\n> > and I don't want some arbitrary decision in the code about whether\n> > I can reclaim that space or not.\n> \n> I wouldn't mind seeing some automagic vacuum happen *if* >20% expired\n> ... but don't understand the limit when I tell it to vacuum either ...\n\nI am confused by your comment.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jul 2000 11:40:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuum only with 20% old tuples"
},
{
"msg_contents": "On Wed, 12 Jul 2000, Bruce Momjian wrote:\n\n> > > I do not see what your 20% idea has to do with this, though, nor\n> > > why it's a good idea. If I've told the thing to vacuum I think\n> > > it should vacuum. 20% of a big table could be a lot of megabytes,\n> > > and I don't want some arbitrary decision in the code about whether\n> > > I can reclaim that space or not.\n> > \n> > I wouldn't mind seeing some automagic vacuum happen *if* >20% expired\n> > ... but don't understand the limit when I tell it to vacuum either ...\n> \n> I am confused by your comment.\n\nMake the backend reasonably intelligent ... periodically do a scan, as\nyou've suggested would be required for your above 20% idea, and if >20%\nare expired records, auto-start a vacuum (settable, of course) ...\n\n\n",
"msg_date": "Wed, 12 Jul 2000 13:13:07 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum only with 20% old tuples"
},
{
"msg_contents": "> On Wed, 12 Jul 2000, Bruce Momjian wrote:\n> \n> > > > I do not see what your 20% idea has to do with this, though, nor\n> > > > why it's a good idea. If I've told the thing to vacuum I think\n> > > > it should vacuum. 20% of a big table could be a lot of megabytes,\n> > > > and I don't want some arbitrary decision in the code about whether\n> > > > I can reclaim that space or not.\n> > > \n> > > I wouldn't mind seeing some automagic vacuum happen *if* >20% expired\n> > > ... but don't understand the limit when I tell it to vacuum either ...\n> > \n> > I am confused by your comment.\n> \n> Make the backend reasonably intelligent ... periodically do a scan, as\n> you've suggested would be required for your above 20% idea, and if >20%\n> are expired records, auto-start a vacuum (settable, of course) ...\n\nWould be good if we could to vacuum without locking. We could find a\ntable when things are mostly idle, and it then.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jul 2000 12:15:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuum only with 20% old tuples"
},
{
"msg_contents": "On Wed, 12 Jul 2000, Bruce Momjian wrote:\n\n> > On Wed, 12 Jul 2000, Bruce Momjian wrote:\n> > \n> > > > > I do not see what your 20% idea has to do with this, though, nor\n> > > > > why it's a good idea. If I've told the thing to vacuum I think\n> > > > > it should vacuum. 20% of a big table could be a lot of megabytes,\n> > > > > and I don't want some arbitrary decision in the code about whether\n> > > > > I can reclaim that space or not.\n> > > > \n> > > > I wouldn't mind seeing some automagic vacuum happen *if* >20% expired\n> > > > ... but don't understand the limit when I tell it to vacuum either ...\n> > > \n> > > I am confused by your comment.\n> > \n> > Make the backend reasonably intelligent ... periodically do a scan, as\n> > you've suggested would be required for your above 20% idea, and if >20%\n> > are expired records, auto-start a vacuum (settable, of course) ...\n> \n> Would be good if we could to vacuum without locking. We could find a\n> table when things are mostly idle, and it then.\n\nDefinitely :)\n\n\n",
"msg_date": "Wed, 12 Jul 2000 13:32:30 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum only with 20% old tuples"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of The Hermit Hacker\n> \n> how about leaving vacuum as is, but extend REINDEX so that it\n> drops/rebuilds all indices on a TABLE | DATABASE? Or does it do that\n> now? From reading \\h REINDEX, my thought is that it doesn't, but ...\n>\n\nAs for user tables,REINDEX could do it already,i.e\n REINDEX TABLE table_name FORCE; is possible under psql.\nIf REINDEX fails,PostgreSQL just ignores the indexes of the table\n(i.e Indexscan is never applied) and REINDEX/VACUUM would\nrecover the state. Yes,VACUUM already has a hidden functionality\nto reindex.\n\nAs for system indexes,you must shutdown postmaster and \ninvoke standalone postgres with -P option.\n REINDEX DATABASE database_name FORCE; would\nreindex(shrink) all system tables of the database.\nIt may be possible even under postmaster if REINDEX\nnever fails. \n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 13 Jul 2000 10:34:16 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Vacuum only with 20% old tuples"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> > -----Original Message-----\n> > From: [email protected] [mailto:[email protected]]On\n> > Behalf Of The Hermit Hacker\n> >\n> > how about leaving vacuum as is, but extend REINDEX so that it\n> > drops/rebuilds all indices on a TABLE | DATABASE? Or does it do that\n> > now? From reading \\h REINDEX, my thought is that it doesn't, but ...\n> >\n>\n> As for user tables,REINDEX could do it already,i.e\n> REINDEX TABLE table_name FORCE; is possible under psql.\n> If REINDEX fails,PostgreSQL just ignores the indexes of the table\n> (i.e Indexscan is never applied) and REINDEX/VACUUM would\n> recover the state. Yes,VACUUM already has a hidden functionality\n> to reindex.\n\n Sorry, but there seem to be problems with that.\n\n pgsql=# delete from t2;\n DELETE 0\n pgsql=# vacuum;\n VACUUM\n pgsql=# reindex table t2 force;\n REINDEX\n pgsql=# \\c\n You are now connected to database pgsql as user pgsql.\n pgsql=# insert into t2 select * from t1;\n FATAL 1: btree: failed to add item to the page\n pqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\n\n Happens too if I don't reconnect to the database between\n REINDEX and INSERT. Also if I drop connection and restart\n postmaster, so it shouldn't belong to old blocks hanging\n aroung in the cache.\n\n The interesting thing is that the btree index get's reset to\n 2 blocks. Need to dive into...\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Fri, 14 Jul 2000 02:02:19 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Vacuum only with 20% old tuples"
},
{
"msg_contents": "> -----Original Message-----\n> From: Jan Wieck [mailto:[email protected]]\n> \n> Hiroshi Inoue wrote:\n> > > -----Original Message-----\n> > > From: [email protected] \n> [mailto:[email protected]]On\n> > > Behalf Of The Hermit Hacker\n> > >\n> > > how about leaving vacuum as is, but extend REINDEX so that it\n> > > drops/rebuilds all indices on a TABLE | DATABASE? Or does it do that\n> > > now? From reading \\h REINDEX, my thought is that it doesn't, but ...\n> > >\n> >\n> > As for user tables,REINDEX could do it already,i.e\n> > REINDEX TABLE table_name FORCE; is possible under psql.\n> > If REINDEX fails,PostgreSQL just ignores the indexes of the table\n> > (i.e Indexscan is never applied) and REINDEX/VACUUM would\n> > recover the state. Yes,VACUUM already has a hidden functionality\n> > to reindex.\n> \n> Sorry, but there seem to be problems with that.\n> \n> pgsql=# delete from t2;\n> DELETE 0\n> pgsql=# vacuum;\n> VACUUM\n> pgsql=# reindex table t2 force;\n> REINDEX\n> pgsql=# \\c\n> You are now connected to database pgsql as user pgsql.\n> pgsql=# insert into t2 select * from t1;\n> FATAL 1: btree: failed to add item to the page\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> \n> Happens too if I don't reconnect to the database between\n> REINDEX and INSERT. Also if I drop connection and restart\n> postmaster, so it shouldn't belong to old blocks hanging\n> aroung in the cache.\n> \n> The interesting thing is that the btree index get's reset to\n> 2 blocks. Need to dive into...\n>\n\nHmm,couldn't reproduce it here.\nWhat kind of indexes t2 have ?\n\nAnyway the index get's reset to 2 blocks seems reasonable because\nt2 is empty.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n",
"msg_date": "Fri, 14 Jul 2000 10:05:56 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Vacuum only with 20% old tuples"
}
] |
[
{
"msg_contents": "On Tue, 11 Jul 2000, Alfred Perlstein wrote:\n> In an effort to complicate the postmaster beyond recognition I'm\n> proposing an idea that I hope can be useful to the developers.\n \n> Connection pooling:\n \n> The idea is to have the postmaster multiplex and do hand-offs of\n> database connections to other postgresql processes when the max\n> connections has been exceeded.\n\nAOLserver is one client that already does this, using the existing fe-be\nprotocol. It would be a good model to emulate -- although, to date, there\nhasn't been much interest from the main developers on spending the time to do\nthis.\n\nIf you need or want this performance on a db-backed website, use AOLserver :-P\nor some good connection pooling module for Apache, et al. PHP does a form of\npersistent connections, but I don't know enough about them to know if they are\ntruly pooled (as AOLserver's are). I do know that AOLserver's pooling is a\nmajor performance win.\n\nAs Ben has already said, this is a good place for client-side optimization,\nwhich is really where it would get the most use anyway.\n\nAOLserver has done this since around early 1995.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 11 Jul 2000 21:20:15 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Connection pooling."
},
{
"msg_contents": "In an effort to complicate the postmaster beyond recognition I'm\nproposing an idea that I hope can be useful to the developers.\n\nConnection pooling:\n\nThe idea is to have the postmaster multiplex and do hand-offs of\ndatabase connections to other postgresql processes when the max\nconnections has been exceeded.\n\nThis allows several gains:\n\n1) Postgresql can support a large number of connections without\nrequiring a large amount of processes to do so.\n\n2) Connection startup/finish will be cheaper because Postgresql\nprocesses will not exit and need to reninit things such as shared\nmemory attachments and file opens. This will also reduce the load\non the supporting operating system and make postgresql much 'cheaper'\nto run on systems that don't support the fork() model of execution\ngracefully.\n\n3) Long running connections can be preempted at transaction boundries\nallowing other connections to gain process timeslices from the\nconnection pool.\n\nThe idea is to make the postmaster that accepts connections a broker\nfor the connections. It will dole out descriptors using file\ndescriptor passing to children. If there's a demand for connections\nmeaning that all the postmasters are busy and there are pending\nconnections the postmaster can ask for a yeild on one of the\nconnections.\n\nA yeild involves the child postgresql process passing back the\nclient connection at a transaction boundry (between transactions)\nso it can later be given to another (perhaps the same) child process.\n\nI spoke with Bruce briefly about this and he suggested that system\ntables containing unique IDs could be used to identify passed\nconnections to the children and back to the postmaster.\n\nWhen a handoff occurs, the descriptor along with an ID referencing\nthings like temp tables and enviornment variables and authentication\ninformation could be handed out as well allowing the child to resume\nservice to the interrupted connection.\n\nI really don't have the knowledge of Postgresql internals to\naccomplish this, but the concepts are simple and the gains would\nseem to be very high.\n\nComments?\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Tue, 11 Jul 2000 18:53:18 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Connection pooling."
},
{
"msg_contents": "It seems like a first step would be to just have postmaster cache unused\nconnections. In other words if a client closes a connection, postmaster\nkeeps the connection and the child process around for the next connect\nrequest. This has many of your advantages, but not all. However, it seems\nlike it would be simpler than attempting to multiplex a connection between\nmultiple clients.\n\nJeff\n\n>\n> Alfred Perlstein wrote:\n> >\n> > In an effort to complicate the postmaster beyond recognition I'm\n> > proposing an idea that I hope can be useful to the developers.\n> >\n> > Connection pooling:\n> >\n> > The idea is to have the postmaster multiplex and do hand-offs of\n> > database connections to other postgresql processes when the max\n> > connections has been exceeded.\n> >\n> > This allows several gains:\n> >\n> > 1) Postgresql can support a large number of connections without\n> > requiring a large amount of processes to do so.\n> >\n> > 2) Connection startup/finish will be cheaper because Postgresql\n> > processes will not exit and need to reninit things such as shared\n> > memory attachments and file opens. This will also reduce the load\n> > on the supporting operating system and make postgresql much 'cheaper'\n> > to run on systems that don't support the fork() model of execution\n> > gracefully.\n> >\n> > 3) Long running connections can be preempted at transaction boundries\n> > allowing other connections to gain process timeslices from the\n> > connection pool.\n> >\n> > The idea is to make the postmaster that accepts connections a broker\n> > for the connections. It will dole out descriptors using file\n> > descriptor passing to children. If there's a demand for connections\n> > meaning that all the postmasters are busy and there are pending\n> > connections the postmaster can ask for a yeild on one of the\n> > connections.\n> >\n> > A yeild involves the child postgresql process passing back the\n> > client connection at a transaction boundry (between transactions)\n> > so it can later be given to another (perhaps the same) child process.\n> >\n> > I spoke with Bruce briefly about this and he suggested that system\n> > tables containing unique IDs could be used to identify passed\n> > connections to the children and back to the postmaster.\n> >\n> > When a handoff occurs, the descriptor along with an ID referencing\n> > things like temp tables and enviornment variables and authentication\n> > information could be handed out as well allowing the child to resume\n> > service to the interrupted connection.\n> >\n> > I really don't have the knowledge of Postgresql internals to\n> > accomplish this, but the concepts are simple and the gains would\n> > seem to be very high.\n> >\n> > Comments?\n> >\n> > --\n> > -Alfred Perlstein - [[email protected]|[email protected]]\n> > \"I have the heart of a child; I keep it in a jar on my desk.\"\n\n",
"msg_date": "Tue, 11 Jul 2000 23:10:46 -0400",
"msg_from": "Jeffery Collins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Connection pooling."
},
{
"msg_contents": "\nSeems a lot trickier than you think. A backend can only be running\none transaction at a time, so you'd have to keep track of which backends\nare in the middle of a transaction. I can imagine race conditions here.\nAnd backends can have contexts that are set by various clients using\nSET and friends. Then you'd have to worry about authentication each\ntime. And you'd have to have algorithms for cleaning up old processes\nand/or dead processes. It all really sounds a bit hard. \n\nAlfred Perlstein wrote:\n> \n> In an effort to complicate the postmaster beyond recognition I'm\n> proposing an idea that I hope can be useful to the developers.\n> \n> Connection pooling:\n> \n> The idea is to have the postmaster multiplex and do hand-offs of\n> database connections to other postgresql processes when the max\n> connections has been exceeded.\n> \n> This allows several gains:\n> \n> 1) Postgresql can support a large number of connections without\n> requiring a large amount of processes to do so.\n> \n> 2) Connection startup/finish will be cheaper because Postgresql\n> processes will not exit and need to reninit things such as shared\n> memory attachments and file opens. This will also reduce the load\n> on the supporting operating system and make postgresql much 'cheaper'\n> to run on systems that don't support the fork() model of execution\n> gracefully.\n> \n> 3) Long running connections can be preempted at transaction boundries\n> allowing other connections to gain process timeslices from the\n> connection pool.\n> \n> The idea is to make the postmaster that accepts connections a broker\n> for the connections. It will dole out descriptors using file\n> descriptor passing to children. If there's a demand for connections\n> meaning that all the postmasters are busy and there are pending\n> connections the postmaster can ask for a yeild on one of the\n> connections.\n> \n> A yeild involves the child postgresql process passing back the\n> client connection at a transaction boundry (between transactions)\n> so it can later be given to another (perhaps the same) child process.\n> \n> I spoke with Bruce briefly about this and he suggested that system\n> tables containing unique IDs could be used to identify passed\n> connections to the children and back to the postmaster.\n> \n> When a handoff occurs, the descriptor along with an ID referencing\n> things like temp tables and enviornment variables and authentication\n> information could be handed out as well allowing the child to resume\n> service to the interrupted connection.\n> \n> I really don't have the knowledge of Postgresql internals to\n> accomplish this, but the concepts are simple and the gains would\n> seem to be very high.\n> \n> Comments?\n> \n> --\n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Wed, 12 Jul 2000 13:48:20 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Connection pooling."
},
{
"msg_contents": "At 23:10 11/07/00 -0400, Jeffery Collins wrote:\n>It seems like a first step would be to just have postmaster cache unused\n>connections. In other words if a client closes a connection, postmaster\n>keeps the connection and the child process around for the next connect\n>request. This has many of your advantages, but not all. However, it seems\n>like it would be simpler than attempting to multiplex a connection between\n>multiple clients.\n>\n\nAdd the ability to tell the postmaster to keep a certain number of 'free'\nservers (up to a max total, of course), and you can then design your apps\nto connect/disconnect very quickly. This way you don't need to request a\nclient to get off - you trust the app designer to disconnect whenever they\ncan.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 12 Jul 2000 14:24:40 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Connection pooling."
},
{
"msg_contents": "> It seems like a first step would be to just have postmaster cache unused\n> connections. In other words if a client closes a connection, postmaster\n> keeps the connection and the child process around for the next connect\n> request. This has many of your advantages, but not all. However, it seems\n> like it would be simpler than attempting to multiplex a connection between\n> multiple clients.\n> \n\nThis does seem like a good optimization.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jul 2000 00:28:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Connection pooling."
},
{
"msg_contents": "* Chris Bitmead <[email protected]> [000711 20:53] wrote:\n> \n> Seems a lot trickier than you think. A backend can only be running\n> one transaction at a time, so you'd have to keep track of which backends\n> are in the middle of a transaction. I can imagine race conditions here.\n> And backends can have contexts that are set by various clients using\n> SET and friends. Then you'd have to worry about authentication each\n> time. And you'd have to have algorithms for cleaning up old processes\n> and/or dead processes. It all really sounds a bit hard. \n\nThe backends can simply inform the postmaster when they are ready\neither because they are done with a connection or because they\nhave just closed a transaction.\n\nAll the state (auth/temp tables) can be held in the system tables.\n\nIt's complicated, but no where on the order of something like\na new storage manager.\n\n-Alfred\n",
"msg_date": "Tue, 11 Jul 2000 22:22:39 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Connection pooling."
},
{
"msg_contents": "* Bruce Momjian <[email protected]> [000711 21:31] wrote:\n> > It seems like a first step would be to just have postmaster cache unused\n> > connections. In other words if a client closes a connection, postmaster\n> > keeps the connection and the child process around for the next connect\n> > request. This has many of your advantages, but not all. However, it seems\n> > like it would be simpler than attempting to multiplex a connection between\n> > multiple clients.\n> > \n> \n> This does seem like a good optimization.\n\nI'm not sure if the postmaster is needed besideds just to fork/exec\nthe backend, if so then when a backend finishes it can just call\naccept() on the listening socket inherited from the postmaster to\nget the next incomming connection.\n\n-Alfred\n",
"msg_date": "Tue, 11 Jul 2000 22:35:00 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Connection pooling."
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Seems a lot trickier than you think. A backend can only be running\n> one transaction at a time, so you'd have to keep track of which backends\n> are in the middle of a transaction. I can imagine race conditions here.\n\nAborting out of a transaction is no problem; we have code for that\nanyway. More serious problems:\n\n* We have no code for reassigning a backend to a different database,\n so the pooling would have to be per-database.\n\n* AFAIK there is no portable way to pass a socket connection from the\n postmaster to an already-existing backend process. If you do a\n fork() then the connection is inherited ... otherwise you've got a\n problem. (You could work around this if the postmaster relays\n every single byte in both directions between client and backend,\n but the performance problems with that should be obvious.)\n\n> And backends can have contexts that are set by various clients using\n> SET and friends.\n\nResetting SET variables would be a problem, and there's also the\nassigned user name to be reset. This doesn't seem impossible, but\nit does seem tedious and error-prone. (OTOH, Peter E's recent work\non guc.c might have unified option-handling enough to bring it\nwithin reason.)\n\nThe killer problem here is that you can't hand off a connection\naccepted by the postmaster to a backend except by fork() --- at least\nnot with methods that work on a wide variety of Unixen. Unless someone\nhas a way around that, I think the idea is dead in the water; the lesser\nissues don't matter.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 01:52:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Connection pooling. "
},
{
"msg_contents": "At 01:52 12/07/00 -0400, Tom Lane wrote:\n>\n>The killer problem here is that you can't hand off a connection\n>accepted by the postmaster to a backend except by fork() --- at least\n>not with methods that work on a wide variety of Unixen. Unless someone\n>has a way around that, I think the idea is dead in the water; the lesser\n>issues don't matter.\n>\n\nMy understanding of pg client interfaces is that the client uses ont of the\npg interface libraries to make a connection to the db; they specify host &\nport and get back some kind of connection object.\n\nWhat stops the interface library from using the host & port to talk to the\npostmaster, find the host & port the spare db server, then connect directly\nto the server? This second connection is passed back in the connection object.\n\nWhen the client disconnects from the server, it tells the postmaster it's\navailable again etc.\n\nie. in very rough terms:\n\n client calls interface to connect\n\n interface talks to postmaster on port 5432, says \"I want a server for\nxyz db\"\n\n postmaster replies with \"Try port ABCD\" OR \"no servers available\"\n postmaster marks the nominated server as 'used'\n postmaster disconnects from client\n\n interface connects to port ABCD as per normal protocols\n interface fills in connection object & returns\n\n ...client does some work...\n\n client disconnects\n\n db server tells postmaster it's available again.\n\n\nThere would also need to be timeout code to handle the case where the\ninterface did not do the second connect.\n\nYou could also have the interface allocate a port send it's number to the\npostmaster then listen on it, but I think that would represent a potential\nsecurity hole.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 12 Jul 2000 16:22:10 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Connection pooling. "
},
{
"msg_contents": "* Tom Lane <[email protected]> [000711 22:53] wrote:\n> Chris Bitmead <[email protected]> writes:\n> > Seems a lot trickier than you think. A backend can only be running\n> > one transaction at a time, so you'd have to keep track of which backends\n> > are in the middle of a transaction. I can imagine race conditions here.\n> \n> Aborting out of a transaction is no problem; we have code for that\n> anyway. More serious problems:\n> \n> * We have no code for reassigning a backend to a different database,\n> so the pooling would have to be per-database.\n\nThat would need to be fixed. How difficult would that be?\n\n> * AFAIK there is no portable way to pass a socket connection from the\n> postmaster to an already-existing backend process. If you do a\n> fork() then the connection is inherited ... otherwise you've got a\n> problem. (You could work around this if the postmaster relays\n> every single byte in both directions between client and backend,\n> but the performance problems with that should be obvious.)\n\nno, see below.\n\n> > And backends can have contexts that are set by various clients using\n> > SET and friends.\n> \n> Resetting SET variables would be a problem, and there's also the\n> assigned user name to be reset. This doesn't seem impossible, but\n> it does seem tedious and error-prone. (OTOH, Peter E's recent work\n> on guc.c might have unified option-handling enough to bring it\n> within reason.)\n\nWhat can be done is that each incomming connection can be assigned an\nID into a system table. As options are added the system would assign\nthem to key-value pairs in this table. Once someone detects that the\nremote side has closed the connection the data can be destroyed, but\nuntil then along with the descriptor passing the ID of the client\nas an index into the table can be passed for the backend to fetch.\n\n> The killer problem here is that you can't hand off a connection\n> accepted by the postmaster to a backend except by fork() --- at least\n> not with methods that work on a wide variety of Unixen. Unless someone\n> has a way around that, I think the idea is dead in the water; the lesser\n> issues don't matter.\n\nThe code has been around since 4.2BSD, it takes a bit of #ifdef to\nget it right on all systems but it's not impossible, have a look at\nhttp://www.fhttpd.org/ for a web server that does this in a portable\nfashion.\n\nI should have a library whipped up for you guys really soon now\nto handle the descriptor and message passing.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Tue, 11 Jul 2000 23:30:49 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Connection pooling."
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> * Tom Lane <[email protected]> [000711 22:53] wrote:\n>> The killer problem here is that you can't hand off a connection\n>> accepted by the postmaster to a backend except by fork() --- at least\n>> not with methods that work on a wide variety of Unixen.\n\n> The code has been around since 4.2BSD, it takes a bit of #ifdef to\n> get it right on all systems but it's not impossible, have a look at\n> http://www.fhttpd.org/ for a web server that does this in a portable\n> fashion.\n\nI looked at this to see if it would teach me something I didn't know.\nIt doesn't. It depends on sendmsg() which is a BSD-ism and not very\nportable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 03:04:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Connection pooling. "
},
{
"msg_contents": "* Tom Lane <[email protected]> [000712 00:04] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > * Tom Lane <[email protected]> [000711 22:53] wrote:\n> >> The killer problem here is that you can't hand off a connection\n> >> accepted by the postmaster to a backend except by fork() --- at least\n> >> not with methods that work on a wide variety of Unixen.\n> \n> > The code has been around since 4.2BSD, it takes a bit of #ifdef to\n> > get it right on all systems but it's not impossible, have a look at\n> > http://www.fhttpd.org/ for a web server that does this in a portable\n> > fashion.\n> \n> I looked at this to see if it would teach me something I didn't know.\n> It doesn't. It depends on sendmsg() which is a BSD-ism and not very\n> portable.\n\nIt's also specified by Posix.1g if that means anything.\n\n-Alfred\n",
"msg_date": "Wed, 12 Jul 2000 00:09:47 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Connection pooling."
},
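[Editor's illustration -- not part of the archived thread. The descriptor passing being debated here is the sendmsg()/recvmsg() SCM_RIGHTS mechanism over a Unix-domain socket. A minimal sketch using the POSIX.1g-style CMSG_* macros might look like the following; the names send_fd() and recv_fd() are invented for the example and are not PostgreSQL or fhttpd code, and older 4.3BSD-style systems use the msg_accrights fields instead of the CMSG_* macros, which is part of the #ifdef portability work mentioned above.]

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Pass the open descriptor "fd" across the Unix-domain socket "chan". */
int
send_fd(int chan, int fd)
{
    struct msghdr   msg;
    struct iovec    iov;
    char            dummy = 'F';    /* must transfer at least one data byte */
    char            cbuf[CMSG_SPACE(sizeof(int))];
    struct cmsghdr *cmsg;

    memset(&msg, 0, sizeof(msg));
    iov.iov_base = &dummy;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;   /* "the payload is file descriptors" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return (sendmsg(chan, &msg, 0) < 0) ? -1 : 0;
}

/* Receive a descriptor sent with send_fd(); returns the new fd or -1. */
int
recv_fd(int chan)
{
    struct msghdr   msg;
    struct iovec    iov;
    char            dummy;
    char            cbuf[CMSG_SPACE(sizeof(int))];
    struct cmsghdr *cmsg;
    int             fd;

    memset(&msg, 0, sizeof(msg));
    iov.iov_base = &dummy;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    if (recvmsg(chan, &msg, 0) < 0)
        return -1;
    cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg == NULL || cmsg->cmsg_type != SCM_RIGHTS)
        return -1;                  /* no descriptor was passed */
    memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;                      /* a fresh descriptor in this process */
}

[In the scheme proposed above, the postmaster would send_fd() an accepted client socket to an idle backend over a socketpair(), and the backend would recv_fd() it and take over the session.]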
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> What stops the interface library from using the host & port to talk to\n> the postmaster, find the host & port the spare db server, then connect\n> directly to the server?\n\nYou're assuming that we can change the on-the-wire protocol freely and\nonly the API presented by the client libraries matters. In a perfect\nworld that might be true, but reality is that we can't change the wire\nprotocol easily. If we do, it breaks all existing precompiled clients.\nUpdating clients can be an extremely painful experience in multiple-\nmachine installations.\n\t\nAlso, we don't have just one set of client libraries to fix. There are\nat least three client-side implementations that don't depend on libpq.\n\nWe have done protocol updates in the past --- in fact I was responsible\nfor the last one. (And I'm still carrying the scars, which is why I'm\nnot too enthusiastic about another one.) It's not impossible, but it\nneeds more evidence than \"this should speed up connections by\nI-don't-know-how-much\"...\n\nIt might also be worth pointing out that the goal was to speed up the\nend-to-end connection time. Redirecting as you suggest is not free\n(at minimum it would appear to require two TCP connection setups and two\nauthentication cycles). What evidence have you got that it'd be faster\nthan spawning a new backend?\n\nI tend to agree with the opinion that connection-pooling on the client\nside offers more bang for the buck than we could hope to get by doing\nsurgery on the postmaster/backend setup.\n\nAlso, to return to the original point, AFAIK we have not tried hard\nto cut the backend startup time, other than the work that was done\na year or so back to eliminate exec() of a separate executable.\nIt'd be worth looking to see what could be done there with zero\nimpact on existing clients.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 03:47:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Connection pooling. "
}
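[Editor's illustration -- not part of the archived thread. Tom's closing point is that pooling on the client side is the cheaper win. A minimal single-threaded sketch using only libpq calls (PQconnectdb, PQstatus, PQfinish) is shown below; POOL_SIZE, the conninfo string and the pool_get()/pool_put() names are invented for the example. A real pool would also have to reset per-session state (SET variables, open transactions, current user) before handing a cached connection to a different caller, which is exactly the bookkeeping discussed earlier in the thread.]

#include <stddef.h>
#include <libpq-fe.h>

#define POOL_SIZE 4

static PGconn *pool[POOL_SIZE];               /* idle connections; NULL = empty slot */
static const char *conninfo = "dbname=test";  /* assumed connection string */

/* Hand out a cached idle connection, or open a new one if none is cached.
 * Callers should still check PQstatus() on the result. */
PGconn *
pool_get(void)
{
    int i;

    for (i = 0; i < POOL_SIZE; i++)
    {
        if (pool[i] != NULL)
        {
            PGconn *conn = pool[i];

            pool[i] = NULL;
            if (PQstatus(conn) == CONNECTION_OK)
                return conn;        /* reuse it, skipping backend startup */
            PQfinish(conn);         /* connection went bad while idle */
        }
    }
    return PQconnectdb(conninfo);   /* pool empty: pay the startup cost once */
}

/* Return a connection to the pool instead of closing it. */
void
pool_put(PGconn *conn)
{
    int i;

    for (i = 0; i < POOL_SIZE; i++)
    {
        if (pool[i] == NULL)
        {
            pool[i] = conn;
            return;
        }
    }
    PQfinish(conn);                 /* pool already full: really disconnect */
}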
] |
[
{
"msg_contents": "While testing the just-committed executor memory leak fixes, I observe\nthat there are still slow leaks in some operations such as sorts on\nlarge text fields. With a little digging, it seems that the leak must\nbe blamed on the behavior of aset.c for large chunks. Chunks between\n1K and 64K (ALLOC_SMALLCHUNK_LIMIT and ALLOC_BIGCHUNK_LIMIT) are all\nkept in the same freelist, and when a request is made for an amount\nof memory in that range, aset.c gives back the first chunk that's\nbig enough from that freelist.\n\nFor example, let's say you are allocating and freeing roughly equal\nnumbers of 2K and 10K blocks. Over time, about half of the 2K\nrequests will be answered by returning a 10K block --- which will\nprevent the next 10K request from being filled by recycling that\n10K block, causing a new 10K block to be allocated. If aset.c\nhad given back a 2K block whenever possible, the net memory usage\nwould be static, but since it doesn't, the usage gradually creeps\nup as more and more chunks are used inefficiently. Our actual\nmaximum memory usage might be m * 2K plus n * 10K but the allocated\nspace will creep towards (m + n) * 10K where *all* the active blocks\nare the larger size.\n\nA straightforward solution would be to scan the entire freelist\nand give back the smallest block that's big enough for the request.\nThat could be pretty slow (and induce lots of swap thrashing) so\nI don't like it much.\n\nAnother idea is to split the returned chunk and put the wasted part back\nas a smaller free chunk, but I don't think this solves the problem; it\njust means that the wasted space ends up on a small-chunk freelist, not\nthat you can actually do anything with it. But maybe someone can figure\nout a variant that works better.\n\nIt might be that this behavior won't be seen much in practice and we\nshouldn't slow down aset.c at all to try to deal with it. But I think\nit's worth looking for solutions that won't slow typical cases much.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Jul 2000 23:48:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance problem in aset.c"
},
{
"msg_contents": "\nCan you have a set of free lists. Like chunks of 2^8 or bigger,\n2^9, 2^10, 2^11 etc. It should be faster than finding the first block\nlike now and error would be mostly bounded to a factor of 2.\n\nTom Lane wrote:\n> \n> While testing the just-committed executor memory leak fixes, I observe\n> that there are still slow leaks in some operations such as sorts on\n> large text fields. With a little digging, it seems that the leak must\n> be blamed on the behavior of aset.c for large chunks. Chunks between\n> 1K and 64K (ALLOC_SMALLCHUNK_LIMIT and ALLOC_BIGCHUNK_LIMIT) are all\n> kept in the same freelist, and when a request is made for an amount\n> of memory in that range, aset.c gives back the first chunk that's\n> big enough from that freelist.\n> \n> For example, let's say you are allocating and freeing roughly equal\n> numbers of 2K and 10K blocks. Over time, about half of the 2K\n> requests will be answered by returning a 10K block --- which will\n> prevent the next 10K request from being filled by recycling that\n> 10K block, causing a new 10K block to be allocated. If aset.c\n> had given back a 2K block whenever possible, the net memory usage\n> would be static, but since it doesn't, the usage gradually creeps\n> up as more and more chunks are used inefficiently. Our actual\n> maximum memory usage might be m * 2K plus n * 10K but the allocated\n> space will creep towards (m + n) * 10K where *all* the active blocks\n> are the larger size.\n> \n> A straightforward solution would be to scan the entire freelist\n> and give back the smallest block that's big enough for the request.\n> That could be pretty slow (and induce lots of swap thrashing) so\n> I don't like it much.\n> \n> Another idea is to split the returned chunk and put the wasted part back\n> as a smaller free chunk, but I don't think this solves the problem; it\n> just means that the wasted space ends up on a small-chunk freelist, not\n> that you can actually do anything with it. But maybe someone can figure\n> out a variant that works better.\n> \n> It might be that this behavior won't be seen much in practice and we\n> shouldn't slow down aset.c at all to try to deal with it. But I think\n> it's worth looking for solutions that won't slow typical cases much.\n> \n> regards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 14:05:29 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem in aset.c"
},
{
"msg_contents": "At 23:48 11/07/00 -0400, Tom Lane wrote:\n>Another idea is to split the returned chunk and put the wasted part back\n>as a smaller free chunk, but I don't think this solves the problem; it\n>just means that the wasted space ends up on a small-chunk freelist, not\n>that you can actually do anything with it. But maybe someone can figure\n>out a variant that works better.\n\nCan you maintain one free list for each power of 2 (which it might already\nbe doing by the look of it), and always allocate the max size for the list.\nThen when you want a 10k chunk, you get a 16k chunk, but you know from the\nrequest size which list to go to, and anything on the list will satisfy the\nrequirement.\n\nIn the absolute worst case this will double your memory consumption, and on\naverage should only use 50% more memory. If this worries you, then maintain\ntwice as many lists and halve the wastage. \n\nThe process of finding the right list is just a matter of examining the MSB\nof the chunk size. If you want twice the number of lists, then look at the\ntop two bits etc.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 12 Jul 2000 14:20:08 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem in aset.c"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> Can you maintain one free list for each power of 2 (which it might already\n> be doing by the look of it), and always allocate the max size for the list.\n> Then when you want a 10k chunk, you get a 16k chunk, but you know from the\n> request size which list to go to, and anything on the list will satisfy the\n> requirement.\n\nThat is how it works for small chunks (< 1K with the current\nparameters). I don't think we want to do it that way for really\nhuge chunks though.\n\nMaybe the right answer is to eliminate the gap between small chunks\n(which basically work as Philip sketches above) and huge chunks (for\nwhich we fall back on malloc). The problem is with the stuff in\nbetween, for which we have a kind of half-baked approach...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 01:21:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance problem in aset.c "
},
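[Editor's illustration -- not part of the archived thread. The power-of-two bucketing Chris and Philip suggest, and which Tom says aset.c already applies to small chunks, amounts to mapping a request size to a freelist index and rounding the allocation up to that bucket's size, so any cached chunk on a list can satisfy any request that maps there. This sketch is not the actual aset.c code; MIN_CHUNK_LOG and NUM_FREELISTS are made-up parameters.]

#include <stddef.h>

#define MIN_CHUNK_LOG 4             /* smallest bucket: 16 bytes */
#define NUM_FREELISTS 12            /* largest bucket: 32K; beyond that, malloc() */

/* Map a request size to a freelist index; -1 means "huge chunk, use malloc". */
int
freelist_index(size_t size)
{
    int     idx = 0;
    size_t  chunk = (size_t) 1 << MIN_CHUNK_LOG;

    while (chunk < size)
    {
        chunk <<= 1;
        if (++idx >= NUM_FREELISTS)
            return -1;              /* too big for any bucket */
    }
    return idx;
}

/* The size actually carved out for a request that maps to bucket "idx". */
size_t
freelist_chunk_size(int idx)
{
    return (size_t) 1 << (MIN_CHUNK_LOG + idx);
}

[A 10K request maps to the 16K bucket, so a freed chunk can always be reused by any later request on its list, at the cost of at most a factor of two of wasted space -- the trade-off Philip describes.]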
{
"msg_contents": "* Tom Lane <[email protected]> [000711 22:23] wrote:\n> Philip Warner <[email protected]> writes:\n> > Can you maintain one free list for each power of 2 (which it might already\n> > be doing by the look of it), and always allocate the max size for the list.\n> > Then when you want a 10k chunk, you get a 16k chunk, but you know from the\n> > request size which list to go to, and anything on the list will satisfy the\n> > requirement.\n> \n> That is how it works for small chunks (< 1K with the current\n> parameters). I don't think we want to do it that way for really\n> huge chunks though.\n> \n> Maybe the right answer is to eliminate the gap between small chunks\n> (which basically work as Philip sketches above) and huge chunks (for\n> which we fall back on malloc). The problem is with the stuff in\n> between, for which we have a kind of half-baked approach...\n\nEr, are you guys seriously layering your own general purpose allocator\nover the OS/c library allocator?\n\nDon't do that!\n\nThe only time you may want to do this is if you're doing a special purpose\nallocator like a zone or slab allocator, otherwise it's a pessimization.\nThe algorithms you're discussing to fix these leaks have been implemented\nin almost any modern allocator that I know of.\n\nSorry if i'm totally off base, but \"for which we fall back on malloc\"\nmakes me wonder what's going on here.\n\nthanks,\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Tue, 11 Jul 2000 22:33:12 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem in aset.c"
},
{
"msg_contents": "At 01:21 12/07/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> Can you maintain one free list for each power of 2 (which it might already\n>> be doing by the look of it), and always allocate the max size for the list.\n>> Then when you want a 10k chunk, you get a 16k chunk, but you know from the\n>> request size which list to go to, and anything on the list will satisfy the\n>> requirement.\n>\n>Maybe the right answer is to eliminate the gap between small chunks\n>(which basically work as Philip sketches above) and huge chunks (for\n>which we fall back on malloc). The problem is with the stuff in\n>between, for which we have a kind of half-baked approach...\n\nThat sounds good to me. \n\nYou *might* want to enable some kind of memory statistics in shared memory\n(for a mythical future repoting tool) so you can see how many memory\nallocations fall into the 'big chunk' range, and adjust your definition of\n'big chunk' appropriately.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 12 Jul 2000 15:40:38 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem in aset.c "
},
{
"msg_contents": "At 22:33 11/07/00 -0700, Alfred Perlstein wrote:\n>\n>The only time you may want to do this is if you're doing a special purpose\n>allocator like a zone or slab allocator, otherwise it's a pessimization.\n\nThat is why it is done (I believe) - memory can be allocated using one\ncontext then freed as a block when the context is no longer used.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 12 Jul 2000 16:30:10 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem in aset.c"
},
{
"msg_contents": "Alfred Perlstein wrote:\n> * Tom Lane <[email protected]> [000711 22:23] wrote:\n> > Philip Warner <[email protected]> writes:\n> > > Can you maintain one free list for each power of 2 (which it might already\n> > > be doing by the look of it), and always allocate the max size for the list.\n> > > Then when you want a 10k chunk, you get a 16k chunk, but you know from the\n> > > request size which list to go to, and anything on the list will satisfy the\n> > > requirement.\n> >\n> > That is how it works for small chunks (< 1K with the current\n> > parameters). I don't think we want to do it that way for really\n> > huge chunks though.\n> >\n> > Maybe the right answer is to eliminate the gap between small chunks\n> > (which basically work as Philip sketches above) and huge chunks (for\n> > which we fall back on malloc). The problem is with the stuff in\n> > between, for which we have a kind of half-baked approach...\n>\n> Er, are you guys seriously layering your own general purpose allocator\n> over the OS/c library allocator?\n>\n> Don't do that!\n>\n> The only time you may want to do this is if you're doing a special purpose\n> allocator like a zone or slab allocator, otherwise it's a pessimization.\n> The algorithms you're discussing to fix these leaks have been implemented\n> in almost any modern allocator that I know of.\n>\n> Sorry if i'm totally off base, but \"for which we fall back on malloc\"\n> makes me wonder what's going on here.\n\n To clearify this:\n\n I developed this in aset.c because of the fact that we use\n alot (really alot) of very small chunks beeing palloc()'d.\n Any allocation must be remembered in some linked lists to\n know what to free at memory context reset or destruction. In\n the old version, every however small amount was allocated\n using malloc() and remembered separately in one huge List for\n the context. Traversing this list was awfully slow when a\n context said bye. And I saw no way to speedup this traversal.\n\n With the actual concept, only big chunks are remembered for\n their own. All small allocations aren't tracked that\n accurately and memory context destruction simply can throw\n away all the blocks allocated for it.\n\n At the time I implemented it it gained a speedup of ~10% for\n the regression test. It's an approach of gaining speed by\n wasting memory.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Wed, 12 Jul 2000 13:27:50 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Performance problem in aset.c"
},
{
"msg_contents": "> At 23:48 11/07/00 -0400, Tom Lane wrote:\n> >Another idea is to split the returned chunk and put the wasted part back\n> >as a smaller free chunk, but I don't think this solves the problem; it\n> >just means that the wasted space ends up on a small-chunk freelist, not\n> >that you can actually do anything with it. But maybe someone can figure\n> >out a variant that works better.\n> \n> Can you maintain one free list for each power of 2 (which it might already\n> be doing by the look of it), and always allocate the max size for the list.\n> Then when you want a 10k chunk, you get a 16k chunk, but you know from the\n> request size which list to go to, and anything on the list will satisfy the\n> requirement.\n\nSounds like the BSD malloc memory manager. I can post details, or the\npages from a kernel book I have about it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jul 2000 09:12:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem in aset.c"
},
{
"msg_contents": "\nOn Tue, 11 Jul 2000, Tom Lane wrote:\n\n> A straightforward solution would be to scan the entire freelist\n> and give back the smallest block that's big enough for the request.\n> That could be pretty slow (and induce lots of swap thrashing) so\n> I don't like it much.\n\n Yes. It is right solution. A little better chunk selection can be allowed\nwith more freelist than current 8. If we will 16 free lists a chunk real \nsize and request size will more similar.\n\n> Another idea is to split the returned chunk and put the wasted part back\n> as a smaller free chunk, but I don't think this solves the problem; it\n> just means that the wasted space ends up on a small-chunk freelist, not\n> that you can actually do anything with it. But maybe someone can figure\n> out a variant that works better.\n\n If I good understand it is a little like my idea with one-free-chunk from\nblock residual space.\n\n It is not bad idea, but we have aligned chunks now. A question is if in \nwasted part will always space for new *aligned* chunk. And second question,\nnot will this method create small and smaller chunks? For example you \nafter 1000 free/alloc you will have very much small chunks (from the wasted \npart).\n\nThe chunk from wasted part is good, but you must have usage of this chunks.\n\n*IMHO* current new mem design create new demand on context. In old design\nwe used one context for more different proccess and request-allocation-size \nwas more heterogeneous. But now, we use for specific operetions specific \ncontexts and it probably very often create context that use very homogeneous\nchunks. For example if a testing my query cache 90% of all alocation are in\nthe range 16-32b and one cached plan need 2000-5000b --- effect it that\nquery cache not need 8Kb blocks ...etc. Very simular situation will in \nother parts in PG.\n Tom add to AllocSet context defaul/max block size setting for each context.\nIt is good and it is first step to more specific context setting. Hmm, \nI haven't idea for some next step now... But something setting is needful \nfor chunks operations (for example primary-default chunk size and from this\nfrequent freelist, big chunk limit setting ..etc.)\n\n\t\t\t\t\t\tKarel\n \n\n\n\n \n\n\n",
"msg_date": "Thu, 13 Jul 2000 12:04:34 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem in aset.c"
}
] |
[
{
"msg_contents": "\n> > I don't like that --- seems it would put a definite crimp in the\n> > whole point of TOAST, which is not to have arbitrary limits on field\n> > sizes.\n> \n> If we can solve it, let's do so. If we cannot, let's restrict\n> it for 7.1.\n\nHow are you doing the index toasting currently ? Is it on the same \nline as table toasting ? That is: toast some index column values if the key \nexceeds 2k ?\n\nAndreas\n",
"msg_date": "Wed, 12 Jul 2000 12:32:03 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: update on TOAST status'"
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n>\n> > > I don't like that --- seems it would put a definite crimp in the\n> > > whole point of TOAST, which is not to have arbitrary limits on field\n> > > sizes.\n> >\n> > If we can solve it, let's do so. If we cannot, let's restrict\n> > it for 7.1.\n>\n> How are you doing the index toasting currently ? Is it on the same\n> line as table toasting ? That is: toast some index column values if the key\n> exceeds 2k ?\n\n The current CVS is broken in that area. You'll notice as soon\n as you have many huge \"text\" values in an index, update them,\n vacuum and continue to update.\n\n The actual behaviour of the toaster is to toast each tuple\n until it has a delicious looking, brown and crispy surface.\n The indicator for beeing delicious is that it shrank below\n MaxTupleSize/4 - that's a little less than 2K in a default 8K\n blocksize setup.\n\n It then sticks the new tuple into the HeapTuple's t_data\n pointer.\n\n Index inserts are allways done after heap_insert() or\n heap_update(). At that time, the index tuples will be built\n from the values found in the now replaced heap tuple. And\n since the heap tuple found now is allways smaller than 2K,\n any combination of attributes out of it must be too (it's\n impossible to specify one and the same attribute multiple\n times in one index).\n\n So the indices simply inherit the toasting result. If a value\n got compressed, the index will store the compressed format.\n If it got moved off, the index will hold the toast entry\n reference for it.\n\n One of the biggest advantages is this: In the old system, an\n indexed column of 2K caused 2K be stored in the heap plus 2K\n stored in the index. Plus all the 2K instances in upper index\n block range specs. Now, the heap and the index will only\n hold references or compressed items.\n\n Absolutely no problem for compressed items. All information\n to recreate the original value is in the Datum itself.\n\n For external stored ones, the reference tells the OIDs of the\n secondary relation and it's index (where to find the data of\n this entry), a unique identifier of the item (another OID)\n and some other info. So the reference contains all the\n information required to fetch the data just by looking at the\n reference. And since the detoaster scans the secondary\n relation with a visibility of SnapShotAny, it'll succeed to\n find them even if they've been deleted long ago by another\n committed transaction. So index traversal will succeed on\n that in any case.\n\n What I didn't knew at the time of implementation is, that\n btree indices can keep such a reference in upper level blocks\n range specifications even after a vacuum successfully deleted\n the index tuple holding the reference itself. That's the\n current pity.\n\n Thus, if vacuum finally removed deleted tuples from the\n secondary relations (after the heap and index have been\n vacuumed), the detoaster cannot find those entries,\n referenced by upper index blocks, any more.\n\n Maybe we could propagate key range changes into upper blocks\n at index_delete() time. Will look at the btree code now.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Wed, 12 Jul 2000 14:41:17 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: AW: update on TOAST status'"
},
{
"msg_contents": "I wrote:\n>\n> Maybe we could propagate key range changes into upper blocks\n> at index_delete() time. Will look at the btree code now.\n\n After looking at the vacuum code it doesn't seem to be a good\n idea. Doing so would require to traverse the btree first,\n while the current implementation just grabs the block by\n index ctid and pulls out the tuple. I would expect it to\n significantly slow down vacuum again - what we all don't\n want.\n\n So the only way left is recreating the indices from scratch\n and moving the new ones into place.\n\n But in contrast to things like column dropping, this would\n have to happen on every vacuum run for alot of tables.\n\n Isn't it appropriate to have a specialized version of it for\n this case instead of waiting for a general relation\n versioning?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Wed, 12 Jul 2000 21:11:29 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: AW: update on TOAST status'"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> So the only way left is recreating the indices from scratch\n> and moving the new ones into place.\n> But in contrast to things like column dropping, this would\n> have to happen on every vacuum run for alot of tables.\n> Isn't it appropriate to have a specialized version of it for\n> this case instead of waiting for a general relation\n> versioning?\n\nI don't see a \"specialized\" way that would be any different in\nperformance from a \"generalized\" solution. The hard part AFAICT is how\ndoes a newly-started backend discover the current version numbers for\nthe critical system tables and indexes. To do versioning of system\nindexes at all, we need a full-fledged solution.\n\nBut as you pointed out before, none of the system indexes are on\ntoastable datatypes. (I just checked --- the only index opclasses used\nin template1 are: int2_ops int4_ops oid_ops char_ops oidvector_ops\nname_ops.) Maybe we could have an interim solution using the old method\nfor system indexes and a drop-and-rebuild approach for user indexes.\nA crash partway through rebuild would leave you with a busted index,\nbut maybe WAL could take care of redoing the index build after restart.\n(Of course, if the index build failure is reproducible, you're in\nbig trouble...)\n\nI don't *like* that approach a whole lot; it's ugly and doesn't sound\nall that reliable. But if we don't want to deal with relation\nversioning for 7.1, maybe it's the only way for now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 16:01:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: update on TOAST status' "
}
] |
[
{
"msg_contents": "Although it sounds like a good idea, over the years I've not seen a split\nreally work well unless there is a real divide between them.\n\nLike our Hackers & Interfaces lists now, they are quite different, but some\nproblems relating to say JDBC also affect ODBC, and soon people start cross\nposting removing the point of the split in the first place.\n\nPeter\n\n--\nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n\n\n-----Original Message-----\nFrom: Gunnar R|nning [mailto:[email protected]]\nSent: Wednesday, July 12, 2000 12:06 PM\nTo: Peter Mount\nCc: Bruce Momjian; Peter Mount; PostgreSQL Developers List (E-mail);\nPostgreSQL Interfaces (E-mail)\nSubject: Re: [HACKERS] Contacting me\n\n\n\"Peter Mount\" <[email protected]> writes:\n\n> I get that a lot here, spending about 2 hours an evening reading mail,\nthen\n> realising there's not enough time to do any programming :-(\n\nWhat about trying to partition into more mailing lists with more specific\ntopics ?\n\nFor instance when it comes to interfaces I'm really only interested in the\njdbc interface. \n \n\tGunnar\n",
"msg_date": "Wed, 12 Jul 2000 12:36:02 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Contacting me"
}
] |
[
{
"msg_contents": "\n> Philip's INSERT ... RETURNING idea could support returning TID and\n> table OID as a special case, and it has the saving grace that it\n> won't affect apps that don't use it...\n\nYes, and the current fe-be protocol can handle it, since an on insert rule\ncan also return a select result for an insert statement.\n\nAndreas\n",
"msg_date": "Wed, 12 Jul 2000 14:58:20 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Re: postgres TODO "
}
] |
[
{
"msg_contents": "\nI have been speaking to Pavel about pg_dump support of blobs, and he thinks\nit is important to allow for some kind of human-readable version of the\ndump to be created.\n\nMy guess is that this will involve a plain text schema dump, followed by\nall BLOBs in separate files, and a script to load them. To implement this\nI'll obviosly need to be passed a directory/file location for the script\nsince I can't pipe seperate files to stdout.\n\nI'd be interested in knowing what features people think are important in\nthis kind of format; what do you need to do with the blob files, what do\npeple want to edit, etc etc.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 12 Jul 2000 23:51:13 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump & blobs - editable dump?"
},
{
"msg_contents": "Philip Warner wrote:\n> My guess is that this will involve a plain text schema dump, followed by\n> all BLOBs in separate files, and a script to load them. To implement this\n> I'll obviosly need to be passed a directory/file location for the script\n> since I can't pipe seperate files to stdout.\n\nuuencode the blobs, perhaps, using a shar-like format?\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 12 Jul 2000 10:13:39 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump & blobs - editable dump?"
},
{
"msg_contents": "At 10:13 12/07/00 -0400, Lamar Owen wrote:\n>Philip Warner wrote:\n>> My guess is that this will involve a plain text schema dump, followed by\n>> all BLOBs in separate files, and a script to load them. To implement this\n>> I'll obviosly need to be passed a directory/file location for the script\n>> since I can't pipe seperate files to stdout.\n>\n>uuencode the blobs, perhaps, using a shar-like format?\n\nFor the human readable version, the request was to make it editable and\nsendable to psql. As a result the BLOBs need to be in their binary format\nOR psql needs to support BLOB import from stdin. As a first pass I was\nhoping for the simple 'dump them into files' solution.\n\nWhat I am confused by is what people actually want to do with a load of\nBLOBs sitting in a directory; if there are specific needs, then I'd also\nlike to cater for them in the custom file formats.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 13 Jul 2000 00:21:53 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump & blobs - editable dump?"
},
{
"msg_contents": "Philip Warner wrote:\n> At 10:13 12/07/00 -0400, Lamar Owen wrote:\n> >Philip Warner wrote:\n> >> I'll obviosly need to be passed a directory/file location for the script\n> >> since I can't pipe seperate files to stdout.\n> >\n> >uuencode the blobs, perhaps, using a shar-like format?\n \n> For the human readable version, the request was to make it editable and\n> sendable to psql. As a result the BLOBs need to be in their binary format\n> OR psql needs to support BLOB import from stdin. As a first pass I was\n> hoping for the simple 'dump them into files' solution.\n\nIf in a shell archive format, shouldn't it be easy enough for pg_restore\nto be made to do the stdin-to-blob thing (through whatever mechanisms\nyou're already using to get the blob back in in the first place,\ncombined with some steering/deshar-ing/uudecoding logic)? The backup\ncould even be made 'self-extracting' as shars usually are... :-) Of\ncourse, you then have to be on the watch for the usual shar trojans...\n\nIf we simply know that the backup cannot be sent to psql, but a\ndeshar-ed version can have the schema sent to psql, would that\nameliorate most concerns?\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 12 Jul 2000 10:38:57 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump & blobs - editable dump?"
},
{
"msg_contents": "At 10:38 12/07/00 -0400, Lamar Owen wrote:\n>\n>If we simply know that the backup cannot be sent to psql, but a\n>deshar-ed version can have the schema sent to psql, would that\n>ameliorate most concerns?\n>\n\nIn the current version\n\n pg_restore --schema\n\nwill send the schema to stdout\n\nIs that sufficient? Or are you strictly interested in the text output side\nof things?\n\n\n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 13 Jul 2000 01:16:49 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump & blobs - editable dump?"
},
{
"msg_contents": "Philip Warner wrote:\n> will send the schema to stdout\n \n> Is that sufficient? Or are you strictly interested in the text output side\n> of things?\n\nStrictly interested in the text output side of things, for various\nnot-necessarily-good reasons (:-)).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 12 Jul 2000 11:25:46 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump & blobs - editable dump?"
}
] |
[
{
"msg_contents": "Slashdot has a report about Interbase and their new Kilyx development\nenvironment.\n\n\thttp://slashdot.org/articles/00/07/12/124244.shtml\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jul 2000 09:52:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Interbase news"
}
] |
[
{
"msg_contents": "Why not have it using something like tar, and the first file being stored in\nascii?\n\nThat way, you could extract easily the human readable SQL but still pipe the\nblobs to stdout.\n\nPeter\n\n--\nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n\n\n-----Original Message-----\nFrom: Philip Warner [mailto:[email protected]]\nSent: Wednesday, July 12, 2000 2:51 PM\nTo: [email protected]; [email protected]\nCc: [email protected]\nSubject: [HACKERS] pg_dump & blobs - editable dump?\n\n\n\nI have been speaking to Pavel about pg_dump support of blobs, and he thinks\nit is important to allow for some kind of human-readable version of the\ndump to be created.\n\nMy guess is that this will involve a plain text schema dump, followed by\nall BLOBs in separate files, and a script to load them. To implement this\nI'll obviosly need to be passed a directory/file location for the script\nsince I can't pipe seperate files to stdout.\n\nI'd be interested in knowing what features people think are important in\nthis kind of format; what do you need to do with the blob files, what do\npeple want to edit, etc etc.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 12 Jul 2000 14:58:50 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] pg_dump & blobs - editable dump?"
},
{
"msg_contents": "[This is a multi-reply, CCs and -general removed.]\n\n From: Peter Mount <[email protected]>\n Date: Wed, 12 Jul 2000 14:58:50 +0100\n\nHi,\n\n > Why not have it using something like tar, and the first file being\n > stored in ascii?\n\nsome filesystems do not allow you to have files bigger then 2G :-( I do not\nthink that one file (even gzipped tar file) is good.\n\n\n From: Peter Mount <[email protected]>\n Date: Wed, 12 Jul 2000 15:32:10 +0100\n\n > I don't know why you would want them as separate files - just think what\n > would happen to directory search times!!\n\nNo problem, you can use one index file and hashes in it so files are then\nstored as:\n\n AA/AA/AA/00\n AA/AA/AA/01\n\nSee how squid (http://www.squid-proxy.org/) does his job here. No problem,\nI think. I really prefer this solution over one big file. You can easily\nswap files with other databases, you can even create md5sum of md5sums of\neach file so you can have a multi-md5sum of your database (you can be\nreally sure that your backup is OK, etc. :-).\n\n > That way (depending on the database design), you could handle the sql\n > & blobs separately but still have everything backed up.\n > \n > PS: Backups is formost on my mind at the moment - had an NT one blow\n > up in my face on Monday and it wasn't nice :-(\n\nNo one (I hope) is arguing about the need for backing BLOBs from the DB :-)\n-- \nPavel Jan�k ml.\[email protected]\n",
"msg_date": "Thu, 13 Jul 2000 00:35:46 +0200",
"msg_from": "[email protected] (Pavel =?iso-8859-2?q?Jan=EDk?= ml.)",
"msg_from_op": false,
"msg_subject": "Re: pg_dump & blobs - editable dump?"
}
] |
[
{
"msg_contents": "At 14:58 12/07/00 +0100, Peter Mount wrote:\n>Why not have it using something like tar, and the first file being stored in\n>ascii?\n>\n>That way, you could extract easily the human readable SQL but still pipe the\n>blobs to stdout.\n\nHas Tom Lane paid you to send this message? :-}\n\nIf anyone can send me a nice interface for reading and writing a tar file\nfrom C, I'll do it. I just don't have the inclination to learn about tar\ninternals at the moment. By 'nice' I mean that I would like:\n\n- to be able to create the archive and write files sequentially using\nsomething similar to fopen/fwrite/fclose.\n\n- open an archive and examine and read files sequentially using a similar\ninterface to opendir/readdir/fopen/fread/fclose.\n\n- Ideally open a specified file in the archive by name, but if not\npossible, then it should be easy using the 'opedir' function above.\n\nThis would be a very useful library, I am sure. It also needs to be\nlicensable under BSD to go into the PG distribution.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 13 Jul 2000 00:17:28 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RE: [HACKERS] pg_dump & blobs - editable dump?"
},
{
"msg_contents": "On Thu, Jul 13, 2000 at 12:17:28AM +1000, Philip Warner wrote:\n> At 14:58 12/07/00 +0100, Peter Mount wrote:\n> >Why not have it using something like tar, and the first file being stored in\n> >ascii?\n> >\n> >That way, you could extract easily the human readable SQL but still pipe the\n> >blobs to stdout.\n> \n> If anyone can send me a nice interface for reading and writing a tar file\n> from C, I'll do it. I just don't have the inclination to learn about tar\n> internals at the moment. By 'nice' I mean that I would like:\n\ni suspect you might find a library of either tar or cpio read functions as\npart of the FreeBSD sysinstall utility.\n\n-- \n[ Jim Mercer [email protected] +1 416 410-5633 ]\n[ Reptilian Research -- Longer Life through Colder Blood ]\n[ Don't be fooled by cheap Finnish imitations; BSD is the One True Code. ]\n",
"msg_date": "Wed, 12 Jul 2000 10:24:46 -0400",
"msg_from": "Jim Mercer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE: [HACKERS] pg_dump & blobs - editable dump?"
},
{
"msg_contents": " If anyone can send me a nice interface for reading and writing a tar file\n from C, I'll do it. I just don't have the inclination to learn about tar\n internals at the moment. By 'nice' I mean that I would like:\n\nI don't know the details of the API, but the NetBSD pax code handles\ntar formats (and others) nicely and on a cursory glance seems to have\nwhat you need. Of course, the license is acceptable. If you want the\nsource, let me know.\n\nCheers,\nBrook\n",
"msg_date": "Wed, 12 Jul 2000 08:56:49 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE: [HACKERS] pg_dump & blobs - editable dump?"
}
] |
[
{
"msg_contents": "No he didn't, just I've been sort of lurking on this subject ;-)\n\nActually, tar files are simply a small header, followed by the file's\ncontents. To add another file, you simply write another header, and contents\n(which is why you can cat two tar files together and get a working file).\n\nhttp://www.goice.co.jp/member/mo/formats/tar.html has a nice brief\ndescription of the header.\n\nAs for a C api with a compatible licence, if needs must I'll write one to\nyour spec (maidast should be back online in a couple of days, so I'll be\nback in business development wise).\n\nPeter\n\n--\nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n\n\n-----Original Message-----\nFrom: Philip Warner [mailto:[email protected]]\nSent: Wednesday, July 12, 2000 3:17 PM\nTo: Peter Mount; [email protected];\[email protected]\nCc: [email protected]\nSubject: Re: [GENERAL] RE: [HACKERS] pg_dump & blobs - editable dump?\n\n\nAt 14:58 12/07/00 +0100, Peter Mount wrote:\n>Why not have it using something like tar, and the first file being stored\nin\n>ascii?\n>\n>That way, you could extract easily the human readable SQL but still pipe\nthe\n>blobs to stdout.\n\nHas Tom Lane paid you to send this message? :-}\n\nIf anyone can send me a nice interface for reading and writing a tar file\nfrom C, I'll do it. I just don't have the inclination to learn about tar\ninternals at the moment. By 'nice' I mean that I would like:\n\n- to be able to create the archive and write files sequentially using\nsomething similar to fopen/fwrite/fclose.\n\n- open an archive and examine and read files sequentially using a similar\ninterface to opendir/readdir/fopen/fread/fclose.\n\n- Ideally open a specified file in the archive by name, but if not\npossible, then it should be easy using the 'opedir' function above.\n\nThis would be a very useful library, I am sure. It also needs to be\nlicensable under BSD to go into the PG distribution.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 12 Jul 2000 15:25:24 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: RE: [HACKERS] pg_dump & blobs - editable dump?"
}
] |
[
{
"msg_contents": "\n> The actual behaviour of the toaster is to toast each tuple\n> until it has a delicious looking, brown and crispy surface.\n> The indicator for beeing delicious is that it shrank below\n> MaxTupleSize/4 - that's a little less than 2K in a default 8K\n> blocksize setup.\n\n> So the indices simply inherit the toasting result. If a value\n> got compressed, the index will store the compressed format.\n> If it got moved off, the index will hold the toast entry\n> reference for it.\n\nOk, this is where I would probably rethink the behavior.\nWould it be possible to choose which columns need to \nstay \"moved off\" in the index on the basis that the key size stays \nbelow page size / 4 ? Thus if a key fits inside the 2k\nyou don't store the reference, but the compressed values\n(even if they stay moved off for the heap table).\n\nThe timings you did only involved heap tuples not index. \nMy guess would be, that there is a similar tradeoff in the index.\nFetching toast values for header pages in an index seems like \na very expensive operation, because it needs to be performed for\nevery index access even if the searched value is not toasted\nbut falls into this range.\n\nOf course this does not solve the \"toast value\" for key already \ndeleted, but it would lower the probability somewhat.\n\nAndreas\n",
"msg_date": "Wed, 12 Jul 2000 16:29:53 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: update on TOAST status'"
}
] |
[
{
"msg_contents": "Which is why having them on stdout is still a nice option to have. You can\npipe the lot through your favourite compressor (gzip, bzip2 etc) and\nstraight on to tape, or whatever.\n\nI don't know why you would want them as separate files - just think what\nwould happen to directory search times!!\n\nHow about this as an idea:\n\t* Option to dump sql to stdout and blobs to a designated file\n\t* option to dump sql & blobs to stdout\n\t* option to dump just sql to stdout\n\t* option to dump just blobs to stdout\n\nThat way (depending on the database design), you could handle the sql &\nblobs separately but still have everything backed up.\n\nPS: Backups is formost on my mind at the moment - had an NT one blow up in\nmy face on Monday and it wasn't nice :-(\n\nPeter\n\n--\nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n\n\n-----Original Message-----\nFrom: Philip Warner [mailto:[email protected]]\nSent: Wednesday, July 12, 2000 3:22 PM\nTo: Lamar Owen\nCc: [email protected]; [email protected];\[email protected]\nSubject: Re: [HACKERS] pg_dump & blobs - editable dump?\n\n\nAt 10:13 12/07/00 -0400, Lamar Owen wrote:\n>Philip Warner wrote:\n>> My guess is that this will involve a plain text schema dump, followed by\n>> all BLOBs in separate files, and a script to load them. To implement this\n>> I'll obviosly need to be passed a directory/file location for the script\n>> since I can't pipe seperate files to stdout.\n>\n>uuencode the blobs, perhaps, using a shar-like format?\n\nFor the human readable version, the request was to make it editable and\nsendable to psql. As a result the BLOBs need to be in their binary format\nOR psql needs to support BLOB import from stdin. As a first pass I was\nhoping for the simple 'dump them into files' solution.\n\nWhat I am confused by is what people actually want to do with a load of\nBLOBs sitting in a directory; if there are specific needs, then I'd also\nlike to cater for them in the custom file formats.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 12 Jul 2000 15:32:10 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] pg_dump & blobs - editable dump?"
}
] |
[
{
"msg_contents": "At 15:32 12/07/00 +0100, Peter Mount wrote:\n>Which is why having them on stdout is still a nice option to have. You can\n>pipe the lot through your favourite compressor (gzip, bzip2 etc) and\n>straight on to tape, or whatever.\n\nWell, the custom format does that, it also does compression and can go to\nstdout. \n\n\n>I don't know why you would want them as separate files - just think what\n>would happen to directory search times!!\n\nI agree; the request was based on a desire to do something like pg_dump_lo,\nwhich puts them all in a directory, I think.\n\n\n>How about this as an idea:\n>\t* Option to dump sql to stdout and blobs to a designated file\n>\t* option to dump sql & blobs to stdout\n>\t* option to dump just sql to stdout\n>\t* option to dump just blobs to stdout\n>\n\nThe sql is *tiny* compared to most BLOB contents. The new pg_dump currently\nsupports:\n\n * schema, table data, & blobs\n * schema, table data\n * schema\n * table data & blobs\n * table data\n\nBLOBS without table data are not recomended since the process of relinking\nthe BLOBs to the tables is *only* performed on tables that are restored.\nThis is to allow import of BLOBS & tables into existing DBs. As a result\nyour fourth option is not really an option. The other three are already\ncovered.\n\nAny single-file format (tar would be one of those) can be sent to stdout,\nand BLOBs are not supported in plain-text output (for obvious reasons).\n\n\n>That way (depending on the database design), you could handle the sql &\n>blobs separately but still have everything backed up.\n\nUnfortunately the data and BLOBS need to go together.\n\n\n>PS: Backups is formost on my mind at the moment - had an NT one blow up in\n>my face on Monday and it wasn't nice :-(\n\nWith the current version you should be able to do:\n\n pg_dump -Fc --blobs | /dev/myfavoritetapedrive\n\nto backup the entire database, with compressed data, to tape.\n\nAnd \n\n cat /dev/mt | pg_restore --db=dbname\n\nto restore the entire db into the specified database\n\nOr,\n\n pg_dump -Fc --blobs | pg_restore --db=dbname\n\nto copy a database with blobs...\n\nSo, in summary, I think most of what you want is already there. It's just\nthe human-readable part that's a problem.\n\n*Please* let me know if there is some issue I have not considered...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 13 Jul 2000 00:56:24 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] pg_dump & blobs - editable dump?"
}
] |
[
{
"msg_contents": "> > Just wanted to give a short message, that current snapshot does not \n> > compile on AIX due to the fmgr changes.\n> \n> Without more details, that's completely unhelpful.\n\nYes sorry, I just wanted to know if somebody already knows, \nor is working on it, before going into detail.\n\nxlc -I../../../include -I/usr/local/include -qmaxmem=16384 -qhalt=w -qsrcmsg\n-qlanglvl=ext\nended -qlonglong -I/usr/local/include -c -o dfmgr.o dfmgr.c\n\"../../../include/utils/dynamic_loader.h\", line 26.14: 1506-276 (S) Syntax\nerror: possible\n missing identifier?\n\"../../../include/utils/dynamic_loader.h\", line 26.14: 1506-282 (S) The type\nof the parame\nters must be specified in a prototype.\n\"../../../include/utils/dynamic_loader.h\", line 27.19: 1506-343 (S)\nRedeclaration of dlsym\n differs from previous declaration on line 36 of\n\"../../../include/dynloader.h\".\n\"../../../include/utils/dynamic_loader.h\", line 27.19: 1506-050 (I) Return\ntype \"unsigned\nlong(*)(struct FunctionCallInfoData*)\" in redeclaration is not compatible\nwith the previou\ns return type \"void*\".\n\"../../../include/utils/dynamic_loader.h\", line 28.13: 1506-343 (S)\nRedeclaration of dlclo\nse differs from previous declaration on line 38 of\n\"../../../include/dynloader.h\".\n\"../../../include/utils/dynamic_loader.h\", line 28.13: 1506-050 (I) Return\ntype \"void\" in\nredeclaration is not compatible with the previous return type \"int\".\n\"../../../include/utils/dynamic_loader.h\", line 29.25: 1506-215 (E) Too many\narguments spe\ncified for macro pg_dlerror.\ngmake[4]: *** [dfmgr.o] Error 1\n\nWhy do you declare dlopen, dlsym, ... in dynamic_loader.h ?\nThey are defined in the port specific dynloader.h .\nWhy do you use \"void pg_dlclose\" when dlclose is \"int dlclose\" ?\nThis makes a wrapper function necessary.\n\nSame with \"PGFunction pg_dlsym\" when dlsym is \"void *dlsym\"\n\nWhat I tried is in the attachment, but I guess at least the cast in dfmgr.c\nis unwanted.\n\nAndreas",
"msg_date": "Wed, 12 Jul 2000 17:41:22 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: fmgr changes not yet ported to AIX "
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> Why do you declare dlopen, dlsym, ... in dynamic_loader.h ?\n> They are defined in the port specific dynloader.h .\n> Why do you use \"void pg_dlclose\" when dlclose is \"int dlclose\" ?\n> This makes a wrapper function necessary.\n\nIt seems to me that the real problem here is that someone tried to\ntake shortcuts in the AIX dynloader code. Instead of implementing\nthe same interface that the rest of the ports support, the AIX files\ntry to force their own definition of the pg_dlXXX functions --- and\nfor what? To save a wrapper function? These are hardly performance-\ncritical routines, so I don't see the point.\n\nI propose the following changes instead. I don't have any way to\ntest them however --- would you check them?\n\n\t\t\tregards, tom lane\n\n*** aix.h~\tMon Jul 17 00:40:12 2000\n--- aix.h\tMon Jul 17 00:41:34 2000\n***************\n*** 45,56 ****\n \n #ifdef __cplusplus\n }\n- \n #endif\n- \n- #define pg_dlopen(f)\tdlopen(filename, RTLD_LAZY)\n- #define pg_dlsym(h,f)\tdlsym(h, f)\n- #define pg_dlclose(h)\tdlclose(h)\n- #define pg_dlerror()\tdlerror()\n \n #endif\t /* __dlfcn_h__ */\n--- 45,50 ----\n*** aix.c~\tMon Jul 17 00:40:19 2000\n--- aix.c\tMon Jul 17 00:45:34 2000\n***************\n*** 601,603 ****\n--- 601,631 ----\n \tfree(buf);\n \treturn ret;\n }\n+ \n+ /*\n+ * PostgreSQL interface to the above functions\n+ */\n+ \n+ void *\n+ pg_dlopen(char *filename)\n+ {\n+ \treturn dlopen(filename, RTLD_LAZY);\n+ }\n+ \n+ PGFunction\n+ pg_dlsym(void *handle, char *funcname)\n+ {\n+ \treturn (PGFunction) dlsym(handle, funcname);\n+ }\n+ \n+ void\n+ pg_dlclose(void *handle)\n+ {\n+ \tdlclose(h);\n+ }\n+ \n+ char *\n+ pg_dlerror()\n+ {\n+ \treturn dlerror();\n+ }\n",
"msg_date": "Mon, 17 Jul 2000 00:55:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: fmgr changes not yet ported to AIX "
}
] |
[
{
"msg_contents": "\n> Yes it is a multi-key index, and the matches are exact.\n> \n> Someone else asked why I have separated these fields out from the\n> mail_date.\n> \n> If I didn't, and I wanted to see the messages for this month, I'd have\n> to regex and that would overwhelm the database.\n\nAs I said in that mail you could use a between first and last day of month\nwhich can use an index and do the order by with that index.\n\nUnfortunately a datepart(mail_date, year to month) is probably not \nunderstood as indexable by the optimizer.\n\nAndreas\n",
"msg_date": "Wed, 12 Jul 2000 17:49:53 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: 7.0.2 issues / Geocrawler"
}
] |
[
{
"msg_contents": "\n> Note that I don't talk about overwriting/non-overwriting smgr at all!\n> It's not issue. There are no problems with keeping dead \n> tuples in files\n> as long as required. When I told about new smgr I meant \n> ability to re-use\n> space without vacuum and store > 1 tables per file.\n> But I'll object storing transaction commit times in tuple header and\n> old-designed pg_time. If you want to do TT - welcome... but make\n> it optional, without affect for those who need not in TT.\n\nThank you for explaining, I did not see that as a possibility. \nNow, if we had a possibility to tune/specify what is allowed to be \noverwritten, then an online OS backup would still be possible \nwith that design, maybe even some limited TT.\n\nBut: I do think that if we are going towards overwrite smgr, then the\nonly way that we also gain the advantages of it, is to update inplace.\n1. index update only if indexed column changes\n2. reduce IO (many updates to one row --> only one data write)\n\nAndreas\n",
"msg_date": "Wed, 12 Jul 2000 18:57:57 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: postgres 7.2 features."
}
] |
[
{
"msg_contents": "\n> >> We can't \"drop and recreate\" without a solution to the relation\n> >> versioning issue (unless you are prepared to accept a nonfunctional\n> >> database after a failure partway through index rebuild on a system\n> >> table). I think we should do this, but it's not all that simple...\n> \n> > Is this topic independent of WAL in the first place ?\n> \n> Sure, unless Vadim sees some clever way of using WAL to eliminate\n> the need for versioned relations. But as far as I've seen in the\n> discussions, versioned relations are independent of WAL.\n\nWAL can solve the versioned relations problem.\nRemember that a sure new step in postmaster startup will be a rollforward of\nthe WAL,\nsince that will have the only sync write of our last txn's. Thus in this\nstep it can also \ndo any pending rename or delete of files. If a rename or delete fails we \nbail out, since we don't want postmaster running under such circumstances\nanyway.\n\nAndreas\n",
"msg_date": "Wed, 12 Jul 2000 18:58:28 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Vacuum only with 20% old tuples "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Zeugswetter Andreas SB\n>\n> > >> We can't \"drop and recreate\" without a solution to the relation\n> > >> versioning issue (unless you are prepared to accept a nonfunctional\n> > >> database after a failure partway through index rebuild on a system\n> > >> table). I think we should do this, but it's not all that simple...\n> >\n> > > Is this topic independent of WAL in the first place ?\n> >\n> > Sure, unless Vadim sees some clever way of using WAL to eliminate\n> > the need for versioned relations. But as far as I've seen in the\n> > discussions, versioned relations are independent of WAL.\n>\n> WAL can solve the versioned relations problem.\n> Remember that a sure new step in postmaster startup will be a\n> rollforward of\n> the WAL,\n> since that will have the only sync write of our last txn's. Thus in this\n> step it can also\n> do any pending rename or delete of files.\n\nHmm,don't you allow DDL commands inside transaction block ?\n\nIf we allow DDL commands inside transaction block,WAL couldn't\npostpone all rename/unlink operations until the end of transaction\nwithout a resolution of the conflict of table file name.\n\nFor the following queries\n\n begin;\n drop table t;\n create table t (..);\n insert into t values (...);\n commit;\n\nthe old table file of t must vanish(using unlink() etc) before 'create table\nt'\nunless new file name is different from old one(OID file name would\nresolve the conflict in this case).\nTo unlink/rename the table file immediately isn't a problem for the\nrollforward functionality. It seems a problem of rollback functionality.\n\n> If a rename or delete fails we\n> bail out, since we don't want postmaster running under such circumstances\n> anyway.\n\nNo there's a significant difference between the failure of 'delete'\nand that of 'rename'. We would have no consistency problem even\nthough 'delete' fails and wouldn't have to stop postmaster. But we\nwouldn't be able to see the renamed relation in case of 'rename'\nfailure and an excellent(??) dba would have to recover the inconsistency.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Fri, 14 Jul 2000 12:19:50 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Vacuum only with 20% old tuples "
}
] |
[
{
"msg_contents": "\n> Your ideas for selecting based on the date are intriguing, however the\n> schema of the db was not done with that in mind. Everyone thinks I'm a\n> nut when I say this, but the date is stored in a char(14) field in\n> gregorian format: 19990101125959\n\nPerfect, that makes it a lot easier:\n\n1. index on (mail_list, mail_date)\n2. SELECT mailid, mail_date, mail_is_followup, mail_from, mail_subject \n FROM mail_archive WHERE mail_list=35 \n AND mail_date between '20000100' and '20000199'\n ORDER BY mail_list DESC, mail_date DESC LIMIT 26 OFFSET 0;\n\nNote the appended 00 and 99 which is generic for all months.\n\nAndreas\n",
"msg_date": "Wed, 12 Jul 2000 19:18:44 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > Your ideas for selecting based on the date are intriguing, however the\n> > schema of the db was not done with that in mind. Everyone thinks I'm a\n> > nut when I say this, but the date is stored in a char(14) field in\n> > gregorian format: 19990101125959\n> \n> Perfect, that makes it a lot easier:\n> \n> 1. index on (mail_list, mail_date)\n> 2. SELECT mailid, mail_date, mail_is_followup, mail_from, mail_subject\n> FROM mail_archive WHERE mail_list=35\n> AND mail_date between '20000100' and '20000199'\n> ORDER BY mail_list DESC, mail_date DESC LIMIT 26 OFFSET 0;\n> \n> Note the appended 00 and 99 which is generic for all months.\n\nshouldn't it be between '20000100000000' and '20000199000000'?\n\nI've never indexed that date column, because it is likely that there are\n3 million+ different dates in there - remember 4 million emails sent\nover the course of 15 years are likely to have a lot of different dates,\nwhen the hour/minute/second is attached.\n\nYou still think that will work?\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Wed, 12 Jul 2000 10:44:46 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: 7.0.2 issues / Geocrawler"
},
{
"msg_contents": "Tim Perdue <[email protected]> writes:\n> Zeugswetter Andreas SB wrote:\n>> 1. index on (mail_list, mail_date)\n>> 2. SELECT mailid, mail_date, mail_is_followup, mail_from, mail_subject\n>> FROM mail_archive WHERE mail_list=35\n>> AND mail_date between '20000100' and '20000199'\n>> ORDER BY mail_list DESC, mail_date DESC LIMIT 26 OFFSET 0;\n>> \n>> Note the appended 00 and 99 which is generic for all months.\n\n> shouldn't it be between '20000100000000' and '20000199000000'?\n\nShouldn't matter, given that this is a char() field and not a numeric...\n\n> I've never indexed that date column, because it is likely that there are\n> 3 million+ different dates in there - remember 4 million emails sent\n> over the course of 15 years are likely to have a lot of different dates,\n> when the hour/minute/second is attached.\n\nWhat of it? There will be one index entry per table row in any case.\nActually, btree indexes work a heck of a lot better when there are a lot\nof distinct values than when there are many duplicates, so I think you'd\nfind a index on mail_date to work better than an index on mail_year and\nmail_month.\n\nI think Andreas' advice is sound. I'd still like to understand why 7.0\nis slower than 6.5 given the query as posed --- that may reveal\nsomething that needs fixing. But if you just want to get some work done\nI'd suggest trying the arrangement he recommends.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 15:01:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: 7.0.2 issues / Geocrawler "
}
] |
[
{
"msg_contents": "\nI have a bunch of these just sitting there:\n\npgsql 86144 0.0 0.0 304 120 p3 D+ 5:17PM 0:00.00 grep udmsearch\npgsql 69312 0.0 3.6 21652 18808 p1 S 12:20PM 10:09.68 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 83848 0.0 3.7 22472 19300 p1 D 4:24PM 3:02.53 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch (postgres)\npgsql 83906 0.0 1.2 21436 6044 p1 S 4:25PM 0:00.10 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 84138 0.0 0.8 21436 4256 p1 S 4:30PM 0:00.08 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 84190 0.0 0.8 21436 4200 p1 S 4:30PM 0:00.07 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 84251 0.0 0.8 21436 4292 p1 S 4:32PM 0:00.08 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 84610 0.0 0.8 21436 4272 p1 S 4:40PM 0:00.07 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 84863 0.0 0.8 21440 4184 p1 S 4:46PM 0:00.06 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 84910 0.0 0.8 21440 4384 p1 S 4:47PM 0:00.07 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 84990 0.0 0.8 21440 4312 p1 S 4:50PM 0:00.06 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 85010 0.0 0.8 21440 4268 p1 S 4:50PM 0:00.06 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 85030 0.0 0.8 21440 4252 p1 S 4:51PM 0:00.09 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 85035 0.0 0.8 21440 4328 p1 S 4:51PM 0:00.06 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 85116 0.0 0.8 21440 4296 p1 S 4:53PM 0:00.07 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 85120 0.0 0.8 21440 4324 p1 S 4:53PM 0:00.06 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 85714 0.0 0.9 21444 4628 p1 S 5:08PM 0:00.07 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 85733 0.0 0.8 21456 4412 p1 S 5:08PM 0:00.07 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 85868 0.0 0.9 21444 4492 p1 S 5:11PM 0:00.06 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 85951 0.0 0.8 21444 4236 p1 S 5:13PM 0:00.05 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting 
(postgres)\npgsql 85976 0.0 0.8 21444 4276 p1 S 5:13PM 0:00.08 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\npgsql 86039 0.0 0.8 21444 4344 p1 S 5:14PM 0:00.08 postmaster: /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch /pgsql/bin/postgres pgsql 216.126.84.1 udmsearch waiting (postgres)\n> date\nWed Jul 12 17:17:24 EDT 2000\n> \n\nI'm leaving them there for now ... any clue how to check what they are\nwaiting for? That one at the top has been waiting since 12:30 and it's now\n17:17 :) Very patient processes, eh? :)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 12 Jul 2000 18:17:54 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "[7.0.2] waiting ... ?"
}
] |
[
{
"msg_contents": "\nIgnore me ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 12 Jul 2000 18:47:29 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Just a test ..."
}
] |
[
{
"msg_contents": "I posted this to pg_admin yesterday, but I'm not sure if anyone is\nreading that list.\n---------------------------------------------------------\n\nI have Postgres 7.0.2 installed on my RedHat Linux 6.0 system, but\nrealized today that I hadn't built any of the manual pages or\ndocumentation during that installation. I downloaded 7.0.2 again, did a\n\nconfigure, and then followed the instruction to do 'gmake install' in\nthe src/postgresql-7.0.2/doc directory (in my case, the\n/database/src/postgresql-7.0.2/doc directory).\n\nThe install script keeps wanting to put the man pages into\n/usr/local/pgsql/src; however, on my system I have Postgres installed in\n\n/database/local/pgsql (i.e. Postgres is not installed in the default\n/usr/local/pgsql directory)\n\nIs there someway that I can specify this directory for the man pages\n(e.g. is there some option I can set for the 'gmake install' to\ntell it that Postgres is installed in a different directory than the\ndefault) ?\n\nThanks.\n-Tony\n\n\n\n\n",
"msg_date": "Wed, 12 Jul 2000 15:20:29 -0700",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Installing the man pages"
},
{
"msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> Is there someway that I can specify this directory for the man pages\n> (e.g. is there some option I can set for the 'gmake install' to\n> tell it that Postgres is installed in a different directory than the\n> default) ?\n\nIIRC the makefile for the docs expects you to have done a configure over\nin the src directory, and it's looking there for an override on the\ndefault install dir. Go back and run configure (needn't build) with the\nright --prefix.\n\nI think Peter E. is making this cleaner for 7.1, but that's how it\nworked last time I looked.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 19:46:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Installing the man pages "
},
{
"msg_contents": "Tom Lane wrote:\n\n> IIRC the makefile for the docs expects you to have done a configure over\n> in the src directory, and it's looking there for an override on the\n> default install dir. Go back and run configure (needn't build) with the\n> right --prefix.\n>\n>\n\nI figured out what happened.\n\nI ran 'configure --with-prefix=/database/local/pgsql' rather than\n'configure --prefix=/database/local/pgsql'.\nThe configure script didn't balk at '--with-prefix' so the error never\nentered my mind until your e-mail said '--prefix'.\n\nNow the man pages are back on my system.\n\nThanks Tom.\n-Tony\n\n\n",
"msg_date": "Wed, 12 Jul 2000 17:31:03 -0700",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Installing the man pages"
}
] |
[
{
"msg_contents": "There are quite a number of misfeatures in Postgres' current support for\naggregate functions (COUNT, SUM, etc). We've had previous discussion\nthreads about them in the past, see pghackers around 1/23/00 and 6/16/99\nfor example. I propose to fix a few things for 7.1:\n\n* Hard-wired boundary condition handling in ExecAgg prevents\n user-defined aggregates from determining their treatment of nulls,\n no input rows, etc.\n* Some SUM and AVG aggregates are vulnerable to overflow due to poor\n choice of datatypes for accumulators.\n* Dual-transition-function design is bizarre and unnecessary.\n\nThe last of these might break user-defined aggregates, if there are\nany out there that use two transition functions, so I'm soliciting\ncomments first.\n\nCurrently, ExecAgg evaluates each aggregate in the following steps:\n(initcond1, initcond2 are the initial values and sfunc1, sfunc2, and\nfinalfunc are the transition functions.)\n\n\tvalue1 = initcond1\n\tvalue2 = initcond2\n\tforeach input_value do\n\t\tvalue1 = sfunc1(value1, input_value)\n\t\tvalue2 = sfunc2(value2)\n\tvalue1 = finalfunc(value1, value2)\n\n(It's possible to omit one of sfunc1/sfunc2, and then optionally omit\nfinalfunc as well, but that doesn't affect things materially other than\ncreating a bunch of confusing rules about which combinations are legal.)\n\nThe following behaviors are currently hard-wired in ExecAgg and can't\nbe changed by the transition functions:\n* If initcond1 is NULL then the first non-NULL input_value is assigned\n directly to value1. sfunc1 isn't applied until value1 is non-NULL.\n (This behavior is useful for aggregates like min() and max() where\n you really want value1 to just be the least or greatest value so far.)\n* When the current tuple's input_value is NULL, sfunc1 and sfunc2 are\n not applied, instead the previous value1 and value2 are kept.\n* If value1 is still NULL at the end (ie, there were no non-null input\n values), then finalfunc is not invoked, instead a NULL result is\n forced.\n\nThese behaviors were necessary back when we didn't have a reasonable\ndesign for NULL handling in the function manager, but now that we do\nit is possible to allow the transition functions to control behavior\nfor NULL inputs and zero input rows, instead of hard-wiring the results.\nI propose the following definition instead:\n\n1. If sfunc1 is not marked \"strict\" in pg_proc, then it is invoked for\n every input tuple and must do all the right things for itself.\n ExecAgg will not protect it against a NULL value1 or NULL input.\n An sfunc1 coded in this style might use value1 = NULL as a flag\n for \"no input data yet\". It is up to the function whether and\n how to change the state value when the input_value is NULL.\n\n2. If sfunc1 is marked \"strict\" then it will never be called with\n NULL inputs. Therefore, ExecAgg will automatically skip tuples\n with NULL input_values, preserving the previous value1/value2.\n Also, if value1 is initially NULL, then the first non-NULL\n input_value will automatically be inserted into value1; sfunc1\n will only be called beginning with the second non-NULL input.\n (This is essentially the same as current behavior.)\n\n3. If finalfunc is not marked \"strict\" then it will be called in\n any case, even if value1 is NULL from its initial setting or\n as a result of sfunc1 having returned NULL. It is up to the\n finalfunc to return an appropriate value or NULL.\n\n4. 
If finalfunc is marked \"strict\" then it cannot be called with\n NULL inputs, so a NULL result will be substituted if value1\n or value2 is still NULL. (Same as current behavior.)\n\nsfunc2 is a somewhat peculiar appendage in this scheme. To do AVG()\ncorrectly, sfunc2 should only be called for the same input tuples that\nsfunc1 is called for (ie, not for NULLs). This implies that a \"strict\"\nsfunc2 must not be called when input_value is NULL, even though\ninput_value is not actually being passed to sfunc2! This seems like a\npretty weird abuse of the notion of function strictness. It's also quite\nunclear whether it makes any sense to have sfunc1 and sfunc2 with\ndifferent strictness settings.\n\nWhat I would *like* to do about this is eliminate value2 and sfunc2 from\nthe picture completely, leaving us with a clean single-transition-value\ndesign for aggregate handling. That would mean that AVG() would need to\nbe redone with a specialized transition datatype (holding both the sum and\ncount) and specialized transition function and final function, rather than\nusing addition/increment/divide as it does now. Given the AVG fixes I\nintend to do anyway, this isn't significant extra work as far as the\nstandard distribution is concerned. But it would break any existing\nuser-defined aggregates that may be out there, if they use two transition\nfunctions. Is there anyone who has such functions and wants to object?\n\nAs to the other point: currently, the SUM and AVG aggregates use an\naccumulator of the same datatype as the input data. This is almost\nguaranteed to cause overflow if the input column is int2, and there's\na nontrivial risk for int4 as well. It also seems bizarre to me that\nAVG on an integer column delivers an integral result --- it should\nyield a float or numeric result so as not to lose the fractional part.\n\nWhat I propose to do is reimplement SUM and AVG so that there are\nbasically only two underlying implementations: one with a \"float8\"\naccumulator and one with a \"numeric\" accumulator. The float8 style\nwould be used for either float4 or float8 input and would deliver a\nfloat8 result. The numeric style would be used for all numeric and\nintegral input types and would yield a numeric result of appropriate\nprecision.\n\n(It might also be worth reimplementing COUNT() to use a \"numeric\"\ncounter, so that it doesn't poop out at 2 billion rows. Anyone have\na strong feeling pro or con about that? It could possibly cause problems\nfor queries that are expecting COUNT() to deliver an int4.)\n\nThese changes were not practical before 7.0 because aggregates with\npass-by-reference transition data types leaked memory. The leak problem\nis gone, so we can change to wider accumulator datatypes without fear of\nthat. The code will probably run a little slower than before, but that\nseems an acceptable price for solving overflow problems.\n\nComments, objections, better ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 18:53:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proposal for aggregate-function cleanups in 7.1"
}
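A minimal sketch of what AVG could look like under the single-transition-value design proposed above. Everything here is illustrative: the names are invented, and the CREATE TYPE / CREATE AGGREGATE spelling follows later PostgreSQL syntax rather than the sfunc1/sfunc2 form being replaced. The point is only that a single composite state (sum plus count) and one transition function stand in for the value1/value2 pair, with a non-strict final function deciding the zero-row result as in rule 3.

-- Hypothetical composite state carrying both the running sum and the row count.
CREATE TYPE avg_state AS (running_sum numeric, n bigint);

-- Transition function: fold one input value into the state.
-- Declared STRICT, so per rule 2 above NULL inputs are skipped automatically.
CREATE FUNCTION my_avg_sfunc(avg_state, numeric) RETURNS avg_state AS
  'SELECT ROW($1.running_sum + $2, $1.n + 1)::avg_state'
  LANGUAGE SQL STRICT;

-- Final function: not strict, so per rule 3 it is always called and decides
-- for itself what to return when no rows were accumulated.
CREATE FUNCTION my_avg_ffunc(avg_state) RETURNS numeric AS
  'SELECT CASE WHEN $1.n = 0 THEN NULL ELSE $1.running_sum / $1.n END'
  LANGUAGE SQL;

-- One state value, one transition function, one final function.
CREATE AGGREGATE my_avg (numeric) (
  SFUNC     = my_avg_sfunc,
  STYPE     = avg_state,
  FINALFUNC = my_avg_ffunc,
  INITCOND  = '(0,0)'
);

With this shape the null handling that ExecAgg currently hard-wires becomes a property of the aggregate definition itself, and a numeric accumulator sidesteps the int2/int4 overflow problem mentioned above.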
] |
[
{
"msg_contents": "One thing that occurred to me, if I'm going to rejuggle the templates, why\nnot name them like the Makefile.${os} and the include/port/${os}.h? We\ndon't really need two matching logics and no matter how smart we make the\ntemplate matching, there's still nothing to be gained if we don't find the\nright Makefile.port and include/port/os.h (which we apparently do). So\nperhaps get rid of all the fanciness and just use the name detected by the\nbig case statement at the top of configure.in? (And then treat the subtle\ndifferences *within* the template files.)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 13 Jul 2000 00:53:48 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Template matching, a different perspective"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> One thing that occurred to me, if I'm going to rejuggle the templates, why\n> not name them like the Makefile.${os} and the include/port/${os}.h? We\n> don't really need two matching logics and no matter how smart we make the\n> template matching, there's still nothing to be gained if we don't find the\n> right Makefile.port and include/port/os.h (which we apparently do).\n\nEr, um, hmm ... for some reason I'd thought the Makefile and port files\nwere selected by the template file, but I see they ain't. I agree\nthat's pretty stupid; no point in smart template matching if the other\npart falls over.\n\nIf we are eliminating the compiler choice from the template names, then\nI think you've got a good idea: make a one-for-one correspondence\nbetween templates, Makefile.ports, and os.h's.\n\nIf we do that then I'd still like to see a --with-template option, but\nnow it'd select all three files, and would provide a way for the user\nto override that big case on $host_os. (Alternatively, if the regular\nconfigure \"--host\" option allows the same result, then we wouldn't need\n--with-template anymore.)\n\nBTW, if you are going to end up editing most or all of the templates\nanyway, I'd suggest getting rid of that hack about substituting : to =,\nand make the templates plain-vanilla shell scripts. I put the hack\nin awhile ago because I didn't want to edit all the templates, but\nI was just being lazy :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 20:32:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Template matching, a different perspective "
}
] |
[
{
"msg_contents": "\nOdd .. why is heap reporting 5899, when count() only reports 2951?\n\nglobalmatch=# select count(gid) from images;\n count \n-------\n 2951\n(1 row)\n\nglobalmatch=# create index images_gid on images using btree ( gid );\nCREATE\nglobalmatch=# vacuum verbose analyze images;\nNOTICE: --Relation images--\nNOTICE: Pages 56: Changed 0, reaped 0, Empty 0, New 0; Tup 5899: Vac 0, Keep/VTL 2948/0, Crash 0, UnUsed 0, MinLen 51, MaxLen 79; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.04s/0.00u sec.\nNOTICE: Index images_gid: Pages 8; Tuples 2951. CPU 0.00s/0.00u sec.\nNOTICE: Index images_gid: NUMBER OF INDEX' TUPLES (2951) IS NOT THE SAME AS HEAP' (5899).\n Recreate the index.\nVACUUM\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 12 Jul 2000 20:01:32 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "[7.0.2] INDEX' TUPLES != HEAP' .."
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Odd .. why is heap reporting 5899, when count() only reports 2951?\n\nOpen transactions preventing recently-dead tuples from being reaped?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 19:13:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] INDEX' TUPLES != HEAP' .. "
},
{
"msg_contents": "On Wed, 12 Jul 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Odd .. why is heap reporting 5899, when count() only reports 2951?\n> \n> Open transactions preventing recently-dead tuples from being reaped?\n\nnope ... I've tried recreating the indices, no change ... and no change in\nnumber of tuples ... actually, since this database is up, there would have\nbeen zero additions or deletions, as that ability is yet to be re-written,\nso other then SELECTs, that table is static/read-only\n\n\n\n",
"msg_date": "Wed, 12 Jul 2000 20:17:51 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [7.0.2] INDEX' TUPLES != HEAP' .. "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Wed, 12 Jul 2000, Tom Lane wrote:\n>> The Hermit Hacker <[email protected]> writes:\n>>>> Odd .. why is heap reporting 5899, when count() only reports 2951?\n>> \n>> Open transactions preventing recently-dead tuples from being reaped?\n\n> nope ... I've tried recreating the indices, no change ... and no change in\n> number of tuples ...\n\nThat would fit right in: a newly-created index will only index the\ntuples that are currently live. (OK, since an old transaction that\ncould still see the dead tuples couldn't see the index anyway.)\n\n> actually, since this database is up, there would have\n> been zero additions or deletions,\n\nWhat about UPDATEs?\n\nGiven your other comment about a bunch of waiting backends, it sure\nsounds like you've got some backend that's sitting on an old open\ntransaction.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 19:29:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] INDEX' TUPLES != HEAP' .. "
},
{
"msg_contents": "On Wed, 12 Jul 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > On Wed, 12 Jul 2000, Tom Lane wrote:\n> >> The Hermit Hacker <[email protected]> writes:\n> >>>> Odd .. why is heap reporting 5899, when count() only reports 2951?\n> >> \n> >> Open transactions preventing recently-dead tuples from being reaped?\n> \n> > nope ... I've tried recreating the indices, no change ... and no change in\n> > number of tuples ...\n> \n> That would fit right in: a newly-created index will only index the\n> tuples that are currently live. (OK, since an old transaction that\n> could still see the dead tuples couldn't see the index anyway.)\n> \n> > actually, since this database is up, there would have\n> > been zero additions or deletions,\n> \n> What about UPDATEs?\n\nzip ... only SELECTs right now ... no facility there to do updates,\ndeletes or inserts ...\n\n> Given your other comment about a bunch of waiting backends, it sure\n> sounds like you've got some backend that's sitting on an old open\n> transaction.\n\nlast email about waiting was a different database ... will *that* affect\nthis? :(\n\n\n",
"msg_date": "Wed, 12 Jul 2000 20:46:18 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [7.0.2] INDEX' TUPLES != HEAP' .. "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> Given your other comment about a bunch of waiting backends, it sure\n>> sounds like you've got some backend that's sitting on an old open\n>> transaction.\n\n> last email about waiting was a different database ... will *that* affect\n> this? :(\n\nNot directly, but the \"oldest open transaction\" is across the whole\ninstallation IIRC. So an old open transaction could prevent VACUUM\nfrom deleting tuples even if the xact is being done in another DB.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 19:59:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] INDEX' TUPLES != HEAP' .. "
},
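A rough two-session reproduction of the effect described above; the table name comes from the thread, the NOTICE line in the comments is only illustrative, and the idle transaction could just as well be sitting in a different database of the same installation.

-- Session B (any database in the installation): open a transaction and go idle.
BEGIN;
SELECT 1;
-- ... left uncommitted ...

-- Session A: create some dead row versions, then vacuum.
UPDATE images SET gid = gid;          -- each updated row leaves a dead version behind
VACUUM VERBOSE ANALYZE images;
-- NOTICE: ... Tup 5902: Vac 0, Keep/VTL 2951/0 ...   (numbers illustrative)
-- The dead versions are "Kept" rather than reaped while B's transaction is open.

-- Session B:
COMMIT;
-- A repeat VACUUM in session A can now reap the dead versions,
-- after which the heap tuple count and the index tuple count agree again.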
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n> \n> The Hermit Hacker <[email protected]> writes:\n> > On Wed, 12 Jul 2000, Tom Lane wrote:\n> >> The Hermit Hacker <[email protected]> writes:\n> >>>> Odd .. why is heap reporting 5899, when count() only reports 2951?\n> >> \n> >> Open transactions preventing recently-dead tuples from being reaped?\n> \n> > nope ... I've tried recreating the indices, no change ... and \n> no change in\n> > number of tuples ...\n> \n> That would fit right in: a newly-created index will only index the\n> tuples that are currently live. (OK, since an old transaction that\n> could still see the dead tuples couldn't see the index anyway.)\n> \n> > actually, since this database is up, there would have\n> > been zero additions or deletions,\n> \n> What about UPDATEs?\n> \n> Given your other comment about a bunch of waiting backends, it sure\n> sounds like you've got some backend that's sitting on an old open\n> transaction.\n>\n\nI've mentioned I have a fix for this case.\nBut I've hesitated to commit it for a while.\n\nIt has a performance problem for unique indexes.\nI moved the place of duplicate check from tuplesort()\nto btbuild() in my fix. So it may take long time to check\nthe uniqueness of indexes when there are many updated\n-dead-but-cannot-be-discarded tuples(maybe Marc's\ncase is so).\n. \nIn addtion it recently caused the fail of initdb in my\nlocal branch. I don't think my fix is beautiful and am\nsuspicious if my fix would be robust for related changes.\n\nComments ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 13 Jul 2000 09:43:40 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [7.0.2] INDEX' TUPLES != HEAP' .. "
}
] |
[
{
"msg_contents": "I added the suggested index and changed my sql and the subjective tests\nseem to be improved somewhat. I checked EXPLAIN and it is using the new\nindex.\n\nI still think there must be sorting going on, as the result is returned\ninstantly if you remove the ORDER BY. I don't know - I do think it's\nmuch better now.\n\nThanks for all your help - I (of course) will let you know if I have any\ntroubles with corruption on 7.0.2 ;-)\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Wed, 12 Jul 2000 16:28:31 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Some Improvement"
},
{
"msg_contents": "Tim Perdue <[email protected]> writes:\n> I added the suggested index and changed my sql and the subjective tests\n> seem to be improved somewhat. I checked EXPLAIN and it is using the new\n> index.\n\n> I still think there must be sorting going on, as the result is returned\n> instantly if you remove the ORDER BY.\n\nYou \"think\"? What does EXPLAIN show in the two cases?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 19:48:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some Improvement "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Tim Perdue <[email protected]> writes:\n> > I added the suggested index and changed my sql and the subjective tests\n> > seem to be improved somewhat. I checked EXPLAIN and it is using the new\n> > index.\n> \n> > I still think there must be sorting going on, as the result is returned\n> > instantly if you remove the ORDER BY.\n> \n> You \"think\"? What does EXPLAIN show in the two cases?\n> \n> regards, tom lane\n\nFollowing is the info - again thanks for your help. If you need, I can\ntry to re-install 6.5.3 and re-import the database. Although with tables\nof this size, it is a true nightmare to do this. If you feel the info is\nvaluable, I'd like to help.\n\nTim\n\n\nWith the ORDER BY\n\n\ndb_geocrawler=# explain verbose SELECT fld_mailid, fld_mail_date,\nfld_mail_is_followup, fld_mail_from, fld_mail_subject FROM\ntbl_mail_archive WHERE fld_mail_list='35' AND fld_mail_date between\n'20000100' AND '20000199' ORDER BY fld_mail_date DESC LIMIT 51 OFFSET 0;\nNOTICE: QUERY DUMP:\n\n{ SORT :startup_cost 5.03 :total_cost 5.03 :rows 1 :width 44 :state <>\n:qptargetlist ({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 23\n:restypmod -1 :resname fld_mailid :reskey 0 :reskeyop 0 :ressortgroupref\n0 :resjunk false } :expr { VAR :varno 1 :varattno 1 :vartype 23\n:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 1}} { TARGETENTRY\n:resdom { RESDOM :resno 2 :restype 1042 :restypmod 18 :resname\nfld_mail_date :reskey 1 :reskeyop 1051 :ressortgroupref 1 :resjunk false\n} :expr { VAR :varno 1 :varattno 3 :vartype 1042 :vartypmod 18 \n:varlevelsup 0 :varnoold 1 :varoattno 3}} { TARGETENTRY :resdom { RESDOM\n:resno 3 :restype 23 :restypmod -1 :resname fld_mail_is_followup :reskey\n0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\n:varattno 4 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno 4}} { TARGETENTRY :resdom { RESDOM :resno 4 :restype 25\n:restypmod -1 :resname fld_mail_from :reskey 0 :reskeyop 0\n:ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 5\n:vartype 25 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 5}} {\nTARGETENTRY :resdom { RESDOM :resno 5 :restype 25 :restypmod -1 :resname\nfld_mail_subject :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false\n} :expr { VAR :varno 1 :varattno 6 :vartype 25 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 6}}) :qpqual <> :lefttree {\nINDEXSCAN :startup_cost 0.00 :total_cost 5.02 :rows 1 :width 44 :state\n<> :qptargetlist ({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 23\n:restypmod -1 :resname fld_mailid :reskey 0 :reskeyop 0 :ressortgroupref\n0 :resjunk false } :expr { VAR :varno 1 :varattno 1 :vartype 23\n:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 1}} { TARGETENTRY\n:resdom { RESDOM :resno 2 :restype 1042 :restypmod 18 :resname\nfld_mail_date :reskey 0 :reskeyop 0 :ressortgroupref 1 :resjunk false }\n:expr { VAR :varno 1 :varattno 3 :vartype 1042 :vartypmod 18 \n:varlevelsup 0 :varnoold 1 :varoattno 3}} { TARGETENTRY :resdom { RESDOM\n:resno 3 :restype 23 :restypmod -1 :resname fld_mail_is_followup :reskey\n0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\n:varattno 4 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno 4}} { TARGETENTRY :resdom { RESDOM :resno 4 :restype 25\n:restypmod -1 :resname fld_mail_from :reskey 0 :reskeyop 0\n:ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 5\n:vartype 25 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 5}} {\nTARGETENTRY :resdom { 
RESDOM :resno 5 :restype 25 :restypmod -1 :resname\nfld_mail_subject :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false\n} :expr { VAR :varno 1 :varattno 6 :vartype 25 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 6}}) :qpqual <> :lefttree <>\n:righttree <> :extprm () :locprm () :initplan <> :nprm 0 :scanrelid 1\n:indxid ( 5913536) :indxqual (({ EXPR :typeOid 16 :opType op :oper {\nOPER :opno 96 :opid 65 :opresulttype 16 } :args ({ VAR :varno 1\n:varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno 2} { CONST :consttype 23 :constlen 4 :constisnull false\n:constvalue 4 [ 35 0 0 0 ] :constbyval true })} { EXPR :typeOid 16 \n:opType op :oper { OPER :opno 1061 :opid 1052 :opresulttype 16 } :args\n({ VAR :varno 1 :varattno 2 :vartype 1042 :vartypmod 18 :varlevelsup 0\n:varnoold 1 :varoattno 3} { CONST :consttype 1042 :constlen -1\n:constisnull false :constvalue 12 [ 12 0 0 0 50 48 48 48 48 49 48 48 ] \n:constbyval false })} { EXPR :typeOid 16 :opType op :oper { OPER :opno\n1059 :opid 1050 :opresulttype 16 } :args ({ VAR :varno 1 :varattno 2\n:vartype 1042 :vartypmod 18 :varlevelsup 0 :varnoold 1 :varoattno 3} {\nCONST :consttype 1042 :constlen -1 :constisnull false :constvalue 12 [\n12 0 0 0 50 48 48 48 48 49 57 57 ] :constbyval false })}))\n:indxqualorig (({ EXPR :typeOid 16 :opType op :oper { OPER :opno 96\n:opid 65 :opresulttype 16 } :args ({ VAR :varno 1 :varattno 2 :vartype\n23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 2} { CONST\n:consttype 23 :constlen 4 :constisnull false :constvalue 4 [ 35 0 0 0\n] :constbyval true })} { EXPR :typeOid 16 :opType op :oper { OPER\n:opno 1061 :opid 1052 :opresulttype 16 } :args ({ VAR :varno 1 :varattno\n3 :vartype 1042 :vartypmod 18 :varlevelsup 0 :varnoold 1 :varoattno 3}\n{ CONST :consttype 1042 :constlen -1 :constisnull false :constvalue 12\n[ 12 0 0 0 50 48 48 48 48 49 48 48 ] :constbyval false })} { EXPR\n:typeOid 16 :opType op :oper { OPER :opno 1059 :opid 1050 :opresulttype\n16 } :args ({ VAR :varno 1 :varattno 3 :vartype 1042 :vartypmod 18 \n:varlevelsup 0 :varnoold 1 :varoattno 3} { CONST :consttype 1042\n:constlen -1 :constisnull false :constvalue 12 [ 12 0 0 0 50 48 48 48\n48 49 57 57 ] :constbyval false })})) :indxorderdir 1 } :righttree <>\n:extprm () :locprm () :initplan <> :nprm 0 :nonameid 0 :keycount 1 }\nNOTICE: QUERY PLAN:\n\nSort (cost=5.03..5.03 rows=1 width=44)\n -> Index Scan using idx_archive_list_date on tbl_mail_archive \n(cost=0.00..5.02 rows=1 width=44)\n\nEXPLAIN\ndb_geocrawler=#\n\n\nWithout the ORDER BY\n\n\n\ndb_geocrawler=#\ndb_geocrawler=# explain verbose SELECT fld_mailid, fld_mail_date,\nfld_mail_is_followup, fld_mail_from, fld_mail_subject FROM\ntbl_mail_archive WHERE fld_mail_list='35' AND fld_mail_date between\n'20000100' AND '20000199' LIMIT 51 OFFSET 0;\nNOTICE: QUERY DUMP:\n\n{ INDEXSCAN :startup_cost 0.00 :total_cost 5.02 :rows 1 :width 44 :state\n<> :qptargetlist ({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 23\n:restypmod -1 :resname fld_mailid :reskey 0 :reskeyop 0 :ressortgroupref\n0 :resjunk false } :expr { VAR :varno 1 :varattno 1 :vartype 23\n:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 1}} { TARGETENTRY\n:resdom { RESDOM :resno 2 :restype 1042 :restypmod 18 :resname\nfld_mail_date :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false }\n:expr { VAR :varno 1 :varattno 3 :vartype 1042 :vartypmod 18 \n:varlevelsup 0 :varnoold 1 :varoattno 3}} { TARGETENTRY :resdom { RESDOM\n:resno 3 :restype 23 :restypmod -1 :resname fld_mail_is_followup :reskey\n0 
:reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\n:varattno 4 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno 4}} { TARGETENTRY :resdom { RESDOM :resno 4 :restype 25\n:restypmod -1 :resname fld_mail_from :reskey 0 :reskeyop 0\n:ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 5\n:vartype 25 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 5}} {\nTARGETENTRY :resdom { RESDOM :resno 5 :restype 25 :restypmod -1 :resname\nfld_mail_subject :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false\n} :expr { VAR :varno 1 :varattno 6 :vartype 25 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 6}}) :qpqual <> :lefttree <>\n:righttree <> :extprm () :locprm () :initplan <> :nprm 0 :scanrelid 1\n:indxid ( 5913536) :indxqual (({ EXPR :typeOid 16 :opType op :oper {\nOPER :opno 96 :opid 65 :opresulttype 16 } :args ({ VAR :varno 1\n:varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno 2} { CONST :consttype 23 :constlen 4 :constisnull false\n:constvalue 4 [ 35 0 0 0 ] :constbyval true })} { EXPR :typeOid 16 \n:opType op :oper { OPER :opno 1061 :opid 1052 :opresulttype 16 } :args\n({ VAR :varno 1 :varattno 2 :vartype 1042 :vartypmod 18 :varlevelsup 0\n:varnoold 1 :varoattno 3} { CONST :consttype 1042 :constlen -1\n:constisnull false :constvalue 12 [ 12 0 0 0 50 48 48 48 48 49 48 48 ] \n:constbyval false })} { EXPR :typeOid 16 :opType op :oper { OPER :opno\n1059 :opid 1050 :opresulttype 16 } :args ({ VAR :varno 1 :varattno 2\n:vartype 1042 :vartypmod 18 :varlevelsup 0 :varnoold 1 :varoattno 3} {\nCONST :consttype 1042 :constlen -1 :constisnull false :constvalue 12 [\n12 0 0 0 50 48 48 48 48 49 57 57 ] :constbyval false })}))\n:indxqualorig (({ EXPR :typeOid 16 :opType op :oper { OPER :opno 96\n:opid 65 :opresulttype 16 } :args ({ VAR :varno 1 :varattno 2 :vartype\n23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 2} { CONST\n:consttype 23 :constlen 4 :constisnull false :constvalue 4 [ 35 0 0 0\n] :constbyval true })} { EXPR :typeOid 16 :opType op :oper { OPER\n:opno 1061 :opid 1052 :opresulttype 16 } :args ({ VAR :varno 1 :varattno\n3 :vartype 1042 :vartypmod 18 :varlevelsup 0 :varnoold 1 :varoattno 3}\n{ CONST :consttype 1042 :constlen -1 :constisnull false :constvalue 12\n[ 12 0 0 0 50 48 48 48 48 49 48 48 ] :constbyval false })} { EXPR\n:typeOid 16 :opType op :oper { OPER :opno 1059 :opid 1050 :opresulttype\n16 } :args ({ VAR :varno 1 :varattno 3 :vartype 1042 :vartypmod 18 \n:varlevelsup 0 :varnoold 1 :varoattno 3} { CONST :consttype 1042\n:constlen -1 :constisnull false :constvalue 12 [ 12 0 0 0 50 48 48 48\n48 49 57 57 ] :constbyval false })})) :indxorderdir 1 }\nNOTICE: QUERY PLAN:\n\nIndex Scan using idx_archive_list_date on tbl_mail_archive \n(cost=0.00..5.02 rows=1 width=44)\n\nEXPLAIN\n\n\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Wed, 12 Jul 2000 20:02:15 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some Improvement"
},
{
"msg_contents": "Tim Perdue <[email protected]> writes:\n> Tom Lane wrote:\n>> Tim Perdue <[email protected]> writes:\n>>>> I still think there must be sorting going on, as the result is returned\n>>>> instantly if you remove the ORDER BY.\n>> \n>> You \"think\"? What does EXPLAIN show in the two cases?\n\n> With the ORDER BY\n\n> db_geocrawler=# explain verbose SELECT fld_mailid, fld_mail_date,\n> fld_mail_is_followup, fld_mail_from, fld_mail_subject FROM\n> tbl_mail_archive WHERE fld_mail_list='35' AND fld_mail_date between\n> '20000100' AND '20000199' ORDER BY fld_mail_date DESC LIMIT 51 OFFSET 0;\n\n> NOTICE: QUERY PLAN:\n\n> Sort (cost=5.03..5.03 rows=1 width=44)\n> -> Index Scan using idx_archive_list_date on tbl_mail_archive \n> (cost=0.00..5.02 rows=1 width=44)\n\nWell, you obviously are getting a sort step here, which you want to\navoid because the LIMIT isn't doing you much good when there's a SORT\nin between --- the indexscan has to run over the whole month then.\n\nI assume idx_archive_list_date is an index on tbl_mail_archive\n(fld_mail_list, fld_mail_date) in that order? The reason you're\ngetting the extra sort is that the planner believes the indexscan\nwill produce data ordered like\n\tORDER BY fld_mail_list, fld_mail_date\nwhich is not what you asked for: you asked for a sort by fld_mail_date,\nperiod. (Now you know and I know that since the query retrieves only\ntuples with a single value of fld_mail_list, there's no practical\ndifference. The planner, however, is less bright than we are and does\nnot make the connection.) To avoid the extra sort, you need to specify\nan ORDER BY that the planner will recognize as compatible with the\nindex:\n\tORDER BY fld_mail_list DESC, fld_mail_date DESC\nNote it's important that both clauses be marked DESC or neither;\notherwise the clause still won't look like it matches the index's\nordering. But with the correct ORDER BY incantation, you should\nget a plan like\n\nIndex Scan Backwards using idx_archive_list_date on tbl_mail_archive \n\nand then you will be happy ;-).\n\n(Alternatively, you could declare the index on (fld_mail_date,\nfld_mail_list) and then ORDER BY fld_mail_date DESC would work by\nitself. You should think about which ordering you'd want for a\nquery retrieving rows from more than one list before you decide.)\n\nBTW, the 6.5 planner was quite incapable of generating a plan like\nthis, so I'm still not sure why you saw better performance with 6.5.\nWas there anything to the theory about LOCALE slowing down the sort?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jul 2000 00:20:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some Improvement "
},
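Putting the advice above back into the query from earlier in the thread: with the index on (fld_mail_list, fld_mail_date), marking both ORDER BY columns DESC lets the planner treat the requested ordering as a backward scan of that index and drop the extra sort. The plan at the end is only the expected shape, not captured output, and (as the follow-ups below show) it also depends on the indexscan costing fix posted later in this thread.

-- Index assumed from earlier in the thread:
--   CREATE INDEX idx_archive_list_date
--       ON tbl_mail_archive (fld_mail_list, fld_mail_date);

SELECT fld_mailid, fld_mail_date, fld_mail_is_followup,
       fld_mail_from, fld_mail_subject
  FROM tbl_mail_archive
 WHERE fld_mail_list = '35'
   AND fld_mail_date BETWEEN '20000100' AND '20000199'
 ORDER BY fld_mail_list DESC, fld_mail_date DESC    -- both columns, both DESC
 LIMIT 51 OFFSET 0;

-- Expected plan shape once the ordering matches the index:
--   Index Scan Backward using idx_archive_list_date on tbl_mail_archive ...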
{
"msg_contents": "> But I don't see the \"Backwards index scan\" you mentioned.\n\nThen we're not there yet. It looks like there may indeed be a bug\nhere. Trying it with a dummy table:\n\nregression=# create table ff1 (f1 int, f2 char(14));\nCREATE\nregression=# create index ff1i on ff1(f1,f2);\nCREATE\nregression=# explain select * from ff1 where f1 = 3 and f2 between '4' and '5'\nregression-# order by f1,f2;\nNOTICE: QUERY PLAN:\n\nIndex Scan using ff1i on ff1 (cost=0.00..2.02 rows=1 width=16)\n\nEXPLAIN\nregression=# explain select * from ff1 where f1 = 3 and f2 between '4' and '5'\nregression-# order by f1 desc,f2 desc;\nNOTICE: QUERY PLAN:\n\nSort (cost=2.03..2.03 rows=1 width=16)\n -> Index Scan using ff1i on ff1 (cost=0.00..2.02 rows=1 width=16)\n\nEXPLAIN\nregression=# set enable_sort TO off;\nSET VARIABLE\n\nregression=# explain select * from ff1 where f1 = 3 and f2 between '4' and '5'\nregression-# order by f1 desc, f2 desc;\nNOTICE: QUERY PLAN:\n\nIndex Scan Backward using ff1i on ff1 (cost=0.00..67.50 rows=1 width=16)\n\nEXPLAIN\n\nSo it knows how to generate an indexscan backwards plan, but it's not\nchoosing that because there's something wacko with the cost estimate.\nHmm. This works great for single-column indexes, I wonder what's wrong\nwith the multi-column case? Will start digging.\n\nI hesitate to suggest that you throw \"SET enable_sort TO off\" and then\n\"SET enable_sort TO on\" around your query, because it's so ugly,\nbut that might be the best short-term answer.\n\n>> Was there anything to the theory about LOCALE slowing down the sort?\n\n> Well, I didn't intentionally compile LOCALE support. Just did the usual \n\n> ./configure --with-max-backends=128 (or whatever)\n> gmake\n\nThat shouldn't cause LOCALE to get compiled. I'm still at a loss why\n6.5 would be faster for your original query. For sure it's not\ngenerating a more intelligent plan...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jul 2000 01:02:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some Improvement "
},
{
"msg_contents": ">> But I don't see the \"Backwards index scan\" you mentioned.\n> Then we're not there yet.\n\n> I hesitate to suggest that you throw \"SET enable_sort TO off\" and then\n> \"SET enable_sort TO on\" around your query, because it's so ugly,\n> but that might be the best short-term answer.\n\nNo, actually that's no short-term answer at all. It turns out that the\n\"bogus\" cost estimate was perfectly correct, because what the planner\nwas actually generating was a plan for a backwards index scan over the\nwhole table, with restrictions applied after the fact :-(. Forcing it\nto use that plan won't help.\n\nI have corrected this silly oversight. Attached is the patch needed to\nmake backwards index scans work properly in 7.0.*.\n\n\t\t\tregards, tom lane\n\n*** src/backend/optimizer/path/indxpath.c.orig\tSun Apr 16 00:41:01 2000\n--- src/backend/optimizer/path/indxpath.c\tThu Jul 13 01:49:51 2000\n***************\n*** 196,202 ****\n \t\t\t useful_for_ordering(root, rel, index, ForwardScanDirection))\n \t\t\t\tadd_path(rel, (Path *)\n \t\t\t\t\t\t create_index_path(root, rel, index,\n! \t\t\t\t\t\t\t\t\t\t NIL,\n \t\t\t\t\t\t\t\t\t\t ForwardScanDirection));\n \t\t}\n \n--- 196,202 ----\n \t\t\t useful_for_ordering(root, rel, index, ForwardScanDirection))\n \t\t\t\tadd_path(rel, (Path *)\n \t\t\t\t\t\t create_index_path(root, rel, index,\n! \t\t\t\t\t\t\t\t\t\t restrictclauses,\n \t\t\t\t\t\t\t\t\t\t ForwardScanDirection));\n \t\t}\n \n***************\n*** 208,214 ****\n \t\tif (useful_for_ordering(root, rel, index, BackwardScanDirection))\n \t\t\tadd_path(rel, (Path *)\n \t\t\t\t\t create_index_path(root, rel, index,\n! \t\t\t\t\t\t\t\t\t NIL,\n \t\t\t\t\t\t\t\t\t BackwardScanDirection));\n \n \t\t/*\n--- 208,214 ----\n \t\tif (useful_for_ordering(root, rel, index, BackwardScanDirection))\n \t\t\tadd_path(rel, (Path *)\n \t\t\t\t\t create_index_path(root, rel, index,\n! \t\t\t\t\t\t\t\t\t restrictclauses,\n \t\t\t\t\t\t\t\t\t BackwardScanDirection));\n \n \t\t/*\n",
"msg_date": "Thu, 13 Jul 2000 01:57:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some Improvement "
},
{
"msg_contents": "Tom Lane wrote:\n> I have corrected this silly oversight. Attached is the patch needed to\n> make backwards index scans work properly in 7.0.*.\n\nGod - don't you love Open Source Software? You probably don't remember,\nbut 1 1/2 years ago, I ran into this \"2GB Disaster limit\" and I think\nyou were the one that came up with a patch to bring the segment size\ndown to 1GB within like 24hrs.\n\nI'll apply the patch and rebuild - thanks.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Thu, 13 Jul 2000 07:10:35 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some Improvement"
}
] |
[
{
"msg_contents": "Here is a great article about the Unix philosophy of doing things:\n\n http://www.performancecomputing.com/archives/articles/1998/september/9809of1.shtml\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jul 2000 21:11:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unix philosophy"
}
] |
[
{
"msg_contents": "\nAfter going through the udmsearch indexer code again, I found that they\nwere doing a 'BEGIN WORK;LOCK url', but they never UNLOCKED it afterwards,\nso it was staying locked until indexer ended, several hours later ...\n\nI've fixed that code locally and sent them a patch ... problem fixed :)\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 12 Jul 2000 22:54:15 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Earlier question regarding waiting processes ..."
}
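For anyone hitting the same symptom: PostgreSQL has no UNLOCK statement, so a lock acquired inside a transaction is held until that transaction commits or rolls back. A sketch of what the indexer was effectively doing and of the shape of the fix; only the table name is taken from the message, the indexer's real logic is omitted.

-- What the indexer was effectively doing:
BEGIN WORK;
LOCK TABLE url;
-- ... hours of indexing work with no COMMIT ...
-- every other session touching "url" queues up behind this lock ("waiting")

-- The fix is simply to end the transaction once the work needing the lock is done:
BEGIN WORK;
LOCK TABLE url;
-- ... the short critical section that actually needs the lock ...
COMMIT;    -- the lock is released here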
] |
[
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Bruce,\n> \n> The bug list includes the following:\n> \n> a.. SELECT foo UNION SELECT foo is incorrectly simplified to SELECT foo \n> \n> Wy is this simplification incorrect? I don't get it.\n\nNot sure. Maybe someone can comment. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jul 2000 22:30:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query 'Bout A Bug."
},
{
"msg_contents": "At 22:30 12/07/00 -0400, Bruce Momjian wrote:\n>[ Charset ISO-8859-1 unsupported, converting... ]\n>> Bruce,\n>> \n>> The bug list includes the following:\n>> \n>> a.. SELECT foo UNION SELECT foo is incorrectly simplified to SELECT foo \n>> \n>> Wy is this simplification incorrect? I don't get it.\n>\n>Not sure. Maybe someone can comment. \n>\n\nAs far as I can see, we'd need to know the definition of 'foo'.\n\neg. \n\n select nextval('id') UNION SELECT nextval('id') \n\nshould produce two rows. \n\nIf foo is invariant, then you should be fine because the default behaviour\nfor union should be to do a set union of the tuples (ie. only *distinct*\nrows are added to the result set). \n\nBut, determining invariance is pretty hard for a complex foo (eg. a select\nstatement that causes rewrite rules to fire).\n\nFinally, \n\tselect 1 union ALL select 1\n\nshould produce two rows.\n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 13 Jul 2000 13:05:19 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Query 'Bout A Bug."
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> The bug list includes the following:\n>> a.. SELECT foo UNION SELECT foo is incorrectly simplified to SELECT foo \n>> Wy is this simplification incorrect? I don't get it.\n\n> Not sure. Maybe someone can comment. \n\nUNION implies DISTINCT according to the spec. Thus correct output from\nthe first query will contain no duplicates. The \"simplified\" version\nwill produce duplicates if there are any in the table.\n\nWe get this case right:\n\nregression=# explain select f1+1 from int4_tbl union select f1+2 from int4_tbl;\n\nUnique (cost=2.27..2.29 rows=1 width=4)\n -> Sort (cost=2.27..2.27 rows=10 width=4)\n -> Append (cost=0.00..2.10 rows=10 width=4)\n -> Seq Scan on int4_tbl (cost=0.00..1.05 rows=5 width=4)\n -> Seq Scan on int4_tbl (cost=0.00..1.05 rows=5 width=4)\n\n(EXPLAIN doesn't show the expressions being computed at each step, but\nyou can see a UNIQUE is getting done) but not this case:\n\nregression=# explain select f1+1 from int4_tbl union select f1+1 from int4_tbl;\n\nSeq Scan on int4_tbl (cost=0.00..1.05 rows=5 width=4)\n\nWhy, you ask? Because someone thought it'd be a clever idea to simplify\nredundant UNION/INTERSECT/EXCEPT query trees by pretending they are\nOR/AND/AND NOT boolean-expression trees and handing them to cnfify().\ncnfify knows \"x OR x\" reduces to \"x\". Neat idea, too bad the semantics\naren't quite the same. But really it's a waste of planning cycles\nanyway, since who'd be likely to write such a query in the first place?\nI'm planning to rip out the whole foolishness when we redo querytrees\nfor 7.2.\n\nIt would also be incorrect to simplify SELECT foo UNION ALL SELECT foo,\nbtw, since this should produce all the tuples in foo twice. This one\nwe get right, although I don't recall offhand what prevents us from\nblowing it --- cnfify() certainly wouldn't know the difference.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2000 23:46:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Query 'Bout A Bug. "
}
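A small self-contained illustration of why the simplification is unsafe; the table and its contents are made up for the example.

CREATE TABLE foo (x int);
INSERT INTO foo VALUES (1);
INSERT INTO foo VALUES (1);     -- duplicate row on purpose

SELECT x FROM foo UNION SELECT x FROM foo;
-- correct result: one row, because UNION implies DISTINCT

SELECT x FROM foo;
-- the "simplified" form: two rows, so the rewrite changes the answer

SELECT x FROM foo UNION ALL SELECT x FROM foo;
-- and UNION ALL must keep everything: four rows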
] |
[
{
"msg_contents": "\n> > The bug list includes the following:\n> > \n> > a.. SELECT foo UNION SELECT foo is incorrectly simplified \n> to SELECT foo \n> > \n> > Wy is this simplification incorrect? I don't get it.\n> \n> Not sure. Maybe someone can comment. \n\nIt can only be simplified to:\n\tSELECT DISTINCT * from foo\n\nAndreas\n",
"msg_date": "Thu, 13 Jul 2000 09:07:59 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Re: Query 'Bout A Bug."
}
] |
[
{
"msg_contents": "\n> I still think there must be sorting going on, as the result \n> is returned\n> instantly if you remove the ORDER BY. I don't know - I do think it's\n> much better now.\n\nAre you doing the exact query I wrote for you ?\nThat is:\n\torder by mail_list desc, mail_date desc\n\nexplain should tell you if it does a sort. There should not be a difference\nwith \nor without the order by.\nHiroshi, I think you implemented the backwards index scan ?\nOtherwise he would need to recreate the index as\n (mail_list desc, mail_date desc)\n\nSorry for the inconvenience\nAndreas\n",
"msg_date": "Thu, 13 Jul 2000 09:11:30 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Some Improvement"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Zeugswetter Andreas SB\n> \n> > I still think there must be sorting going on, as the result \n> > is returned\n> > instantly if you remove the ORDER BY. I don't know - I do think it's\n> > much better now.\n> \n> Are you doing the exact query I wrote for you ?\n> That is:\n> \torder by mail_list desc, mail_date desc\n> \n> explain should tell you if it does a sort. There should not be a \n> difference\n> with \n> or without the order by.\n> Hiroshi, I think you implemented the backwards index scan ?\n\nYes,but I didn't implement backwards index path for cost estimate.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 13 Jul 2000 16:45:57 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Some Improvement"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> Hiroshi, I think you implemented the backwards index scan ?\n\n> Yes,but I didn't implement backwards index path for cost estimate.\n\nI did ... but I didn't get it quite right :-( ... see later message ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jul 2000 04:07:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some Improvement "
}
] |
[
{
"msg_contents": "\nTom, and others...\n\nI've made modifications to libpq to accept multiple return type groups,\nand now the thing to do is to make the back-end generate them from OO\nhierarchies. Do you have any thoughts on the general approach to do\nthis? Should I be doing something like expandAll in parse_relation.c to\ncater for different types? Do I play with ExecProject? Can anybody\nremember anything about how postgres originally handled differing return\ntypes?\n\nChris.\n",
"msg_date": "Thu, 13 Jul 2000 17:17:12 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "OO enhancements"
}
] |
[
{
"msg_contents": "\nWhile working with alter table add constraint, I realized we \nget these messages if a second session blocks on the lock the \nalter table is getting. I'm not sure what (if any) way there\nis around this in the code, but I figure I should remove this\nproblem before sending the next patch for it so I thought I'd\nthrow it out too see if anyone can give me a pointer as\nto what to do.\n\nStephan Szabo\[email protected]\n\n",
"msg_date": "Thu, 13 Jul 2000 00:31:38 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Questions relating to \"modified while in use\" messages"
},
{
"msg_contents": "Stephan Szabo <[email protected]> writes:\n> While working with alter table add constraint, I realized we \n> get these messages if a second session blocks on the lock the \n> alter table is getting.\n\nIt's coming from the relcache code, which sees that the table\ndefinition has been altered when the ref count on the relcache\nentry is > 0. This is unfortunately the wrong thing, because\nif the definition update is seen when first trying to access\nthe table during a given transaction, the ref count is guaranteed\nto be > 0 (we bump the count and then check for SI update messages).\nBut there's no reason to fail in that situation, we should just\naccept the update and forge ahead.\n\nThe correct fix will involve giving the relcache more context\ninformation about whether it's safe to accept the cache update without\nthrowing an error. I think the behavior we really want is that a change\nis accepted silently if refcount = 0 (ie, we're not really using the\ntable now, we just have a leftover cache entry for it), *or* if we are\njust now locking the table for the first access to it in the current\ntransaction. But relcache can't now determine whether that second\nstatement is true. Adding a relcache entry field like \"xact ID at time\nof last unlock of this cache entry\" might do the trick, but I'm too\ntired to think it through carefully right now.\n\nA related issue is that we should probably grab some kind of lock\non a table when it is first touched by the parser within a statement;\nright now we grab the lock and then release it, meaning someone could\nalter information that is already getting factored into parse/plan.\nThrowing an error as relcache now does protects us from trying to\nactually execute such an obsoleted plan, but if we change relcache\nto be more permissive we can't get away with such sloppiness at the\nhigher levels. This has been discussed before (I think Hiroshi pointed\nit out first). See the archives...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jul 2000 03:56:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions relating to \"modified while in use\" messages "
},
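A two-session sketch of how the message shows up in practice; the table and constraint names are invented, and the final comment only paraphrases the error rather than quoting its exact wording.

-- Session A
BEGIN;
ALTER TABLE orders ADD CONSTRAINT orders_qty_chk CHECK (qty > 0);
-- (transaction left open, holding the lock on "orders")

-- Session B: blocks here, waiting for A's lock
SELECT count(*) FROM orders;

-- Session A
COMMIT;

-- Session B wakes up, sees the relation definition changed while its relcache
-- entry was already pinned, and (with the behaviour described above) reports
-- the "modified while in use" error instead of just accepting the update.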
{
"msg_contents": "On Thu, 13 Jul 2000, Tom Lane wrote:\n\n> Stephan Szabo <[email protected]> writes:\n> > While working with alter table add constraint, I realized we \n> > get these messages if a second session blocks on the lock the \n> > alter table is getting.\n> \n> It's coming from the relcache code, which sees that the table\n> definition has been altered when the ref count on the relcache\n> entry is > 0. This is unfortunately the wrong thing, because\n\nOkay... I found the code that was giving the message, but wasn't\nsure if there was a way around it that one was expected to use.\nIt had worried me since that meant that using an alter on a\ntable that might be in use would do bad things, and I didn't want\nto let it through if there was some local thing in my routine\nthat would easily fix it.\nOf course, I also really only noticed it when I ran the two really\nclose together or the alter table inside a transaction.\n\n\n",
"msg_date": "Thu, 13 Jul 2000 09:01:18 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questions relating to \"modified while in use\" messages"
},
{
"msg_contents": "Stephan Szabo <[email protected]> writes:\n> Of course, I also really only noticed it when I ran the two really\n> close together or the alter table inside a transaction.\n\nRight, the problem only comes up if the report of the table change\narrives at the other backend just as it's preparing to start a new\ntransaction on that same table. If the other backend's first table\nlock after the ALTER commits is on some other table, no problem.\n\nWe do need to fix it but I think there are higher-priority problems...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jul 2000 12:17:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions relating to \"modified while in use\" messages "
}
] |
[
{
"msg_contents": "\n> I went at it in a different way: pulled out pg_lzcompress into a\n> standalone source program that could also call zlib. \n\nIf you want you could give minilzo a try.\n\nLZO is a data compression library which is suitable for data de-/compression\nin real-time.\nLZO implements the fastest compression and decompression algorithms around.\nSee the ratings for lzop in the famous Archive Comparison Test .\n\nLZO needs no memory for decompression, 64k or 8k for compression.\n\nLZO and the LZO algorithms and implementations are distributed under the\nterms of the GNU General Public License (GPL) { auf Deutsch }. Special\nlicenses for commercial and other applications are available by contacting\nthe author.\n\nSee http://wildsau.idv.uni-linz.ac.at/mfx/lzo.html\n\nAndreas\n",
"msg_date": "Thu, 13 Jul 2000 09:45:04 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: lztext and compression ratios... "
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> LZO and the LZO algorithms and implementations are distributed under the\n> terms of the GNU General Public License (GPL) { auf Deutsch }.\n\nGPL is alone a fatal objection for something that has to go into the\nguts of Postgres.\n\nMore to the point, does it have the same patent-freedom credentials zlib\ndoes? I trust Gailly's opinion about zlib being patent-free because\nhe spent more time researching the legalities than he did writing code\n(and also because zlib has now been around for quite some time with no\nchallenges). What assurances does LZO offer that we won't be stepping\non a patent mine?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jul 2000 04:04:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: lztext and compression ratios... "
}
] |
[
{
"msg_contents": "\n> > LZO and the LZO algorithms and implementations are distributed under the\n> > terms of the GNU General Public License (GPL) { auf Deutsch }.\n> \n> GPL is alone a fatal objection for something that has to go into the\n> guts of Postgres.\n\nYes, but the author states that special projects can get a special license.\n\n> What assurances does LZO offer that we won't be stepping on a patent mine?\n\nThere was a whole discussion on the author's website explaining what he did \nto guarantee the free use of the algorithms, but unfortunately I don't find\nit anymore. \n\nThe author wants PGP encrypted mail which I don't have.\nMaybe he would have a comment for us \"BSD License defending PostgreSQL'ers\" \nif someone forwarded this to him.\n(http://wildsau.idv.uni-linz.ac.at/mfx/pgp.html)\n\nI can tell you from experience that the performance of these algorithms is\nreally \nastonishing.\n\nAndreas\n",
"msg_date": "Thu, 13 Jul 2000 10:47:35 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: lztext and compression ratios... "
}
] |
[
{
"msg_contents": "[Forwarded to PostgreSQL hackers list]\n\nFrom: for email <[email protected]>\nSubject: A postgreSQL question\nDate: Tue, 11 Jul 2000 00:56:35 +0600\n\n> Warning! The reply address of previous same message is bad. Please\n> use this address. \n> \n> [email protected]\n> \n> Hello Tatsuo Ishii!\n> \n> At first, I'm sorry for my bad English.\n> At second, I'm very thank you for Cyrillic support in PostgreSQL. This\n> is good and useful work.\n> \n> While changing PostgreSQL 6.5.3 to 7.0.2 version I have some problem.\n> \n> In 6.5.3 version, I compiled PostgreSQL with --with-mb=KOI8 and\n> --enable-locale keywords. Then I'm created base with \"createdb -E KOI8\"\n> (abstract, file README.locale consists error: \"-e\" key is not vaild, we\n> must to use \"-E\" key instead). With these keywords, my base was been\n> storage in UNICODE, but psql worked fine in KOI8-R encoding and Windows\n> Client (Delphi program) worked with WIN1251 encoding\n> (because PGCLIENTENCODING='win').\n> \n> Now, I do same, but my Windows Client is confuse. I see, my base is\n> storaged in KOI8-R encoding. I'm read documentation and create\n> my base in UNICODE, but Windows Client is confuse again. In README.mb\n> wrote: \"A automatic encoding translation between Unicode\n> and any other encodings is not supported (yet)\". \n> \n> I don't understand. So, in 6.5.3 version this translation was been, but\n> in 7.0.2 version this translation is not supported? Is this \"upgrade\"? ;)\n\nThis is really odd. Since 6.5.3 does not have the capability to do a\nconverison between Unicode and other encodings. How did you submit\nUNICODE data into your KOI8 database on 6.5.3? Can you show me your\nexample UNICODE data actually stored in the database on 6.5.3?\nod -x output is preferable. I don't understand Cyrillic anyway:-)\n\n> Well! When the translation will be released?\n\nNot decided yet...\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 13 Jul 2000 19:21:16 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A postgreSQL question"
}
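For readers trying to reproduce the setup being described, here is a sketch of the KOI8-storage-plus-translated-clients arrangement that avoids Unicode entirely. The spellings below (the encoding names and the CLIENT_ENCODING variable in particular) are an assumption about the 7.0-era multibyte support; check README.mb on your build for the exact names.

-- Storage encoding is fixed when the database is created:
CREATE DATABASE base1 WITH ENCODING = 'KOI8';

-- A psql session on a KOI8-R terminal then needs no translation at all.

-- A Windows client asks the backend for per-session translation to CP1251:
SET CLIENT_ENCODING TO 'WIN';

-- Storing the data as UNICODE instead would need Unicode <-> KOI8/WIN
-- translation, which the message above says 7.0.2 does not yet provide.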
] |
[
{
"msg_contents": "I have a question about performance issues related to temporary tables.\n\nIIRC temporary tables were implemented as ordinary tables with some \npre/post-processing to give them unique names so that they would not \nclash with similar-named tables from other sessions. \n\nIs this all that is done or has some work been done to improve their \nperformance further?\n\nI'm mainly interested in INSERT performance as this is the area that is \nmuch slower than other operations.\n\nIMHO temporary tables could be made significantly faster than \"ordinary\" \nas they are only accessed inside one session and thus have no need for \nlocking or even WAL as they could do with max 2 copies of the same row \nthe other of which can be discarded at end of transaction thereby making \nit possible to provide much faster insert behaviour.\n\n---------------------\nHannu\n",
"msg_date": "Thu, 13 Jul 2000 13:30:22 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Temp tables performance question"
},
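A short illustration of the per-session behaviour described above; the names are arbitrary. Each session's temporary table maps to its own underlying relation, so the same name can be used concurrently without interfering with other sessions, and the table disappears when its session ends.

-- Session 1
CREATE TEMPORARY TABLE scratch (id int, payload text);
INSERT INTO scratch VALUES (1, 'visible only to session 1');

-- Session 2, at the same time, can reuse the name freely:
CREATE TEMPORARY TABLE scratch (id int);
SELECT count(*) FROM scratch;     -- 0: this is session 2's own table

-- When either session disconnects, its scratch table is dropped automatically.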
{
"msg_contents": "> I have a question about performance issues related to temporary tables.\n> \n> IIRC temporary tables were implemented as ordinary tables with some \n> pre/post-processing to give them unique names so that they would not \n> clash with similar-named tables from other sessions. \n\nRight.\n\n> \n> Is this all that is done or has some work been done to improve their \n> performance further?\n> \n> I'm mainly interested in INSERT performance as this is the area that is \n> much slower than other operations.\n\nSo you are not saying that INSERT on temp tables is any slower than\nordinary tables, just that you think there is a way to make temp tables\nfaster. \n\nMy guess is that WAL is going to make INSERT's poor performance a\nnon-issue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 Jul 2000 09:21:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temp tables performance question"
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> I have a question about performance issues related to temporary tables.\n> IIRC temporary tables were implemented as ordinary tables with some \n> pre/post-processing to give them unique names so that they would not \n> clash with similar-named tables from other sessions. \n\nRight, there's basically no performance difference at all from ordinary\ntables.\n\nIt'd be possible to have them go through the \"local buffer manager\"\nfor their entire lives, rather than only for the transaction in which\nthey are created, as happens for ordinary tables. This would avoid\nat least some shared-buffer-manipulation overhead. I'm not sure it'd\nbuy a whole lot, but it probably wouldn't take much work to make it\nhappen, either.\n\nI think it would be folly to try to make them use a different smgr or\navoid WAL; that'd require propagating differences between ordinary and\ntemp tables into way too many places.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jul 2000 12:09:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temp tables performance question "
},
{
"msg_contents": "> It'd be possible to have them go through the \"local buffer manager\"\n> for their entire lives, rather than only for the transaction in which\n> they are created, as happens for ordinary tables. This would avoid\n> at least some shared-buffer-manipulation overhead. I'm not sure it'd\n> buy a whole lot, but it probably wouldn't take much work to make it\n> happen, either.\n> \n> I think it would be folly to try to make them use a different smgr or\n> avoid WAL; that'd require propagating differences between ordinary and\n> temp tables into way too many places.\n\nYes, temp table optimization hardly seems worth it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 Jul 2000 12:16:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temp tables performance question"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n> > I have a question about performance issues related to temporary tables.\n> > IIRC temporary tables were implemented as ordinary tables with some\n> > pre/post-processing to give them unique names so that they would not\n> > clash with similar-named tables from other sessions.\n> \n> Right, there's basically no performance difference at all from ordinary\n> tables.\n> \n> It'd be possible to have them go through the \"local buffer manager\"\n> for their entire lives, rather than only for the transaction in which\n> they are created, as happens for ordinary tables. This would avoid\n> at least some shared-buffer-manipulation overhead. I'm not sure it'd\n> buy a whole lot, but it probably wouldn't take much work to make it\n> happen, either.\n\nI was hoping that at least fsync()'s could be avoided on temp tables \neven without -F.\n\nOther kinds of improvemants should be possible too, due to the \nessentially non-transactional nature of temp tables.\n\n> I think it would be folly to try to make them use a different smgr or\n> avoid WAL; that'd require propagating differences between ordinary and\n> temp tables into way too many places.\n\nDoes using a different storage manager really need propagating\ndifferences \nin other places than the storage manager code ? \n\nIn an ideal world it should not be so ;)\n\n-----------\nHannu\n",
"msg_date": "Fri, 14 Jul 2000 00:13:00 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Temp tables performance question"
}
] |
[
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > IMHO temporary tables could be made significantly faster than \"ordinary\"\n> > as they are only accessed inside one session and thus have no need for\n> > locking or even WAL as they could do with max 2 copies of the same row\n> > the other of which can be discarded at end of transaction thereby making\n> > it possible to provide much faster insert behaviour.\n> \n> I am somewhat confused. What does the max 2 copies issue have to do with\n> inserts, where you only have one copy of the row anyway ?\n\nYou may want to rollback the transaction;\n\n--------------\nHannu\n",
"msg_date": "Thu, 13 Jul 2000 14:50:22 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: Temp tables performance question"
},
{
"msg_contents": "\n> IMHO temporary tables could be made significantly faster than \"ordinary\" \n> as they are only accessed inside one session and thus have no need for \n> locking or even WAL as they could do with max 2 copies of the same row \n> the other of which can be discarded at end of transaction thereby making \n> it possible to provide much faster insert behaviour.\n\nI am somewhat confused. What does the max 2 copies issue have to do with \ninserts, where you only have one copy of the row anyway ?\n\nAndreas\n",
"msg_date": "Thu, 13 Jul 2000 14:03:10 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": false,
"msg_subject": "AW: Temp tables performance question"
}
] |
[
{
"msg_contents": "\n> So you are not saying that INSERT on temp tables is any slower than\n> ordinary tables, just that you think there is a way to make temp tables\n> faster. \n> \n> My guess is that WAL is going to make INSERT's poor performance a\n> non-issue.\n\nI do not think that WAL in its first version can speed anything up,\nit will rather slow things down.\nI think that insert performance should be somewhere near \"\\copy\" \nperformance which is not so bad now. \nThus it probably could be improved for both regular and temp tables.\n\nAndreas\n\nPS: I am off for a week now\n",
"msg_date": "Thu, 13 Jul 2000 15:57:49 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Temp tables performance question"
}
] |
[
{
"msg_contents": "I see the following symptom using current sources...\n\n -Thomas\n\nmyst> psql\npsql: pqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nmyst> createdb\nCREATE DATABASE\nmyst> psql\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\nlockhart=#\n\n\n\n",
"msg_date": "Fri, 14 Jul 2000 15:40:32 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with error detection in libpq?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I see the following symptom using current sources...\n> myst> psql\n> psql: pqReadData() -- backend closed the channel unexpectedly.\n\nNo sign of this here. I last pulled a CVS update Tuesday evening,\nso if it's broken the breakage is very recent.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jul 2000 11:51:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with error detection in libpq? "
},
{
"msg_contents": "> No sign of this here. I last pulled a CVS update Tuesday evening,\n> so if it's broken the breakage is very recent.\n\nHmm. It is possible that it is related to my \"nested block comment\"\ncode, but I don't see how that would be related.\n\n - Thomas\n",
"msg_date": "Fri, 14 Jul 2000 16:11:35 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problem with error detection in libpq?"
}
] |
[
{
"msg_contents": "I've committed some changes which implement a few new things.\n\n From the CVS log:\n\nImplement nested block comments in the backend and in psql.\n Include updates for the comment.sql regression test.\nImplement SET SESSION CHARACTERISTICS and SET DefaultXactIsoLevel.\nImplement SET SESSION CHARACTERISTICS TRANSACTION COMMIT\n and SET AutoCommit in the parser only.\n Need to add code to actually do something.\nImplement WITHOUT TIME ZONE type qualifier.\nDefine SCHEMA keyword, along with stubbed-out grammar.\nImplement \"[IN|INOUT|OUT] [varname] type\" function arguments\n in parser only; INOUT and OUT throws an elog(ERROR).\nAdd PATH as a type-specific token, since PATH is in SQL99\n to support schema resource search and resolution.\n\nSince some of these changes involve tokens in the parser, you may need\nto do a \"make clean all install\" to get everything sync'd. afaict this\nis because we propagate a few of the gram.y token values deeper into the\nbackend...\n\n - Thomas\n",
"msg_date": "Fri, 14 Jul 2000 16:14:28 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Just committed some changes..."
},
{
"msg_contents": "On Fri, Jul 14, 2000 at 04:14:28PM +0000, Thomas Lockhart wrote:\n> Define SCHEMA keyword, along with stubbed-out grammar.\n\nI was just digging into pgaccess a little, cleaning up the \ndiagramming feature I had added. In a fit of naivete, I had\ncalled them \"schema\", as in the logical idea, not the \nSQL key word. Since we're going to be implementing SQL SCHEMA,\neventually, perhaps this would be a good time to rename this\nfeature in pgaccess. \n\nHow does \"Diagrams\" sound? Anyone have a better idea? The name appears\non a tab. The other tabs are labelled:\n\nTables, Queries, Views, Sequences, Functions, Reports, Forms, Scripts,\nUsers, ????.\n\nKeep in mind, I have been thinking about connecting the schema diagrammer\ninto the visual query builder, so that you can set a 'strict' mode,\nwhere it'll only allow joins that have been defined in the schema\neditor. That's why I named it schema, in the first place. It'd be nice to\nhave the diagramming interact with primary key/foreign key constraints,\noptionally drawing them in automatically. However, all it does right\nnow is draw tables and let you draw links between them, by dragging and\ndropping fieldnames.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n\n",
"msg_date": "Fri, 14 Jul 2000 14:43:02 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "PgAccess diagrams (was: Re: Just committed some changes...)"
},
{
"msg_contents": "> How does \"Diagrams\" sound? Anyone have a better idea? The name appears\n> on a tab. The other tabs are labelled:\n\nOK with me...\n\n - Thomas\n",
"msg_date": "Sat, 15 Jul 2000 01:46:07 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PgAccess diagrams (was: Re: Just committed some\n changes...)"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n> \n> How does \"Diagrams\" sound? Anyone have a better idea? The name appears\n> on a tab. The other tabs are labelled:\n\nSounds fine to me.\n\nAnyway, you are the people speaking english all day long :-) and you\nshould say if it's ok\n\nI'll apply the pathes to the next PgAccess release.\n\nConstantin Teodorescu\nBraila, ROMANIA\n",
"msg_date": "Sat, 15 Jul 2000 18:51:52 +0300",
"msg_from": "Constantin Teodorescu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PgAccess diagrams (was: Re: Just committed some\n changes...)"
}
] |
[
{
"msg_contents": "I have been cleaning up the index-related code a little bit while\ntrying to eliminate memory leaks associated with functional indexes,\nand I have noticed an old \"feature\" that I think we should remove.\nIt's undocumented (I'd never even heard of it before) and as far as\nI can tell the only thing you can do with it is shoot yourself in the\nfoot.\n\nWhat I'm talking about is the \"opt_type\" clause in CREATE INDEX\ncolumn items:\n\nindex_elem: attr_name opt_type opt_class\n\nfunc_index: func_name '(' name_list ')' opt_type opt_class\n\nopt_type: ':' Typename { $$ = $2; }\n | FOR Typename { $$ = $2; }\n | /*EMPTY*/ { $$ = NULL; }\n ;\n\nThis is not to be confused with the index operator class option.\nWhat it seems to do is override the system's choice of the size and\nstorage type of the index column --- the index is created as though\nits stored column has the specified type, not the type of the\nunderlying data column or function result. Mind you, no actual\nruntime data conversion is performed, we just cause the index's\ncatalog entries to lie about what's in the column.\n\nI can see no possible use for this except triggering coredumps.\nWhatever functionality might be served by pretending the column type\nis different from reality is already taken care of by the ability\nto override the index operator class for the column. Furthermore,\nas of 7.0 the opclass code contains safety checks to prevent you from\nsubstituting operators on a non-binary-compatible datatype. There are\nno checks whatever on the \"opt_type\" substitution.\n\nUnless someone comes up with a good reason for this feature,\nit's going to be history very shortly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jul 2000 15:29:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index column \"opt_type\" slated for destruction"
}
] |
[
{
"msg_contents": "Hi All,\n\nI have a five table join that should return 1 record but, 0 comes back\nthis was working before running vacuum last night - as a simple test\nI set enable_seqscan=off and hay-presstoe it came back !\n\nI guess there is an optimizer problem, so I have two questions\n\n1) Is there some known bad problems outstanding ?\n\n2) How do I get the output from explain to a file & would you like it ?\n\n\nTIA,\n Frank.\n\n",
"msg_date": "Fri, 14 Jul 2000 15:59:43 -0400",
"msg_from": "frank <[email protected]>",
"msg_from_op": true,
"msg_subject": "lost records !"
},
{
"msg_contents": "frank <[email protected]> writes:\n> I have a five table join that should return 1 record but, 0 comes back\n> this was working before running vacuum last night - as a simple test\n> I set enable_seqscan=off and hay-presstoe it came back !\n\n> I guess there is an optimizer problem, so I have two questions\n> 1) Is there some known bad problems outstanding ?\n\nNo, at least not in 7.0.*.\n\n> 2) How do I get the output from explain to a file & would you like it ?\n\nCut and paste from a shell window is close enough ... explain would be\ngood, also the full schema for your tables and indexes thereon\n(pg_dump -s is a good way to extract the schema info).\n\nDon't forget the text of the query, too ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jul 2000 17:22:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: lost records ! "
},
{
"msg_contents": "frank <[email protected]> writes:\n>>>> I have a five table join that should return 1 record but, 0 comes back\n>>>> this was working before running vacuum last night - as a simple test\n>>>> I set enable_seqscan=off and hay-presstoe it came back !\n \n> invent=# \\i t\n> psql:t:13: NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=1283.07..1283.07 rows=1 width=36)\n> -> Nested Loop (cost=940.96..1282.35 rows=289 width=36)\n> -> Merge Join (cost=940.96..1000.04 rows=7 width=32)\n> -> Index Scan using purch_order_pkey on purch_order (cost=0.00..28.53 rows=35 width=8)\n> -> Sort (cost=940.96..940.96 rows=2001 width=24)\n> -> Merge Join (cost=498.47..831.27 rows=2001 width=24)\n> -> Index Scan using part_info_pkey on part_info (cost=0.00..194.06 rows=3450 width=12)\n> -> Sort (cost=498.47..498.47 rows=5799 width=12)\n> -> Seq Scan on po_line_item (cost=0.00..135.99 rows=5799 width=12)\n> -> Index Scan using parts_pkey on parts (cost=0.00..39.38 rows=41 width=4)\n> \n> invent=# set enable_seqscan=off;\n> SET VARIABLE\n> invent=# \\i t\n> psql:t:13: NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=100001283.07..100001283.07 rows=1 width=36)\n> -> Nested Loop (cost=100000940.96..100001282.35 rows=289 width=36)\n> -> Merge Join (cost=100000940.96..100001000.04 rows=7 width=32)\n> -> Index Scan using purch_order_pkey on purch_order (cost=0.00..28.53 rows=35 width=8)\n> -> Sort (cost=100000940.96..100000940.96 rows=2001 width=24)\n> -> Merge Join (cost=100000498.47..100000831.27 rows=2001 width=24)\n> -> Index Scan using part_info_pkey on part_info (cost=0.00..194.06 rows=3450 width=12)\n> -> Sort (cost=100000498.47..100000498.47 rows=5799 width=12)\n> -> Seq Scan on po_line_item (cost=100000000.00..100000135.99 rows=5799 width=12)\n> -> Index Scan using parts_pkey on parts (cost=0.00..39.38 rows=41 width=4)\n\nWell, that's pretty dang odd, because the two plans sure look the same!\nSo how could they generate different results?\n\nIt's possible that there is some difference in details that don't show\nin EXPLAIN --- are the EXPLAIN VERBOSE results also the same?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jul 2000 13:12:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: lost records ! "
}
] |
[
{
"msg_contents": "Here's a patch to clean up some issues with the schema diagram\neditor in pgaccess. This patch:\n\n\t* allows schema window to be resized up to full screen size\n\t* autosizes window on schema open\n\t* allows multiselect of tables (by shift click) to allow\n\t\tdragging of sets of tables\n\t\tdeletion of sets of tables\n\t* fixes a bug in table deletion code that did not delete\n\t links to that table properly\n\t* changes link lines to be continous, so they export better\n\t\tvia postscript->pstoedit->xfig\n\t\nIt does not change the name of the feature from schema, as I \nasked about on HACKERS. I'm off for a week's vacation starting\ntomorrow, so I probably won't see any commentary on this until\nMonday the 24th\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005",
"msg_date": "Fri, 14 Jul 2000 15:27:37 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCHES] PgAccess schema-diagram cleanup"
},
{
"msg_contents": "I have bounced this message to the pgaccess maintainer, and to the\ninterfaces list. I would prefer Constantin to apply the patch to his\nversion, and issue a new release.\n\n\n> Here's a patch to clean up some issues with the schema diagram\n> editor in pgaccess. This patch:\n> \n> \t* allows schema window to be resized up to full screen size\n> \t* autosizes window on schema open\n> \t* allows multiselect of tables (by shift click) to allow\n> \t\tdragging of sets of tables\n> \t\tdeletion of sets of tables\n> \t* fixes a bug in table deletion code that did not delete\n> \t links to that table properly\n> \t* changes link lines to be continous, so they export better\n> \t\tvia postscript->pstoedit->xfig\n> \t\n> It does not change the name of the feature from schema, as I \n> asked about on HACKERS. I'm off for a week's vacation starting\n> tomorrow, so I probably won't see any commentary on this until\n> Monday the 24th\n> \n> Ross\n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n> \n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 14 Jul 2000 17:09:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PgAccess schema-diagram cleanup"
},
{
"msg_contents": "Can someone tell me where we are on this? Constantin?\n\n> Here's a patch to clean up some issues with the schema diagram\n> editor in pgaccess. This patch:\n> \n> \t* allows schema window to be resized up to full screen size\n> \t* autosizes window on schema open\n> \t* allows multiselect of tables (by shift click) to allow\n> \t\tdragging of sets of tables\n> \t\tdeletion of sets of tables\n> \t* fixes a bug in table deletion code that did not delete\n> \t links to that table properly\n> \t* changes link lines to be continous, so they export better\n> \t\tvia postscript->pstoedit->xfig\n> \t\n> It does not change the name of the feature from schema, as I \n> asked about on HACKERS. I'm off for a week's vacation starting\n> tomorrow, so I probably won't see any commentary on this until\n> Monday the 24th\n> \n> Ross\n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n> \n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 11 Oct 2000 14:10:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] PgAccess schema-diagram cleanup"
},
{
"msg_contents": "\nRoss, this looks very good. What happened to it?\n\n> Here's a patch to clean up some issues with the schema diagram\n> editor in pgaccess. This patch:\n> \n> \t* allows schema window to be resized up to full screen size\n> \t* autosizes window on schema open\n> \t* allows multiselect of tables (by shift click) to allow\n> \t\tdragging of sets of tables\n> \t\tdeletion of sets of tables\n> \t* fixes a bug in table deletion code that did not delete\n> \t links to that table properly\n> \t* changes link lines to be continous, so they export better\n> \t\tvia postscript->pstoedit->xfig\n> \t\n> It does not change the name of the feature from schema, as I \n> asked about on HACKERS. I'm off for a week's vacation starting\n> tomorrow, so I probably won't see any commentary on this until\n> Monday the 24th\n> \n> Ross\n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n> \n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 08:49:13 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] PgAccess schema-diagram cleanup"
},
{
"msg_contents": "It got bounced to Constantin. I'm not sure if it's made it in, there.\nI'll ping him and see if he needs a new patch.\n\nRoss\n\nOn Wed, Jan 24, 2001 at 08:49:13AM -0500, Bruce Momjian wrote:\n> \n> Ross, this looks very good. What happened to it?\n> \n> > Here's a patch to clean up some issues with the schema diagram\n> > editor in pgaccess. This patch:\n> > \n> > \t* allows schema window to be resized up to full screen size\n> > \t* autosizes window on schema open\n> > \t* allows multiselect of tables (by shift click) to allow\n> > \t\tdragging of sets of tables\n> > \t\tdeletion of sets of tables\n> > \t* fixes a bug in table deletion code that did not delete\n> > \t links to that table properly\n> > \t* changes link lines to be continous, so they export better\n> > \t\tvia postscript->pstoedit->xfig\n> > \t\n> > It does not change the name of the feature from schema, as I \n> > asked about on HACKERS. I'm off for a week's vacation starting\n> > tomorrow, so I probably won't see any commentary on this until\n> > Monday the 24th\n> > \n> > Ross\n",
"msg_date": "Wed, 24 Jan 2001 10:43:40 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] PgAccess schema-diagram cleanup"
},
{
"msg_contents": "\nWohoo, looks like this was applied. We will ship the new pgaccess\nversion with 7.1.\n\n> Here's a patch to clean up some issues with the schema diagram\n> editor in pgaccess. This patch:\n> \n> \t* allows schema window to be resized up to full screen size\n> \t* autosizes window on schema open\n> \t* allows multiselect of tables (by shift click) to allow\n> \t\tdragging of sets of tables\n> \t\tdeletion of sets of tables\n> \t* fixes a bug in table deletion code that did not delete\n> \t links to that table properly\n> \t* changes link lines to be continous, so they export better\n> \t\tvia postscript->pstoedit->xfig\n> \t\n> It does not change the name of the feature from schema, as I \n> asked about on HACKERS. I'm off for a week's vacation starting\n> tomorrow, so I probably won't see any commentary on this until\n> Monday the 24th\n> \n> Ross\n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n> \n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 27 Jan 2001 13:18:04 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PgAccess schema-diagram cleanup"
}
] |
[
{
"msg_contents": "Hi All,\n\nAttempting to build with the current CVS I'm getting the following\nerror for every file that includes config.h.\n\n\ngcc -I../../include -Wall -Wmissing-prototypes -Wmissing-declarations -g -O2 \n-Wno-error -c -o postgres.o postgres.c\nIn file included from ../../include/c.h:47,\n from ../../include/postgres.h:40,\n from postgres.c:20:\n../../include/config.h:420: warning: `struct in_addr' declared inside parameter \nlist\n../../include/config.h:420: warning: its scope is only this definition or \ndeclaration, which is probably not what you want.\n\nThe build ends with the following error.\n\n\nIn file included from postgres.c:34:\n/usr/include/arpa/inet.h:52: conflicting types for `inet_aton'\n../../include/config.h:420: previous declaration of `inet_aton'\n\n\nIt's all tied, I think, to the following segment of code.\n\n/* Set to 1 if you have inet_aton() */\n/* #undef HAVE_INET_ATON */\n#ifndef HAVE_INET_ATON\n# ifdef HAVE_ARPA_INET_H\n# ifdef HAVE_NETINET_IN_H\n# include <sys/types.h>\n# include <netinet/in.h>\n# endif\n# include <arpa/inet.h>\n# endif\nextern int inet_aton(const char *cp, struct in_addr * addr);\n#endif\n\n\nWhere HAVE_ARPA_INET_H and HAVE_NETINET_IN_H are no longer\nset by configure.\n\nThe platform is Solaris SPARC 2.7, compiler gcc.\n\nKeith.\n\n",
"msg_date": "Fri, 14 Jul 2000 23:45:43 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problems compiling CVS on Solaris."
}
] |
[
{
"msg_contents": "I take it all back.\n\nI see, from my latest update, that config.h.in has changed.\n\nI'll do a build with the *very* latest CVS and hopefully that\nwill be OK.\n\nSorry for raising the alarm too soon.\n\nKeith.\n\nKeith Parks <[email protected]>\n> \n> Hi All,\n> \n> Attempting to build with the current CVS I'm getting the following\n> error for every file that includes config.h.\n> \n> \n> gcc -I../../include -Wall -Wmissing-prototypes -Wmissing-declarations -g -O2 \n> -Wno-error -c -o postgres.o postgres.c\n> In file included from ../../include/c.h:47,\n> from ../../include/postgres.h:40,\n> from postgres.c:20:\n> ../../include/config.h:420: warning: `struct in_addr' declared inside \nparameter \n> list\n> ../../include/config.h:420: warning: its scope is only this definition or \n> declaration, which is probably not what you want.\n> \n> The build ends with the following error.\n> \n> \n> In file included from postgres.c:34:\n> /usr/include/arpa/inet.h:52: conflicting types for `inet_aton'\n> ../../include/config.h:420: previous declaration of `inet_aton'\n> \n> \n> It's all tied, I think, to the following segment of code.\n> \n> /* Set to 1 if you have inet_aton() */\n> /* #undef HAVE_INET_ATON */\n> #ifndef HAVE_INET_ATON\n> # ifdef HAVE_ARPA_INET_H\n> # ifdef HAVE_NETINET_IN_H\n> # include <sys/types.h>\n> # include <netinet/in.h>\n> # endif\n> # include <arpa/inet.h>\n> # endif\n> extern int inet_aton(const char *cp, struct in_addr * addr);\n> #endif\n> \n> \n> Where HAVE_ARPA_INET_H and HAVE_NETINET_IN_H are no longer\n> set by configure.\n> \n> The platform is Solaris SPARC 2.7, compiler gcc.\n> \n> Keith.\n> \n> \n\n",
"msg_date": "Fri, 14 Jul 2000 23:56:35 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems compiling CVS on Solaris."
},
{
"msg_contents": "Keith Parks writes:\n\n> I take it all back.\n> \n> I see, from my latest update, that config.h.in has changed.\n> \n> I'll do a build with the *very* latest CVS and hopefully that\n> will be OK.\n> \n> Sorry for raising the alarm too soon.\n\nAll my fault...\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 15 Jul 2000 23:33:11 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems compiling CVS on Solaris."
}
] |
[
{
"msg_contents": "> Add PATH as a type-specific token, since PATH is in SQL99\n> to support schema resource search and resolution.\n\nYou do realize this broke the geometry regress test?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jul 2000 19:59:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Latest parser changes"
},
{
"msg_contents": "> > Add PATH as a type-specific token, since PATH is in SQL99\n> > to support schema resource search and resolution.\n> You do realize this broke the geometry regress test?\n\nUm, er, I guess not. But I *did* make one round of fixes to get the\nregression tests to pass (by allowing PATH as a type name in gram.y),\nand my recollection is that the PATH naming problem had actually\naffected several regression tests. That is no longer the case on my\nmachine (i.e. only one test fails).\n\nSo I'm guessing again (my development laptop is powered off right now)\nthat the regression test is only partially damaged, perhaps with a\ncolumn name problem?? I'll try to look at it this weekend.\n\n - Thomas\n",
"msg_date": "Sat, 15 Jul 2000 02:04:08 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Latest parser changes"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> So I'm guessing again (my development laptop is powered off right now)\n> that the regression test is only partially damaged, perhaps with a\n> column name problem?? I'll try to look at it this weekend.\n\nThe problem is this query now fails:\n \n SELECT '' AS four, path(f1) FROM POLYGON_TBL;\n! ERROR: parser: parse error at or near \"(\"\n\nEvidently, although \"path\" is still accepted as a type name, it's\nnot allowable as a function name, which puts a crimp in type coercion.\n\nAFAICT from a quick scan of SQL99, PATH is only intended to be used\nin specialized contexts like SET PATH. It is *not* a data type nor\nneeded in type-related constructs.\n\nI think this could be fixed by removing the generic->PATH_P production\nand allowing PATH to be a ColId instead of just a ColLabel. Of course,\nthat just makes one wonder why bother to make it a keyword yet, when\nwe don't yet have any productions that need it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jul 2000 22:09:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: Latest parser changes "
}
] |
[
{
"msg_contents": "> How does \"Diagrams\" sound? Anyone have a better idea? The name appears\n> on a tab. The other tabs are labelled:\n\nIf one bothers to compare with them, \"Diagrams\" is what it's called in the\nMicrosoft SQL Server world. (\"Database Diagrams\" to be exact). And they can\nbe used to build foreign key relationships, like you suggested - a very nice\nfeature, I might add :-)\n\n//Magnus\n",
"msg_date": "Sat, 15 Jul 2000 17:03:36 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: PgAccess diagrams (was: Re: Just committed some cha\n\tnges...)"
}
] |
[
{
"msg_contents": "\nThere is a new version of pg_dump (with blob support) at:\n\n ftp://ftp.rhyme.com.au/pub/postgresql/pg_dump/blobs/\n\nThis version supports TAR output and has a few bug fixes. There are two\nversions to download:\n\n pg_dump_...CVS.tar.gz\nand\n pg_dump_...702.tar.gz\n\nwhich can be used to build against CVS and version 7.0.2 respectively.\n\nA TAR format dump is produced by using the -Ft option, and can be used as\ninput into pg_restore. As with the previous release, BLOBs can only be\nreloaded by pg_restore if a db connection is used (via --db=<dbname>).\n\nAny comments & suggestions would be appreciated...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 16 Jul 2000 13:34:09 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump with BLOBS & V7.0.2 fix"
}
] |
[
{
"msg_contents": "\nAnother update of the dump code; this fixes a few bugs that people without\nzlib under 7.02 would have experienced. Also, if you wish to use zlib, you\nneed to modify the Makefile to add -DHAVE_LIBZ to the CFLAGS....\n\n--- Original announcement ----\n\nThere is a new version of pg_dump (with blob support) at:\n\n ftp://ftp.rhyme.com.au/pub/postgresql/pg_dump/blobs/\n\nThis version supports TAR output and has a few bug fixes. There are two\nversions to download:\n\n pg_dump_...CVS.tar.gz\nand\n pg_dump_...702.tar.gz\n\nwhich can be used to build against CVS and version 7.0.2 respectively.\n\nA TAR format dump is produced by using the -Ft option, and can be used as\ninput into pg_restore. As with the previous release, BLOBs can only be\nreloaded by pg_restore if a db connection is used (via --db=<dbname>).\n\nAny comments & suggestions would be appreciated...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 16 Jul 2000 20:49:35 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump with BLOBS & V7.0.2 UPDATED"
},
{
"msg_contents": "From: Philip Warner <[email protected]>\n Date: Sun, 16 Jul 2000 20:49:35 +1000\n\nHi Philip,\n\n > Another update of the dump code; this fixes a few bugs that people without\n > zlib under 7.02 would have experienced. Also, if you wish to use zlib, you\n > need to modify the Makefile to add -DHAVE_LIBZ to the CFLAGS....\n\nplease apply attached patch for 7.0.2 version. It is long, but it only\nremoves Makefile and creates Makefile.in which can be used for 7.0.2. With\nthis patch applied, just untar the newest snapshot to src/bin/pg_dump and\nyou can continue with normal installation: configure && make && make install.\n\nThank you for good work.\n-- \nPavel Jan�k ml.\[email protected]",
"msg_date": "Wed, 19 Jul 2000 11:11:05 +0200",
"msg_from": "[email protected] (Pavel =?iso-8859-2?q?Jan=EDk?= ml.)",
"msg_from_op": false,
"msg_subject": "Re: pg_dump with BLOBS & V7.0.2 UPDATED"
},
{
"msg_contents": "From: [email protected] (Pavel Jan�k ml.)\n Date: Wed, 19 Jul 2000 11:11:05 +0200\n\nHi,\n\n > this patch applied, just untar the newest snapshot to src/bin/pg_dump\n > and you can continue with normal installation: configure && make &&\n > make install.\n\nhmm. PG_VERSION is defined as 0 in version.h:\n\n#define PG_VERSION \"0\"\n\nbut in CVS version the variable is defined in config.h:\n\n#define PG_VERSION \"7.1devel\"\n\nSo if you compile the new pg_dump as I wrote above, you must use\n--ignore-version:\n\nSnowWhite:/home/pavel$ pg_dump db\nDatabase version: Archiver(db)\nPostgreSQL 7.0.2 on i486-pc-linux-gnu, compiled by gcc 2.7.2.3 version: 0\nAborting because of version mismatch.\nUse --ignore-version if you think it's safe to proceed anyway.\nSnowWhite:/home/pavel$\n\nI think this is the correct (and smallest :-) patch:\n\n\n\n\n-- \nPavel Jan�k ml.\[email protected]",
"msg_date": "Wed, 19 Jul 2000 13:18:41 +0200",
"msg_from": "[email protected] (Pavel =?iso-8859-2?q?Jan=EDk?= ml.)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pg_dump with BLOBS & V7.0.2 UPDATED"
}
] |
[
{
"msg_contents": "If you configure with --enable-depend and you are using GCC then you'll\nget dependency information generated on the fly. Give it a try. Other\ncompilers can be supported if there is demand. See Makefile.global for the\ninternals.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sun, 16 Jul 2000 16:51:19 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Automatic dependency tracking is here"
}
] |
[
{
"msg_contents": "Hi,\n\nI have hacked the full-text indexing code in contrib to make it do\nkeyword indexing.\n\nIs someone willing to run an eye over it for obvious stupidities and\nthen include it into the standard tree?\n\nOliver has been including this in the Debian distribution for a while\nnow (thanks Oliver :-) but it's probably time it went into the main\ntree.\n\nThanks,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n",
"msg_date": "Mon, 17 Jul 2000 09:24:25 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Keyword indexing code for contrib"
},
{
"msg_contents": "Sure, send it over to the patches list, and if it looks good, we will\napply it.\n\n\n> Hi,\n> \n> I have hacked the full-text indexing code in contrib to make it do\n> keyword indexing.\n> \n> Is someone willing to run an eye over it for obvious stupidities and\n> then include it into the standard tree?\n> \n> Oliver has been including this in the Debian distribution for a while\n> now (thanks Oliver :-) but it's probably time it went into the main\n> tree.\n> \n> Thanks,\n> \t\t\t\t\tAndrew.\n> -- \n> _____________________________________________________________________\n> Andrew McMillan, e-mail: [email protected]\n> Catalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\n> Me: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 Jul 2000 19:24:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Keyword indexing code for contrib"
}
] |
[
{
"msg_contents": "Now that I'm actually on -HACKERS, did this make it to the list?\n\nLarry \n----- Forwarded message (env-from ler) -----\n\n2 Items I noticed with PostgreSQL 7.0.2:\n\n1) the macaddr type is NOT documented in the datatypes and functions\n documentation.\n2) the manufacturer list in the mac.c module is way out of date. \n this list can be updated from http://standards.ieee.org/regauth/oui/index.shtml \n and massaging the oui.txt file available from there. \n\n Is there anything I can do to help this?\n\n Also, I noticed a discussion on the INET/CIDR types ongoing here, and was wondering\n if there was a way to get an output function for these types that would ALWAYS print\n all 4 octets of the adddress for use by non-technical people. I am currently writing\n an IP allocation and tracking system using these types and find this a prioblem.\n\n Thanks!\n\n Larry Rosenman\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n----- End of forwarded message (env-from ler) -----\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 16 Jul 2000 21:33:10 -0500 (CDT)",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "macaddr type (fwd)"
},
{
"msg_contents": "Larry Rosenman <[email protected]> writes:\n> 1) the macaddr type is NOT documented in the datatypes and functions\n> documentation.\n> 2) the manufacturer list in the mac.c module is way out of date. \n> this list can be updated from http://standards.ieee.org/regauth/oui/index.shtml \n> and massaging the oui.txt file available from there. \n\n> Is there anything I can do to help this?\n\nSure: do it and submit a patch. Doc patches gratefully accepted, too.\n\n> Also, I noticed a discussion on the INET/CIDR types ongoing here, and\n> was wondering if there was a way to get an output function for these\n> types that would ALWAYS print all 4 octets of the adddress for use by\n> non-technical people. I am currently writing an IP allocation and\n> tracking system using these types and find this a prioblem.\n\nThis probably ought to be one of the considerations in the INET/CIDR\nredesign that we seem to be working on. I haven't been paying much\nattention to the details, but feel free to contribute to that thread.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jul 2000 00:23:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: macaddr type (fwd) "
}
] |
[
{
"msg_contents": "\nThis question comes out of my work on pg_dump. AFAICT, the only way of\nshowing, eg, the SQL for a procedure definition (other than 'select prosrc\nfrom pg_procs, or whatever'), is to use pg_dump.\n\nThis seems strange to me, since I often want to look at a procedure within\npsql, and running 'select' on system tables is not my first thought.\n\nI would have thought that the database itself should be the tool used to\ndisplay SQL, and if not the database, then one of the interface libraries. \n\nIf it were separated from pg_dump, then psql could more easily have a new\n\"\\D table table-name\" and \"\\D rule rule-name\" to dump object definitions,\nor \"\\D rules\", to dump the names of all rules etc.\n\nThe separation would have the further advantage that when a new language\nfeature is added the person adding it does not have to remember to update\npg_dump, psql etc. And the task might be a little easier, since I would\nhope that the code to dump the definition would be close to the code to\nparse it.\n\nDoes this sound resonable/sensible/worth doing?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 17 Jul 2000 16:21:56 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Should PG backend know how to represent metadata? "
},
{
"msg_contents": "\nSomething like...\n# \\D table foo\n\ncreate table foo (\n bar text,\n baz integer\n);\n\n?\n\nSounds pretty good.\n\nPhilip Warner wrote:\n> \n> This question comes out of my work on pg_dump. AFAICT, the only way of\n> showing, eg, the SQL for a procedure definition (other than 'select prosrc\n> from pg_procs, or whatever'), is to use pg_dump.\n> \n> This seems strange to me, since I often want to look at a procedure within\n> psql, and running 'select' on system tables is not my first thought.\n> \n> I would have thought that the database itself should be the tool used to\n> display SQL, and if not the database, then one of the interface libraries.\n> \n> If it were separated from pg_dump, then psql could more easily have a new\n> \"\\D table table-name\" and \"\\D rule rule-name\" to dump object definitions,\n> or \"\\D rules\", to dump the names of all rules etc.\n> \n> The separation would have the further advantage that when a new language\n> feature is added the person adding it does not have to remember to update\n> pg_dump, psql etc. And the task might be a little easier, since I would\n> hope that the code to dump the definition would be close to the code to\n> parse it.\n> \n> Does this sound resonable/sensible/worth doing?\n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.C.N. 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 17 Jul 2000 16:30:23 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should PG backend know how to represent metadata?"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> I would have thought that the database itself should be the tool used to\n> display SQL, and if not the database, then one of the interface libraries. \n\nYou might be on to something. We have bits and pieces of that, such as\nthe rule reverse-lister and the code Peter just added to create a\nreadable version of a type name, but maybe some more consolidation is\nin order.\n\n> The separation would have the further advantage that when a new language\n> feature is added the person adding it does not have to remember to update\n> pg_dump, psql etc. And the task might be a little easier, since I would\n> hope that the code to dump the definition would be close to the code to\n> parse it.\n\nNo, not really. The only advantage would be in centralizing the display\ncapability and having just one copy instead of several. That is a\nsubstantial advantage, but you only get it if you make sure the backend\ndisplay capability is defined in a way that lets all these apps use it.\nThat might take some careful thought. For example, does the definition\nof a table include associated constraints and indexes? pg_dump would\nwant them separate, other apps perhaps not. Also, psql's \\d command\ndoesn't display the schema of a table in the form of a CREATE command\nto recreate it, and I don't think it should. Certainly you don't want\nto condemn every app that wants to know \"what are the columns of this\ntable\" to have to include a full SQL parser to make sense of the\nanswer. So I think some thought is needed to figure out what a\ngeneral-purpose representation would be like.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jul 2000 02:43:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should PG backend know how to represent metadata? "
},
{
"msg_contents": "At 16:30 17/07/00 +1000, Chris Bitmead wrote:\n>\n>Something like...\n># \\D table foo\n>\n>create table foo (\n> bar text,\n> baz integer\n>);\n\nExactly. Also, \"\\D table\" would list all tables.\n\nThe idea is to have [a replacement for] the 'dump' code from pg_dump in the\nbackend or in a library (libpq? libdump?). It might also be worth allowing: \n\n \\D ALL\n\nto do something close to what pg_dump does at the moment (ie. raw text dump\nof schema), although I would probably be inclined to sort the output in\nlogical order (all tables together etc).\n\nAs has already been commented, the changes I have made to pg_dump are\nnon-trivial, so now might be a good time to make these extra changes.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 17 Jul 2000 16:52:01 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should PG backend know how to represent metadata?"
},
{
"msg_contents": "At 02:43 17/07/00 -0400, Tom Lane wrote:\n>That is a\n>substantial advantage, but you only get it if you make sure the backend\n>display capability is defined in a way that lets all these apps use it.\n>That might take some careful thought. For example, does the definition\n>of a table include associated constraints and indexes?\n\nYou need to separate the API from what is displayed (eg. in psql). \n\nSuggestion:\n\nI would envisage the API consisting of a custom dump routine for each\nobject type. In the case of the table dumper API, it would return a table\ndefinition with no indexes or constraints and a list of related entities\nconsisting of (object-type, object-oid) pairs suitable for passing back to\nthe dumper API. psql could display as little or as much as it desired,\npg_dump could ferret the extra items away for later use etc. For those\nitems that can not be separated out, then they obviously have to go into\nthe main definition.\n\n\n>Also, psql's \\d command\n>doesn't display the schema of a table in the form of a CREATE command\n>to recreate it, and I don't think it should. \n\nI agree. \\D is not to replace \\d or \\df etc.\n\n\n>Certainly you don't want\n>to condemn every app that wants to know \"what are the columns of this\n>table\"\n\nThis is where we need to decide what the dumper code is for. I don't know\nmuch about the other things you have mentioned, so perhaps you could\nexpand. But, in my original plan, this suggestion was intended for human\nreadable dumps from pg_dump and psql. It would be great if it could be made\nto work elsewhere.\n\n> to have to include a full SQL parser to make sense of the\n>answer. So I think some thought is needed to figure out what a\n>general-purpose representation would be like.\n\nAnd where it would go...\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 17 Jul 2000 17:06:45 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should PG backend know how to represent\n metadata?"
},
{
"msg_contents": "At 17:06 17/07/00 +1000, Philip Warner wrote:\n>\n>Suggestion:\n>\n>I would envisage the API consisting of a custom dump routine for each\n>object type. In the case of the table dumper API, it would return a table\n>definition with no indexes or constraints and a list of related entities\n>consisting of (object-type, object-oid) pairs suitable for passing back to\n>the dumper API. psql could display as little or as much as it desired,\n>pg_dump could ferret the extra items away for later use etc. For those\n>items that can not be separated out, then they obviously have to go into\n>the main definition.\n>\n\nJust took the dog for a walk, and had another thought. If we really want\nthis to have the maximum usability, then we should make it available from SQL.\n\nie.\n\n\tselect pg_dump('table', 'foo')\n\nwhere pg_dump returns (possibly) multiple rows, the first being the most\nbasic definition, and subsequent rows being additional items & their\nname/id, eg:\n\n'Create Table Foo(Bar int);\" NULL\n'index' 'foo_ix1'\n'constraint' 'foo_pk'\n\netc.\n\nI don't think we have functions that return multiple rows, and a 'select'\nwithout a 'from' is not strictly legal, but aside from that, an SQL-based\nsolution mught be a nice idea. Which brings me to my next idea:\n\n select defn from pg_dump where type='table and name = 'foo'\nor\n select related_items from pg_dump where type='table and name = 'foo'\n\nwhere pg_dump can be implemented via a rewrite rule....maybe.\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 17 Jul 2000 17:43:39 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should PG backend know how to represent\n metadata?"
},
{
"msg_contents": "On Mon, 17 Jul 2000, Philip Warner wrote:\n\n> I would have thought that the database itself should be the tool used to\n> display SQL, and if not the database, then one of the interface libraries. \n\nSQL is only one of the many formats that people might want meta data in.\npsql and pgaccess, for example, have somewhat different requirements.\n\nThe SQL standard defines a large set of information schema views which\nprovide the database metadata in a portable fashion, from there it should\nbe a relatively short distance to the format of your choice, and the\nmaintainance problem across releases is decreased.\n\nOf course without schema support these views would intolerably clutter the\nuser name space, but I could think of a couple of ways to work around that\nfor the moment.\n\nBtw., in your wheeling and dealing in pg_dump you might want to look at\nthe format_type function I added, which is a step in this direction.\n(examples in psql/describe.c)\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 17 Jul 2000 10:45:38 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Should PG backend know how to represent metadata? "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> I don't think we have functions that return multiple rows,\n\nWe do, although they're a bit unwieldy to use; might be better to avoid\nthat feature. I'd be inclined to avoid the issue, and just have the\nfunction return one result (which might contain newlines for readability\nof course).\n\n> and a 'select'\n> without a 'from' is not strictly legal,\n\nIt is in postgres, and this is hardly SQL-standard-based anyway...\n\n> Which brings me to my next idea:\n\n> select defn from pg_dump where type='table and name = 'foo'\n> or\n> select related_items from pg_dump where type='table and name = 'foo'\n\n> where pg_dump can be implemented via a rewrite rule....maybe.\n\nThe rewrite rule couldn't do any of the heavy lifting; it'd still end\nup calling a function. A view like pg_rules might not be a bad idea,\nbut you should plan on exposing the underlying function for\nflexibility.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jul 2000 10:46:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should PG backend know how to represent metadata? "
},
{
"msg_contents": "At 10:46 17/07/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> I don't think we have functions that return multiple rows,\n>\n>We do, although they're a bit unwieldy to use; might be better to avoid\n>that feature. I'd be inclined to avoid the issue, and just have the\n>function return one result (which might contain newlines for readability\n>of course).\n\nNot sure that this has the flexibility needed for tables; I'd like the\ncalling application to be able to get just the base table definition with\nno constraints, and also request the related items (constraints, indexes\netc). I also want to avoid the caller from having to parse the output in\nany way.\n\nPerhaps it is best left as an API-only feature, but now I have thought of\nan SQL interface, I do like the idea.\n\n\n>\n>> Which brings me to my next idea:\n>\n>> select defn from pg_dump where type='table and name = 'foo'\n>> or\n>> select related_items from pg_dump where type='table and name = 'foo'\n>\n>> where pg_dump can be implemented via a rewrite rule....maybe.\n>\n>The rewrite rule couldn't do any of the heavy lifting; it'd still end\n>up calling a function. A view like pg_rules might not be a bad idea,\n>but you should plan on exposing the underlying function for\n>flexibility.\n\nSounds fine. Only I'm not sure that a rule can do it - AFAICT I still need\nsome underlying table to select from when I use a rule...unless I can fake\na result set for a 'special' table?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 18 Jul 2000 02:30:38 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should PG backend know how to represent\n metadata?"
},
{
"msg_contents": "At 10:45 17/07/00 -0400, [email protected] wrote:\n>On Mon, 17 Jul 2000, Philip Warner wrote:\n>\n>> I would have thought that the database itself should be the tool used to\n>> display SQL, and if not the database, then one of the interface libraries. \n>\n>SQL is only one of the many formats that people might want meta data in.\n>psql and pgaccess, for example, have somewhat different requirements.\n\nI would have thought that pgaccess would still need to display table\ndefinitions in SQL, but I have not looked at it closely enough. At the\nlowest level I suspect pgaccess will always have to use direct access to\npg_* tables.\n\n\n>The SQL standard defines a large set of information schema views which\n>provide the database metadata in a portable fashion, from there it should\n>be a relatively short distance to the format of your choice, and the\n>maintainance problem across releases is decreased.\n\nThis sounds good; where are they defined in the spec?\n\n\n>Of course without schema support these views would intolerably clutter the\n>user name space, but I could think of a couple of ways to work around that\n>for the moment.\n\nPresumably they could be called pg_*...\n\n\n>Btw., in your wheeling and dealing in pg_dump you might want to look at\n>the format_type function I added, which is a step in this direction.\n>(examples in psql/describe.c)\n\nThis is the sort of thing I'd like to see, but on a more general level:\n\n format_object('table', <oid>)\n\nwould return the base definition of the table.\n\nBut I'd also like some kind of interface that allowed related items (&\ntheir type) to be returned, which is where I came from with a 'select'\nexpression returning multiple rows. The functional interface could be\nwritten as (ignoring names!):\n\n typedef {\n\tint\trelationship;\n\tchar*\tobjType;\n\tOid\toid\n } objRef;\n\n formatObject(const char* objType, Oid oid, \n\t\t\tchar* objDefn, int *defnLen, objRef *refs[], int *numRefs)\n\nwhere formatObject is passed a type and an Oid, and returns a definition\nand an array of references to other objects. Note that the fields of the\nobjRef structure match the input args of formatObjects.\n\nOne could also call formatObject with a null oid to get a list of objects\nof the given type, and call it with a null objTye and oid to get a list of\navailable types to dump...perhaps I am overloading the function just a\nlittle, but does this sound reasonable?\n\nIf desired, the 'relationship' field could be used to indicate te parent\ntable for an index, or the 'child' indexes for a table, but it might be\nbetter to have a separate list for parent (one only?) and children?\n\nAny suggestions would be appreciated....\n\n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 18 Jul 2000 02:47:45 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should PG backend know how to represent\n metadata?"
},
{
"msg_contents": "Philip Warner writes:\n\n> I would have thought that pgaccess would still need to display table\n> definitions in SQL, but I have not looked at it closely enough. At the\n> lowest level I suspect pgaccess will always have to use direct access to\n> pg_* tables.\n\nI thought it was your intention to get rid of this fact. We should\ncertainly be thinking in terms of all client applications.\n\n[Information Schema]\n> This sounds good; where are they defined in the spec?\n\nPart 2, chapter 20, if that helps you. It's not really possible to\nimplement all of these at this point because many are quite complex and\ndepend on outer joins and other fancy features, or contain\nmeta-information on features that don't exist yet. Actually, we probably\nneed the full-blown TOAST before some of these will fit at all.\n\n> Presumably they could be called pg_*...\n\nWe could name them pg_IS_* for the moment and add simplistic parser\nsupport for schemas that wiil pick up these tables if the\ninformation_schema is referenced.\n\n\n> This is the sort of thing I'd like to see, but on a more general level:\n> \n> format_object('table', <oid>)\n> \n> would return the base definition of the table.\n\nI'm not sure if we want to move the entire pg_dump functionality into the\nbackend. For example, if someone wants to move SQL dumps to a\nnot-quite-SQL or a much-more-SQL database and the format is slightly\nwrong, then there's no way to amend that short of patching the backend.\nThen we could as well have the backend returns pre-formatted output for\npsql.\n\nA human-oriented layer over the system catalogs (which are implementation\noriented) could go a long way toward maximum flexibility.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 18 Jul 2000 00:29:17 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should PG backend know how to represent metadata? "
},
{
"msg_contents": "Philip Warner wrote:\n\n> This is the sort of thing I'd like to see, but on a more general level:\n> \n> format_object('table', <oid>)\n> \n> would return the base definition of the table.\n\nformat_object(<oid>) should be sufficient.\n",
"msg_date": "Tue, 18 Jul 2000 09:09:44 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should PG backend know how to representmetadata?"
},
{
"msg_contents": "At 09:09 18/07/00 +1000, Chris Bitmead wrote:\n>Philip Warner wrote:\n>\n>> This is the sort of thing I'd like to see, but on a more general level:\n>> \n>> format_object('table', <oid>)\n>> \n>> would return the base definition of the table.\n>\n>format_object(<oid>) should be sufficient.\n>\n\nTechnically yes, but my belief is that identifying what the oid points to\nis actually a matter of searching everywhere, which is probably to be avoided.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 18 Jul 2000 11:14:49 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should PG backend know how to representmetadata?"
},
{
"msg_contents": "At 00:29 18/07/00 +0200, Peter Eisentraut wrote:\n>Philip Warner writes:\n>\n>> I would have thought that pgaccess would still need to display table\n>> definitions in SQL, but I have not looked at it closely enough. At the\n>> lowest level I suspect pgaccess will always have to use direct access to\n>> pg_* tables.\n>\n>I thought it was your intention to get rid of this fact. We should\n>certainly be thinking in terms of all client applications.\n\nI agree, but it seems we have a gain if we can get psql-compliant sql out\nof a single library. I'm quite open to making a more general\nimplementation, but I'd need to know what pgaccess needs over and above a\npsql-compliant SQL output.\n\nThe reason I think pgaccess will probably have to continue with internal\nknowledge is that it is a low level manager for the database; at the\nsimplest level, getting tables and their columns would be great, but it\nprobably also needs to know what the primary key is, and even understand\nconstraints (at least non-NULL ones). This is a very different problem, and\ndefinitely related to the SQL information schemas.\n\nPerhaps what I do here can be structured to be useful to whoever implements\ninformation schemas when they come along.\n\n\n>[Information Schema]\n>> This sounds good; where are they defined in the spec?\n>\n>Part 2, chapter 20, if that helps you. It's not really possible to\n>implement all of these at this point because many are quite complex and\n>depend on outer joins and other fancy features, or contain\n>meta-information on features that don't exist yet. Actually, we probably\n>need the full-blown TOAST before some of these will fit at all.\n\nI agree. At best we could implement things like COLUMNS, and even then the\nvarious 'schema' columns would be meaningless (until schemas come along).\n\n\n>\n>> This is the sort of thing I'd like to see, but on a more general level:\n>> \n>> format_object('table', <oid>)\n>> \n>> would return the base definition of the table.\n>\n>I'm not sure if we want to move the entire pg_dump functionality into the\n>backend. For example, if someone wants to move SQL dumps to a\n>not-quite-SQL or a much-more-SQL database and the format is slightly\n>wrong, then there's no way to amend that short of patching the backend.\n>Then we could as well have the backend returns pre-formatted output for\n>psql.\n>\n>A human-oriented layer over the system catalogs (which are implementation\n>oriented) could go a long way toward maximum flexibility.\n\nYou may be right, but being able to 'select' a table or field definition is\nvery appealing. Can it be made a little cleaner by being implemented as a\ndynamically linked function (as per user defined functions). That would\nseem to reduce the problem you have with releasing a new backend, at least.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 18 Jul 2000 11:36:05 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should PG backend know how to represent\n metadata?"
}
] |
[
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> [ pltcl's regress test is failing ]\n> Seems to suffer due to some bug. The functions use a feature\n> of the Tcl interpreter, who treates a backslash followed by a\n> newline as a whitespace that doesn't start a new command\n> (previous command is continued).\n\n> I did some other tests and ISTM that it is totally impossible\n> by now to insert data where backslash is followed by newline\n> at all. At least I wasn't able to quote it properly. Maybe\n> these are filtered already by psql?\n\nYes, it seems that psql's handling of backslashes has changed for the\nworse.\n\nIn current sources, I type:\n\nregression=# select 'abc \\\\\nregression'# def';\n ?column?\n-----------\n abc\ndef\n(1 row)\n\nRunning with -d2, the postmaster log shows:\n\nDEBUG: StartTransactionCommand\nDEBUG: query: select 'abc \ndef';\nDEBUG: ProcessQuery\nDEBUG: CommitTransactionCommand\n\npsql has eaten the backslashes, even though they are within quotes.\nThis is not cool. 6.5.* psql did not do that, and current sources\ndon't either *unless* the backslashes are at the very end of a line.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jul 2000 10:36:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pltcl regress test? "
},
{
"msg_contents": "Tom Lane wrote:\n> [email protected] (Jan Wieck) writes:\n> > [ pltcl's regress test is failing ]\n> > Seems to suffer due to some bug. The functions use a feature\n> > of the Tcl interpreter, who treates a backslash followed by a\n> > newline as a whitespace that doesn't start a new command\n> > (previous command is continued).\n>\n> > I did some other tests and ISTM that it is totally impossible\n> > by now to insert data where backslash is followed by newline\n> > at all. At least I wasn't able to quote it properly. Maybe\n> > these are filtered already by psql?\n>\n> Yes, it seems that psql's handling of backslashes has changed for the\n> worse.\n\n After Peter's fix to psql I updated the pltcl test expected\n result and removed an ordering problem from the test queries.\n Should work again now.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Tue, 18 Jul 2000 13:27:15 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: pltcl regress test?"
}
] |
[
{
"msg_contents": "I'm not sure whether my long post made it to the list, so I created\na tarball of the patch and the awk script:\n\nftp://ftp.lerctr.org/postgresql-patches/macupdate.tar.gz\n\nThis has the patch file for mac.c to update the manufacturer list with\nthe current list from IEEE and the awk script I ran against the IEEE's\noui.txt file. \n\nThis patch is relative to 7.0.0, but the file is the same in 7.0.2 \nthe MD5 checksums are identical). \n\nLet me know if I can do anything else to help integrate this.\n\nLarry Rosenman\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 17 Jul 2000 11:10:45 -0500 (CDT)",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Update: mac.c update, patch now on ftp"
},
{
"msg_contents": "I am getting connection refused. Can you just send over a context diff\nof the old and new files?\n\n> I'm not sure whether my long post made it to the list, so I created\n> a tarball of the patch and the awk script:\n> \n> ftp://ftp.lerctr.org/postgresql-patches/macupdate.tar.gz\n> \n> This has the patch file for mac.c to update the manufacturer list with\n> the current list from IEEE and the awk script I ran against the IEEE's\n> oui.txt file. \n> \n> This patch is relative to 7.0.0, but the file is the same in 7.0.2 \n> the MD5 checksums are identical). \n> \n> Let me know if I can do anything else to help integrate this.\n> \n> Larry Rosenman\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jul 2000 14:13:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update: mac.c update, patch now on ftp"
},
{
"msg_contents": "I sent a private copy to Bruce. \n\nMy Firewall doesn't like PASV FTP, for future reference. \n\nLarry Rosenman\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: Monday, July 17, 2000 1:13 PM\nTo: Larry Rosenman\nCc: [email protected]; [email protected]\nSubject: Re: [HACKERS] Update: mac.c update, patch now on ftp\n\n\nI am getting connection refused. Can you just send over a context diff\nof the old and new files?\n\n> I'm not sure whether my long post made it to the list, so I created\n> a tarball of the patch and the awk script:\n> \n> ftp://ftp.lerctr.org/postgresql-patches/macupdate.tar.gz\n> \n> This has the patch file for mac.c to update the manufacturer list with\n> the current list from IEEE and the awk script I ran against the IEEE's\n> oui.txt file. \n> \n> This patch is relative to 7.0.0, but the file is the same in 7.0.2 \n> the MD5 checksums are identical). \n> \n> Let me know if I can do anything else to help integrate this.\n> \n> Larry Rosenman\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jul 2000 13:37:14 -0500",
"msg_from": "\"Larry Rosenman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Update: mac.c update, patch now on ftp"
}
] |
[
{
"msg_contents": "Hi,\n\n seems to me that psql thinks to know a little too much about\n quoting. I'm not able to qoute a backslash at the end of a\n line:\n\n xxx=# select 'a\\\\b\\\\\n xxx'# c';\n ?column?\n ----------\n a\\b\n c\n (1 row)\n\n There is a newline following directly after b\\\\, so I\n expected a \"a\\b\\<NL>c\" response - what the above obviously\n isn't.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Mon, 17 Jul 2000 19:01:23 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "psql eating backslashes"
},
{
"msg_contents": "Jan Wieck writes:\n\n> seems to me that psql thinks to know a little too much about\n> quoting. I'm not able to qoute a backslash at the end of a\n> line:\n> \n> xxx=# select 'a\\\\b\\\\\n> xxx'# c';\n> ?column?\n> ----------\n> a\\b\n> c\n> (1 row)\n\nI committed a fix that should give you better results.\n\npeter=# select 'abc\\\\\npeter'# def';\n ?column?\n----------\n abc\\\ndef\n(1 row)\n\nBut what should\n\npeter=# select 'abc\\\npeter'# def';\n\ndo? This doesn't seem right:\n\n ?column?\n----------\n abc\ndef\n(1 row)\n\nShould the newline be stripped?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 17 Jul 2000 20:30:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql eating backslashes"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> But what should\n\n> peter=# select 'abc\\\n> peter'# def';\n\n> do? This doesn't seem right:\n\n> ?column?\n> ----------\n> abc\n> def\n> (1 row)\n\nLooks fine to me.\n\n> Should the newline be stripped?\n\nI would think not. That would mean that backslash-newline gives you\nsomething *other* than a literal newline, which is an inconsistency\nwe don't need since we don't treat newline as special.\n\nAlso, it would be changing the old (pre-7.0) behavior, which would\ndoubtless break someone's code somewhere. In the absence of a\ncompelling reason to change the behavior, I think we have to leave it\nalone.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jul 2000 17:42:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql eating backslashes "
}
] |
[
{
"msg_contents": "> > > > I'm not sure whether my long post made it to the list, so I created\n> > > > a tarball of the patch and the awk script:\n> > > >\n> > > > ftp://ftp.lerctr.org/postgresql-patches/macupdate.tar.gz\n> > > >\n> > > > This has the patch file for mac.c to update the manufacturer list with\n> > > > the current list from IEEE and the awk script I ran against the IEEE's\n> > > > oui.txt file.\n> > > >\n> > > > This patch is relative to 7.0.0, but the file is the same in 7.0.2\n> > > > the MD5 checksums are identical).\n> \n> In looking at this patch, it changes mac.c from:\n> \n> 8513 mac.c\n> 184323 mac.c.new\n> \n> \n> This seems like a major size increase. Is it worth it to get\n> card manufacturers?\n\nGiven that our list is old, and the new list is large, does anyone\nreally want this feature. It allows the lookup of an ethernet card\nmanufacturer based on the ethernet MAC address.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jul 2000 14:42:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Update: mac.c update, patch now on ftp"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> In looking at this patch, it changes mac.c from:\n>> \n>> 8513 mac.c\n>> 184323 mac.c.new\n\nYipes.\n\n>> This seems like a major size increase. Is it worth it to get\n>> card manufacturers?\n\n> Given that our list is old, and the new list is large, does anyone\n> really want this feature. It allows the lookup of an ethernet card\n> manufacturer based on the ethernet MAC address.\n\nSeems like hardwiring the lookup table in the server code is\nfundamentally wrongheaded anyway. It should have been designed as\n(wait for it...) a database table lookup.\n\nI'd suggest maybe this feature should be offered as a contrib item\ncontaining a database table dump and a simple lookup function\ncoded in SQL. Lots easier to maintain, and it doesn't bloat the\nbackend for people who don't need it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jul 2000 17:46:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update: mac.c update, patch now on ftp "
},
{
"msg_contents": "I just don't have the smarts to do the code for the backend. \n\n(strange thought:\n include my ouiparse.awk (changed to generate SQL inserts), and\n does macaddr support like xx:xx:xx% lookups? if so, give them\n the tool to make their own DB.). \n\n The code in mac.c does a LINEAR search, and the list is only going to\n grow. It's at 4110+ OUI's that are PUBLIC.). \n\n Larry\n> Bruce Momjian <[email protected]> writes:\n> >> In looking at this patch, it changes mac.c from:\n> >> \n> >> 8513 mac.c\n> >> 184323 mac.c.new\n> \n> Yipes.\n> \n> >> This seems like a major size increase. Is it worth it to get\n> >> card manufacturers?\n> \n> > Given that our list is old, and the new list is large, does anyone\n> > really want this feature. It allows the lookup of an ethernet card\n> > manufacturer based on the ethernet MAC address.\n> \n> Seems like hardwiring the lookup table in the server code is\n> fundamentally wrongheaded anyway. It should have been designed as\n> (wait for it...) a database table lookup.\n> \n> I'd suggest maybe this feature should be offered as a contrib item\n> containing a database table dump and a simple lookup function\n> coded in SQL. Lots easier to maintain, and it doesn't bloat the\n> backend for people who don't need it.\n> \n> \t\t\tregards, tom lane\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 17 Jul 2000 20:55:40 -0500 (CDT)",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update: mac.c update, patch now on ftp"
},
{
"msg_contents": "> The code in mac.c does a LINEAR search, and the list is only going to\n> grow. It's at 4110+ OUI's that are PUBLIC.).\n\nYuck. Let's put it into a (user loadable) table. And rather than having\nmacaddr_manuf(), we should just have a \"sametype()\" (or whatever) which\ncan compare an arbitrary mac address with another one. Then\n\n select brand from mactypes where '03:04:...' = macmask;\n\nor at least\n\n select brand from mactypes where sametype('03:04:...',macmask);\n\nwould get you the right thing.\n\nI can help with these internals, if that is the right way to head.\n\n - Thomas\n",
"msg_date": "Tue, 18 Jul 2000 02:54:51 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update: mac.c update, patch now on ftp"
},
{
"msg_contents": "I was thinking of making the following:\n\n1) macaddr_manuf function that returns the first 3 octets of a macaddr\n2) a macaddr_manuf TYPE that can be used for the table\n3) supply the ouiparse.awk to generate a set of INSERT statements\n to load a table\n4) allow the above table to be indexed. \n\nWhat does the group think of this?\n\nLarry Rosenman\n> > The code in mac.c does a LINEAR search, and the list is only going to\n> > grow. It's at 4110+ OUI's that are PUBLIC.).\n> \n> Yuck. Let's put it into a (user loadable) table. And rather than having\n> macaddr_manuf(), we should just have a \"sametype()\" (or whatever) which\n> can compare an arbitrary mac address with another one. Then\n> \n> select brand from mactypes where '03:04:...' = macmask;\n> \n> or at least\n> \n> select brand from mactypes where sametype('03:04:...',macmask);\n> \n> would get you the right thing.\n> \n> I can help with these internals, if that is the right way to head.\n> \n> - Thomas\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Tue, 18 Jul 2000 06:50:25 -0500 (CDT)",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update: mac.c update, patch now on ftp"
},
{
"msg_contents": "> I was thinking of making the following:\n> 1) macaddr_manuf function that returns the first 3 octets of a macaddr\n> 2) a macaddr_manuf TYPE that can be used for the table\n> 3) supply the ouiparse.awk to generate a set of INSERT statements\n> to load a table\n> 4) allow the above table to be indexed.\n\nWhy not use a comparison function which knows how to compare just the\nmanufacturer's id fields? Or define a function which masks a mac address\nto return the id fields and zeros everywhere else (so you can then use\nthe \"normal\" comparison functions)?\n\nDefining a new type to allow this masking and comparison seems more than\nnecessary (which, if this argument holds up, would make it undesirable).\nWe can still define functional indices so that a table lookup will be\nfast in any case.\n\n - Thomas\n",
"msg_date": "Thu, 20 Jul 2000 06:07:38 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update: mac.c update, patch now on ftp"
},
{
"msg_contents": "Ok, how would one set this up (I've NEVER programmed in PG's backend,\nso I'm a little skittish). \n\nThe changes I did for mac.c were obvious, so I'm afraid of breaking\nPG's world. \n\nAnything I can do?\n\nLarry\n> > I was thinking of making the following:\n> > 1) macaddr_manuf function that returns the first 3 octets of a macaddr\n> > 2) a macaddr_manuf TYPE that can be used for the table\n> > 3) supply the ouiparse.awk to generate a set of INSERT statements\n> > to load a table\n> > 4) allow the above table to be indexed.\n> \n> Why not use a comparison function which knows how to compare just the\n> manufacturer's id fields? Or define a function which masks a mac address\n> to return the id fields and zeros everywhere else (so you can then use\n> the \"normal\" comparison functions)?\n> \n> Defining a new type to allow this masking and comparison seems more than\n> necessary (which, if this argument holds up, would make it undesirable).\n> We can still define functional indices so that a table lookup will be\n> fast in any case.\n> \n> - Thomas\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Thu, 20 Jul 2000 01:33:33 -0500 (CDT)",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update: mac.c update, patch now on ftp"
}
] |
[
{
"msg_contents": "\nI've tried doing as much reading on the mailing lists as I can on this.\n\nThe problem lies in that I'm trying to use PostgreSQL to store HTML \ndocuments which are larger than the default tuple size of 8120 bytes. I \noriginally thought that the data type TEXT wasn't limited in size. Also, \nthrough some of the postings, I came to the understanding there was a \nlimit, but that it was supposed to be removed.\n\nI've gone through the source some and found where MaxTupleSize is defined, \nbut I'm not sure what would be acceptable values to set it to.\n\nCan anyone offer some help?\n\nThanks in advance,\nThomas\n-\n- Thomas Swan\n- Graduate Student - Computer Science\n- The University of Mississippi\n-\n- \"People can be categorized into two fundamental\n- groups, those that divide people into two groups\n- and those that don't.\"\n\n",
"msg_date": "Mon, 17 Jul 2000 13:58:08 -0500",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": true,
"msg_subject": "TUPLE SIZE HELP"
},
{
"msg_contents": ">I've gone through the source some and found where MaxTupleSize is defined, \n>but I'm not sure what would be acceptable values to set it to.\n\nThough 8k is the default tuple size, you can change that number to be\nup to 32k AT COMPILE TIME. You can't, however, go above that size. \nThe code base doesn't support a larger size. At this time, you can\neither use Large Objects or wait a bit for the TOAST project to be\nfinish (is there a max for that? 2G?)\n\n- Brandon\nsixdegrees.com\nw: 212.375.2688\nc: 917.734.1981\n\n\n",
"msg_date": "Mon, 17 Jul 2000 16:06:43 -0400",
"msg_from": "\"B. Palmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TUPLE SIZE HELP"
},
{
"msg_contents": ">Though 8k is the default tuple size, you can change that number to be\n>up to 32k AT COMPILE TIME. You can't, however, go above that size.\n>The code base doesn't support a larger size. At this time, you can\n>either use Large Objects or wait a bit for the TOAST project to be\n>finish (is there a max for that? 2G?)\n\nHow can I do this? Is there a -DEFINE or something?\n\nExcuse my ignorance, there are things I SHOULD know...\n\nThomas\n\n\n-\n- Thomas Swan\n- Graduate Student - Computer Science\n- The University of Mississippi\n-\n- \"People can be categorized into two fundamental\n- groups, those that divide people into two groups\n- and those that don't.\"\n\nThough 8k is the default tuple size, you\ncan change that number to be\nup to 32k AT COMPILE TIME. You can't, however, go above\nthat size. \nThe code base doesn't support a larger size. At this time, \nyou can\neither use Large Objects or wait a bit for the TOAST project to be\nfinish (is there a max for that? 2G?)\nHow can I do this? Is there a -DEFINE or something?\n\nExcuse my ignorance, there are things I SHOULD know...\n\nThomas\n\n\n\n- \n- Thomas Swan\n \n- Graduate Student - Computer Science\n- The University of Mississippi\n- \n- \"People can be categorized into two fundamental \n- groups, those that divide people into two groups \n- and those that don't.\"",
"msg_date": "Mon, 17 Jul 2000 15:55:43 -0500",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TUPLE SIZE HELP"
},
{
"msg_contents": "Thomas Swan <[email protected]> writes:\n> I've gone through the source some and found where MaxTupleSize is defined, \n> but I'm not sure what would be acceptable values to set it to.\n\nThat would be the wrong thing to change in any case. BLCKSZ in\ninclude/config.h is the value you can twiddle.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jul 2000 17:35:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TUPLE SIZE HELP "
},
{
"msg_contents": "\n>That would be the wrong thing to change in any case. BLCKSZ in\n>include/config.h is the value you can twiddle.\n\nAre there any plans to remove this limitation from the TEXT datatype? If \nso, do you know what version we might see this in?\n\n\n-\n- Thomas Swan\n- Graduate Student - Computer Science\n- The University of Mississippi\n-\n- \"People can be categorized into two fundamental\n- groups, those that divide people into two groups\n- and those that don't.\"\n\n",
"msg_date": "Mon, 17 Jul 2000 16:56:28 -0500",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TUPLE SIZE HELP "
},
{
"msg_contents": "Thomas Swan wrote:\n>\n> >That would be the wrong thing to change in any case. BLCKSZ in\n> >include/config.h is the value you can twiddle.\n>\n> Are there any plans to remove this limitation from the TEXT datatype? If\n> so, do you know what version we might see this in?\n\n It is already in the current development tree and will be\n shipped with the next release, scheduled for end of the year.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Tue, 18 Jul 2000 11:10:28 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: TUPLE SIZE HELP"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> On Tue, 18 Jul 2000, Jan Wieck wrote:\n> \n> > Thomas Swan wrote:\n> > >\n> > > >That would be the wrong thing to change in any case. BLCKSZ in\n> > > >include/config.h is the value you can twiddle.\n> > >\n> > > Are there any plans to remove this limitation from the TEXT datatype? If\n> > > so, do you know what version we might see this in?\n> >\n> > It is already in the current development tree and will be\n> > shipped with the next release, scheduled for end of the year.\n> \n> Just a quick note ... with v7.x, you *can* use the lztext type to get by\n> the limitation ... its a temporary fix, but I believe that Vince is using\n> it for the web site for similar reasons to the original poster ...\n\nIIRC lztext does only compression and not splitting, so it just gets you\nso far ...\n\n-------\nHannu\n",
"msg_date": "Wed, 19 Jul 2000 02:36:47 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TUPLE SIZE HELP"
},
{
"msg_contents": "On Tue, 18 Jul 2000, Jan Wieck wrote:\n\n> Thomas Swan wrote:\n> >\n> > >That would be the wrong thing to change in any case. BLCKSZ in\n> > >include/config.h is the value you can twiddle.\n> >\n> > Are there any plans to remove this limitation from the TEXT datatype? If\n> > so, do you know what version we might see this in?\n> \n> It is already in the current development tree and will be\n> shipped with the next release, scheduled for end of the year.\n\nJust a quick note ... with v7.x, you *can* use the lztext type to get by\nthe limitation ... its a temporary fix, but I believe that Vince is using\nit for the web site for similar reasons to the original poster ...\n\n\n",
"msg_date": "Tue, 18 Jul 2000 20:50:19 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TUPLE SIZE HELP"
},
{
"msg_contents": "On Tue, 18 Jul 2000, The Hermit Hacker wrote:\n\n> On Tue, 18 Jul 2000, Jan Wieck wrote:\n> \n> > Thomas Swan wrote:\n> > >\n> > > >That would be the wrong thing to change in any case. BLCKSZ in\n> > > >include/config.h is the value you can twiddle.\n> > >\n> > > Are there any plans to remove this limitation from the TEXT datatype? If\n> > > so, do you know what version we might see this in?\n> > \n> > It is already in the current development tree and will be\n> > shipped with the next release, scheduled for end of the year.\n> \n> Just a quick note ... with v7.x, you *can* use the lztext type to get by\n> the limitation ... its a temporary fix, but I believe that Vince is using\n> it for the web site for similar reasons to the original poster ...\n\nYou are correct. The guide now fits into one tuple.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 18 Jul 2000 20:00:37 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TUPLE SIZE HELP"
},
{
"msg_contents": "On Wed, 19 Jul 2000, Hannu Krosing wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > On Tue, 18 Jul 2000, Jan Wieck wrote:\n> > \n> > > Thomas Swan wrote:\n> > > >\n> > > > >That would be the wrong thing to change in any case. BLCKSZ in\n> > > > >include/config.h is the value you can twiddle.\n> > > >\n> > > > Are there any plans to remove this limitation from the TEXT datatype? If\n> > > > so, do you know what version we might see this in?\n> > >\n> > > It is already in the current development tree and will be\n> > > shipped with the next release, scheduled for end of the year.\n> > \n> > Just a quick note ... with v7.x, you *can* use the lztext type to get by\n> > the limitation ... its a temporary fix, but I believe that Vince is using\n> > it for the web site for similar reasons to the original poster ...\n> \n> IIRC lztext does only compression and not splitting, so it just gets you\n> so far ...\n\nCorrect ...\n\n\n",
"msg_date": "Tue, 18 Jul 2000 21:41:43 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TUPLE SIZE HELP"
}
] |
[
{
"msg_contents": "If I understand the fundamental design of TOAST correctly, it's not\nallowed to have multiple heap tuples containing pointers to the same\nmoved-off TOAST item. For example, if one tuple has such a pointer,\nand we copy it with INSERT ... SELECT, then the new tuple has to be\nconstructed with its own copy of the moved-off item. Without this\nyou'd need reference counts and so forth for moved-off values.\n\nIt looks like you have logic for all this in tuptoaster.c, but\nI see a flaw: the code doesn't look inside array fields to see if\nany of the array elements are pre-toasted values. There could be\na moved-off-item pointer inside an array, copied from some other\nplace.\n\nNote the fact that arrays aren't yet considered toastable is\nno defense. An array of a toastable data type is sufficient\nto create the risk.\n\nWhat do you want to do about this? We could have heap_tuple_toast_attrs\nscan through all the elements of arrays of toastable types, but that\nstrikes me as slow. I'm thinking the best approach is for the array\nconstruction routines to refuse to insert toasted values into array\nobjects in the first place --- instead, expand them before insertion.\nThen the whole array could be treated as a toastable object, but there\nare no references inside the array to worry about.\n\nIf we do that, should compressed-in-place array items be expanded back\nto full size before insertion in the array? If we don't, we'd likely\nend up trying to compress already-compressed data, which is a waste of\neffort ... but OTOH it seems a shame to force the data back to full\nsize unnecessarily. Either way would work, I'm just not sure which\nis likely to be more efficient.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jul 2000 19:14:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "TOAST vs arrays"
},
{
"msg_contents": "Tom Lane wrote:\n> If I understand the fundamental design of TOAST correctly, it's not\n> allowed to have multiple heap tuples containing pointers to the same\n> moved-off TOAST item. For example, if one tuple has such a pointer,\n> and we copy it with INSERT ... SELECT, then the new tuple has to be\n> constructed with its own copy of the moved-off item. Without this\n> you'd need reference counts and so forth for moved-off values.\n>\n> It looks like you have logic for all this in tuptoaster.c, but\n> I see a flaw: the code doesn't look inside array fields to see if\n> any of the array elements are pre-toasted values. There could be\n> a moved-off-item pointer inside an array, copied from some other\n> place.\n>\n> Note the fact that arrays aren't yet considered toastable is\n> no defense. An array of a toastable data type is sufficient\n> to create the risk.\n\n Yepp\n\n> What do you want to do about this? We could have heap_tuple_toast_attrs\n> scan through all the elements of arrays of toastable types, but that\n> strikes me as slow. I'm thinking the best approach is for the array\n> construction routines to refuse to insert toasted values into array\n> objects in the first place --- instead, expand them before insertion.\n> Then the whole array could be treated as a toastable object, but there\n> are no references inside the array to worry about.\n\n I think the array construction routines is the right place to\n expand them.\n\n> If we do that, should compressed-in-place array items be expanded back\n> to full size before insertion in the array? If we don't, we'd likely\n> end up trying to compress already-compressed data, which is a waste of\n> effort ... but OTOH it seems a shame to force the data back to full\n> size unnecessarily. Either way would work, I'm just not sure which\n> is likely to be more efficient.\n\n I think it's not too bad to expand them and then let the\n toaster (eventually) compress the entire array again. Larger\n data usually yields better compression results. Given the\n actual speed of our compression code, I don't expect a\n performance penalty from it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Tue, 18 Jul 2000 12:58:32 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: TOAST vs arrays"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n>> What do you want to do about this? We could have heap_tuple_toast_attrs\n>> scan through all the elements of arrays of toastable types, but that\n>> strikes me as slow. I'm thinking the best approach is for the array\n>> construction routines to refuse to insert toasted values into array\n>> objects in the first place --- instead, expand them before insertion.\n>> Then the whole array could be treated as a toastable object, but there\n>> are no references inside the array to worry about.\n\n> I think the array construction routines is the right place to\n> expand them.\n\nSounds like a plan.\n\nJust in case anyone wants to object: I'm planning to rip out all of\nthe \"large object array\" and \"chunked array\" support that's in there\nnow. AFAICS it does nothing that won't be done as well or better by\ntoasted arrays, and it probably doesn't work anyway (seeing that much\nof it has been ifdef'd out for a long time).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jul 2000 10:43:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TOAST vs arrays "
}
] |
[
{
"msg_contents": "First:\n\n1. We're compiling on AIX 4.3.3, 64-bit, using the native compiler and gmake.\n2. I'm not an AIX compiler guru :-)\n\nHistory:\n\n1. Built postgres but failed on libpgtcl, error was a number of unrecognized symbols. This happened when we linked from libpgtcl.a to libpgtcl.so. See attachment for details.\n\n2. I hacked the src/interfaces/libpgtcl/Makefile to add \"LDFLAGS+= -ltcl8.0\", and added the directory in --with-libraries directive in configure.\n\n3. Run gmake again and it compiled, but failed with the exact same error on pltcl.\n\n4. Made the same hack to the pltcl Makefile and compiled and it works.\n\nI can run pgtclsh just fine. However, our product has embedded tcl and a number of extensions (incrtcl, tcl++, tix, tclx, and more). I'm building off of the libtcl8.0.a build for our product.\n\nSo when I try \"load libpgtcl.so\", I get the following error:\n\n\nError: couldn't load file \"./libpgtcl.so\": 0509-130 Symbol resolution faile\nd for ./libpgtcl.so because:\n 0509-136 Symbol __start (number 0) is not exported from\n dependent module /home/postgres/postgresql/bin/postgres.\n 0509-192 Examine .loader section symbols with the\n 'dump -Tv' command. \n\n\nI did the dump command on the postgres executable and found:\n\n[177] 0x20022eb0 .data ENTpt DS SECdef [noIMid] __start\n\n\nI'm sure there are more questions coming, but this is at least a start. The bottom line is that I need to be able to use libpgtcl with our tclshell. Also note that we setup a number of environment variables, pointing our tclshell to various directories with our libraries in them.\n\nThanks for the help in advance!\n\nTim\nResending...I didn't see my post come back to me on the list...\n\n----- Original Message ----- \nFrom: Tim Dunnington \nTo: [email protected] \nSent: Friday, July 07, 2000 6:35 PM\nSubject: Help compiling postgres libpgtcl.so on AIX\n\n\nWe're on AIX 4.3.3 64-bit. I've compiled the entire postgres distro, except for tcl, which is very important to us. The problem seems to be in linking the libpgtcl.so...it compiles to a static library just fine.\n\nFYI, Tcl is embedded in our product and hence we have the Tcl sources in places not expected. 
I've added as many library and include directories as I could think of, as you'll see in the ld dump below, but they obviously don't help.\n\nI got the warning messages about the duplicate symbols during the compile at other stages, but only libpgtcl errors.\n\nPerhaps even if you could tell me where these symbols are defined (what library or object), so that I can hack the linker and make it work...\n\nThanks for your help in advance!\n\nTim\n\n\ngmake[2]: Entering directory `/home/postgres/postgresql-7.0.2/src/interfaces/lib\npgtcl'\n../../backend/port/aix/mkldexport.sh libpgtcl.a /home/postgres/postgresql/lib >\nlibpgtcl.exp\nld -H512 -bM:SRE -bI:../../backend/postgres.imp -bE:libpgtcl.exp -o libpgtcl.so\nlibpgtcl.a -L/work/cloverrel/cloverleaf-external/Tcl/generic -L/work/cloverrel/c\nloverleaf-external/Tcl/unix -L/work/cloverrel/cloverleaf-external/Tk/unix -lPW -\nlcrypt -lld -lnsl -ldl -lm -lcurses -L../../interfaces/libpq -lpq -lcrypt -lc\nld: 0711-224 WARNING: Duplicate symbol: ._ptrgl\nld: 0711-224 WARNING: Duplicate symbol: .PQclear\nld: 0711-224 WARNING: Duplicate symbol: .PQgetlength\nld: 0711-224 WARNING: Duplicate symbol: .PQgetvalue\nld: 0711-224 WARNING: Duplicate symbol: .PQfsize\nld: 0711-224 WARNING: Duplicate symbol: .PQftype\nld: 0711-224 WARNING: Duplicate symbol: .PQfnumber\nld: 0711-224 WARNING: Duplicate symbol: .PQfname\nld: 0711-224 WARNING: Duplicate symbol: .PQnfields\nld: 0711-224 WARNING: Duplicate symbol: .PQntuples\nld: 0711-224 WARNING: Duplicate symbol: .PQfn\nld: 0711-224 WARNING: Duplicate symbol: .PQexec\nld: 0711-224 WARNING: Duplicate symbol: .sprintf\nld: 0711-224 WARNING: Duplicate symbol: sprintf\nld: 0711-224 WARNING: Duplicate symbol: ._moveeq\nld: 0711-224 WARNING: Duplicate symbol: .bcopy\nld: 0711-224 WARNING: Duplicate symbol: .ovbcopy\nld: 0711-224 WARNING: Duplicate symbol: .memcpy\nld: 0711-224 WARNING: Duplicate symbol: .memmove\nld: 0711-224 WARNING: Duplicate symbol: .strlen\nld: 0711-224 WARNING: Duplicate symbol: strlen\nld: 0711-224 WARNING: Duplicate symbol: .vsnprintf\nld: 0711-224 WARNING: Duplicate symbol: vsnprintf\nld: 0711-224 WARNING: Duplicate symbol: .realloc\nld: 0711-224 WARNING: Duplicate symbol: realloc\nld: 0711-224 WARNING: Duplicate symbol: .free\nld: 0711-224 WARNING: Duplicate symbol: free\nld: 0711-224 WARNING: Duplicate symbol: .malloc\nld: 0711-224 WARNING: Duplicate symbol: malloc\nld: 0711-224 WARNING: Duplicate symbol: .bzero\nld: 0711-224 WARNING: Duplicate symbol: bzero\nld: 0711-224 WARNING: Duplicate symbol: .select\nld: 0711-224 WARNING: Duplicate symbol: select\nld: 0711-224 WARNING: Duplicate symbol: .strerror\nld: 0711-224 WARNING: Duplicate symbol: strerror\nld: 0711-224 WARNING: Duplicate symbol: .pqsignal\nld: 0711-224 WARNING: Duplicate symbol: .sigemptyset\nld: 0711-224 WARNING: Duplicate symbol: sigemptyset\nld: 0711-224 WARNING: Duplicate symbol: .sigaction\nld: 0711-224 WARNING: Duplicate symbol: sigaction\nld: 0711-224 WARNING: Duplicate symbol: .send\nld: 0711-224 WARNING: Duplicate symbol: send\nld: 0711-224 WARNING: Duplicate symbol: .fflush\nld: 0711-224 WARNING: Duplicate symbol: fflush\nld: 0711-224 WARNING: Duplicate symbol: .recv\nld: 0711-224 WARNING: Duplicate symbol: recv\nld: 0711-224 WARNING: Duplicate symbol: .close\nld: 0711-224 WARNING: Duplicate symbol: close\nld: 0711-224 WARNING: Duplicate symbol: .fprintf\nld: 0711-224 WARNING: Duplicate symbol: fprintf\nld: 0711-224 WARNING: Duplicate symbol: .strncpy\nld: 0711-224 WARNING: Duplicate symbol: .DLNewElem\nld: 0711-224 
WARNING: Duplicate symbol: .DLMoveToFront\nld: 0711-224 WARNING: Duplicate symbol: .DLRemHead\nld: 0711-224 WARNING: Duplicate symbol: .DLAddTail\nld: 0711-224 WARNING: Duplicate symbol: .DLAddHead\nld: 0711-224 WARNING: Duplicate symbol: .DLRemove\nld: 0711-224 WARNING: Duplicate symbol: .DLGetSucc\nld: 0711-224 WARNING: Duplicate symbol: .DLGetPred\nld: 0711-224 WARNING: Duplicate symbol: .DLRemTail\nld: 0711-224 WARNING: Duplicate symbol: .DLGetTail\nld: 0711-224 WARNING: Duplicate symbol: .DLGetHead\nld: 0711-224 WARNING: Duplicate symbol: .DLFreeElem\nld: 0711-224 WARNING: Duplicate symbol: .DLFreeList\nld: 0711-224 WARNING: Duplicate symbol: .DLNewList\nld: 0711-224 WARNING: Duplicate symbol: .memset\nld: 0711-224 WARNING: Duplicate symbol: memset\nld: 0711-224 WARNING: Duplicate symbol: .strcpy\nld: 0711-224 WARNING: Duplicate symbol: .strncmp\nld: 0711-224 WARNING: Duplicate symbol: strncmp\nld: 0711-224 WARNING: Duplicate symbol: .strtoul\nld: 0711-224 WARNING: Duplicate symbol: strtoul\nld: 0711-224 WARNING: Duplicate symbol: .strspn\nld: 0711-224 WARNING: Duplicate symbol: strspn\nld: 0711-224 WARNING: Duplicate symbol: .strdup\nld: 0711-224 WARNING: Duplicate symbol: strdup\nld: 0711-224 WARNING: Duplicate symbol: .isascii\nld: 0711-224 WARNING: Duplicate symbol: isascii\nld: 0711-224 WARNING: Duplicate symbol: .isupper\nld: 0711-224 WARNING: Duplicate symbol: isupper\nld: 0711-224 WARNING: Duplicate symbol: .tolower\nld: 0711-224 WARNING: Duplicate symbol: tolower\nld: 0711-224 WARNING: Duplicate symbol: .strcmp\nld: 0711-224 WARNING: Duplicate symbol: .PQuntrace\nld: 0711-224 WARNING: Duplicate symbol: .PQtrace\nld: 0711-224 WARNING: Duplicate symbol: .setsockopt\nld: 0711-224 WARNING: Duplicate symbol: setsockopt\nld: 0711-224 WARNING: Duplicate symbol: .fcntl\nld: 0711-224 WARNING: Duplicate symbol: fcntl\nld: 0711-224 WARNING: Duplicate symbol: .strchr\nld: 0711-224 WARNING: Duplicate symbol: strchr\nld: 0711-224 WARNING: Duplicate symbol: .strrchr\nld: 0711-224 WARNING: Duplicate symbol: strrchr\nld: 0711-224 WARNING: Duplicate symbol: .getenv\nld: 0711-224 WARNING: Duplicate symbol: getenv\nld: 0711-224 WARNING: Duplicate symbol: .isspace\nld: 0711-224 WARNING: Duplicate symbol: isspace\nld: 0711-224 WARNING: Duplicate symbol: .crypt\nld: 0711-224 WARNING: Duplicate symbol: crypt\nld: 0711-224 WARNING: Duplicate symbol: .geteuid\nld: 0711-224 WARNING: Duplicate symbol: geteuid\nld: 0711-224 WARNING: Duplicate symbol: .getpwuid\nld: 0711-224 WARNING: Duplicate symbol: getpwuid\nld: 0711-224 WARNING: Duplicate symbol: .strcasecmp\nld: 0711-224 WARNING: Duplicate symbol: strcasecmp\nld: 0711-224 WARNING: Duplicate symbol: .inet_aton\nld: 0711-224 WARNING: Duplicate symbol: inet_aton\nld: 0711-224 WARNING: Duplicate symbol: .atoi\nld: 0711-224 WARNING: Duplicate symbol: atoi\nld: 0711-224 WARNING: Duplicate symbol: .socket\nld: 0711-224 WARNING: Duplicate symbol: socket\nld: 0711-224 WARNING: Duplicate symbol: .connect\nld: 0711-224 WARNING: Duplicate symbol: connect\nld: 0711-224 WARNING: Duplicate symbol: .strcat\nld: 0711-224 WARNING: Duplicate symbol: .ngetsockname\nld: 0711-224 WARNING: Duplicate symbol: ngetsockname\nld: 0711-224 WARNING: Duplicate symbol: .lo_export\nld: 0711-224 WARNING: Duplicate symbol: .lo_import\nld: 0711-224 WARNING: Duplicate symbol: .lo_unlink\nld: 0711-224 WARNING: Duplicate symbol: .lo_tell\nld: 0711-224 WARNING: Duplicate symbol: .lo_creat\nld: 0711-224 WARNING: Duplicate symbol: .lo_lseek\nld: 0711-224 WARNING: Duplicate symbol: 
.lo_write\nld: 0711-224 WARNING: Duplicate symbol: .lo_read\nld: 0711-224 WARNING: Duplicate symbol: .lo_close\nld: 0711-224 WARNING: Duplicate symbol: .lo_open\nld: 0711-224 WARNING: Duplicate symbol: .open\nld: 0711-224 WARNING: Duplicate symbol: open\nld: 0711-224 WARNING: Duplicate symbol: .write\nld: 0711-224 WARNING: Duplicate symbol: write\nld: 0711-224 WARNING: Duplicate symbol: .read\nld: 0711-224 WARNING: Duplicate symbol: read\nld: 0711-224 WARNING: Duplicate symbol: .strtok\nld: 0711-224 WARNING: Duplicate symbol: strtok\nld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information.\nld: 0711-317 ERROR: Undefined symbol: .Tcl_AppendResult\nld: 0711-317 ERROR: Undefined symbol: .Tcl_Preserve\nld: 0711-317 ERROR: Undefined symbol: .Tcl_GlobalEval\nld: 0711-317 ERROR: Undefined symbol: .Tcl_AddErrorInfo\nld: 0711-317 ERROR: Undefined symbol: .Tcl_BackgroundError\nld: 0711-317 ERROR: Undefined symbol: .Tcl_Release\nld: 0711-317 ERROR: Undefined symbol: .Tcl_GetChannel\nld: 0711-317 ERROR: Undefined symbol: .Tcl_GetChannelType\nld: 0711-317 ERROR: Undefined symbol: .Tcl_SetResult\nld: 0711-317 ERROR: Undefined symbol: .Tcl_GetInt\nld: 0711-317 ERROR: Undefined symbol: .Tcl_GetChannelInstanceData\nld: 0711-317 ERROR: Undefined symbol: .Tcl_QueueEvent\nld: 0711-317 ERROR: Undefined symbol: .Tcl_DeleteFileHandler\nld: 0711-317 ERROR: Undefined symbol: .Tcl_DeleteEvents\nld: 0711-317 ERROR: Undefined symbol: .Tcl_CreateFileHandler\nld: 0711-317 ERROR: Undefined symbol: .Tcl_GetChannelName\nld: 0711-317 ERROR: Undefined symbol: .Tcl_ResetResult\nld: 0711-317 ERROR: Undefined symbol: .Tcl_FirstHashEntry\nld: 0711-317 ERROR: Undefined symbol: .Tcl_NextHashEntry\nld: 0711-317 ERROR: Undefined symbol: .Tcl_DeleteHashTable\nld: 0711-317 ERROR: Undefined symbol: .Tcl_DontCallWhenDeleted\nld: 0711-317 ERROR: Undefined symbol: .Tcl_EventuallyFree\nld: 0711-317 ERROR: Undefined symbol: .Tcl_CreateChannel\nld: 0711-317 ERROR: Undefined symbol: .Tcl_SetChannelOption\nld: 0711-317 ERROR: Undefined symbol: .Tcl_RegisterChannel\nld: 0711-317 ERROR: Undefined symbol: .Tcl_InitHashTable\nld: 0711-317 ERROR: Undefined symbol: .Tcl_CallWhenDeleted\nld: 0711-317 ERROR: Undefined symbol: .Tcl_DeleteHashEntry\nld: 0711-317 ERROR: Undefined symbol: .Tcl_SetVar\nld: 0711-317 ERROR: Undefined symbol: .Tcl_SetVar2\nld: 0711-317 ERROR: Undefined symbol: .Tcl_AppendElement\nld: 0711-317 ERROR: Undefined symbol: .Tcl_DStringInit\nld: 0711-317 ERROR: Undefined symbol: .Tcl_DStringAppendElement\nld: 0711-317 ERROR: Undefined symbol: .Tcl_DStringFree\nld: 0711-317 ERROR: Undefined symbol: .Tcl_Eval\nld: 0711-317 ERROR: Undefined symbol: .Tcl_UnsetVar\nld: 0711-317 ERROR: Undefined symbol: .Tcl_UnregisterChannel\nld: 0711-317 ERROR: Undefined symbol: .Tcl_DStringStartSublist\nld: 0711-317 ERROR: Undefined symbol: .Tcl_DStringEndSublist\nld: 0711-317 ERROR: Undefined symbol: .Tcl_DStringResult\nld: 0711-317 ERROR: Undefined symbol: .Tcl_CreateCommand\nld: 0711-317 ERROR: Undefined symbol: .Tcl_PkgProvide\ngmake[2]: *** [libpgtcl.so] Error 8\ngmake[2]: Leaving directory `/home/postgres/postgresql-7.0.2/src/interfaces/libp\ngtcl'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/postgres/postgresql-7.0.2/src/interfaces'\ngmake: *** [all] Error 2",
"msg_date": "Mon, 17 Jul 2000 19:20:47 -0400",
"msg_from": "\"Tim Dunnington\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "libpgtcl on aix"
}
] |
[
{
"msg_contents": "For the last week or so I've been getting warnings like this:\n\ngcc -c -I../../../../src/include -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -g -O1 -MMD float.c -o float.o\nIn file included from float.c:58:\n/usr/include/values.h:27: warning: `MAXINT' redefined\n/usr/local/lib/gcc-lib/hppa2.0-hp-hpux10.20/2.95.2/include/sys/param.h:46: warning: this is the location of the previous definition\n\nin half a dozen backend files. On investigation I find that the cause\nis this recent change:\n \n-#ifdef HAVE_LIMITS_H\n #include <limits.h>\n-#ifndef MAXINT\n-#define MAXINT INT_MAX\n-#endif\n-#else\n #ifdef HAVE_VALUES_H\n #include <values.h>\n-#endif\n #endif\n\nspecifically the fact that the code now tries to include *both*\n<limits.h> and <values.h> rather than only one. Well, I'm here\nto tell you that the two headers are not entirely compatible,\nat least not on this platform (HPUX 10.20, obviously).\n\nChecking the CVS logs, I see that 7.0 is our first release that tries\nto include <values.h> at all, so we have little track experience with\nthat header and none with its possible conflicts with the ANSI-standard\nheaders. The submitter of the patch that added it did not recommend\nincluding it unconditionally, but only if <limits.h> is not available.\nLooks like he knew what he was doing.\n\nDoes anyone object if I revert this code to the way it was?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jul 2000 00:21:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Warnings triggered by recent includefile cleanups"
},
{
"msg_contents": "Tom Lane writes:\n\n> specifically the fact that the code now tries to include *both*\n> <limits.h> and <values.h> rather than only one. Well, I'm here\n> to tell you that the two headers are not entirely compatible,\n> at least not on this platform (HPUX 10.20, obviously).\n> \n> Checking the CVS logs, I see that 7.0 is our first release that tries\n> to include <values.h> at all, so we have little track experience with\n> that header and none with its possible conflicts with the ANSI-standard\n> headers. The submitter of the patch that added it did not recommend\n> including it unconditionally, but only if <limits.h> is not available.\n> Looks like he knew what he was doing.\n> \n> Does anyone object if I revert this code to the way it was?\n\nConsidering that evidence shows that limits.h must have been available on\nall platforms at least since 6.5, in fact at least as long as the current\nregex engine has existed, values.h could not possibly have been included\nanywhere ever, so it's probably better to just remove it.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 18 Jul 2000 20:28:04 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Warnings triggered by recent includefile cleanups"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Does anyone object if I revert this code to the way it was?\n\n> Considering that evidence shows that limits.h must have been available on\n> all platforms at least since 6.5, in fact at least as long as the current\n> regex engine has existed, values.h could not possibly have been included\n> anywhere ever, so it's probably better to just remove it.\n\nHmm, it does look like regex has included <limits.h> unconditionally\nsince day 1, doesn't it? That sure suggests that this patch:\n\n@@ -7,7 +7,7 @@\n *\n *\n * IDENTIFICATION\n- * $Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt/float.c,v 1.47 1999/07/17 20:17:55 momjian Exp $\n+ * $Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt/float.c,v 1.48 1999/09/21 20:58:25 momjian Exp $\n *\n *-------------------------------------------------------------------------\n */\n@@ -55,6 +55,13 @@\n #include \"postgres.h\"\n #ifdef HAVE_LIMITS_H\n #include <limits.h>\n+#ifndef MAXINT\n+#define MAXINT INT_MAX\n+#endif\n+#else\n+#ifdef HAVE_VALUES_H\n+#include <values.h>\n+#endif\n #endif\n #include \"fmgr.h\"\n #include \"utils/builtins.h\"\n\nwas dead code when it was installed. The CVS log says\n\tvalues.h patch from Alex Howansky\nbut I can't find anything from him in the mailing list archives.\n(We seem to have lost the pgsql-patches archives, however, so if it\nwas just a patch that went by then there's no remaining doco.)\n\nBruce, does this ring a bell at all? Unless someone can remember\nwhy this change seemed like a good idea, I think I will take Peter's\nadvice...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jul 2000 17:10:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Warnings triggered by recent includefile cleanups "
},
{
"msg_contents": "Tom Lane writes:\n\n> @@ -55,6 +55,13 @@\n> #include \"postgres.h\"\n> #ifdef HAVE_LIMITS_H\n> #include <limits.h>\n> +#ifndef MAXINT\n> +#define MAXINT INT_MAX\n> +#endif\n> +#else\n> +#ifdef HAVE_VALUES_H\n> +#include <values.h>\n> +#endif\n> #endif\n> #include \"fmgr.h\"\n> #include \"utils/builtins.h\"\n> \n> was dead code when it was installed. The CVS log says\n> \tvalues.h patch from Alex Howansky\n\nHe claimed that the compilation failed for him in the files\n src/backend/optimizer/path/costsize.c\n src/backend/utils/adt/date.c\n src/backend/utils/adt/float.c\nbecause MAXINT was undefined (although neither date.c nor float.c used\nMAXINT at all in 6.5).\n\nThis problem is gone and one should use INT_MAX anyway.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 19 Jul 2000 18:28:51 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Warnings triggered by recent includefile cleanups "
},
{
"msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> >> Does anyone object if I revert this code to the way it was?\n> \n> > Considering that evidence shows that limits.h must have been available on\n> > all platforms at least since 6.5, in fact at least as long as the current\n> > regex engine has existed, values.h could not possibly have been included\n> > anywhere ever, so it's probably better to just remove it.\n> \n> Hmm, it does look like regex has included <limits.h> unconditionally\n> since day 1, doesn't it? That sure suggests that this patch:\n> \n> @@ -7,7 +7,7 @@\n> *\n> *\n> * IDENTIFICATION\n> - * $Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt/float.c,v 1.47 1999/07/17 20:17:55 momjian Exp $\n> + * $Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt/float.c,v 1.48 1999/09/21 20:58:25 momjian Exp $\n> *\n> *-------------------------------------------------------------------------\n> */\n> @@ -55,6 +55,13 @@\n> #include \"postgres.h\"\n> #ifdef HAVE_LIMITS_H\n> #include <limits.h>\n> +#ifndef MAXINT\n> +#define MAXINT INT_MAX\n> +#endif\n> +#else\n> +#ifdef HAVE_VALUES_H\n> +#include <values.h>\n> +#endif\n> #endif\n> #include \"fmgr.h\"\n> #include \"utils/builtins.h\"\n> \n> was dead code when it was installed. The CVS log says\n> \tvalues.h patch from Alex Howansky\n> but I can't find anything from him in the mailing list archives.\n> (We seem to have lost the pgsql-patches archives, however, so if it\n> was just a patch that went by then there's no remaining doco.)\n> \n> Bruce, does this ring a bell at all? Unless someone can remember\n> why this change seemed like a good idea, I think I will take Peter's\n> advice...\n\nI have:\n\n---------------------------------------------------------------------------\n\n",
"msg_date": "Wed, 26 Jul 2000 23:13:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Warnings triggered by recent includefile cleanups"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Bruce, does this ring a bell at all? Unless someone can remember\n>> why this change seemed like a good idea, I think I will take Peter's\n>> advice...\n\n> I have:\n\nHm. Looks like the ifndef MAXINT was the part he actually cared about,\nand that's now dead code since we don't use MAXINT anywhere anymore.\nSo I'll go ahead and simplify it down to just #include <limits.h>.\nThanks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 Jul 2000 00:26:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Warnings triggered by recent includefile cleanups "
}
] |
[
{
"msg_contents": "Hi all,\n\n there have been a couple of questions WRT doing untrustable\n things like file access, LDAP and the like from inside of\n triggers or functions.\n\n Tcl is a powerful language and could do all that, but the\n interpreter used in PL/Tcl is a safe one, because it is a\n trusted procedural language (any non-superuser can create\n functions). I think it should be pretty easy to build a\n second PL handler into the module, that executes the\n procedures in a full featured Tcl interpreter, that has all\n capabilities. This one would be installed as an untrusted PL,\n so only DB superusers could create functions in that\n language.\n\n Should I go for it and if so, how should this language be\n named?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Tue, 18 Jul 2000 13:36:22 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Untrusted PL/Tcl?"
},
{
"msg_contents": "Jan Wieck wrote:\n> capabilities. This one would be installed as an untrusted PL,\n> so only DB superusers could create functions in that\n> language.\n \n> Should I go for it and if so, how should this language be\n> named?\n\nYes; pl/utcl. Or pl/tclu. :-) Those facilities would be nice to have.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 18 Jul 2000 07:54:41 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Untrusted PL/Tcl?"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> Should I go for it and if so, how should this language be\n> named?\n\nSounds like a fine idea.\n\nWhile you're in there, do you want to do something about supporting NULL\ninputs and results properly? Right now, a NULL input comes into a pltcl\nfunction as an empty string, which is OK as far as it goes but you can't\nalways tell that from a valid data value. There should be an inquiry\nfunction to tell whether argument N is-null or not. Also, AFAICT\nthere's no way for a pltcl function to return a NULL. The most natural\nTcl syntax for this would be something like\n\treturn -code null\nbut I'm not sure how hard it is to persuade the Tcl interpreter to\naccept a new \"-code\" keyword without actually changing the Tcl core.\nWorst-case, we could invent a new statement \"return_null\" ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jul 2000 10:52:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Untrusted PL/Tcl? "
},
{
"msg_contents": "Tom Lane wrote:\n> [email protected] (Jan Wieck) writes:\n> > Should I go for it and if so, how should this language be\n> > named?\n>\n> Sounds like a fine idea.\n>\n> While you're in there, do you want to do something about supporting NULL\n> inputs and results properly? Right now, a NULL input comes into a pltcl\n> function as an empty string, which is OK as far as it goes but you can't\n> always tell that from a valid data value. There should be an inquiry\n> function to tell whether argument N is-null or not. Also, AFAICT\n> there's no way for a pltcl function to return a NULL. The most natural\n> Tcl syntax for this would be something like\n> return -code null\n> but I'm not sure how hard it is to persuade the Tcl interpreter to\n> accept a new \"-code\" keyword without actually changing the Tcl core.\n> Worst-case, we could invent a new statement \"return_null\" ...\n\n Good idea! I think I could add a -code null by replacing the\n builtin \"return\" function by a custom one.\n\n While beeing in there, I could do something else too I wanted\n to do for some time now. It'll break backward compatibility\n to Tcl versions prior to 8.0, so if there are objections ...\n\n Beginning with Tcl 8.0, dual ported objects got used to deal\n with values. These have (amongst performance issues) alot of\n benefits. Changing all the call interfaces would make it\n impossible to use PL/Tcl with a pre 8.0 Tcl installation.\n Since we're now at Tcl 8.3 (the last I've seen), ISTM it's\n not a bad decision to force the upgrade.\n\n Comments?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Tue, 18 Jul 2000 18:33:06 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: Untrusted PL/Tcl?"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> While beeing in there, I could do something else too I wanted\n> to do for some time now. It'll break backward compatibility\n> to Tcl versions prior to 8.0, so if there are objections ...\n\n> Beginning with Tcl 8.0, dual ported objects got used to deal\n> with values. These have (amongst performance issues) alot of\n> benefits. Changing all the call interfaces would make it\n> impossible to use PL/Tcl with a pre 8.0 Tcl installation.\n> Since we're now at Tcl 8.3 (the last I've seen), ISTM it's\n> not a bad decision to force the upgrade.\n\nOK by me. Tcl 7.6 is getting to be ancient history... and people\nwho are using pltcl for database functions are probably going to\nwant all the speed they can get, so making a more efficient interface\nto Tcl 8 seems like a good idea.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jul 2000 16:55:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Untrusted PL/Tcl? "
},
{
"msg_contents": "Jan Wieck wrote:\n> Tom Lane wrote:\n> > [email protected] (Jan Wieck) writes:\n> > > Should I go for it and if so, how should this language be\n> > > named?\n> >\n> > Sounds like a fine idea.\n> >\n> > While you're in there, do you want to do something about supporting NULL\n> > inputs and results properly? Right now, a NULL input comes into a pltcl\n> > function as an empty string, which is OK as far as it goes but you can't\n> > always tell that from a valid data value. There should be an inquiry\n> > function to tell whether argument N is-null or not. Also, AFAICT\n> > there's no way for a pltcl function to return a NULL. The most natural\n> > Tcl syntax for this would be something like\n> > return -code null\n> > but I'm not sure how hard it is to persuade the Tcl interpreter to\n> > accept a new \"-code\" keyword without actually changing the Tcl core.\n> > Worst-case, we could invent a new statement \"return_null\" ...\n>\n> Good idea! I think I could add a -code null by replacing the\n> builtin \"return\" function by a custom one.\n\n Well, I've implemented an \"argisnull n\" and \"return_null\"\n command for now. So \"argisnull 1\" will tell if $1 is NULL or\n not.\n\n The \"return -code null\" would be theoretically possible. But\n it might make us more Tcl version dependant than we really\n want to be.\n\n> While beeing in there, I could do something else too I wanted\n> to do for some time now. It'll break backward compatibility\n> to Tcl versions prior to 8.0, so if there are objections ...\n>\n> Beginning with Tcl 8.0, dual ported objects got used to deal\n\n Something I had to reevaluate. All values exchanged between\n PG and Tcl have to go through the type specific\n input-/output-functions. So Tcl is dealing with strings all\n the time. Therefore, the dual ported objects might not do for\n us, what they usually do for an application. Left it as is\n for now.\n\n PL/TclU (pltclu) is in place now. I think I'll like it :-)\n\n There's just one nasty detail. If an untrusted function wants\n to load other binary Tcl modules, it needs to load\n libtcl8.0.so explicitly first (to avoid unresolved symbols).\n But after that, I was able to load libpgtcl.so and\n connect/query another database on the first try! A DB\n backend that acts as a client on another DB - not bad for a\n first test. Socket operations (to GET an html page) worked\n too, so a PG backend can be a web-browser now :-).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Wed, 19 Jul 2000 14:49:24 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: Untrusted PL/Tcl?"
},
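A minimal sketch of what the argisnull/return_null interface described above might look like from the SQL level, assuming the 7.x CREATE FUNCTION syntax; the function name and its exact behaviour are illustrative only:

    CREATE FUNCTION coalesce2(text, text) RETURNS text AS '
        # return the first non-NULL argument, or NULL if both are NULL
        if {[argisnull 1]} {
            if {[argisnull 2]} { return_null }
            return $2
        }
        return $1
    ' LANGUAGE 'pltcl';

Called as SELECT coalesce2(NULL, 'fallback') it would yield 'fallback', and with two NULL arguments it would yield a real NULL rather than an empty string.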
{
"msg_contents": "Jan Wieck wrote:\n> \n> Mikhail Terekhov wrote:\n> >\n> > Do you plan to make libpgtcl stubs enabled? In this case it would be\n> > possible to use any version of tcl, at least any 8.*>8.05. It seems that\n> > this is not hard to do, see http://dev.scriptics.com/doc/howto/stubs.html\n> \n> Seems you mixed up some things here.\n> \n> PL/Tcl is a \"procedural language\" living inside of the\n> database backend. One can write DB-side functions and trigger\n> procedures using it, and they are executed by the DB server\n> process itself during query execution.\n> \n> These functions have a total different interface to access\n> the DB they are running inside already.\n\nOk\n \n> This all has nothing to do with a Tcl script accessing a\n> Postgres database as a client application. It's totally\n> unrelated to libpgtcl! Even if the changes I committed today\n\nRight\n\n> enable a backend, executing a PL/TclU (a new language)\n> function, to become the client of another database using\n> libpgtcl now - to make the confusion perfect.\n> \n\nPL/Tcl and libpqtcl have one important thing in common - they use \nTcl library. The purpose of stubs mechanism is to make applications\nwhich use Tcl library independent from the Tcl version as much as\npossible. The bottom line is that if you want your PL/Tcl or PL/TclU\nor libpgtc to be linked with the Tcl dynamically, then there is a \npossibility that due to version mismatch they will not work after\nupgradig/downgrading Tcl. So it is better to use stubs to avoid this.\n\nMikhail\n",
"msg_date": "Wed, 19 Jul 2000 15:06:09 -0400",
"msg_from": "Mikhail Terekhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Untrusted PL/Tcl?"
}
] |
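Following on from the thread above, a rough sketch of an untrusted PL/TclU function acting as a client of another database, along the lines Jan describes. The library paths, the database name and the function itself are assumptions; pg_connect, pg_exec, pg_result and pg_disconnect are the usual libpgtcl client commands:

    CREATE FUNCTION remote_count(text) RETURNS int4 AS '
        # load the Tcl core explicitly first to avoid unresolved symbols,
        # then the libpgtcl client library (paths are only examples)
        load /usr/lib/libtcl8.0.so
        load /usr/local/pgsql/lib/libpgtcl.so
        set conn [pg_connect otherdb]
        set res [pg_exec $conn "SELECT count(*) FROM $1"]
        set n [lindex [pg_result $res -getTuple 0] 0]
        pg_result $res -clear
        pg_disconnect $conn
        return $n
    ' LANGUAGE 'pltclu';

Since the language is untrusted, only a database superuser could create such a function.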
[
{
"msg_contents": ">>>>> \"J\" == Jan Wieck <[email protected]> writes:\n\nJ> there have been a couple of questions WRT doing untrustable\nJ> things like file access, LDAP and the like from inside of\nJ> triggers or functions.\n\n With PL/Ruby you can do this by giving the option --with-safe-level=number\n at compile time. \n\n Safe level must be >= 1, you just need to comment the line :\n\n rb_set_safe_level(1);\n\n in plruby_init_all(), if you want to run it with $SAFE = 0\n\n\nGuy Decoux\n\n\n",
"msg_date": "Tue, 18 Jul 2000 14:12:07 +0200 (MET DST)",
"msg_from": "ts <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Untrusted PL/Tcl?"
}
] |
[
{
"msg_contents": "I noticed my nightly vacuum process has started dying on a certain relation\nlast night. When I try to vaccum verbose it, I get the following output.\n\nI can still do \"SELECT * FROM filelist\".\n\nfilelist is a table which has 12146 rows in it, with some 15 columns. Two\nfields in approx 8-9000 of these rows are updated each night. No other\nactivity in the database while I vacuum (nobody even connected).\n\nRestarting the postmaster corrected the problem.\n\nDatabase is version 7.0.2 running on Linux 2.2.16/libc5. Started with\nparameter \"-B 512\".\n\n//Magnus\n\n\n\nNOTICE: --Relation filelist--\nNOTICE: Pages 847: Changed 0, reaped 821, Empty 0, New 0; Tup 12146: Vac\n10733, Keep/VTL 0/0, Crash 0, UnUsed 11548, MinLen 92, MaxLen 1165;\nRe-using: Free/Avail. Space 4329756/5420; EndEmpty/Avail. Pages 537/50. CPU\n0.36s/0.09u sec.\nNOTICE: Index filelist_date_index: Pages 129; Tuples 12146: Deleted 0. CPU\n0.04s/0.12u sec.\nNOTICE: Index filelist_dirid_index: Pages 134; Tuples 12146: Deleted 0. CPU\n0.03s/0.18u sec.\nNOTICE: Index filelist_id_index: Pages 129; Tuples 12146: Deleted 0. CPU\n0.06s/0.16u sec.\nNOTICE: Index filelist_name_index: Pages 247; Tuples 12146: Deleted 0. CPU\n0.17s/0.15u sec.\nNOTICE: Rel filelist: Pages: 847 --> 310; Tuple(s) moved: 0. CPU\n0.10s/0.07u sec.\nNOTICE: FlushRelationBuffers(filelist, 310): block 0 is referenced (private\n0, global 5)\nFATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\n",
"msg_date": "Tue, 18 Jul 2000 15:59:44 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "FlushRelationBuffers returned -2"
},
{
"msg_contents": "Magnus Hagander <[email protected]> writes:\n> I noticed my nightly vacuum process has started dying on a certain relation\n> last night. When I try to vaccum verbose it, I get the following output.\n\n> NOTICE: FlushRelationBuffers(filelist, 310): block 0 is referenced (private\n> 0, global 5)\n> FATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2\n\n> Restarting the postmaster corrected the problem.\n\nThat's what I was about to suggest trying. It sounds like something\ncrashed and left the buffer reference count incremented above zero for\none of the pages of the relation. In fact, several somethings, five of\nthem to be exact.\n\nHave you had any interesting backend crashes lately? Are you doing\nanything unusual with that table? It would seem that whatever is causing\nthis is at least moderately reproducible in your environment, since it's\nhappened more than once. It'd be easier to track down if you could\nidentify what sequence of operations causes the buffer refcount to be\nleft incremented.\n\nBTW, don't be afraid of the fact VACUUM aborts --- it's just being\nparanoid about the possibility that someone else is using this table\nthat it's supposed to have an exclusive lock on.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jul 2000 11:08:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FlushRelationBuffers returned -2 "
}
] |
[
{
"msg_contents": "> Magnus Hagander <[email protected]> writes:\n> > I noticed my nightly vacuum process has started dying on a \n> certain relation\n> > last night. When I try to vaccum verbose it, I get the \n> following output.\n> \n> > NOTICE: FlushRelationBuffers(filelist, 310): block 0 is \n> referenced (private\n> > 0, global 5)\n> > FATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2\n> \n> > Restarting the postmaster corrected the problem.\n> \n> That's what I was about to suggest trying. It sounds like something\n> crashed and left the buffer reference count incremented above zero for\n> one of the pages of the relation. In fact, several \n> somethings, five of\n> them to be exact.\n> \n> Have you had any interesting backend crashes lately? Are you doing\n> anything unusual with that table? It would seem that whatever \n> is causing\n> this is at least moderately reproducible in your environment, \n> since it's\n> happened more than once. It'd be easier to track down if you could\n> identify what sequence of operations causes the buffer refcount to be\n> left incremented.\nI don't *think* there should be anything like that. Looking at my log, there\nare the following entries (other than the usual \"unexpected EOF on client\nconnection\" from scripts that are aborted...\n\nERROR: regcomp failed with error empty (sub)expression\npq_flush: send() failed: Broken pipe\nERROR: Named portals may only be used in begin/end transaction blocks\n\nApart from that, it's just the usual EXPLAIN output and misspelled queries\n(complaning about tables taht don't exist, etc, when somebody misspelled\nthem). All these queries worked with other tables, but in the same database\n(a partially different project).\n\nThere is no \"core\" file anywhere in the pgsql directory structure, and\nnothing in the log about any backend crash.\n\nI'll keep my eyes open if it happens again, and try to find a pattern.\n\n\n> BTW, don't be afraid of the fact VACUUM aborts --- it's just being\n> paranoid about the possibility that someone else is using this table\n> that it's supposed to have an exclusive lock on.\nOk. Better safe than sorry :-) Good to know.\n\n//Magnus\n",
"msg_date": "Tue, 18 Jul 2000 18:21:23 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: FlushRelationBuffers returned -2 "
}
] |
[
{
"msg_contents": "Hi,\n\nI was following some newsgroups and mailing lists dedicated to web authoring\nand I've found many messages where people ask for a\ncompare table between MySQL (that they are using) and PostgreSQL.\n\nThen I was looking at MySQL documentation (info format) and I've found a very\nbad page in comparisons chapter, \"How *MySQL* compares\nwith PostgreSQL\":\n\"`PostgreSQL' has some more advanced features like user-defined types,\ntriggers, rules and some transaction support. However, PostgreSQL lacks\nmany of the standard types and functions from ANSI SQL and ODBC. See the\n`crash-me' web page (http://www.mysql.com/crash-me-choose.htmy) for a\ncomplete list of limits and which types and functions are supported or\nunsupported.\"\n\nI understand that MySQL is not a RDBMS but a SQL front end to indexed\narchives, but people don't understand this.\n\nSo I suggest a simply comparison list between PostgreSQL and MySQL, something\nto do for pure user information and not for flame war\nabout DBMS religion.\n\nI've started to prepare this list, and I'll try to keep it updated.\nPlease help me to maintain my intentions.\nHere there is the first version:\n\n1) PostgreSQL is a real RDBMS. This means:\n - transaction SQL commands available and working: BEGIN, COMMIT, ROLLBACK.\n MySQL doesn't.\n\n - PostgreSQL is a Object-Oriented-DataBase-Management-System:\nuser defined class, functions, aggregates.\n MySQL isn't.\n\n - PostgreSQL has a quite good set of ANSI SQL-92 commands:\nSELECT INTO\nCREATE/DROP VIEW\n MySQL hasn't these commands yet.\n (please help me to say what is not supported yet in PostgreSQL and what is\nnot available on MySQL).\n\n - PostgreSQL implements sub selects, for example:\n SELECT * from orders where orderid EXCEPT\n (SELECT orderid from packages where shipment_date='today');\n MySQL lacks sub-selects.\n\n - PostgreSQL has referential integrity SQL commands available and working:\nFOREIGN KEYS, CREATE/DELETE/UPDATE TRIGGERS.\n MySQL hasn't.\n\n - PostgreSQL implements stored procedures in SQL, Perl and TCL in standard\nsource distribution. 
Developers are able to add more\nlanguages for stored procedures.\n MySQL doesn't support stored procedures.\n\n - PostgreSQL has embedeed SQL-C precompiler.\n MySQL doesn't.\n\n - PostgreSQL support three types of indixes (B-TREE, R-TREE, HASH), and user\n can choose which use for his purpose.\n MySQL supports ISAM only archives.\n\n - PostgreSQL implements via Multi Version Concurrency Control (MVCC) a\nmultiversion\n locking model.\n MySQL supports table lock model as unique transaction protection.\n\n The main difference between multiversion and lock models is that in MVCC\nlocks acquired for querying (reading) data don't\nconflict with locks acquired for writing data and so reading never blocks\nwriting and writing never blocks reading.\n\n PostgreSQL supports many locking models and two kind of trasnsaction\nisolation: row-level locking is possible via \"SELECT for\nUPDATE\" SQL command.\n\n - MySQL implements OUTER JOINS, PostgreSQL doesn't yet.\n\n - MySQL is a fully multi-threaded using kernel threads.\n PostgreSQL has a number of back-end processes defined at compile time, by\ndefault 32.\n\n (please, someone could explain to me which implicantions this could have in\na multi CPU environment?)\n (please, someone could explain to me which implicantions this could have on\nthe number of front-end concurrent processes and\nnumber of open connections to PostgreSQL?)\n\n - MySQL has support tools for data recovery.\n PostgreSQL doesn't trash any data by itself, so you need to backup as for\nany other user mistakes.\n\n(I would like to say about the PostgreSQL stability and reliability, but I\nfear to open flames on this point. Are there any\nsuggestions?)\n\n\nBoth PostgreSQL and MySQL are available for many operating systems;\nboth of them are released in Open Source now: MySQL under GPL since few days,\nPostgreSQL is born as Open Source project under BSD\nlicence.\nBoth of them have:\nDBI, DBD front end for Perl programming languange, \nPHP drivers\nODBC drivers\nC/C++ front-end libraries \n\n(I know that PostgreSQL has JDBC driver, I don't know about MySQL)\n\nConclusions:\nMySQL could be a good tool for any project that doesn't need transactions and\ndata integrity.\nFor all other applications PostgreSQL is the only game in the city.\n\n\nPlease send me directly by email, any corrections and improvements. I'll post\nhere next release of this simple list before to\npublish it.\n\n\n --\n\nI've also looked at crash-me test from MySQL site, I'm sure that\nit could be interesting to discuss about it and about how much\nthis test is reliable.\n\ncrash-me test URL:\n http://www.mysql.com/crash-me-choose.html\n\n\n\n\nThank you in advance, \\fer\n\n",
"msg_date": "Tue, 18 Jul 2000 21:10:58 +0100",
"msg_from": "Ferruccio Zamuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "MySQL comparison"
},
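The MVCC claim in the list above is perhaps easiest to see with two concurrent sessions. This is only an illustrative sketch; the accounts table and its columns are made up:

    -- session 1
    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE accno = 1;

    -- session 2, at the same time: the reader is not blocked by the writer,
    -- it simply sees the row as it was before the (still uncommitted) update
    SELECT balance FROM accounts WHERE accno = 1;

    -- explicit row-level locking, as mentioned above, is spelled
    SELECT balance FROM accounts WHERE accno = 1 FOR UPDATE;

    -- session 1
    COMMIT;

Under a table-locking model the second session's plain SELECT would have to wait for the writer to finish.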
{
"msg_contents": "Ferruccio Zamuner wrote:\n\n> - MySQL is a fully multi-threaded using kernel threads.\n> PostgreSQL has a number of back-end processes defined at compile time, by\n> default 32.\n\nIt's better not to mention this. I used to work for a database vendor\nwith a process model (i.e. like postgres), and we could still beat the\nopposition who had a threaded model. The bottom line is that\nthread/process isn't that important. We even used to beat the opposition\nin number of databases/connections, by a huge margin. It's an\nimplementation detail. Don't confuse people by throwing it up.\n",
"msg_date": "Wed, 19 Jul 2000 13:52:02 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL comparison"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Ferruccio Zamuner wrote:\n>> PostgreSQL has a number of back-end processes defined at compile time, by\n>> default 32.\n\n> thread/process isn't that important. We even used to beat the opposition\n> in number of databases/connections, by a huge margin. It's an\n> implementation detail. Don't confuse people by throwing it up.\n\nAnother point worth making is that this is not a number fixed at compile\ntime, but just a postmaster parameter with a default of 32. You can set\nit much higher if your system can handle the load.\n\nThe reason the default limit is so low is just that we'd like the code\nto run out-of-the-box on platforms with small kernel limits on shared\nmem or number of semaphores. The hidden assumption is that someone\nwho really needs a higher limit will know enough to RTFM ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jul 2000 00:51:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL comparison "
},
{
"msg_contents": "On Tue, 18 Jul 2000, Ferruccio Zamuner wrote:\n\n> - PostgreSQL implements stored procedures in SQL, Perl and TCL in standard\n> source distribution. Developers are able to add more\n> languages for stored procedures.\n> MySQL doesn't support stored procedures.\nDo you mean \"user defined functions?\"\nHmmm... Sorry, but what does it mean \"strored procedure\"?\nIMHO stroed procedure is somewhat like this:\nProc dd_deleteds()\n SELECT * FROM dd WHERE deleted=1;\nEND;\nAnd you can call this procedure from client, or from a backend-parsed pl.\nA procedure may produce a some columns, and some rows. I think there are\nDBMS, who support this. PostgreSQL do it?\n\n\n--\n nek;)))\n\n",
"msg_date": "Wed, 19 Jul 2000 12:19:16 +0200 (CEST)",
"msg_from": "Peter Vazsonyi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL comparison"
}
] |
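On the stored-procedure question raised in this thread: the closest PostgreSQL equivalent at the time was an SQL function returning SETOF, roughly as below. The table definition is invented to match the example in the message, and the invocation syntax for set-returning functions was still rather awkward in 7.0:

    CREATE TABLE dd (id int4, deleted int4);

    CREATE FUNCTION dd_deleteds() RETURNS setof dd AS '
        SELECT * FROM dd WHERE deleted = 1;
    ' LANGUAGE 'sql';

Such a function returns whole rows of dd, one per matching tuple, so it covers the "some columns and some rows" case, though not as conveniently as a true stored procedure.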
[
{
"msg_contents": "Hi,\n\nplease look at following example:\n\nCREATE TABLE picture (\nid serial not null,\ndescription text,\nfilename text);\n\nCREATE TABLE advert (\nartist text,\ncustomer text,\ntarget text)\nINHERITS (picture);\n\nCREATE TABLE work (\nid serial not null,\nadvert_id int4 not null references advert,\nvalue numeric(6,2) default 0);\n\nNOTICE: CREATE TABLE will create implicit sequence 'work_id_seq' for SERIAL\ncol\numn 'work.id'\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'work_id_key' for\ntable \n'work'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: PRIMARY KEY for referenced table \"advert\" not found\n\n\nHow can I create PRIMARY KEY CONSTRAINT for table advert?\n\n\nThank you in advance for any reply, \\fer\n\n",
"msg_date": "Tue, 18 Jul 2000 21:11:36 +0100",
"msg_from": "Ferruccio Zamuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "PRIMARY KEY & INHERITANCE (fwd)"
},
{
"msg_contents": "Something on the TODO list is that indexes should be inherited by\ndefault. Unfortunately, right now they are not. I'm not sure what the\ninteraction is here with the foreign key mechanism, so I'm CCing this to\nhackers to see if anyone there might comment.\n\nFerruccio Zamuner wrote:\n> \n> Hi,\n> \n> please look at following example:\n> \n> CREATE TABLE picture (\n> id serial not null,\n> description text,\n> filename text);\n> \n> CREATE TABLE advert (\n> artist text,\n> customer text,\n> target text)\n> INHERITS (picture);\n> \n> CREATE TABLE work (\n> id serial not null,\n> advert_id int4 not null references advert,\n> value numeric(6,2) default 0);\n> \n> NOTICE: CREATE TABLE will create implicit sequence 'work_id_seq' for SERIAL\n> col\n> umn 'work.id'\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index 'work_id_key' for\n> table\n> 'work'\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> ERROR: PRIMARY KEY for referenced table \"advert\" not found\n> \n> How can I create PRIMARY KEY CONSTRAINT for table advert?\n",
"msg_date": "Wed, 19 Jul 2000 13:56:35 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PRIMARY KEY & INHERITANCE (fwd)"
},
{
"msg_contents": "On Wed, 19 Jul 2000, Chris Bitmead wrote:\n\n> Something on the TODO list is that indexes should be inherited by\n> default. Unfortunately, right now they are not. I'm not sure what the\n> interaction is here with the foreign key mechanism, so I'm CCing this to\n> hackers to see if anyone there might comment.\n\nIf you don't specify a set of target columns for the reference, it goes to\nthe primary key of the table (if one exists). If one doesn't we error out\nas shown below. You can make the reference by saying:\nadvert_id int4 not null references advert(id) \nin the definition of table work.\n\nOf course, in this case, I don't even see a primary key being defined on\neither picture or advert, so it's not really the inheritance thing unless\nhe also made an index somewhere else (not using unique or primary key on\nthe table).\n\nIn 7.1, the ability to reference columns that are not constrained to be\nunique will probably go away, but you can also make the index on\nadvert(id) to make it happy in that case.\n\n> > CREATE TABLE picture (\n> > id serial not null,\n> > description text,\n> > filename text);\n> > \n> > CREATE TABLE advert (\n> > artist text,\n> > customer text,\n> > target text)\n> > INHERITS (picture);\n> > \n> > CREATE TABLE work (\n> > id serial not null,\n> > advert_id int4 not null references advert,\n> > value numeric(6,2) default 0);\n> > \n> > NOTICE: CREATE TABLE will create implicit sequence 'work_id_seq' for SERIAL\n> > col\n> > umn 'work.id'\n> > NOTICE: CREATE TABLE/UNIQUE will create implicit index 'work_id_key' for\n> > table\n> > 'work'\n> > NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> > ERROR: PRIMARY KEY for referenced table \"advert\" not found\n\n",
"msg_date": "Tue, 18 Jul 2000 22:25:26 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: PRIMARY KEY & INHERITANCE (fwd)"
},
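Spelled out against the schema from the original post (assuming the picture and advert tables exist as shown there), Stephan's two suggestions would look roughly like this; the index name is arbitrary:

    -- give advert a unique index on the inherited id column, since indexes
    -- are not inherited automatically (and 7.1 will want the referenced
    -- column to be unique)
    CREATE UNIQUE INDEX advert_id_key ON advert (id);

    -- name the referenced column explicitly instead of relying on a
    -- primary key being found
    CREATE TABLE work (
        id serial NOT NULL,
        advert_id int4 NOT NULL REFERENCES advert (id),
        value numeric(6,2) DEFAULT 0
    );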
{
"msg_contents": "\nOf course I had to be half asleep when I wrote the second paragraph of my\nresponse, since I totally missed he was using a serial. The rest still\napplies though...\n\nAs an aside to Chris, what interactions do you expect between the OO stuff \nyou've been working on and foreign key references? I'm going to have to\nmuck around with the trigger code to move to storing oids of tables and\nattributes rather than names, so I thought it might make sense to at least\nthink about possible future interactions.\n\nOn Tue, 18 Jul 2000, Stephan Szabo wrote:\n> \n> If you don't specify a set of target columns for the reference, it goes to\n> the primary key of the table (if one exists). If one doesn't we error out\n> as shown below. You can make the reference by saying:\n> advert_id int4 not null references advert(id) \n> in the definition of table work.\n> \n> Of course, in this case, I don't even see a primary key being defined on\n> either picture or advert, so it's not really the inheritance thing unless\n> he also made an index somewhere else (not using unique or primary key on\n> the table).\n> \n> In 7.1, the ability to reference columns that are not constrained to be\n> unique will probably go away, but you can also make the index on\n> advert(id) to make it happy in that case.\n\n",
"msg_date": "Wed, 19 Jul 2000 07:26:24 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: PRIMARY KEY & INHERITANCE (fwd)"
},
{
"msg_contents": "Stephan Szabo wrote:\n> \n> Of course I had to be half asleep when I wrote the second paragraph of my\n> response, since I totally missed he was using a serial. The rest still\n> applies though...\n> \n> As an aside to Chris, what interactions do you expect between the OO stuff\n> you've been working on and foreign key references? I'm going to have to\n> muck around with the trigger code to move to storing oids of tables and\n> attributes rather than names, so I thought it might make sense to at least\n> think about possible future interactions.\n\nAs a rule, anything that applies to a base class should also apply to\nthe sub-class automatically. For some things you may want to have the\noption of excluding it, by something like the ONLY syntax of select, but\n99% of the time everything should just apply to sub-classes.\n\nStoring oids of attributes sounds like a problem in this context because\nit may make it hard to relate these to sub-classes. I do really think\nthat the system catalogs should be re-arranged so that attributes have\ntwo parts - the parts that are specific to that class, and the parts\nthat also apply to sub-classes. For example the type and the length\nshould probably apply to sub-classes. The attnum and the name should\nprobably be individual to each class in the hierarchy. (The name should\nbe individual to support subclass renaming to avoid naming conflicts,\nlike in the draft SQL3 and Eiffel). If it is in two parts then using the\noid of the common part would make it easy for your purposes.\n",
"msg_date": "Thu, 20 Jul 2000 09:43:37 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: PRIMARY KEY & INHERITANCE (fwd)"
},
{
"msg_contents": "> As a rule, anything that applies to a base class should also apply to\n> the sub-class automatically. For some things you may want to have the\n> option of excluding it, by something like the ONLY syntax of select, but\n> 99% of the time everything should just apply to sub-classes.\nThat makes sense. I assume that you cannot remove the unique constraint\nthat\na parent provides, once those start being inherited. This is mostly because\nforeign key references really only work in the presence of a unique\nconstraint.\n\n> Storing oids of attributes sounds like a problem in this context because\n> it may make it hard to relate these to sub-classes. I do really think\n> that the system catalogs should be re-arranged so that attributes have\n> two parts - the parts that are specific to that class, and the parts\n> that also apply to sub-classes. For example the type and the length\n> should probably apply to sub-classes. The attnum and the name should\n> probably be individual to each class in the hierarchy. (The name should\n> be individual to support subclass renaming to avoid naming conflicts,\n> like in the draft SQL3 and Eiffel). If it is in two parts then using the\n> oid of the common part would make it easy for your purposes.\nHow would one refer to an attribute whose name has changed in a\nsubclass if you're doing a select on the superclass (or do you even\nneed to do anything - does it figure it out automagically?)\n\n\n",
"msg_date": "Wed, 19 Jul 2000 17:18:17 -0700",
"msg_from": "\"Stephan Szabo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] PRIMARY KEY & INHERITANCE (fwd)"
},
{
"msg_contents": "Stephan Szabo wrote:\n\n> > Storing oids of attributes sounds like a problem in this context because\n> > it may make it hard to relate these to sub-classes. I do really think\n> > that the system catalogs should be re-arranged so that attributes have\n> > two parts - the parts that are specific to that class, and the parts\n> > that also apply to sub-classes. For example the type and the length\n> > should probably apply to sub-classes. The attnum and the name should\n> > probably be individual to each class in the hierarchy. (The name should\n> > be individual to support subclass renaming to avoid naming conflicts,\n> > like in the draft SQL3 and Eiffel). If it is in two parts then using the\n> > oid of the common part would make it easy for your purposes.\n> How would one refer to an attribute whose name has changed in a\n> subclass if you're doing a select on the superclass (or do you even\n> need to do anything - does it figure it out automagically?)\n\nIf you had..\ncreate table a (aa text);\ncreate table b under a rename aa to bb ( );\ninsert into a(aa) values('aaa');\ninsert into b(bb) values('bbb');\nselect * from a;\n\naa\n---\naaa\nbbb\n\nThe system knows that a.aa is the same as b.bb. The same attribute\nlogically, just referred to by different names depending on the context.\nEiffel handles it the same way if I remember right.\n",
"msg_date": "Thu, 20 Jul 2000 10:31:37 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] PRIMARY KEY & INHERITANCE (fwd)"
},
{
"msg_contents": "\n\n> > How would one refer to an attribute whose name has changed in a\n> > subclass if you're doing a select on the superclass (or do you even\n> > need to do anything - does it figure it out automagically?)\n> \n> If you had..\n> create table a (aa text);\n> create table b under a rename aa to bb ( );\n> insert into a(aa) values('aaa');\n> insert into b(bb) values('bbb');\n> select * from a;\n> \n> aa\n> ---\n> aaa\n> bbb\n> \n> The system knows that a.aa is the same as b.bb. The same attribute\n> logically, just referred to by different names depending on the context.\n> Eiffel handles it the same way if I remember right.\n\nSo, if you did, select * from a where aa>'a', it would properly mean \nthe inherited attribute, even if an attribute aa was added to table b, \npossibly of a different type? In that case I really wouldn't need to do \nanything special to handle the subtables since I'd always be doing the \nselect for update on the table that was specified at creation time which\nis the one I have the attributes for.\n\n\n",
"msg_date": "Wed, 19 Jul 2000 17:50:06 -0700",
"msg_from": "\"Stephan Szabo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] PRIMARY KEY & INHERITANCE (fwd)"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> ... The attnum and the name should\n> probably be individual to each class in the hierarchy. (The name should\n> be individual to support subclass renaming to avoid naming conflicts,\n> like in the draft SQL3 and Eiffel). If it is in two parts then using the\n> oid of the common part would make it easy for your purposes.\n\nThis bothers me. Seems like you are saying that a subclass's column\nmight not match the parent's by *either* name or column position, but\nnonetheless the system will know that this subclass column is the same\nas that parent column. No doubt we could implement that by relying on\nOIDs of pg_attribute rows, but just because it's implementable doesn't\nmake it a good idea. I submit that this is too confusing to be of\nany practical use. There should be a *user-visible* connection between\nparent and child column, not some magic under-the-hood connection.\nIMHO it ought to be the column name.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jul 2000 00:22:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: PRIMARY KEY & INHERITANCE (fwd) "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > ... The attnum and the name should\n> > probably be individual to each class in the hierarchy. (The name should\n> > be individual to support subclass renaming to avoid naming conflicts,\n> > like in the draft SQL3 and Eiffel). If it is in two parts then using the\n> > oid of the common part would make it easy for your purposes.\n> \n> This bothers me. Seems like you are saying that a subclass's column\n> might not match the parent's by *either* name or column position, but\n> nonetheless the system will know that this subclass column is the same\n> as that parent column. No doubt we could implement that by relying on\n> OIDs of pg_attribute rows, but just because it's implementable doesn't\n> make it a good idea. I submit that this is too confusing to be of\n> any practical use. There should be a *user-visible* connection between\n> parent and child column, not some magic under-the-hood connection.\n> IMHO it ought to be the column name.\n\nWhen you multiple inherit from unrelated base classes you need a\nconflict\nresolution mechanism. That's why it can't be the name. The SQL3 draft\nrecognised this.\n\nMany programming languages deal with this issue without undue confusion.\nTo provide mapping to these programming languages such a conflict\nresolution mechanism becomes necessary.\n",
"msg_date": "Thu, 20 Jul 2000 14:56:42 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: PRIMARY KEY & INHERITANCE (fwd)"
}
] |
[
{
"msg_contents": "Jan sent me a test case that produces a \"btree: failed to add item to\nthe page\" failure message. The cause is that a btree index page needs\nto be split to make room for a new index item, but the split point is\nchosen in such a way that there still isn't room for the new item on\nthe new page it has to go into.\n\nThis appears to be a long-standing bug, but it's a problem that's not\nvery likely to occur unless you are dealing with large index items\n(approaching the pagesize/3 limit).\n\nIn the test case, the initial contents of the index page look like\n\nitemTID\t\tlength\n0,7,1\t\t1792\t\t(\"high key\" from next page)\n0,14,1\t\t40\n0,85,1\t\t1848\n0,41,1\t\t1656\n0,86,1\t\t1464\n\n(There are actually only four \"real\" keys on this page; the first entry\nis a copy of the lowest key from the next index page. BTW I believe\nthese \"high keys\" are the source of the extra index TOAST references\nthat Jan was wondering about.)\n\nAfter bt_split executes, I find the left page has\n\nitemTID\t\tlength\n0,85,1\t\t1848\t\t(\"high key\" from newly-inserted page)\n0,14,1\t\t40\n\nand the right page has\n\nitemTID\t\tlength\n0,7,1\t\t1792\t\t(still the high key from the next page)\n0,85,1\t\t1848\n0,41,1\t\t1656\n0,86,1\t\t1464\n\nUnfortunately, the incoming item of size 1416 has to go onto the right\npage, and it still doesn't fit.\n\nIn this particular situation the choice of the bad split point seems\nto be due to sloppy logic in _bt_findsplitloc --- it decides to split\nthe page \"in the middle\" but its idea of the middle is\n\tfirstindex + (lastindex - firstindex) / 2\nwhich is off by one in this case, and is fundamentally wrong anyhow\nbecause middle by item count is not necessarily middle by size.\n\nThis could be repaired with some work in findsplitloc, but I'm afraid\nthere is a more serious problem. The comments before findsplitloc say\n\n * In order to guarantee the proper handling of searches for duplicate\n * keys, the first duplicate in the chain must either be the first\n * item on the page after the split, or the entire chain must be on\n * one of the two pages. That is,\n * [1 2 2 2 3 4 5]\n * must become\n * [1] [2 2 2 3 4 5]\n * or\n * [1 2 2 2] [3 4 5]\n * but not\n * [1 2 2] [2 3 4 5].\n * However,\n * [2 2 2 2 2 3 4]\n * may be split as\n * [2 2 2 2] [2 3 4].\n\nIf this is accurate, then there will be cases where it is impossible to\nsplit a page in a way that allows insertion of the new item where it\nneeds to go. For instance, suppose in the above example that the three\nlarge items had been all equal keys and that the incoming item also has\nthat same key value. Under the rules given in this comment the only\nlegal split would be like [A] [B B B B] where A represents the 40-byte\nkey and the B's indicate the four equal keys. That will not fit.\n\nHowever, I'm not sure I believe the comment anymore: it has not changed\nsince Postgres95 and I can see that quite a bit of work has been done\non the duplicate-key logic since then. Furthermore findsplitloc itself\nsometimes ignores the claimed requirement: when it does the\nsplit-in-the-middle case quoted above, it does not pay attention to\nwhether it is splitting in the middle of a group of duplicates. 
(But\nthat path is taken infrequently enough that it's possible it's just\nplain broken, and we haven't noticed.)\n\nDoes anyone know whether this comment still describes the btree\nequal-key logic accurately?\n\nIf so, one possible solution is to make _bt_insertonpg smart enough to\ndetect that there's not room to insert after making a legal split, and\nthen recursively split again. That looks fairly ticklish however,\nespecially if we want to preserve the existing guarantees of no deadlock\nin concurrent insertions.\n\nA more radical way out is to do what Vadim's been saying we should do\neventually: redo the btree logic so that there are never \"equal\" keys\n(ie, use the item TID as a tiebreaker when ordering items). That would\nfix our performance problems with many equal keys as well as simplify\nthe code. But it'd be a good deal of work, I fear.\n\nComments, opinions, better ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jul 2000 16:48:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "btree split logic is fragile in the presence of large index items"
},
{
"msg_contents": "Tom Lane writes:\n\n> A more radical way out is to do what Vadim's been saying we should do\n> eventually: redo the btree logic so that there are never \"equal\" keys\n> (ie, use the item TID as a tiebreaker when ordering items). That would\n> fix our performance problems with many equal keys as well as simplify\n> the code. But it'd be a good deal of work, I fear.\n\nI wonder, if we are ever to support deferrable unique constraints (or even\nproperly working unique constraints, re update t1 set x = x + 1), wouldn't\nthe whole unique business have to disappear from the indexes anyway and be\nhandled more in the trigger area?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 19 Jul 2000 18:27:33 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: btree split logic is fragile in the presence of large index items"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> A more radical way out is to do what Vadim's been saying we should do\n>> eventually: redo the btree logic so that there are never \"equal\" keys\n>> (ie, use the item TID as a tiebreaker when ordering items). That would\n>> fix our performance problems with many equal keys as well as simplify\n>> the code. But it'd be a good deal of work, I fear.\n\n> I wonder, if we are ever to support deferrable unique constraints (or even\n> properly working unique constraints, re update t1 set x = x + 1), wouldn't\n> the whole unique business have to disappear from the indexes anyway and be\n> handled more in the trigger area?\n\nCould be, but I don't think it's relevant to this particular issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jul 2000 12:52:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: btree split logic is fragile in the presence of large index items"
}
] |
[
{
"msg_contents": "> However, I'm not sure I believe the comment anymore: it has \n> not changed since Postgres95 and I can see that quite a bit of work has\n> been done on the duplicate-key logic since then. Furthermore findsplitloc\n> itself sometimes ignores the claimed requirement: when it does the\n> split-in-the-middle case quoted above, it does not pay attention to\n> whether it is splitting in the middle of a group of duplicates. (But\n\nOps. This is bug.\n\n> that path is taken infrequently enough that it's possible it's just\n> plain broken, and we haven't noticed.)\n> \n> Does anyone know whether this comment still describes the btree\n> equal-key logic accurately?\n\nI think so.\n\nVadim\n",
"msg_date": "Tue, 18 Jul 2000 15:47:10 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: btree split logic is fragile in the presence of lar ge index\n items"
},
{
"msg_contents": "I've been chewing some more on the duplicate-key btree issue, and\ndigging through CVS logs and Postgres v4.2 to try to understand the\nhistory of the code (and boy, this code does have a lot of history\ndoesn't it?)\n\nWhat I think I now understand is this:\n\n1. The PG 4.2 code used an OID appended to the key to ensure that\nbtree items are unique. This fixes the Lehman and Yao algorithm's\nassumption of unique keys *for insertions*, since the item being\ninserted must have an OID different from existing items even if the\nkey values are equal. However there is still an issue for *lookups*,\nsince we are necessarily doing a lookup with just a key value, and\nwe want to find all index items with that same key value regardless\nof OID.\n\n2. The solution used in the PG 4.2 code, and still present now, handles\nequal-keys for lookup by mandating that a sequence of equal-keyed items\nnot cross page boundaries unless the first key in that sequence is at\nthe start of an index page. This ensures that that first key is present\nin the parent node and so a lookup for that key will descend to the\npage in which the sequence starts. Without this restriction the lookup\nwould descend to the first continuation page of the sequence and so miss\nthe first few matching items. However we now see that this restriction\ncan cause insertion failures by preventing a page from being split where\nit needs to be split to make room for the incoming item.\n\n3. Awhile back Vadim removed the added-OID code and added a bunch of\nlogic for explicit management of chains of duplicate keys. In\nretrospect this change was probably a mistake. For btree items that\npoint to heap tuples we can use the tuple TID as tie-breaker (since\nan index should never contain two items with the same key and TID).\nBtree internal pages need an added field since they have to consider the\nTID as part of the key (and the field that is the TID in a leaf page is\nused as the down-link pointer in non-leaf pages).\n\nI believe that the PG 4.2 solution for equal-key lookups is also a\nmistake, and that the correct answer is simple: split pages wherever\nyou want, but when descending through a non-leaf page, consider the\ntarget key to be *less than* index items that actually have equal keys.\nIn this way, if we are looking at a downlink pointer that exactly\nmatches the target key, we go to the page one left of the page the\npointer references, and thereby find any equal-keyed items that may\nbe lurking at the end of that page. (We could possibly implement this\nby treating the key being searched for as having an attached TID of\nminus infinity. Note we already have more or less this same idea\nin place for scanning a multi-column index using a partial key.)\n\nThis might sound klugy since it means a different comparison rule for\ndescending the tree than for actually deciding whether we should return\na particular item. But *we have to make such a distinction anyway*\nin order to handle NULL items. (The equality check must always say\nFALSE when comparing nulls, but we have to consider NULLs as sortable\ndata values when entering them into the tree.)\n\nI'm starting to feel that the right fix is to bite the bullet and redo\nthe index logic this way, rather than add another band-aid.\n\nOne thing I'm still not too clear on is how we handle backwards\nindexscans. After looking at Lehman and Yao's paper it seems like\nonly forward scans are guaranteed to work when other processes are\nbusy splitting pages. 
Anybody know how that's handled?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jul 2000 19:47:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: btree split logic is fragile in the presence of large index items"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n>\n> I've been chewing some more on the duplicate-key btree issue, and\n> digging through CVS logs and Postgres v4.2 to try to understand the\n> history of the code (and boy, this code does have a lot of history\n> doesn't it?)\n>\n> One thing I'm still not too clear on is how we handle backwards\n> indexscans. After looking at Lehman and Yao's paper it seems like\n> only forward scans are guaranteed to work when other processes are\n> busy splitting pages. Anybody know how that's handled?\n>\n\nThere's the following comment for backwards scan in _bt_step()\nin nbtsearch.c\n\n /*\n * If the adjacent page just split,\nthen\n we may have\n * the wrong block. Handle this\ncase.\nBecause pages\n * only split right, we don't have\nto wo\nrry about this\n * failing to terminate.\n */\n\nSeems backwards index scans sometimes move right(scan forward).\n\nRegards.\n\nHiroshi Inoue\n\n",
"msg_date": "Wed, 19 Jul 2000 12:54:56 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: btree split logic is fragile in the presence of large index items"
}
] |
[
{
"msg_contents": "Sorry for the cross-post, but, I figured I had to do that to get this\nfile to those who need it.\n\nFor those running the 7.0.2-2 RPMset AND having the -devel subpackage\ninstalled, copy the attached file to '/usr/include/pgsql/port/linux.h',\nand change the symlink '/usr/include/pgsql/os.h' to point to that\nlinux.h.\n\nThe next release of the RPMset will fix this (oft-mentioned) problem,\neven if I have to release a set just for this problem (I was hoping to\nhave some other enhancements and fixes...but....).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11",
"msg_date": "Tue, 18 Jul 2000 19:00:15 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "linux.h"
}
] |
[
{
"msg_contents": "http://www.newsalert.com/bin/story?StoryId=CoxpwqbWbtefuvtaXm&FQ=%22open%20source%22&Nav=na-search-&StoryTitle=%22open%2\n\nOr,\nhttp://linuxtoday.com/news_story.php3?ltsn=2000-07-18-026-06-PR-CY-SW if\nyou have a problem with the longer URL.\n\ntitle:\nPRNewswire: Open Source CRM Effort Adds Support for PostGreSQL\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 18 Jul 2000 19:03:26 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "In the news..."
}
] |
[
{
"msg_contents": "> 1. The PG 4.2 code used an OID appended to the key to ensure that\n> btree items are unique. This fixes the Lehman and Yao algorithm's\n> assumption of unique keys *for insertions*, since the item being\n> inserted must have an OID different from existing items even if the\n> key values are equal. However there is still an issue for *lookups*,\n> since we are necessarily doing a lookup with just a key value, and\n> we want to find all index items with that same key value regardless\n> of OID.\n\nThis is not right. OIDs were used *only* to find parent tuple in\n_bt_getstackbuf, where only *per level* uniqueness is required.\nI removed OIDs because of on any level there are no two (or more)\ntuples pointing to the same place - i.e. TID may be used.\nBTW, there were only single-key indices in Postgres-95 (and 4.2 too?) -\ni.e. OID could not be used in key.\n\n> 3. Awhile back Vadim removed the added-OID code and added a bunch of\n> logic for explicit management of chains of duplicate keys. In\n> retrospect this change was probably a mistake. For btree items that\n> point to heap tuples we can use the tuple TID as tie-breaker (since\n> an index should never contain two items with the same key and TID).\n> Btree internal pages need an added field since they have to \n> consider the TID as part of the key (and the field that is the TID in a \n> leaf page is used as the down-link pointer in non-leaf pages).\n\nWhile implementing multi-key btree-s for 6.1 I found problems with\nduplicates handling and this is why extra logic was added. But I never\nwas happy with this logic -:)\n\nNote that using TID as part of key would give us additional feature:\nfast heap tuple --> index tuple look up. With this feature vacuum wouldn't\nhave to read entire index to delete a few items... and this will be required\nfor space re-using without vacuum...\n\nVadim\n",
"msg_date": "Tue, 18 Jul 2000 17:18:15 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: btree split logic is fragile in the presence of lar\n\tge index items"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> While implementing multi-key btree-s for 6.1 I found problems with\n> duplicates handling and this is why extra logic was added. But I never\n> was happy with this logic -:)\n\nDo you not like the proposal I was suggesting? I thought it was pretty\nmuch what you said yourself a few months ago...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jul 2000 11:45:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: btree split logic is fragile in the presence of lar ge index\n\titems"
},
{
"msg_contents": "> Note that using TID as part of key would give us additional feature:\n> fast heap tuple --> index tuple look up. With this feature vacuum wouldn't\n> have to read entire index to delete a few items... and this will be required\n> for space re-using without vacuum...\n\nWow, this seems like a huge win.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 Jul 2000 23:27:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: btree split logic is fragile in the presence of lar ge index\n items"
}
] |
[
{
"msg_contents": "Hi Peter. I finally got a look at the \"set\" organization of the Postgres\ndocs. Looks pretty nice; there is a tradeoff between the \"monolithic\nToC\" from before and the more organized look of the \"books within a\nset\".\n\nI have no objection to using the \"set\" style (barring unforseen\ndifficulties); my recollection is that you are recommending that, or\nwere you just exploring the idea?\n\n - Thomas\n",
"msg_date": "Wed, 19 Jul 2000 06:42:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Docs organized as a \"set\""
}
] |
[
{
"msg_contents": "Has anything changed in the system tables pg_proc or pg_type since 7.0.0 was\nreleased?\n\nThis morning I've seen two weird reports on Interfaces that are showing\nNullPointerExceptions occuring on internal queries to those two tables (in\ndifferent classes, one queries just pg_type the other pg_proc). There's\nnothing special about these queries so I'm just wondering if it's something\nmore sinister going on.\n\nPeter\n\n--\nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n\n",
"msg_date": "Wed, 19 Jul 2000 07:53:01 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "System tables since 7.0.0"
},
{
"msg_contents": "Peter Mount <[email protected]> writes:\n> Has anything changed in the system tables pg_proc or pg_type since 7.0.0 was\n> released?\n\nNo ... we don't change system tables in patch releases, as a matter of\npolicy.\n\nThe current development tip is another story of course, but I assume\nyou are looking at reports from people running 7.0.*. Might be useful\nto get them to try the same queries in psql.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jul 2000 11:12:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: System tables since 7.0.0 "
}
] |
[
{
"msg_contents": "One of these is now fixed, but the other (pg_proc) I'm not sure about.\n\nPeter\n\n--\nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n\n\n-----Original Message-----\nFrom: Peter Mount [mailto:[email protected]]\nSent: Wednesday, July 19, 2000 7:53 AM\nTo: PostgreSQL Developers List (E-mail)\nSubject: [HACKERS] System tables since 7.0.0\n\n\nHas anything changed in the system tables pg_proc or pg_type since 7.0.0 was\nreleased?\n\nThis morning I've seen two weird reports on Interfaces that are showing\nNullPointerExceptions occuring on internal queries to those two tables (in\ndifferent classes, one queries just pg_type the other pg_proc). There's\nnothing special about these queries so I'm just wondering if it's something\nmore sinister going on.\n\nPeter\n\n--\nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n",
"msg_date": "Wed, 19 Jul 2000 08:07:07 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: System tables since 7.0.0"
}
] |
[
{
"msg_contents": "The Query Cache\n ~~~~~~~~~~~~~~~\n (excuse me, if you obtain this email twice; first I sent it with patch in \nattache, but this list has probably some limit, because email still not in\nthe list. Hmm...)\n\n Now, the patch is available at: \n\n ftp://ftp2.zf.jcu.cz/users/zakkr/pg/pg_qcache-07182000.patch.tar.gz \n\nThe patch must be used for current (07/18/2000) CVS version. Because code in \nthe CVS is under very active development, you can load full PG source with \nquery cache from:\n\n ftp://ftp2.zf.jcu.cz/users/zakkr/pg/pg_qcache-07182000.tar.gz\n\n or you can download source from CVS (source from 07/18/2000):\n\nexport CVSROOT=\":pserver:[email protected]:/home/projects/pgsql/cvsroot\"\ncvs login\ncvs co -D \"07/18/2000 12:00\" pgsql\ncd pgsql/src\npatch -p1 < pqsql-qcache-07182000.patch\n\n\n The Query Cache and new SPI description\n =======================================\n\n Note: cache is based on new memory design.\n \n Implementation\n ~~~~~~~~~~~~~~\n The qCache allows to save queryTree and queryPlan. Available are two space \n for data caching. \n \n LOCAL - data are cached in backend non-shared memory and data aren't\n available in other backends. \n \n SHARE - data are cached in backend shared memory and data are \n visible in all backends.\n \n Because size of share memory pool is limited and it's set during\n postmaster start, the qCache must remove all old planns if pool is \n full. You can mark each entry as \"REMOVEABLE\" or \"NOTREMOVEABLE\". \n \n The removeable entry is removed if pool is full and entry is last \n in list that keep track usage of entry. \n \n A not-removeable entry must be removed via qCache_Remove() or \n the other routines. The qCache not remove this entry itself.\n \n All records in the qCache are cached in the hash table under some key. The\n qCache knows two alternate of key --- \"KEY_STRING\" and \"KEY_BINARY\". A\n key must be always less or equal \"QCACHE_HASH_KEYSIZE\" (128b) \n \n The qCache API not allows to access to shared memory, all cached planns \n that API returns are copy to CurrentMemoryContext or to defined context. \n All (qCache_ ) routines lock shmem itself (exception is \n qCache_RemoveOldest_ShareRemoveAble()).\n\n - for locking is used spin lock.\n\n Memory management\n ~~~~~~~~~~~~~~~~~\n The qCache use for qCache's shared pool organized via memory contexts \n independent on standard aset/mcxt, but use compatible API --- it allows \n to use standard palloc() (it is very needful for basic plan-tree operations, \n an example for copyObject()). The qCache memory management is very simular \n to current aset.c code. It is chunked blocks too, but the block is smaller \n - 1024b.\n\n The number of blocks is available set in postmaster 'argv' via option\n '-Z'.\n\n For planns storing is used separate MemoryContext for each plan, it \n is good idea (Hiroshi's ?), bucause create new context is simple and \n inexpensive and allows easy destroy (free) cached plan. This method is \n used in my SPI overhaul instead TopMemoryContext feeding.\n\n Postmaster\n ~~~~~~~~~~\n The query cache memory is init during potmaster startup. The size of\n query cache pool is set via '-Z <number-of-blocks>' switch --- default \n is 100 blocks where 1 block = 1024b, it is sufficient for 20-30 cached\n planns. One query needs somewhere 3-10 blocks, for example query like\n\n PREPARE sel AS SELECT * FROM pg_class;\n\n needs 10Kb, because table pg_class has very much columns. 
\n -- \n\n Note: for development I add SQL function: \"SELECT qcache_state();\",\n this routine show usage of qCache.\n\n SPI\n ~~~\n I a little overwrite SPI save plan method and remove TopMemoryContext\n \"feeding\" (already discussed).\n\n Standard SPI:\n\n SPI_saveplan() - save each plan to separate standard memory context.\n\n SPI_freeplan() - free plan.\n\n By key SPI:\n\n It is SPI interface for query cache and allows save planns to SHARED\n or LOCAL cache 'by' arbitrary key (string or binary). Routines:\n\n SPI_saveplan_bykey() - save plan to query cache\n\n SPI_freeplan_bykey() - remove plan from query cache\n\n SPI_fetchplan_bykey() - fetch plan saved in query cache\n\n SPI_execp_bykey() - execute (via SPI) plan saved in query\n cache \n\n - now, users can write functions that save planns to shared memory \n and planns are visible in all backend and are persistent arcoss \n connection. \n\n Example:\n ~~~~~~~\n /* ----------\n * Save/exec query from shared cache via string key\n * ----------\n */\n int keySize = 0; \n flag = SPI_BYKEY_SHARE | SPI_BYKEY_STRING;\n char *key = \"my unique key\";\n \n res = SPI_execp_bykey(values, nulls, tcount, key, flag, keySize);\n \n if (res == SPI_ERROR_PLANNOTFOUND) \n {\n /* --- not plan in cache - must create it --- */\n \n void *plan;\n\n plan = SPI_prepare(querystr, valnum, valtypes);\n SPI_saveplan_bykey(plan, key, keySize, flag);\n \n res = SPI_execute(plan, values, Nulls, tcount);\n }\n \n elog(NOTICE, \"Processed: %d\", SPI_processed);\n\n\n PREPARE/EXECUTE\n ~~~~~~~~~~~~~~~\n * Syntax:\n \n PREPARE <name> AS <query> \n [ USING type, ... typeN ] \n [ NOSHARE | SHARE | GLOBAL ]\n \n EXECUTE <name> \n [ INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table ]\n [ USING val, ... valN ]\n [ NOSHARE | SHARE | GLOBAL ]\n\n DEALLOCATE PREPARE \n [ <name> [ NOSHARE | SHARE | GLOBAL ]]\n [ ALL | ALL INTERNAL ]\n\n\n I know that it is a little out of SQL92... (use CREATE/DROP PLAN instead\n this?) --- what mean SQL standard guru?\n\n * Where:\n \n NOSHARE --- cached in local backend query cache - not accessable\n from the others backends and not is persisten a across\n conection.\n\n SHARE --- cached in shared query cache and accessable from\n all backends which work over same database.\n\n GLOBAL --- cached in shared query cache and accessable from\n all backends and all databases. \n\n - default is 'SHARE'\n \n Deallocate:\n \n ALL --- deallocate all users's plans\n\n ALL INTERNAL --- deallocate all internal plans, like planns\n cached via SPI. It is needful if user\n alter/drop table ...etc.\n\n * Parameters:\n \n \"USING\" part in the prepare statement is for datetype setting for\n paremeters in the query. For example:\n\n PREPARE sel AS SELECT * FROM pg_class WHERE relname ~~ $1 USING text;\n\n EXECUTE sel USING 'pg%';\n \n\n * Limitation:\n \n - prepare/execute allow use full statement of SELECT/INSERT/DELETE/\n UPDATE. \n - possible is use union, subselects, limit, ofset, select-into\n\n\n Performance:\n ~~~~~~~~~~~\n * the SPI\n\n - I for my tests a little change RI triggers to use SPI by_key API\n and save planns to shared qCache instead to internal RI hash table.\n\n The RI use very simple (for parsing) queries and qCache interest is \n not visible. It's better if backend very often startup and RI check \n always same tables. In this situation speed go up --- 10-12%. 
\n (This snapshot not include this RI change.)\n\n But all depend on how much complicate for parser is query in \n trigger.\n\n * PREPARE/EXECUTE\n \n - For tests I use query that not use some table (the executor is \n in boredom state), but is difficult for the parser. An example:\n\n SELECT 'a text ' || (10*10+(100^2))::text || ' next text ' || cast \n (date_part('year', timestamp 'now') AS text );\n \n - (10000 * this query):\n\n standard select: 54 sec\n via prepare/execute: 4 sec (93% better)\n\n IMHO it is nod bad.\n \n - For standard query like:\n\n SELECT u.usename, r.relname FROM pg_class r, pg_user u WHERE \n r.relowner = u.usesysid;\n\n it is with PREPARE/EXECUTE 10-20% faster.\n\n\n I will *very glad* if someone try and test patch; some discussion is wanted \ntoo.\n\n Thanks.\n \n Karel\n\nPS. Excuse me, my English is poor and this text is long --- it is not good \n combination...",
"msg_date": "Wed, 19 Jul 2000 10:16:13 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": true,
"msg_subject": "The query cache - first snapshot (long)"
},
{
"msg_contents": "> * PREPARE/EXECUTE\n> \n> - For tests I use query that not use some table (the executor is \n> in boredom state), but is difficult for the parser. An example:\n> \n> SELECT 'a text ' || (10*10+(100^2))::text || ' next text ' || cast \n> (date_part('year', timestamp 'now') AS text );\n> \n> - (10000 * this query):\n> \n> standard select: 54 sec\n> via prepare/execute: 4 sec (93% better)\n> \n> IMHO it is nod bad.\n> \n> - For standard query like:\n> \n> SELECT u.usename, r.relname FROM pg_class r, pg_user u WHERE \n> r.relowner = u.usesysid;\n> \n> it is with PREPARE/EXECUTE 10-20% faster.\n> \n> \n> I will *very glad* if someone try and test patch; some discussion is wanted \n> too.\n\nWow, just when I thought we couldnd't get much faster. That is great.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 Jul 2000 23:36:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The query cache - first snapshot (long)"
},
{
"msg_contents": "\nOn Wed, 26 Jul 2000, Bruce Momjian wrote:\n\n> > * PREPARE/EXECUTE\n> > \n> > - For tests I use query that not use some table (the executor is \n> > in boredom state), but is difficult for the parser. An example:\n> > \n> > SELECT 'a text ' || (10*10+(100^2))::text || ' next text ' || cast \n> > (date_part('year', timestamp 'now') AS text );\n> > \n> > - (10000 * this query):\n> > \n> > standard select: 54 sec\n> > via prepare/execute: 4 sec (93% better)\n> > \n> > IMHO it is nod bad.\n> > \n> > - For standard query like:\n> > \n> > SELECT u.usename, r.relname FROM pg_class r, pg_user u WHERE \n> > r.relowner = u.usesysid;\n> > \n> > it is with PREPARE/EXECUTE 10-20% faster.\n> > \n> > \n> > I will *very glad* if someone try and test patch; some discussion is wanted \n> > too.\n> \n> Wow, just when I thought we couldnd't get much faster. That is great.\n> \n\n Very Thanks! \n\n Your answer is first during one week when this snapshot is outside... I'm a\nlittle worry that it is unconcern for the others.\n\n It is not only PREPARE/EXECUTE it is new SPI_saveplan() design that is\ncorrect to current new Tom's memory design, also here is new SPI 'bykey'\ninterface for query save/exec --- it can be good for PL those not must run\nsome internal save-query-management and use self hash tables, ..etc. \t \n\n \t\t\t\t\t\tKarel\n\n",
"msg_date": "Thu, 27 Jul 2000 10:26:00 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The query cache - first snapshot (long)"
},
{
"msg_contents": "\n\n Still *quiet* for this theme? I output it two weeks ago and I haven't \nstill some reaction. I can stop work on this if it is not wanted and not \ninteresting...\n\n\t\t\t\t\t\tKarel\n\n\nOn Wed, 19 Jul 2000, Karel Zak wrote:\n\n> The Query Cache and new SPI description\n> =======================================\n> \n> Note: cache is based on new memory design.\n> \n> Implementation\n> ~~~~~~~~~~~~~~\n> The qCache allows to save queryTree and queryPlan. Available are two space \n> for data caching. \n> \n> LOCAL - data are cached in backend non-shared memory and data aren't\n> available in other backends. \n> \n> SHARE - data are cached in backend shared memory and data are \n> visible in all backends.\n> \n> Because size of share memory pool is limited and it's set during\n> postmaster start, the qCache must remove all old planns if pool is \n> full. You can mark each entry as \"REMOVEABLE\" or \"NOTREMOVEABLE\". \n> \n> The removeable entry is removed if pool is full and entry is last \n> in list that keep track usage of entry. \n> \n> A not-removeable entry must be removed via qCache_Remove() or \n> the other routines. The qCache not remove this entry itself.\n> \n> All records in the qCache are cached in the hash table under some key. The\n> qCache knows two alternate of key --- \"KEY_STRING\" and \"KEY_BINARY\". A\n> key must be always less or equal \"QCACHE_HASH_KEYSIZE\" (128b) \n> \n> The qCache API not allows to access to shared memory, all cached planns \n> that API returns are copy to CurrentMemoryContext or to defined context. \n> All (qCache_ ) routines lock shmem itself (exception is \n> qCache_RemoveOldest_ShareRemoveAble()).\n> \n> - for locking is used spin lock.\n> \n> Memory management\n> ~~~~~~~~~~~~~~~~~\n> The qCache use for qCache's shared pool organized via memory contexts \n> independent on standard aset/mcxt, but use compatible API --- it allows \n> to use standard palloc() (it is very needful for basic plan-tree operations, \n> an example for copyObject()). The qCache memory management is very simular \n> to current aset.c code. It is chunked blocks too, but the block is smaller \n> - 1024b.\n> \n> The number of blocks is available set in postmaster 'argv' via option\n> '-Z'.\n> \n> For planns storing is used separate MemoryContext for each plan, it \n> is good idea (Hiroshi's ?), bucause create new context is simple and \n> inexpensive and allows easy destroy (free) cached plan. This method is \n> used in my SPI overhaul instead TopMemoryContext feeding.\n> \n> Postmaster\n> ~~~~~~~~~~\n> The query cache memory is init during potmaster startup. The size of\n> query cache pool is set via '-Z <number-of-blocks>' switch --- default \n> is 100 blocks where 1 block = 1024b, it is sufficient for 20-30 cached\n> planns. One query needs somewhere 3-10 blocks, for example query like\n> \n> PREPARE sel AS SELECT * FROM pg_class;\n> \n> needs 10Kb, because table pg_class has very much columns. \n> -- \n> \n> Note: for development I add SQL function: \"SELECT qcache_state();\",\n> this routine show usage of qCache.\n> \n> SPI\n> ~~~\n> I a little overwrite SPI save plan method and remove TopMemoryContext\n> \"feeding\" (already discussed).\n> \n> Standard SPI:\n> \n> SPI_saveplan() - save each plan to separate standard memory context.\n> \n> SPI_freeplan() - free plan.\n> \n> By key SPI:\n> \n> It is SPI interface for query cache and allows save planns to SHARED\n> or LOCAL cache 'by' arbitrary key (string or binary). 
Routines:\n> \n> SPI_saveplan_bykey() - save plan to query cache\n> \n> SPI_freeplan_bykey() - remove plan from query cache\n> \n> SPI_fetchplan_bykey() - fetch plan saved in query cache\n> \n> SPI_execp_bykey() - execute (via SPI) plan saved in query\n> cache \n> \n> - now, users can write functions that save planns to shared memory \n> and planns are visible in all backend and are persistent arcoss \n> connection. \n> \n> Example:\n> ~~~~~~~\n> /* ----------\n> * Save/exec query from shared cache via string key\n> * ----------\n> */\n> int keySize = 0; \n> flag = SPI_BYKEY_SHARE | SPI_BYKEY_STRING;\n> char *key = \"my unique key\";\n> \n> res = SPI_execp_bykey(values, nulls, tcount, key, flag, keySize);\n> \n> if (res == SPI_ERROR_PLANNOTFOUND) \n> {\n> /* --- not plan in cache - must create it --- */\n> \n> void *plan;\n> \n> plan = SPI_prepare(querystr, valnum, valtypes);\n> SPI_saveplan_bykey(plan, key, keySize, flag);\n> \n> res = SPI_execute(plan, values, Nulls, tcount);\n> }\n> \n> elog(NOTICE, \"Processed: %d\", SPI_processed);\n> \n> \n> PREPARE/EXECUTE\n> ~~~~~~~~~~~~~~~\n> * Syntax:\n> \n> PREPARE <name> AS <query> \n> [ USING type, ... typeN ] \n> [ NOSHARE | SHARE | GLOBAL ]\n> \n> EXECUTE <name> \n> [ INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table ]\n> [ USING val, ... valN ]\n> [ NOSHARE | SHARE | GLOBAL ]\n> \n> DEALLOCATE PREPARE \n> [ <name> [ NOSHARE | SHARE | GLOBAL ]]\n> [ ALL | ALL INTERNAL ]\n> \n> \n> I know that it is a little out of SQL92... (use CREATE/DROP PLAN instead\n> this?) --- what mean SQL standard guru?\n> \n> * Where:\n> \n> NOSHARE --- cached in local backend query cache - not accessable\n> from the others backends and not is persisten a across\n> conection.\n> \n> SHARE --- cached in shared query cache and accessable from\n> all backends which work over same database.\n> \n> GLOBAL --- cached in shared query cache and accessable from\n> all backends and all databases. \n> \n> - default is 'SHARE'\n> \n> Deallocate:\n> \n> ALL --- deallocate all users's plans\n> \n> ALL INTERNAL --- deallocate all internal plans, like planns\n> cached via SPI. It is needful if user\n> alter/drop table ...etc.\n> \n> * Parameters:\n> \n> \"USING\" part in the prepare statement is for datetype setting for\n> paremeters in the query. For example:\n> \n> PREPARE sel AS SELECT * FROM pg_class WHERE relname ~~ $1 USING text;\n> \n> EXECUTE sel USING 'pg%';\n> \n> \n> * Limitation:\n> \n> - prepare/execute allow use full statement of SELECT/INSERT/DELETE/\n> UPDATE. \n> - possible is use union, subselects, limit, ofset, select-into\n> \n> \n> Performance:\n> ~~~~~~~~~~~\n> * the SPI\n> \n> - I for my tests a little change RI triggers to use SPI by_key API\n> and save planns to shared qCache instead to internal RI hash table.\n> \n> The RI use very simple (for parsing) queries and qCache interest is \n> not visible. It's better if backend very often startup and RI check \n> always same tables. In this situation speed go up --- 10-12%. \n> (This snapshot not include this RI change.)\n> \n> But all depend on how much complicate for parser is query in \n> trigger.\n> \n> * PREPARE/EXECUTE\n> \n> - For tests I use query that not use some table (the executor is \n> in boredom state), but is difficult for the parser. 
An example:\n> \n> SELECT 'a text ' || (10*10+(100^2))::text || ' next text ' || cast \n> (date_part('year', timestamp 'now') AS text );\n> \n> - (10000 * this query):\n> \n> standard select: 54 sec\n> via prepare/execute: 4 sec (93% better)\n> \n> IMHO it is nod bad.\n> \n> - For standard query like:\n> \n> SELECT u.usename, r.relname FROM pg_class r, pg_user u WHERE \n> r.relowner = u.usesysid;\n> \n> it is with PREPARE/EXECUTE 10-20% faster.\n> \n> \n> I will *very glad* if someone try and test patch; some discussion is wanted \n> too.\n> \n> Thanks.\n> \n> Karel\n> \n> PS. Excuse me, my English is poor and this text is long --- it is not good \n> combination...\n> \n> \n\n",
"msg_date": "Mon, 31 Jul 2000 11:49:49 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": true,
"msg_subject": "quiet? Re: The query cache - first snapshot (long)"
},
{
"msg_contents": "* Karel Zak <[email protected]> [000731 02:52] wrote:\n> \n> \n> Still *quiet* for this theme? I output it two weeks ago and I haven't \n> still some reaction. I can stop work on this if it is not wanted and not \n> interesting...\n\nI'd really like to see it go into postgresql.\n\nWho dropped the ball on this? :)\n\n-Alfred\n",
"msg_date": "Mon, 31 Jul 2000 03:06:49 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: quiet? Re: The query cache - first snapshot (long)"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> * Karel Zak <[email protected]> [000731 02:52] wrote:\n> I'd really like to see it go into postgresql.\n\n> Who dropped the ball on this? :)\n\nIt needs review. Careful review. I haven't had time and I guess the\nother core members haven't either. I'm hoping to have more time soon\nthough...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 Jul 2000 10:34:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: quiet? Re: The query cache - first snapshot (long) "
},
{
"msg_contents": "On Mon, 31 Jul 2000, Tom Lane wrote:\n\n> Alfred Perlstein <[email protected]> writes:\n> > * Karel Zak <[email protected]> [000731 02:52] wrote:\n> > I'd really like to see it go into postgresql.\n> \n> > Who dropped the ball on this? :)\n> \n> It needs review. Careful review. I haven't had time and I guess the\n> other core members haven't either. I'm hoping to have more time soon\n> though...\n\n\n Thanks Tom.\n\n\t\t\t\t\tKarel\n\n BTW. --- testers not must be always core members, it is gage to \n\t others too :-)\n\n\n",
"msg_date": "Mon, 31 Jul 2000 16:40:10 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: quiet? Re: The query cache - first snapshot (long) "
}
] |
[
{
"msg_contents": "\n\n Today afternoon I a little study libpython1.5 and I mean create\nnew PL language is not a problem.\n\n I a little play with it, and here is effect:\n\ntest=# CREATE FUNCTION py_test() RETURNS text AS '\ntest'# a = ''Hello '';\ntest'# b = ''PL/Python'';\ntest'# plpython.retval( a + b );\ntest'# ' LANGUAGE 'plpython';\nCREATE\ntest=#\ntest=#\ntest=# SELECT py_test();\n py_test\n-----------------\n Hello PL/Python\n(1 row)\n\n\n Comments? Works on this already anyone?\n\n\t\t\t\tKarel\n\n\nPS. I'am not Python guru, I love 'C' and good shared libs only :-)\n\n",
"msg_date": "Wed, 19 Jul 2000 17:24:46 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hello PL/Python"
},
{
"msg_contents": "Karel Zak wrote:\n> \n> Today afternoon I a little study libpython1.5 and I mean create\n> new PL language is not a problem.\n> \n> I a little play with it, and here is effect:\n> \n> test=# CREATE FUNCTION py_test() RETURNS text AS '\n> test'# a = ''Hello '';\n> test'# b = ''PL/Python'';\n> test'# plpython.retval( a + b );\n> test'# ' LANGUAGE 'plpython';\n> CREATE\n> test=#\n> test=#\n> test=# SELECT py_test();\n> py_test\n> -----------------\n> Hello PL/Python\n> (1 row)\n> \n> Comments? Works on this already anyone?\n\nThere is a semi-complete implementation (i.e. no trigger procedures) \nby Vello Kadarpik ([email protected]).\n\nHe is probably waiting for fmgr redesign or somesuch to complete before \nreleasing it.\n\n---------\nHannu\n",
"msg_date": "Thu, 20 Jul 2000 12:30:54 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hello PL/Python"
},
{
"msg_contents": "> > Comments? Works on this already anyone?\n> \n> There is a semi-complete implementation (i.e. no trigger procedures) \n> by Vello Kadarpik ([email protected]).\n\n Cool! Is anywhere available this implementation?\n\nThanks.\n\n\t\t\tKarel\n\n",
"msg_date": "Thu, 20 Jul 2000 12:40:04 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hello PL/Python"
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> There is a semi-complete implementation (i.e. no trigger procedures) \n> by Vello Kadarpik ([email protected]).\n\n> He is probably waiting for fmgr redesign or somesuch to complete before \n> releasing it.\n\nfmgr redesign is done as far as PL language handlers need be concerned.\nI still have to turn the crank on converting individual old-style\nfunctions to new-style (about half of the builtin functions are done\nso far ... man, we have got a lot of them ...). But PL and trigger\nhandlers were done a couple months ago.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jul 2000 10:58:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hello PL/Python "
},
{
"msg_contents": "Tom Lane wrote:\n> Hannu Krosing <[email protected]> writes:\n> > There is a semi-complete implementation (i.e. no trigger procedures)\n> > by Vello Kadarpik ([email protected]).\n>\n> > He is probably waiting for fmgr redesign or somesuch to complete before\n> > releasing it.\n>\n> fmgr redesign is done as far as PL language handlers need be concerned.\n> I still have to turn the crank on converting individual old-style\n> functions to new-style (about half of the builtin functions are done\n> so far ... man, we have got a lot of them ...). But PL and trigger\n> handlers were done a couple months ago.\n\n PL/pgSQL (what you did) and PL/Tcl (done lately) are\n converted to the new FMGR NULL-capabilities.\n\n Dunno about PL/Perl.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Thu, 20 Jul 2000 17:39:14 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Hello PL/Python"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n> > There is a semi-complete implementation (i.e. no trigger procedures)\n> > by Vello Kadarpik ([email protected]).\n> \n> > He is probably waiting for fmgr redesign or somesuch to complete before\n> > releasing it.\n> \n> fmgr redesign is done as far as PL language handlers need be concerned.\n> I still have to turn the crank on converting individual old-style\n> functions to new-style (about half of the builtin functions are done\n> so far ... man, we have got a lot of them ...). But PL and trigger\n> handlers were done a couple months ago.\n\nIs some documentation available or just the source ?\n\nIs the -fmgr mailing-list actually there ? \nI got some cryptic messages when I tried to subscribe a while ago.\n\n----------\nHannu\n",
"msg_date": "Fri, 21 Jul 2000 14:40:03 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hello PL/Python"
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> Tom Lane wrote:\n>> fmgr redesign is done as far as PL language handlers need be concerned.\n\n> Is some documentation available or just the source ?\n\nsrc/backend/utils/fmgr/README is all there is at the moment. Updating\nthe SGML docs is still on the to-do list.\n\n> Is the -fmgr mailing-list actually there ? \n\nDarn if I know. I'm not on it ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jul 2000 10:10:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hello PL/Python "
},
{
"msg_contents": "On Fri, 21 Jul 2000, Tom Lane wrote:\n\n> Hannu Krosing <[email protected]> writes:\n> > Tom Lane wrote:\n> >> fmgr redesign is done as far as PL language handlers need be concerned.\n> \n> > Is some documentation available or just the source ?\n> \n> src/backend/utils/fmgr/README is all there is at the moment. Updating\n> the SGML docs is still on the to-do list.\n> \n> > Is the -fmgr mailing-list actually there ? \n> \n> Darn if I know. I'm not on it ...\n\nit is actually there ... the only one of the new ones that has been seeing\nany traffic, though, is the -oo one ...\n\n\n",
"msg_date": "Sat, 22 Jul 2000 09:50:45 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hello PL/Python "
},
{
"msg_contents": "> > handlers were done a couple months ago.\n> \n> PL/pgSQL (what you did) and PL/Tcl (done lately) are\n> converted to the new FMGR NULL-capabilities.\n> \n> Dunno about PL/Perl.\n\nI don't think anyone knows about PL/Perl. :-) (It is broken on many\nplatforms.)\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 Jul 2000 23:43:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hello PL/Python"
},
{
"msg_contents": "On Wed, 26 Jul 2000, Bruce Momjian wrote:\n\n> > > handlers were done a couple months ago.\n> > \n> > PL/pgSQL (what you did) and PL/Tcl (done lately) are\n> > converted to the new FMGR NULL-capabilities.\n> > \n> > Dunno about PL/Perl.\n> \n> I don't think anyone knows about PL/Perl. :-) (It is broken on many\n> platforms.)\n\nI believe that plperl failures are related to building perl as a shared\nlibrary which isn't done in many platforms. PL/Perl by itself seems pretty\ndarn stable to me.\n\nNow, when PL/Perl will have some sort of SPI interface...That'd rock ;)\nMy idea was to implement SPI as a DBD driver, such as DBD::PgSPI, but I\ndon't have time to implement it...\n\n-alex\n\n",
"msg_date": "Wed, 26 Jul 2000 23:54:33 -0400 (EDT)",
"msg_from": "Alex Pilosov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hello PL/Python"
}
] |
[
{
"msg_contents": "That's what I thought. Yes, it is 7.0.* releases the reports are coming\nfrom. One has been solved (they were closing the connection before using\nit), but the other one this morning, and one this afternoon are still\npuzzling...\n\nPeter\n\n--\nPeter Mount\nEnterprise Support Mushroom\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Wednesday, July 19, 2000 4:13 PM\nTo: Peter Mount\nCc: PostgreSQL Developers List (E-mail)\nSubject: Re: [HACKERS] System tables since 7.0.0 \n\n\nPeter Mount <[email protected]> writes:\n> Has anything changed in the system tables pg_proc or pg_type since 7.0.0\nwas\n> released?\n\nNo ... we don't change system tables in patch releases, as a matter of\npolicy.\n\nThe current development tip is another story of course, but I assume\nyou are looking at reports from people running 7.0.*. Might be useful\nto get them to try the same queries in psql.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jul 2000 16:26:30 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: System tables since 7.0.0 "
}
] |
[
{
"msg_contents": "> > While implementing multi-key btree-s for 6.1 I found problems with\n> > duplicates handling and this is why extra logic was added. \n> > But I never was happy with this logic -:)\n> \n> Do you not like the proposal I was suggesting? I thought it \n> was pretty much what you said yourself a few months ago...\n\nDo not add TID to key but store key anywhere in duplicate chain and\njust read lefter child page while positioning index scan, as we do\nright now for partial keys?\n\nThis will result in additional reads but I like it much more than\ncurrent \"logic\"... especially keeping in mind that we'll have to\nimplement redo/undo for btree very soon and this would be nice\nto simplify things -:) But if you're going to change btree then\nplease do it asap - I hope to begin btree redo/undo implementation\nin 2-3 days, just after heap...\n\nVadim\n",
"msg_date": "Wed, 19 Jul 2000 09:59:57 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: btree split logic is fragile in the presence of lar\n\tge index items"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>> Do you not like the proposal I was suggesting? I thought it \n>> was pretty much what you said yourself a few months ago...\n\n> Do not add TID to key but store key anywhere in duplicate chain and\n> just read lefter child page while positioning index scan, as we do\n> right now for partial keys?\n\n> This will result in additional reads but I like it much more than\n> current \"logic\"...\n\nOffhand it seems good to me too. For the normal case where there are\nmany keys per page and not so many duplicates, an unneeded read should\nbe rare anyway.\n\nI will need to study Lehman & Yao a little more to ensure they don't\nhave a problem with it, but if not I'll do it that way. (I was\nsurprised to realize that Lehman is the same Phil Lehman I knew in\ngrad school ... in fact he was probably working on this paper when\nI knew him. Small world ain't it.)\n\n> But if you're going to change btree then\n> please do it asap - I hope to begin btree redo/undo implementation\n> in 2-3 days, just after heap...\n\nSlavedriver ;-) ... I'll see what I can do ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jul 2000 14:13:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: btree split logic is fragile in the presence of lar ge index\n\titems"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n> \n> \"Mikheev, Vadim\" <[email protected]> writes:\n> >> Do you not like the proposal I was suggesting? I thought it \n> >> was pretty much what you said yourself a few months ago...\n> \n> > Do not add TID to key but store key anywhere in duplicate chain and\n> > just read lefter child page while positioning index scan, as we do\n> > right now for partial keys?\n> \n> > This will result in additional reads but I like it much more than\n> > current \"logic\"...\n>\n\nWhat about unique key insertions ?\n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Thu, 20 Jul 2000 08:45:39 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: btree split logic is fragile in the presence of lar ge index\n\titems"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>>>> Do not add TID to key but store key anywhere in duplicate chain and\n>>>> just read lefter child page while positioning index scan, as we do\n>>>> right now for partial keys?\n>> \n>>>> This will result in additional reads but I like it much more than\n>>>> current \"logic\"...\n\n> What about unique key insertions ?\n\nWhat about 'em? Doesn't seem to make any difference as far as I can\nsee. You still need to be prepared to scan right to see all of the\npotential matches.\n\n\nI have been digging in the index code and am thinking of reinstating\nanother aspect of the older implementation. Once upon a time, the code\nwas set up to treat the leftmost key on any non-leaf tree level as\nminus-infinity. That seems to me to agree with the data structure\nLehman and Yao have in mind: in their drawings, each down-link pointer\non a non-leaf level is \"between\" two keys, except that the leftmost\ndownlink has no key to its left. Their drawings all show n+1 downlinks\nin an n-key non-leaf node. We didn't match that representation, so we\nneed to fake it with a dummy key associated with the first downlink.\nThe original code did that, but it got taken out at some point and\nreplaced with pretty ugly code to propagate minimum-key changes back up\nthe tree when the leftmost child node has to be split. But I think the\nonly thing wrong with it was that not all the comparison routines knew\nthey needed to do that. btree seems to be suffering from an abundance\nof comparison routines ... I'm going to try to eliminate some of the\nredundancy.\n\nActually, we could shave some storage by not storing a key in the\nfirst data item of any non-leaf page, whether leftmost on its level\nor not. That would correspond exactly to L&Y's drawings.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jul 2000 20:21:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: btree split logic is fragile in the presence of lar ge index\n\titems"
}
] |
[
{
"msg_contents": "I'm trying to sort out the documentation regarding the SysV IPC settings,\nbut I better understand them myself first. :)\n\nWe use three shared-memory segments: One is for the spin locks and is of\nnegligible size (144 bytes currently). The other two I don't know, but one\nof them seems to be sized about 550kB + -B * BLCKSZ\n\nMy kernel has the following interesting-looking shared memory settings:\n\nSHMMAX\t-- max size per segment. Apparently must be >= 550kB + -B * BLCKSZ\nSHMMNI\t-- max number of segments system wide, better be >= 3\nSHMSEG\t-- max number of segments per process, also better be >= 3\nSHMALL\t-- max number of pages for shmem system wide. This seems to be\n fixed at some theoretical amount.\n\nThe most promising thing to promote here is evidently to raise SHMMAX.\n\n\nFor semaphores, we're using ceil(-N % 16) sets of 16 semaphores. In my\nkernel I see:\n\nSEMMNI\t-- max number of semaphore \"identifiers\" (=sets?)\nSEMMSL\t-- max semaphores per set, this is explained in storage/proc.h\nSEMMNS\t-- max semaphores in system\n\nSo, SEMMNI and SEMMNS seem to be the most promising settings to change.\n\nIs there any noteworthy relevance of some of the other parameters? I see\nFAQ_BSDI talks about SEMUME and SEMMNU.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 20 Jul 2000 00:46:17 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "About these IPC parameters"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> We use three shared-memory segments: One is for the spin locks and is of\n> negligible size (144 bytes currently). The other two I don't know, but one\n> of them seems to be sized about 550kB + -B * BLCKSZ\n\nThe shmem sizes depend on both -B and -N, but the dependence on -B is\nmuch stronger. Obviously there's 8K per -B for the buffer itself,\nand there's also some allowance for hashtable entries for the buffer\nindexing tables. The -N number drives the size of the PROC table plus\nsome hashtables --- but a PROC entry isn't very big.\n\nI believe there's no really fundamental reason why we use three shmem\nsegments and not just one. I've toyed with the idea of trying to\ncombine them, but not done anything about it yet...\n\n> My kernel has the following interesting-looking shared memory settings:\n\nFWIW, HPUX does not have SHMALL --- and since HPUX began life as SysV\nI would imagine a lot of other SysV derivatives don't either. The\nrelevant parameters here seem to be\n\nSEMA Enable Sys V Semaphores\nSEMAEM Max Value for Adjust on Exit Semaphores\nSEMMAP Max Number of Semaphore Map Entries\nSEMMNI Number of Semaphore Identifiers\nSEMMNS Max Number of Semaphores\nSEMMNU Number of Semaphore Undo Structures\nSEMUME Semaphore Undo Entries per Process\nSEMVMX Semaphore Maximum Value\nSHMEM Enable Sys V Shared Memory\nSHMMAX Max Shared Mem Segment (bytes)\nSHMMNI Number of Shared Memory Identifiers\nSHMSEG Shared Memory Segments per Process\n\nOther than shooting yourself in the foot by having SEMA or SHMEM be\n0 (OFF), it looks like the parameters that could need raising on this\nplatform would be SEMMAP, SEMMNI, SEMMNS, SHMMAX.\n\n> Is there any noteworthy relevance of some of the other parameters? I see\n> FAQ_BSDI talks about SEMUME and SEMMNU.\n\nAFAIK we don't use semaphore undo, so those are red herrings.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jul 2000 00:50:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About these IPC parameters "
},
{
"msg_contents": "On Thu, 20 Jul 2000, Tom Lane wrote:\n\n> Peter Eisentraut <[email protected]> writes:\n> > We use three shared-memory segments: One is for the spin locks and is of\n> > negligible size (144 bytes currently). The other two I don't know, but one\n> > of them seems to be sized about 550kB + -B * BLCKSZ\n> \n> The shmem sizes depend on both -B and -N, but the dependence on -B is\n> much stronger. Obviously there's 8K per -B for the buffer itself,\n> and there's also some allowance for hashtable entries for the buffer\n> indexing tables. The -N number drives the size of the PROC table plus\n> some hashtables --- but a PROC entry isn't very big.\n> \n> I believe there's no really fundamental reason why we use three shmem\n> segments and not just one. I've toyed with the idea of trying to\n> combine them, but not done anything about it yet...\n> \n> > My kernel has the following interesting-looking shared memory settings:\n> \n> FWIW, HPUX does not have SHMALL --- and since HPUX began life as SysV\n> I would imagine a lot of other SysV derivatives don't either. The\n> relevant parameters here seem to be\n> \n> SEMA Enable Sys V Semaphores\n> SEMAEM Max Value for Adjust on Exit Semaphores\n> SEMMAP Max Number of Semaphore Map Entries\n> SEMMNI Number of Semaphore Identifiers\n> SEMMNS Max Number of Semaphores\n> SEMMNU Number of Semaphore Undo Structures\n> SEMUME Semaphore Undo Entries per Process\n> SEMVMX Semaphore Maximum Value\n> SHMEM Enable Sys V Shared Memory\n> SHMMAX Max Shared Mem Segment (bytes)\n> SHMMNI Number of Shared Memory Identifiers\n> SHMSEG Shared Memory Segments per Process\n> \n> Other than shooting yourself in the foot by having SEMA or SHMEM be\n> 0 (OFF), it looks like the parameters that could need raising on this\n> platform would be SEMMAP, SEMMNI, SEMMNS, SHMMAX.\n> \n> > Is there any noteworthy relevance of some of the other parameters? I see\n> > FAQ_BSDI talks about SEMUME and SEMMNU.\n> \n> AFAIK we don't use semaphore undo, so those are red herrings.\n\nFirst off, this might be something we need a whole seperate FAQ for, since\nI think the concepts are pretty much common across the various OSs?\n\nfor instance, under FreeBSD, I have it set right now as:\n\n====\noptions SYSVSHM\noptions SHMMAXPGS=4096\noptions SHMSEG=256\n\noptions SYSVSEM\noptions SEMMNI=256\noptions SEMMNS=512\noptions SEMMNU=256\noptions SEMMAP=256\n\noptions SYSVMSG #SYSV-style message queues\n====\n\nTo run three postmasters, one with '-B 256 -N 128', and the other two just\nwith '-N 16' ... the thing that I just don't get is how the settings ni my\nkernel apply, and trying to find any info on taht is like pulling teeth :(\nFor instance, I'm allowing for up to 160 clients to connect, max .. does\nthat make for one semaphore identifier per client, so I need SEMMNI >=\n160? Or ... 
?\n\nI grab'd this off a Sun site dealing with Solaris, but it might also be of\naid:\n\n\n Name Default Max Brief Description\n ------ ------- -------------- -------------------------------------\n\n semmap 10 2147483647 Number of entries in semaphore map\n\n semmni 10 65535 Number of semaphore sets (identifiers)\n\n semmns 60 2147483647 Number of semaphores in the system\n 65535 (usage)\n\n semmnu 30 2147483647 Number of \"undo\" structures in the system\n\n semmsl 25 2147483647 Max number of semaphores, per semaphore id\n 65535 (usage)\n\n semopm 10 2147483647 Max number of operations, per semaphore call\n\n semume 10 2147483647 Max number of \"undo\" entries, per process\n\n semusz 96 *see below* Size in bytes of \"undo\" structure\n\n semvmx 32767 2147483647 Semaphore maximum value\n 65535 (usage)\n\n semaem 16384 2147483647 Adjust on exit maximum value\n 32767 (usage)\n\n Detailed Descriptions\n ---------------------\n\n semmap\n\n Defines the size of the semaphore resource map; each block of available,\n contiguous semaphores requires one entry in this map. This is the pool from\n which semget(2) acquires semaphore sets.\n\n When a semaphore set is removed (deleted), if the block of semaphores to be\n freed is adjacent to a block of semaphores already in the resource map, the\n semaphores in the set being removed are added to the existing map entry;\n no new map entry is required. If the semaphores in the removed set are not\n adjacent to those in an existing map entry, then a new map entry is required\n to track these semaphores; if there are no more map entries available, the\n system has to discard an entry, 'permanently' losing a block of semaphores\n (permanence is relative; a reboot fixes the problem). If this should occur,\n a WARNING will be generated, the text of which will be something like\n \"rmallocmap: rmap overflow, lost ...\". The end result is that a user could\n later get ENOSPC errors from semget(2) even though it doesn't look like all\n the semaphores are allocated.\n\n semmni\n\n Defines the number of semaphore sets (identifiers), system wide. Every\n semaphore set in the system has a unique indentifier and control structure.\n The system pre-allocates kernel memory for semmni control structures; each\n control structure is 84 bytes. If no more identifiers are available,\n semget(2) returns ENOSPC.\n\n Attempting to set semmni to a value greater than 65535 will result in\n generation of a WARNING, and the value will be set to 65535.\n\n semmns\n\n Defines the number of semaphores in the system; 16 bytes of kernel memory is\n pre-allocated for each semaphores. If there is not a large enough block of\n contiguous semaphores in the resource map (see semmap) to satisfy the request,\n semget(2) returns ENOSPC.\n\n Fragmentation of the semaphore map will result in ENOSPC errors, even though\n there may appear to be ample free semaphores. Despite attempts by the system\n to merge free sets (see semmap), the size of the clusters of free semaphores\n generally decreases over time. For this reason, semmns frequently must be set\n higher than the actual number of semaphores required.\n\n semmnu\n\n Defines the number of semaphore undo structures in the system. semusz (see\n below) bytes of kernel memory are pre-allocated for each undo structure; one\n undo structure is required for every process for which undo information must\n be recorded. 
semop() will return ENOSPC if it is requested to record undo\n information and there are no undo structures available.\n\n semmsl\n\n Limits the number of semaphores that can be created for a single semaphore id.\n If semget(2) returns EINVAL, this limit should be increased. This parameter\n is only used to validate the argument passed to semget(2). Logically, it\n should be less than or equal to semmns (see above). Setting semmsl too high\n might allow a few identifiers to hog all the semaphores in the system.\n\n semopm\n\n Limits the number of operations that are allowed in a single semop(2) call.\n If semop(2) returns E2BIG, this limit should be increased. This parameter is\n only used to validate the argument passed to semop(2).\n\n semume\n\n Limits the number of undo records that can exist for a process. If semop(2)\n returns EINVAL, this limit should be increased. In addition to its use in\n validating arguments to semop(2), this parameter is used to calculate the\n value of semusz (see below).\n\n semusz\n\n Defines the size of the semaphore undo structure. Any attempt to modify this\n parameter directly will be ignored; semusz is always calculated based upon\n the value of semume (see above); semusz = 8 * (semume + 2).\n\n semvmx\n\n Limits the maximum value of a semaphore. Due to the interaction with undo\n structures and semaem (see below), this tuneable should not be increased\n beyond its default value of 32767, unless you can guarantee that SEM_UNDO is\n never and will never be used. It can be safely reduced, but doing so provides\n no savings.\n\n semaem\n\n Limits the maximum value of an adjust-on-exit undo element. No system\n resources are allocated based on this value.\n\n",
"msg_date": "Thu, 20 Jul 2000 12:50:59 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About these IPC parameters "
},
{
"msg_contents": "Tom Lane writes:\n\n> Other than shooting yourself in the foot by having SEMA or SHMEM be\n> 0 (OFF), it looks like the parameters that could need raising on this\n> platform would be SEMMAP, SEMMNI, SEMMNS, SHMMAX.\n\nCan you give me a couple of lines on how to change them (e.g., edit some\nfile and reboot) and perhaps a comment whether some of these tend to be\ntoo low in the default configuration?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 21 Jul 2000 20:29:53 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: About these IPC parameters "
},
{
"msg_contents": "The Hermit Hacker writes:\n\n> First off, this might be something we need a whole seperate FAQ for, since\n> I think the concepts are pretty much common across the various OSs?\n\nWorking on that...\n\n> for instance, under FreeBSD, I have it set right now as:\n\nIs SysV IPC still off in stock FreeBSD kernels?\n\n> For instance, I'm allowing for up to 160 clients to connect, max .. does\n> that make for one semaphore identifier per client, so I need SEMMNI >=\n> 160? Or ... ?\n\nSEMMNI = 10\n\n> I grab'd this off a Sun site dealing with Solaris, but it might also be of\n> aid:\n\nYes, that helped me a lot. I wrote a section about all this for the Admin\nGuide. It should pop up in the next day or so.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 21 Jul 2000 20:30:07 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: About these IPC parameters "
},
{
"msg_contents": "On Fri, 21 Jul 2000, Peter Eisentraut wrote:\n\n> The Hermit Hacker writes:\n> \n> > First off, this might be something we need a whole seperate FAQ for, since\n> > I think the concepts are pretty much common across the various OSs?\n> \n> Working on that...\n> \n> > for instance, under FreeBSD, I have it set right now as:\n> \n> Is SysV IPC still off in stock FreeBSD kernels?\n\nChecking the GENERIC config file, it is enabled by default now ...\n\n> > For instance, I'm allowing for up to 160 clients to connect, max .. does\n> > that make for one semaphore identifier per client, so I need SEMMNI >=\n> > 160? Or ... ?\n> \n> SEMMNI = 10\n\nOuch ... so I'm running a bit high on values :)\n\n\n",
"msg_date": "Fri, 21 Jul 2000 15:34:15 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About these IPC parameters "
},
{
"msg_contents": "At 3:34 PM -0300 7/21/00, The Hermit Hacker wrote:\n>On Fri, 21 Jul 2000, Peter Eisentraut wrote:\n> > Is SysV IPC still off in stock FreeBSD kernels?\n>\n>Checking the GENERIC config file, it is enabled by default now ...\n\nIt's on by default in NetBSD also.\n\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n",
"msg_date": "Fri, 21 Jul 2000 11:47:43 -0700",
"msg_from": "\"Henry B. Hotz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About these IPC parameters"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> Other than shooting yourself in the foot by having SEMA or SHMEM be\n>> 0 (OFF), it looks like the parameters that could need raising on this\n>> platform would be SEMMAP, SEMMNI, SEMMNS, SHMMAX.\n\n> Can you give me a couple of lines on how to change them (e.g., edit some\n> file and reboot) and perhaps a comment whether some of these tend to be\n> too low in the default configuration?\n\nOn HPUX the usual advice is \"use SAM\" (System Administration Manager).\nIt's a pretty decent point-and-drool tool. You go into Kernel\nConfiguration / Configurable Parameters and double-click on the items\nyou don't like in the resulting list. When you're done, hit Create\nA New Kernel. SAM used to have some memorable deficiencies (I still\nrecall that when I first used it, if you let it create a user's home\ndirectory it would leave /users world-writable...) but it seems reliable\nenough in HPUX 10.\n\nIf I've found the right file to look at, the factory defaults are\n\nsemmni 64 Number of Semaphore Identifiers \nsemmns 128 Max Number of Semaphores \nshmmax 0x4000000 Max Shared Mem Segment (bytes) \nshmmni 200 Number of Shared Memory Identifiers \nshmseg 120 Shared Memory Segments per Process \n\nso you'd need to raise these to run a big installation (more than,\nsay, 100 backends) but not for a default-sized setup.\n\nWhat I tend to want to raise are not the IPC parameters but\n\nmaxdsiz 0x04000000 Max Data Segment Size (bytes) \nmaxssiz 0x00800000 Max Stack Segment Size (bytes) \nmaxfiles 60 Soft File Limit per Process \nmaxfiles_lim 1024 Hard File Limit per Process \nmaxuprc 75 Max Number of User Processes (per user)\nmaxusers 32 Value of MAXUSERS macro\nnfile (16*(NPROC+16+MAXUSERS)/10+32+2*(NPTY+NSTRPTY)) Max Number of Open Files\nninode ((NPROC+16+MAXUSERS)+32+(2*NPTY)+(10*NUM_CLIENTS)) Max Number of Open Inodes\n\nIn particular, the default maxuprc would definitely be a problem for\nrunning a lot of backends, and you'd likely start running into nfile\nor ninode limits too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jul 2000 14:59:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About these IPC parameters "
},
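For the shared-memory side discussed above, a similarly small standalone probe (again only an illustration, not part of the tree) shows where SHMMAX bites: the shmget() call typically fails with EINVAL as soon as the requested segment is larger than the kernel allows.

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int
main(int argc, char **argv)
{
    size_t size = (argc > 1) ? (size_t) atol(argv[1]) : 64UL * 1024 * 1024;
    int    shmid = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);

    if (shmid < 0)
    {
        /* typically EINVAL once size exceeds SHMMAX; ENOSPC for SHMMNI/SHMALL */
        fprintf(stderr, "shmget(%lu bytes) failed: %s\n",
                (unsigned long) size, strerror(errno));
        return 1;
    }
    printf("allocated segment %d of %lu bytes\n", shmid, (unsigned long) size);
    shmctl(shmid, IPC_RMID, NULL);      /* release the segment again */
    return 0;
}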
{
"msg_contents": "> So, SEMMNI and SEMMNS seem to be the most promising settings to change.\n> \n> Is there any noteworthy relevance of some of the other parameters? I see\n> FAQ_BSDI talks about SEMUME and SEMMNU.\n\nI wrote FAQ_BSDI because it was not trivial to figure out how to modify\nthose parameters. I figured other OS's either don't need to do it, or\nhave an easier way of doing it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 27 Jul 2000 09:06:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About these IPC parameters"
},
{
"msg_contents": "\nPeter ...\n\n\tHere is the 'latest and greatest' NOTES that one of the FreeBSD\nguys has been working on for shared memory/semaphores ... not sure if it\nhelps or not, but I believe it was you that was working on \"organizing\nthis\"?\n\n=============================\n#####################################################################\n# SYSV IPC KERNEL PARAMETERS\n#\n# Maximum number of entries in a semaphore map.\noptions SEMMAP=31\n\n# Maximum number of System V semaphores that can be used on the system at\n# one time.\noptions SEMMNI=11\n\n# Total number of semaphores system wide\noptions SEMMNS=61\n\n# Total number of undo structures in system\noptions SEMMNU=31\n\n# Maximum number of System V semaphores that can be used by a single\nprocess\n# at one time.\noptions SEMMSL=61\n\n# Maximum number of operations that can be outstanding on a single System\nV\n# semaphore at one time.\noptions SEMOPM=101\n\n# Maximum number of undo operations that can be outstanding on a single\n# System V semaphore at one time.\noptions SEMUME=11\n\n# Maximum number of shared memory pages system wide.\noptions SHMALL=1025\n\n# Maximum size, in bytes, of a single System V shared memory region.\noptions SHMMAX=\"(SHMMAXPGS*PAGE_SIZE+1)\"\noptions SHMMAXPGS=1025\n\n# Minimum size, in bytes, of a single System V shared memory region.\noptions SHMMIN=2\n\n# Maximum number of shared memory regions that can be used on the system\n# at one time.\noptions SHMMNI=33\n\n# Maximum number of System V shared memory regions that can be attached to\n# a single process at one time.\noptions SHMSEG=9\n========================================\n\nOn Thu, 27 Jul 2000, Bruce Momjian wrote:\n\n> > So, SEMMNI and SEMMNS seem to be the most promising settings to change.\n> > \n> > Is there any noteworthy relevance of some of the other parameters? I see\n> > FAQ_BSDI talks about SEMUME and SEMMNU.\n> \n> I wrote FAQ_BSDI because it was not trivial to figure out how to modify\n> those parameters. I figured other OS's either don't need to do it, or\n> have an easier way of doing it.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 27 Jul 2000 12:27:00 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About these IPC parameters"
},
{
"msg_contents": "The IPC killer is that different OS's have different methods for\nchanging kernel parameters, and some have different kernel parameter\nnames.\n\n> Peter Eisentraut <[email protected]> writes:\n> > Tom Lane writes:\n> >> Other than shooting yourself in the foot by having SEMA or SHMEM be\n> >> 0 (OFF), it looks like the parameters that could need raising on this\n> >> platform would be SEMMAP, SEMMNI, SEMMNS, SHMMAX.\n> \n> > Can you give me a couple of lines on how to change them (e.g., edit some\n> > file and reboot) and perhaps a comment whether some of these tend to be\n> > too low in the default configuration?\n> \n> On HPUX the usual advice is \"use SAM\" (System Administration Manager).\n> It's a pretty decent point-and-drool tool. You go into Kernel\n> Configuration / Configurable Parameters and double-click on the items\n> you don't like in the resulting list. When you're done, hit Create\n> A New Kernel. SAM used to have some memorable deficiencies (I still\n> recall that when I first used it, if you let it create a user's home\n> directory it would leave /users world-writable...) but it seems reliable\n> enough in HPUX 10.\n> \n> If I've found the right file to look at, the factory defaults are\n> \n> semmni 64 Number of Semaphore Identifiers \n> semmns 128 Max Number of Semaphores \n> shmmax 0x4000000 Max Shared Mem Segment (bytes) \n> shmmni 200 Number of Shared Memory Identifiers \n> shmseg 120 Shared Memory Segments per Process \n> \n> so you'd need to raise these to run a big installation (more than,\n> say, 100 backends) but not for a default-sized setup.\n> \n> What I tend to want to raise are not the IPC parameters but\n> \n> maxdsiz 0x04000000 Max Data Segment Size (bytes) \n> maxssiz 0x00800000 Max Stack Segment Size (bytes) \n> maxfiles 60 Soft File Limit per Process \n> maxfiles_lim 1024 Hard File Limit per Process \n> maxuprc 75 Max Number of User Processes (per user)\n> maxusers 32 Value of MAXUSERS macro\n> nfile (16*(NPROC+16+MAXUSERS)/10+32+2*(NPTY+NSTRPTY)) Max Number of Open Files\n> ninode ((NPROC+16+MAXUSERS)+32+(2*NPTY)+(10*NUM_CLIENTS)) Max Number of Open Inodes\n> \n> In particular, the default maxuprc would definitely be a problem for\n> running a lot of backends, and you'd likely start running into nfile\n> or ninode limits too.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 27 Jul 2000 14:34:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About these IPC parameters"
}
] |
[
{
"msg_contents": "> > > Do not add TID to key but store key anywhere in duplicate \n> > > chain and just read lefter child page while positioning index scan,\n> > > as we do right now for partial keys?\n> > \n> > > This will result in additional reads but I like it much more than\n> > > current \"logic\"...\n> >\n> \n> What about unique key insertions ?\n\nWe'll have to find leftmost key in this case and do what we do now.\n\nVadim\n \n",
"msg_date": "Wed, 19 Jul 2000 17:17:11 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: btree split logic is fragile in the presence of lar\n\tge index items"
},
{
"msg_contents": "> -----Original Message-----\n> From: Mikheev, Vadim\n> \n> > > > Do not add TID to key but store key anywhere in duplicate \n> > > > chain and just read lefter child page while positioning index scan,\n> > > > as we do right now for partial keys?\n> > > \n> > > > This will result in additional reads but I like it much more than\n> > > > current \"logic\"...\n> > >\n> > \n> > What about unique key insertions ?\n> \n> We'll have to find leftmost key in this case and do what we do now.\n>\n\nCurrently the page contains the leftmost key is the target page of\ninsertion and is locked exclusively but it may be different in extra\nTID implementation. There may be a very rare deadlock possibility.\n \nHiroshi Inoue\[email protected] \n",
"msg_date": "Thu, 20 Jul 2000 09:41:55 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: btree split logic is fragile in the presence of lar ge index\n\titems"
}
] |
[
{
"msg_contents": "> > > What about unique key insertions ?\n> > \n> > We'll have to find leftmost key in this case and do what we do now.\n> >\n> \n> Currently the page contains the leftmost key is the target page of\n> insertion and is locked exclusively but it may be different in extra\n> TID implementation. There may be a very rare deadlock possibility.\n\nFirst, Tom is not going to do TID implementation now...\nBut anyway while we hold lock on a page we are able to go right\nand lock pages (and we do this now). There is no possibility for\ndeadlock here: backward scans always unlock page before reading/locking\nleft page.\n\nVadim\n",
"msg_date": "Wed, 19 Jul 2000 18:39:09 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: btree split logic is fragile in the presence of lar\n\tge index items"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>> Currently the page contains the leftmost key is the target page of\n>> insertion and is locked exclusively but it may be different in extra\n>> TID implementation. There may be a very rare deadlock possibility.\n\n> But anyway while we hold lock on a page we are able to go right\n> and lock pages (and we do this now). There is no possibility for\n> deadlock here: backward scans always unlock page before reading/locking\n> left page.\n\nReaders can only hold one page read lock at a time (they unlock before\ntrying to lock next page). Writers can hold more than one page lock\nat a time, but they always step in the same direction (right or up)\nso no deadlock is possible. It looks fine to me.\n\nI have been studying the btree code all day today and finally understand\nthat the equal-key performance problems don't really have anything to do\nwith the Lehman-Yao assumption of no equal keys. The L-Y algorithm\nreally only needs unique keys so that a writer who's split a page can\nreturn to the parent page and find the right place to insert the link\nfor the new righthand page. As Vadim says, you can use the downlinks\nthemselves to determine which entry is for the page you split; at worst\nyou have to do some linear instead of binary search when there are lots\nof equal keys. That should be OK, since we only have to do this when we\nsplit a page.\n\nI believe that the equal-key performance problem largely comes from the\nbogus way we've been handling duplicates, in particular the fact that\nfindsplitloc has to worry about choosing a \"legal split point\" for a\nseries of duplicates. Once we get rid of that, I think findsplitloc\ncan use a binary search.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jul 2000 01:08:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: btree split logic is fragile in the presence of lar ge index\n\titems"
}
] |
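A toy rendition of the "use the downlinks themselves" idea from the last message (hypothetical structures and names, not the real nbtree code): after a split, the parent entry is located by matching the child block number within the run of equal keys, and a miss simply means "step right to the next parent page".

#include <stdio.h>

typedef unsigned int BlockNumber;

typedef struct
{
    int         key;        /* separator key copied up at split time */
    BlockNumber child;      /* downlink to the child page */
} ParentItem;

/* Return the offset of the downlink for `child', or -1 => move right. */
static int
find_downlink(const ParentItem *items, int nitems, int splitkey,
              BlockNumber child)
{
    int     off;

    for (off = 0; off < nitems; off++)
    {
        if (items[off].key != splitkey)
            continue;       /* real code could binary-search to the first match */
        if (items[off].child == child)
            return off;     /* this is the entry for the page that was split */
    }
    return -1;              /* not on this page: caller steps to the right sibling */
}

int
main(void)
{
    ParentItem  parent[] = {{5, 10}, {7, 11}, {7, 12}, {7, 13}, {9, 14}};

    printf("downlink for child 12 sits at offset %d\n",
           find_downlink(parent, 5, 7, 12));
    return 0;
}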
[
{
"msg_contents": "Any comments on my ideas on the macaddr_manuf stuff?\n\nLarry\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Wed, 19 Jul 2000 21:14:06 -0500 (CDT)",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "mac.c"
}
] |
[
{
"msg_contents": "Hi,\n\n Sorry about the delay in getting the bit-code to you -- it has been a\nbit busy. Attached is a tar file with the c-code for the bit functions\nand an implementation of a user-defined type on these. I hope Thomas is\nstill prepared to integrate this as a proper type into postgres.\n\nThe main difference is that I do not allow binary operations on bit\nstrings of unequal length anymore (doubt it would have been very relevant\nanyway, and this solves a lot of conceptual problems), and I discovered\nthat the standard requires bit_length and octet_length functions on bit\nstrings, so I've added those. It runs on linux and alphas.\n\nIf anybody has any comments on the code or suggestions for improvements,\nplease let me know.\n\nRegards,\n\nAdriaan",
"msg_date": "Thu, 20 Jul 2000 14:11:21 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: varbit type"
}
] |
[
{
"msg_contents": "Hi,\n\n I'm having serious problems trying to load large amounts of data\ninto the database. I have this data in binary, database compatible,\nform, but there seems no way left to upload this data except to turn it\nall back into strings (since 7.0.2 binary copy has been disabled, making\nquite a few libpq functions superfluous). This is quite a serious\ndeficiency in my view.\n\nSo, I want to know what my options are. For this type of thing I would\nhave thought that Karel's stored queries would be useful, combined with\nsome way of uploading binary data to the database. Something along the\nlines of\n\nprepare insert into my_table (col1, col2) values (?,?);\n\nexecute <handle to the query> 3, 4;\n\nTo upload binary data there would have to be a libpq call that uploads\nthe data in the execute statement as binary (there is a specification\nfor this already in the current libpq), and executes the prepared plan.\nFor any application that generates a lot of data (especially floating\npoint data), this would be a huge win. An added advantage would be that\nthis type of schema would allow a serial value on the table to be\nincremented as in any normal insert, which has always been annoying when\nusing copy.\n\nI have no idea how hard this is and whether I'm the only person in the\nworld that will find this useful. I seem to be the only one who moaned\nabout the binary copy vanishing, and breaking code, so perhaps nobody\nelse sees this as a problem?\n\nAdriaan\n\n\n\n",
"msg_date": "Thu, 20 Jul 2000 14:36:34 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Loading binary data into the database"
},
{
"msg_contents": "Adriaan Joubert <[email protected]> writes:\n> since 7.0.2 binary copy has been disabled, making\n> quite a few libpq functions superfluous\n\nSuch as? IIRC, the reason we disabled it was precisely that there was\nno support on the client side. (What's worse, there's no support in\nthe FE/BE protocol either. I don't see how you could have made this\nwork...)\n\nCross-machine binary copy is a dangerous thing anyway, since it opens\nyou up to all sorts of compatibility problems. If your app is running\non the same machine as the server, you can write data to a file and\nthen send a command to do a binary copy from that file.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jul 2000 11:08:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Loading binary data into the database "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Adriaan Joubert <[email protected]> writes:\n> > since 7.0.2 binary copy has been disabled, making\n> > quite a few libpq functions superfluous\n>\n> Such as? IIRC, the reason we disabled it was precisely that there was\n> no support on the client side. (What's worse, there's no support in\n> the FE/BE protocol either. I don't see how you could have made this\n> work...)\n\nI issued a 'copy binary <table> from stdin;' and then sent the data with\nPQputnbytes (this is now obsolete, isn't it?) and as this was from a\nCORBA server running on the same machine as the database this worked fine\nand was very fast (not being able to update a serial was a pain, and I\nended up doing it by hand in the server). As some of the data I have to\nwrite goes into bytea fields, i now have to convert all non-printable\ncharacters to octet codes, which is a total pain in the neck.\n\n> Cross-machine binary copy is a dangerous thing anyway, since it opens\n> you up to all sorts of compatibility problems. If your app is running\n> on the same machine as the server, you can write data to a file and\n> then send a command to do a binary copy from that file.\n\nYes sure, if you write from a machine with a different architecture it is\ngoing to cause trouble. Reading and writing binary files on the host\nmachine seems kind-of a slow solution to the problem and leads to yet\nanother load of permission problems (Ok, they can be resolved, but it is\nyet another place where things can go wrong).\n\nPerhaps libpq is not the answer. I've even been thinking about writing a\nSPI function that acts as a CORBA server -- but decided that that is just\ntoo ugly to contemplate. So what is the solution?\n\nAdriaan\n\n",
"msg_date": "Thu, 20 Jul 2000 18:24:57 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Loading binary data into the database"
},
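The escaping chore mentioned above looks roughly like this in plain C (a sketch, not a libpq function; it produces one level of bytea escaping, and the caller still has to double the backslashes once more when embedding the result in an SQL string literal or a text-mode COPY stream):

#include <stdio.h>
#include <stdlib.h>

char *
escape_bytea(const unsigned char *data, size_t len)
{
    char   *out = malloc(4 * len + 1);  /* worst case: \nnn for every byte */
    char   *p = out;
    size_t  i;

    if (out == NULL)
        return NULL;

    for (i = 0; i < len; i++)
    {
        unsigned char c = data[i];

        if (c == '\\')
        {
            *p++ = '\\';
            *p++ = '\\';
        }
        else if (c < 0x20 || c > 0x7e || c == '\'')
            p += sprintf(p, "\\%03o", c);   /* octal escape for non-printables */
        else
            *p++ = c;
    }
    *p = '\0';
    return out;
}

int
main(void)
{
    unsigned char raw[] = {'a', 0, 10, '\\', 'z'};
    char   *escaped = escape_bytea(raw, sizeof(raw));

    printf("%s\n", escaped);    /* prints: a\000\012\\z */
    free(escaped);
    return 0;
}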
{
"msg_contents": "Adriaan Joubert <[email protected]> writes:\n>> Such as? IIRC, the reason we disabled it was precisely that there was\n>> no support on the client side. (What's worse, there's no support in\n>> the FE/BE protocol either. I don't see how you could have made this\n>> work...)\n\n> I issued a 'copy binary <table> from stdin;' and then sent the data with\n> PQputnbytes\n\nHow did you get out of COPY state? In binary mode CopyFrom will only\nrecognize EOF as end of data, and there's no provision in the FE/BE\nprotocol for making it see an EOF. You'd have had to break the\nconnection to get out of that --- and I'd have expected the loss of\nconnection to cause a transaction abort, preventing your data from\ngetting committed. (If it didn't abort, that's a bug that needs to be\nfixed... if the line drops halfway through a copy, you don't want it\nto commit do you?)\n\nThe real bottom line here is that the FE/BE protocol would need to be\nchanged to support binary copy properly, and no one's excited about\nputting more work into the existing protocol, nor about the ensuing\ncompatibility problems.\n\n\n> Perhaps libpq is not the answer. I've even been thinking about writing a\n> SPI function that acts as a CORBA server -- but decided that that is just\n> too ugly to contemplate. So what is the solution?\n\nA CORBA-based replacement protocol has been discussed seriously, though\nI haven't noticed any work getting done on it lately. Feel free to\npitch in if you think it's a good idea.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jul 2000 11:43:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Loading binary data into the database "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Adriaan Joubert <[email protected]> writes:\n> >> Such as? IIRC, the reason we disabled it was precisely that there was\n> >> no support on the client side. (What's worse, there's no support in\n> >> the FE/BE protocol either. I don't see how you could have made this\n> >> work...)\n>\n> > I issued a 'copy binary <table> from stdin;' and then sent the data with\n> > PQputnbytes\n>\n> How did you get out of COPY state? In binary mode CopyFrom will only\n> recognize EOF as end of data, and there's no provision in the FE/BE\n> protocol for making it see an EOF. You'd have had to break the\n> connection to get out of that --- and I'd have expected the loss of\n> connection to cause a transaction abort, preventing your data from\n> getting committed. (If it didn't abort, that's a bug that needs to be\n> fixed... if the line drops halfway through a copy, you don't want it\n> to commit do you?)\n\nDon't know. I first sent the length of the binary buffer, then the buffer (I\njust stored the whole thing in an STL vector) and PQendcopy to terminate it.\nBut, as you said, libpq is probably not the right way to go about it. Also,\nthe docs for the binary structure were not quite correct, but it took only a\nlittle bit of fiddling to get the structure right. I could not find the\ndescription of the binary structure back in the current docs on\npostgresql.org, so I guess this really has been ripped out.\n\n>\n> A CORBA-based replacement protocol has been discussed seriously, though\n> I haven't noticed any work getting done on it lately. Feel free to\n> pitch in if you think it's a good idea.\n\nYes, I've been looking through the mailing list. Problem is to settle on a\nCORBA system that runs everywhere. And it is much more natural to program\nCORBA in C++, but if I see the problems people have had just compiling the C++\ninterface to postgres, this looks like a no-go. I'll look around at the\nvarious bits and pieces floating around the net.\n\nIf anybody is working on a CORBA interface to postgres, please let me know!\n\nAdriaan\n\n",
"msg_date": "Fri, 21 Jul 2000 08:17:00 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Loading binary data into the database"
},
{
"msg_contents": "> Adriaan Joubert <[email protected]> writes:\n> > since 7.0.2 binary copy has been disabled, making\n> > quite a few libpq functions superfluous\n> \n> Such as? IIRC, the reason we disabled it was precisely that there was\n> no support on the client side. (What's worse, there's no support in\n> the FE/BE protocol either. I don't see how you could have made this\n> work...)\n> \n> Cross-machine binary copy is a dangerous thing anyway, since it opens\n> you up to all sorts of compatibility problems. If your app is running\n> on the same machine as the server, you can write data to a file and\n> then send a command to do a binary copy from that file.\n\nWe disabled binary copy? Using \\copy or COPY?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 27 Jul 2000 10:17:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Loading binary data into the database"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> We disabled binary copy? Using \\copy or COPY?\n\nCOPY BINARY to stdout or from stdin is disallowed now. It never\nreally worked anyway. AFAIK, psql's \\copy has never had a binary\noption.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 Jul 2000 11:26:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Loading binary data into the database "
}
] |
[
{
"msg_contents": "In the atache is patch with this:\n\n SET LOCALE TO <value>\n\t\tSet locale to <value>, the <value> must be correct for\n\t\tcurrent OS locale setting. \n\n SHOW LOCALE\n\t\tShow current locale setting for all categories.\n\n RESET LOCALE\n\t\tSet locale back to start-up setting.\n\n Now, possible is change locale environment from client without backend \nrestart and under one postmaster can run more backends with different \nlocale setting.\n\n All routines (formatting.c, cash.c, main.c) are correct for this change. \n\n BTW. --- how plan is 'money' datetype in 7.1, remove?\n \n \t\t\t\t\tKarel\n\n An example:\n\ntest=# SHOW LOCALE;\nNOTICE: Locale setting: LANG=C, CTYPE=C, NUMERIC=C, TIME=C, COLLATE=C,\nMONETARY=C, MESSAGES=C\nSHOW VARIABLE\ntest=# SELECT to_char(1023.5, 'L 9999D9');\n to_char\n-----------\n 1023.5\n(1 row)\n\ntest=# SET LOCALE TO 'de_DE';\nSET VARIABLE\ntest=# SELECT to_char(1023.5, 'L 9999D9');\n to_char\n------------\n DM 1023,5\n(1 row)\n\ntest=# SET LOCALE TO 'en_US';\nSET VARIABLE\ntest=# SELECT to_char(1023.5, 'L 9999D9');\n to_char\n-----------\n $ 1023.5\n(1 row)\n\ntest=# RESET LOCALE;\nRESET VARIABLE\ntest=# SELECT to_char(1023.5, 'L 9999D9');\n to_char\n-----------\n 1023.5\n(1 row)",
"msg_date": "Thu, 20 Jul 2000 14:18:59 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": true,
"msg_subject": "locale changes"
},
{
"msg_contents": "Karel Zak <[email protected]> writes:\n> Now, possible is change locale environment from client without backend \n> restart and under one postmaster can run more backends with different \n> locale setting.\n\nNo, no, NOOOOO!!!!\n\nThis *will* destroy your database.\n\nThink about indexes on text columns. Change LOCALE, now the sort order\nof the data is different. Even if the btree code doesn't crash and burn\ncompletely, it will fail to find stuff it should have found and/or\ninsert new items at positions that will be wrong after the next LOCALE\nchange.\n\nNot only is on-the-fly LOCALE change not acceptable, but we really\nought to be recording the LOCALE settings at initdb time and forcing the\npostmaster to adopt them when it starts up. Right now you can shoot\nyourself in the foot if you don't start the postmaster with the same\nLOCALE every time.\n\nSomeday we may support per-column LOCALE settings, but it's not likely\never to be safe to change LOCALE on the fly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jul 2000 11:16:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locale changes "
},
{
"msg_contents": "\nOn Thu, 20 Jul 2000, Tom Lane wrote:\n\n> Karel Zak <[email protected]> writes:\n> > Now, possible is change locale environment from client without backend \n> > restart and under one postmaster can run more backends with different \n> > locale setting.\n> \n> No, no, NOOOOO!!!!\n> \n> This *will* destroy your database.\n> \n> Think about indexes on text columns. Change LOCALE, now the sort order\n\n But, it is solvable, I save original start-up setting to serarate struct\nthat is never changed. All routines that can allow on-the-fly locale change\ncan use change-able structs and index (..etc.) can use original setting.\n\n Now, locale (on-the-fly change-able locale) use formatting.c and money (may\nbe), all others routines can use start-up setting.\n\n For example:\n\n\tchange_to_on_the_fly_locale_setting()\n\tto_char();\n\tset_original_locale();\n\n... and SET LOCALE can be used for specific routines only.\n\n Comments?\n\n But, yes this solution is not in my patch. \n\n\t\t\t\t\tKarel\t\t\n\n\n\n\n",
"msg_date": "Thu, 20 Jul 2000 17:31:01 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locale changes "
},
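The save/format/restore idea sketched in that message corresponds to something like the following standalone C (assuming a "de_DE" locale is installed; the exact locale name varies by platform). The two objections raised in the replies apply directly: an error thrown between the two setlocale() calls leaves the wrong locale active, and switching locales around every call is expensive.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <locale.h>

int
main(void)
{
    /* setlocale() returns a static buffer, so keep a private copy */
    char   *saved = strdup(setlocale(LC_NUMERIC, NULL));

    if (setlocale(LC_NUMERIC, "de_DE") != NULL)
    {
        printf("de_DE setting: %.1f\n", 1023.5);    /* decimal comma */
        setlocale(LC_NUMERIC, saved);               /* back to start-up locale */
    }
    else
        fprintf(stderr, "locale de_DE not installed, nothing changed\n");

    printf("restored setting: %.1f\n", 1023.5);
    free(saved);
    return 0;
}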
{
"msg_contents": "Karel Zak <[email protected]> writes:\n> \tchange_to_on_the_fly_locale_setting()\n> \tto_char();\n> \tset_original_locale();\n\nAnd if to_char throws an elog?\n\nAnother objection is that (if my experience with libc's\ntimezone-dependent code is any guide) changing the locale that much\nis likely to be *slow*. libc is designed on the assumption that you\nset these things once at application startup, so it precomputes a ton\nof stuff whenever you change 'em.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jul 2000 11:48:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locale changes "
},
{
"msg_contents": "\nOn Thu, 20 Jul 2000, Tom Lane wrote:\n\n> Karel Zak <[email protected]> writes:\n> > \tchange_to_on_the_fly_locale_setting()\n> > \tto_char();\n> > \tset_original_locale();\n> \n> And if to_char throws an elog?\n\n elog can check how locale struct is actual..\n\n> Another objection is that (if my experience with libc's\n> timezone-dependent code is any guide) changing the locale that much\n> is likely to be *slow*. libc is designed on the assumption that you\n\n It is better argument for me :-). I hope that nobody be angry at me about\nthis patch. I was probably really a little premature... grrr 2 programming\nhours are in /dev/null.. \n\n If I good remember, one year ago when I discussed (via private mails)\nabout formatting date/time with Thomas, we said something about locales\nout of libc. Any system table? Any planns?\n\n\t\t\t\t\tKarel\n\n",
"msg_date": "Thu, 20 Jul 2000 18:16:15 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locale changes "
},
{
"msg_contents": "Karel Zak writes:\n\n> ... and SET LOCALE can be used for specific routines only.\n\nI don't think that locale should be a session parameter ever. The locale\ninformation is naturally associated with the stored data (i.e., what\nlanguage is the text). If you're talking about currency display then it's\neven worse, unless you're proposing on the fly conversion.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 21 Jul 2000 02:46:10 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] locale changes "
}
] |
[
{
"msg_contents": "\nThere is a yet another updated version of pg_dump (with blob support) at:\n\n ftp://ftp.rhyme.com.au/pub/postgresql/pg_dump/blobs/\n\nThis version fixes issues with restores done via a direct DB connection on\ndumps that were done with the --insert flag.\n\nThere are two versions to download:\n\n pg_dump_...CVS.tar.gz\nand\n pg_dump_...702.tar.gz\n\nwhich can be used to build against CVS and version 7.0.2 respectively.\n\n\nAs usual, any comments & suggestions would be appreciated...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 20 Jul 2000 23:24:34 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump with BLOBs UPDATED"
}
] |
[
{
"msg_contents": "FYI,\n\n the huge memory allocation problem of the PGLZ compressor is\n fixed. It uses a fixed, static history table, organized as a\n double linked list, now.\n\n No performance loss on small- to medium-sized attributes. But\n a win on huge (multi-MB) attributes because of much lesser\n memory consumption.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Thu, 20 Jul 2000 16:23:01 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "PGLZ memory problems fixed"
}
] |
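The fixed, statically sized history table with doubly linked chains can be pictured with a toy like the one below (made-up names and sizes, not the pg_lzcompress source): entries are recycled round-robin, so memory use stays constant however large the input gets.

#include <stdio.h>
#include <string.h>

#define HIST_SIZE 4096      /* fixed number of history entries */
#define HASH_SIZE 1024      /* fixed number of hash buckets (power of two) */

typedef struct HistEntry
{
    struct HistEntry *next;     /* forward link in the hash chain */
    struct HistEntry *prev;     /* backward link, for cheap unlinking */
    const unsigned char *pos;   /* input position this entry remembers */
    int         hindex;         /* bucket the entry currently lives in */
} HistEntry;

static HistEntry hist[HIST_SIZE];   /* static: no allocation at all */
static HistEntry *bucket[HASH_SIZE];
static int  next_free = 0;

static int
hash2(const unsigned char *p)
{
    return ((p[0] << 4) ^ p[1]) & (HASH_SIZE - 1);
}

static void
hist_add(const unsigned char *p)
{
    HistEntry  *e = &hist[next_free];

    next_free = (next_free + 1) % HIST_SIZE;

    /* recycle: unlink the entry from whatever chain it was on before */
    if (e->pos != NULL)
    {
        if (e->prev)
            e->prev->next = e->next;
        else
            bucket[e->hindex] = e->next;
        if (e->next)
            e->next->prev = e->prev;
    }

    /* push it onto the head of its new bucket */
    e->pos = p;
    e->hindex = hash2(p);
    e->prev = NULL;
    e->next = bucket[e->hindex];
    if (e->next)
        e->next->prev = e;
    bucket[e->hindex] = e;
}

int
main(void)
{
    static const unsigned char data[] = "abababababab";
    size_t  i;

    for (i = 0; i + 1 < strlen((const char *) data); i++)
        hist_add(data + i);

    /* most recent occurrence of "ab" was added at offset 10 */
    printf("chain head for \"ab\": input offset %ld\n",
           (long) (bucket[hash2((const unsigned char *) "ab")]->pos - data));
    return 0;
}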
[
{
"msg_contents": "Hi,\n\n up to now, TOAST would be a v7.1 show-stopper, because due to\n the upper-block btree references problem, it's not VACUUM\n safe if there's a btree index on a toastable attribute (like\n text).\n\n The only clean way to get rid of these upper-block references\n is to recreate the indices from scratch, instead of vacuuming\n them in the crash-safe manner we do now. But doing so needs\n file versioning, and I don't expect it to be implemented in\n v7.1.\n\n So at the time beeing, I think index tuples should not\n contain any external toast references. I'll change the heap-\n am/toaster combo temporarily to do that. All that will be\n covered by #ifdef, so we can switch back easily at the time\n we have file versioning to unlimit indexed attribute sizes as\n well.\n\n Comments?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Thu, 20 Jul 2000 17:20:38 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "About TOAST and indices"
},
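The stopgap amounts to this rule, shown here with purely hypothetical types and helpers (not the 7.1 toaster code): before an index tuple is formed, any out-of-line TOAST reference is pulled back in-line, while values that are merely compressed in-line pass through untouched.

#include <stdio.h>

typedef struct
{
    int         is_external;    /* stored out-of-line in the TOAST relation? */
    int         is_compressed;  /* compressed but still stored in-line? */
    const char *payload;
} ToastableDatum;

/* hypothetical helper: fetch the full value back from the TOAST relation */
static ToastableDatum
fetch_external(ToastableDatum d)
{
    d.is_external = 0;
    return d;
}

static ToastableDatum
prepare_index_datum(ToastableDatum d)
{
    if (d.is_external)
        return fetch_external(d);   /* no external references in index tuples */
    return d;                       /* in-line (compressed or plain) is fine */
}

int
main(void)
{
    ToastableDatum  d = {1, 0, "some huge text value"};
    ToastableDatum  fixed = prepare_index_datum(d);

    printf("external after prepare: %s\n", fixed.is_external ? "yes" : "no");
    return 0;
}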
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> So at the time beeing, I think index tuples should not\n> contain any external toast references.\n\nSeems like a good stopgap solution. You'll still allow them to be\ncompressed in-line, though, right?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jul 2000 11:51:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About TOAST and indices "
},
{
"msg_contents": "Tom Lane wrote:\n> [email protected] (Jan Wieck) writes:\n> > So at the time beeing, I think index tuples should not\n> > contain any external toast references.\n>\n> Seems like a good stopgap solution. You'll still allow them to be\n> compressed in-line, though, right?\n\n Of course. There's no problem with in-line compressed items.\n It's only these external references that cause trouble after\n a vacuum.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Thu, 20 Jul 2000 18:41:16 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: About TOAST and indices"
}
] |
[
{
"msg_contents": "I let FreeBSD's pkg_version reinstall postgres, the bad news is it re-did the initdb,\nand now I can't access my database files in /usr/local/pgsql/data/base/ler. \n\nThe files are still there, but the system catalog doesn't point to them.\n\nIs there a way to recover this?\n\nThanks!\n\nLarry Rosenman\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Thu, 20 Jul 2000 10:40:59 -0500 (CDT)",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Stupid Me..."
}
] |
[
{
"msg_contents": "Forgive me for the tacky subject, but the analogies are not far-fetched.\n\nI was just looking whether the UNSAFE_FLOATS compile-time option could\nperhaps be a run-time option, but I'm getting the feeling that it\nshouldn't be an option at all.\n\n\"Safe\" floats check after each floating point function call whether the\nresult is \"in bounds\". This leads to interesting results such as\n\npeter=# select 'nan'::float8;\n ?column?\n----------\n NaN\n(1 row)\n \npeter=# select 'infinity'::float8;\nERROR: Bad float8 input format -- overflow\n\nWhat happened?\n\nThe \"safe floats\" mode checker will fail if `result > FLOAT8_MAX', which\nwill kick in for 'infinity' but is false for 'nan'. The carefully crafted\nsupport for infinities is dead code.\n\nAlso:\n\npeter=# select 1.0/0.0;\nERROR: float8div: divide by zero error\n\nDivision by zero is not an \"error\" in floating point arithmetic.\n\n\nI think the CheckFloat{4,8}Val() functions should be abandoned and the\nfloating point results should be allowed to overflow to +Infinity or\nunderflow to -Infinity. There also need to be isinf() and isnan()\nfunctions, because surely \"x = 'infinity'\" isn't going to work.\n\n\nThis is not a high-priority issue to me, nor do I feel particularly\nqualified on the details, but at least we might agree that something's\nwrong.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 20 Jul 2000 18:10:24 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "How PostgreSQL's floating-point hurts everyone everywhere"
},
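On an IEEE platform the check complained about above could classify the result instead of comparing it against FLOAT8_MAX; a standalone sketch (not a patch to the backend) using the C isinf()/isnan() tests:

#include <stdio.h>
#include <math.h>

typedef enum { RESULT_FINITE, RESULT_INFINITE, RESULT_NAN } FloatClass;

static FloatClass
classify_float8(double x)
{
    if (isnan(x))
        return RESULT_NAN;
    if (isinf(x))
        return RESULT_INFINITE;
    return RESULT_FINITE;
}

int
main(void)
{
    double      vals[] = {1023.5, INFINITY, NAN};
    const char *names[] = {"finite", "infinity", "nan"};
    int         i;

    for (i = 0; i < 3; i++)
        printf("%-8s -> class %d\n", names[i], (int) classify_float8(vals[i]));
    return 0;
}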
{
"msg_contents": "> What happened?\n\nThis may be platform-specific behavior. I see on my Linux/Mandrake box\nthe following:\n\nlockhart=# select 'nan'::float8;\nERROR: Bad float8 input format -- overflow\nlockhart=# select 'infinity'::float8;\nERROR: Bad float8 input format -- overflow\n\nTypically, machines will trap overflow/underflow/NaN problems in\nfloating point, but silently ignore these properties with integer\narithmetic. It would be nice to have consistant behavior across all\ntypes, but I'll stick to floats for discussion now.\n\nLots of machines (but not all) now support IEEE arithmetic. On those\nmachines, istm that we can and should support some of the IEEE\nconventions such as NaN and +Inf/-Inf. But on those machines which do\nnot have this concept, we can either try to detect this during data\nentry, or trap this explicitly during floating point exceptions, or\nwatch the backend go up in flames (which would be the default behavior).\n\nSame with divide-by-zero troubles.\n\n> I think the CheckFloat{4,8}Val() functions should be abandoned and the\n> floating point results should be allowed to overflow to +Infinity or\n> underflow to -Infinity. There also need to be isinf() and isnan()\n> functions, because surely \"x = 'infinity'\" isn't going to work.\n\nIt does work if \"infinity\" is first interpreted by our input routines.\n\nI recall running into some of these issues when coding some data\nhandling routines on my late, great Alpha boxes. In this case, I tried\nto use the isinf() (??) routine provided by DEC (and defined in IEEE?)\nto test for bad values coming from a real-time GPS tracking system. But\nuntil I futzed with the compiler options, calling isinf() failed on any\ninfinity value since the value was being checked during the call to the\nroutine, so the value was never getting to the test!\n\n> This is not a high-priority issue to me, nor do I feel particularly\n> qualified on the details, but at least we might agree that something's\n> wrong.\n\nI'd think that, on some platforms, we can try coding things a bit\ndifferently. But the code in there now does do some useful things for\nsome of the platforms we run on (though there are still holes in\npossible failure modes). imho if we change things it would be to turn\nsome of the checking into NOOP macros on some platforms, but preserve\nthese for others.\n\n - Thomas\n",
"msg_date": "Thu, 20 Jul 2000 16:47:03 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How PostgreSQL's floating-point hurts everyone everywhere"
},
{
"msg_contents": "At 4:47 PM +0000 7/20/00, Thomas Lockhart wrote:\n>Typically, machines will trap overflow/underflow/NaN problems in\n>floating point, but silently ignore these properties with integer\n>arithmetic. It would be nice to have consistant behavior across all\n>types, but I'll stick to floats for discussion now.\n\nThe IEEE standard says that that behavior must be configurable. The \nstandard behavior in Fortran is to ignore floating point exceptions \nas well. Unfortunately the name of the C routine which changes it is \nnot defined in the standard.\n\nThis is a bit off-topic but we have this problem with the DS1 \nspacecraft software. Everything is run with the exceptions enabled \nbecause we don't want to allow those values undetected in the \nattitude control calculations. OTOH we are vulnerable to reboots \n(and have had one) due to mistakes in other code.\n\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n",
"msg_date": "Thu, 20 Jul 2000 10:03:51 -0700",
"msg_from": "\"Henry B. Hotz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How PostgreSQL's floating-point hurts everyone\n everywhere"
},
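Since C99 the exception *flags* can at least be inspected portably through <fenv.h>, even though turning them into traps stays platform specific (feenableexcept(), for instance, is a GNU extension). A standalone check, not tied to the backend:

#include <stdio.h>
#include <fenv.h>

int
main(void)
{
    volatile double zero = 0.0;
    volatile double x;

    feclearexcept(FE_ALL_EXCEPT);
    x = 1.0 / zero;                     /* sets FE_DIVBYZERO, x becomes +inf */

    if (fetestexcept(FE_DIVBYZERO))
        printf("division by zero was flagged, result is %f\n", x);
    return 0;
}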
{
"msg_contents": "Peter Eisentraut wrote:\n>\n> peter=# select 1.0/0.0;\n> ERROR: float8div: divide by zero error\n>\n> Division by zero is not an \"error\" in floating point arithmetic.\n\n No?\n\n Oh, then 7 = 5 because:\n\n Assuming 2a = b | * 2\n 4a = 2b | + 10a\n 14a = 10a + 2b | - 7b\n 14a - 7b = 10a - 5b | ()\n 7 (2a - b) = 5 (2a - b) | / (2a - b)\n 7 = 5\n\n In the given context, you should find the mistake pretty\n easy. Maybe the message should be changed to\n\n ERROR: floatdiv: divide by zero nonsense\n ^^^^^^^^\n\n because a division by zero results in nonsense? Or should it\n just return a random value? What is the result of a division\n by zero? It's definitely not infinity, as the above\n demonstrates!\n\n You might be looking from a managers PoV. Managers usually\n use this kind of arithmetic to choose salaries. Any engineer\n knows that\n\n work = power * time\n\n We all know that time is money and that power is knowlede. So\n\n work = knowledge * money\n\n and thus\n\n work\n money = ---------\n knowledge\n\n Obviously, the lesser you know, the more money you get,\n independant of the work to be done. As soon as you know\n nothing (zero), any money you get for the work is nonsense!\n\n This applies well to the real world, so it makes sense from\n an OO PoV. But in science, I think it's still an error.\n\n Since PostgreSQL is an ORDBMS (not an OODBMS), I think it's\n correct to report an error instead of returning some random.\n\n :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Thu, 20 Jul 2000 19:28:59 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: How PostgreSQL's floating-point hurts everyone everywhere"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I'd think that, on some platforms, we can try coding things a bit\n> differently. But the code in there now does do some useful things for\n> some of the platforms we run on (though there are still holes in\n> possible failure modes).\n\nYes. But on machines that do have IEEE-compliant math, it would be\nnice to act more IEEE-ish than we do. Perhaps a compile-time option\nfor IEEE vs \"traditional Postgres\" behavior?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jul 2000 02:57:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How PostgreSQL's floating-point hurts everyone everywhere "
},
{
"msg_contents": "> Yes. But on machines that do have IEEE-compliant math, it would be\n> nice to act more IEEE-ish than we do. Perhaps a compile-time option\n> for IEEE vs \"traditional Postgres\" behavior?\n\nSure. Sounds like a job for configure...\n\n - Thomas\n",
"msg_date": "Fri, 21 Jul 2000 07:08:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How PostgreSQL's floating-point hurts everyone everywhere"
},
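The kind of probe a configure test could compile and run for this is tiny (sketch only; no such test exists in the tree): exit status 0 would mean the platform handles IEEE special values without trapping.

#include <math.h>

int
main(void)
{
    volatile double zero = 0.0;
    volatile double inf = 1.0 / zero;       /* would trap on a non-IEEE machine */
    volatile double notanum = inf - inf;    /* inf - inf yields NaN */

    if (!isinf(inf))
        return 1;
    if (notanum == notanum)     /* NaN never compares equal to itself */
        return 1;
    return 0;
}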
{
"msg_contents": "In case you didn't notice:\n\nhttp://biz.yahoo.com/prnews/000725/ca_inprise_2.html\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2582\nHowitzvej 75 �ben 14.00-18.00 Email: [email protected]\n2000 Frederiksberg L�rdag 11.00-17.00 Web: www.suse.dk\n",
"msg_date": "Tue, 25 Jul 2000 18:17:44 +0200",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Inprise InterBase(R) 6.0 Now Free and Open Source"
},
{
"msg_contents": "\nKaare Rasmussen <[email protected]> wrote: \n\n> In case you didn't notice:\n> \n> http://biz.yahoo.com/prnews/000725/ca_inprise_2.html\n\nYes, and now the story is up on slashdot: \n\nhttp://slashdot.org/comments.pl?sid=00%2F07%2F25%2F1439226&cid=&pid=0&startat=&threshold=-1&mode=nested&commentsort=3&op=Change\n\nSo if anyone feels like taking time out to do some\nevangelizing, this might be a good moment to say a few words\nabout Postgresql vs. \"Inprise, The Open Source Database\". \n\nIn particular, this guy could use a detailed reply: \n\n> Interbase\n> \n> (Score:4, Interesting)\n> by jallen02 (:-( .) on Tuesday July 25, @09:06AM PST\n> (User #124384 Info) http://gdev.net/~jallen\n>\n> I found myself wondering exactly what Interbase could do\n> for me. So I dug through their site (not hard to find)\n> and found this lil gem\n>\n> Interbase Product Overview \n>\n> Interbase has some very awesome features. The overview\n> took the tone of a semi marketing type item yet it was\n> infomrative and if you read through some of the garbage\n> its rather clear to see as a programmer/developer what\n> Interbase offers.\n\n> Some of the features that stuck out in my mind from the\n> over view.\n> \n> -Small memory footprint\n> -Triggers \n> -Stored Procedures\n> -User Definable Functions with some 'libraries' per say\n> already defined for math and string handling\n> -Alert events \n>\n> EX:A certain item goes below xyz dollars it can send an\n> alert using some sort of constant polling method. I am\n> not sure exactly what this one was.. but basically it\n> looks like whenever changes are done to the table if\n> certain criteria are met it can call up a stored\n> proc/UDF or something. This is a bit more powerful than\n> a trigger or a stored procedure since you do not have to\n> do any speical coding on a insert/update/delete.\n>\n> Some other interesting things... There was a *LOAD* of\n> case studies on the interbase site.\n>\n> Case Studies \n> \n> I looked at some of these and they were real industry\n> proven case studies IMO. \n> \n> Its Free.. and it has a good reputation \n> \n> You can buy support for it \n> \n> It appears to be VERY ANSI Compliant and supports all the\n> trappings of MS SQL Server.\n> \n> It also claimed to be self optimizing... anyways hope this\n> provided a little information. \n\n",
"msg_date": "Tue, 25 Jul 2000 15:40:30 -0700",
"msg_from": "Joe Brenner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inprise InterBase(R) 6.0 Now Free and Open Source "
},
{
"msg_contents": "At 03:40 PM 7/25/00 -0700, Joe Brenner wrote:\n>\n>Kaare Rasmussen <[email protected]> wrote: \n>\n>> In case you didn't notice:\n>> \n>> http://biz.yahoo.com/prnews/000725/ca_inprise_2.html\n>\n>Yes, and now the story is up on slashdot: \n>\n>http://slashdot.org/comments.pl?sid=00%2F07%2F25%2F1439226&cid=&pid=0&start\nat=&threshold=-1&mode=nested&commentsort=3&op=Change\n>\n>So if anyone feels like taking time out to do some\n>evangelizing, this might be a good moment to say a few words\n>about Postgresql vs. \"Inprise, The Open Source Database\". \n>\n>In particular, this guy could use a detailed reply: \n\nJust be sure to avoid any talk of outer joins ...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 25 Jul 2000 15:54:03 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inprise InterBase(R) 6.0 Now Free and Open\n Source"
},
{
"msg_contents": "$ find InterBase -name \\*.c |xargs cat |wc\n 481977 1417203 11430946\n$ find postgresql-7.0.2 -name \\*.c |xargs cat |wc\n 329582 1087860 8649018\n\n$ wc InterBase/dsql/parse.y\n 4217 13639 103059 InterBase/dsql/parse.y\n$ wc postgresql-7.0.2/src/backend/parser/gram.y\n 5858 20413 149104 postgresql-7.0.2/src/backend/parser/gram.y\n\nI downloaded it, just to have a poke around. It doesn't build very\neasily, if at all. The best way I can describe the source is that it is\ndry reading. Not much interesting commentary. Very big source files,\nwith very long functions and huge case statements is how I would\ncharacterise it. I suspect it's reasonably well written but not very\nextensible in nature. I don't think this is going to set the world on\nfire anytime soon.\n",
"msg_date": "Wed, 26 Jul 2000 14:30:33 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inprise InterBase(R) 6.0 Now Free and Open Source"
},
{
"msg_contents": "> $ wc InterBase/dsql/parse.y\n> 4217 13639 103059 InterBase/dsql/parse.y\n> $ wc postgresql-7.0.2/src/backend/parser/gram.y\n> 5858 20413 149104 postgresql-7.0.2/src/backend/parser/gram.y\n\nHmm. I suspect that I could shrink our gram.y by ~25% just by removing\ncomments and C support routines, and by consolidating some execution\nblocks onto fewer lines. Does it look like their parse.y is more dense\nthan ours, do they do a lot of postprocessing to eliminate the yacc\nrules, or have we missed the boat on writing the grammar in yacc?\n\nJust curious; I probably won't look myself since I don't want to run the\nrisk of compromising our code and licensing. Or is that not an issue\nwith the Inprise license?\n\n - Thomas\n",
"msg_date": "Wed, 26 Jul 2000 05:21:06 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inprise InterBase(R) 6.0 Now Free and Open Source"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > $ wc InterBase/dsql/parse.y\n> > 4217 13639 103059 InterBase/dsql/parse.y\n> > $ wc postgresql-7.0.2/src/backend/parser/gram.y\n> > 5858 20413 149104 postgresql-7.0.2/src/backend/parser/gram.y\n> \n> Hmm. I suspect that I could shrink our gram.y by ~25% just by removing\n> comments and C support routines, and by consolidating some execution\n> blocks onto fewer lines. Does it look like their parse.y is more dense\n> than ours, do they do a lot of postprocessing to eliminate the yacc\n> rules, or have we missed the boat on writing the grammar in yacc?\n> \n> Just curious; I probably won't look myself since I don't want to run the\n> risk of compromising our code and licensing. Or is that not an issue\n> with the Inprise license?\n\nI had a bit of a look. There's no obvious reason, just maybe postgres\nhas a few more comments and a bit more code inside the action blocks. No\nobvious problem here.\n\nIt would be a pity if we can't look and learn from Interbase in this\ninstance, because this is one area where there is at least a possibility\nof borrowing something useful.\n",
"msg_date": "Wed, 26 Jul 2000 15:56:12 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inprise InterBase(R) 6.0 Now Free and Open Source"
},
{
"msg_contents": "\n\nChris Bitmead <[email protected]> wrote: \n\nThomas Lockhart <[email protected]> wrote: \n\n> > Just curious; I probably won't look myself since I don't want to run the\n> > risk of compromising our code and licensing. Or is that not an issue\n> > with the Inprise license?\n> \n> I had a bit of a look. There's no obvious reason, just maybe postgres\n> has a few more comments and a bit more code inside the action blocks. No\n> obvious problem here.\n> \n> It would be a pity if we can't look and learn from Interbase in this\n> instance, because this is one area where there is at least a possibility\n> of borrowing something useful.\n\nWell, the license is just the Mozilla Public License with\nthe names changed. I've just read through it several times,\nand I think the main trouble with it is you probably really \ndo need to have a lawyer look at it... but I think you could\ngo as far as to include some of the Inprise source files \ninto postgresql:\n\nhttp://www.inprise.com/IPL.html\n\n 3.7. Larger Works. \n \n You may create a Larger Work by combining Covered Code with\n other code not governed by the terms of this License and\n distribute the Larger Work as a single product. In such a\n case, You must make sure the requirements of this License\n are fulfilled for the Covered Code.\n\nThe requirements seem to be pretty commonsense things... \nIf you use some source code from Inprise, you've got to \nkeep track of where the source came from, label it with\ntheir license, list any modifications you've made, always\nprovide the source with any executables, etc.\n\nThere's also a bunch of stuff about how this license doesn't\nrelease you from any third party intellectual property\nclaims (duh! Legal docs always seem to state the obvious at\ngreat length). I might wonder what would happen if Borland\nowned a software patent on some algorithm that's included in\nthis code...\n\nBut no, I *think* that's a non-issue: \n\n The Initial Developer hereby grants You a world-wide,\n royalty-free, non-exclusive license, subject to third\n party intellectual property claims: \n\n",
"msg_date": "Thu, 27 Jul 2000 13:44:52 -0700",
"msg_from": "Joe Brenner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inprise InterBase(R) 6.0 Now Free and Open Source "
},
{
"msg_contents": "> http://www.inprise.com/IPL.html\n> \n> 3.7. Larger Works. \n> \n> You may create a Larger Work by combining Covered Code with\n> other code not governed by the terms of this License and\n> distribute the Larger Work as a single product. In such a\n> case, You must make sure the requirements of this License\n> are fulfilled for the Covered Code.\n> \n> The requirements seem to be pretty commonsense things... \n> If you use some source code from Inprise, you've got to \n> keep track of where the source came from, label it with\n> their license, list any modifications you've made, always\n> provide the source with any executables, etc.\n\nBut the BSD license doesn't require source for distributed binaries. \nSounds like a GPL-style restriction.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 28 Jul 2000 11:33:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inprise InterBase(R) 6.0 Now Free and Open Source"
},
{
"msg_contents": "> > The requirements seem to be pretty commonsense things... \n> > If you use some source code from Inprise, you've got to \n> > keep track of where the source came from, label it with\n> > their license, list any modifications you've made, always\n> > provide the source with any executables, etc.\n> \n> But the BSD license doesn't require source for distributed binaries. \n> Sounds like a GPL-style restriction.\n\nWhat is more important to my mind is if the license permits a developer to look\nat the code and get inspired, or if the developer's mind will be \"tainted\" just\nby looking.\nI hope someone can tell; I always wake up later with my head on the keyboard\nwhen I try to read license stuff...\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2582\nHowitzvej 75 �ben 14.00-18.00 Email: [email protected]\n2000 Frederiksberg L�rdag 11.00-17.00 Web: www.suse.dk\n",
"msg_date": "Sat, 29 Jul 2000 12:13:33 +0200",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inprise InterBase(R) 6.0 Now Free and Open Source"
},
{
"msg_contents": "Kaare Rasmussen <[email protected]> writes:\n> What is more important to my mind is if the license permits a\n> developer to look at the code and get inspired, or if the developer's\n> mind will be \"tainted\" just by looking.\n\nIt is not possible to be \"tainted\" by looking. There are only two kinds\nof intellectual property rights (at least in the USA) and neither one\ncreates that risk:\n\n1. Copyright means you can't take the code verbatim, just like you can't\nplagiarize a novel. You can use the same ideas (plot, characters, etc)\nbut you have to express 'em in your own words. Structure the code\ndifferently, use different names, write your own comments, etc, and\nyou're clear even if you lifted the algorithm lock stock & barrel.\n\n2. Patent means you can't use the algorithm. However, looking doesn't\ncreate extra risk here, because you can't use a patented algorithm\n(without paying) no matter how you learned of it --- not even if you\ninvented it independently.\n\nAs far as I've heard, Inprise isn't claiming any patent rights in\nconnection with the Interbase code anyway, but it might be a good idea\nfor someone to check before we all start reading their code...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 Jul 2000 11:43:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inprise InterBase(R) 6.0 Now Free and Open Source "
},
{
"msg_contents": "Kaare Rasmussen wrote:\n> \n> > > The requirements seem to be pretty commonsense things...\n> > > If you use some source code from Inprise, you've got to\n> > > keep track of where the source came from, label it with\n> > > their license, list any modifications you've made, always\n> > > provide the source with any executables, etc.\n> >\n> > But the BSD license doesn't require source for distributed binaries.\n> > Sounds like a GPL-style restriction.\n> \n> What is more important to my mind is if the license permits a developer to look\n> at the code and get inspired, or if the developer's mind will be \"tainted\" just\n> by looking.\n> I hope someone can tell; I always wake up later with my head on the keyboard\n> when I try to read license stuff...\n\nI don't think the licence terms can have any effect on this. If you take\nan idea from one code base and apply it to another code-bases with a\ndifferent licence, then the applicable law is going to be fair use. And\nlicence terms cannot affect fair use one way or the other.\n",
"msg_date": "Mon, 31 Jul 2000 09:43:31 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inprise InterBase(R) 6.0 Now Free and Open Source"
},
{
"msg_contents": "* Chris Bitmead <[email protected]> [000730 16:52] wrote:\n> Kaare Rasmussen wrote:\n> > \n> > > > The requirements seem to be pretty commonsense things...\n> > > > If you use some source code from Inprise, you've got to\n> > > > keep track of where the source came from, label it with\n> > > > their license, list any modifications you've made, always\n> > > > provide the source with any executables, etc.\n> > >\n> > > But the BSD license doesn't require source for distributed binaries.\n> > > Sounds like a GPL-style restriction.\n> > \n> > What is more important to my mind is if the license permits a developer to look\n> > at the code and get inspired, or if the developer's mind will be \"tainted\" just\n> > by looking.\n> > I hope someone can tell; I always wake up later with my head on the keyboard\n> > when I try to read license stuff...\n> \n> I don't think the licence terms can have any effect on this. If you take\n> an idea from one code base and apply it to another code-bases with a\n> different licence, then the applicable law is going to be fair use. And\n> licence terms cannot affect fair use one way or the other.\n\nWith the obvious exception of patented algorithms. You do need to\nbe very careful, at least one major open source project violates\nUSL patents.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Sun, 30 Jul 2000 17:54:52 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inprise InterBase(R) 6.0 Now Free and Open Source"
}
]