[
{
"msg_contents": "Does anyone want to do the coding we've talked about? I'm afraid I'll\nbreak stuff.... (I also don't have the current dev sources checked out, \nI just have 7.0.2 running). \n\nI'd really like to see this in 7.1....\n\nLarry\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 20 Aug 2000 08:47:28 -0500",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "mac.c"
},
{
"msg_contents": "> Does anyone want to do the coding we've talked about? I'm afraid I'll\n> break stuff.... (I also don't have the current dev sources checked out,\n> I just have 7.0.2 running).\n\nOK, I'll work on it for a couple of hours this morning.\n\nI like Alex's suggestion for a standalone macaddr_trunc() function, then\na query which can lookup the brand from the external table.\n\nThe enthusiastic (but negative ;) feedback on putting LIKE into the mix\nsuggests that we should not use that mechanism, so \"manuf()\" or\n\"brand()\" for an explicit function may be the best way to go.\n\nComments?\n\n - Thomas\n",
"msg_date": "Sun, 20 Aug 2000 14:47:19 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mac.c"
},
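A sketch of the lookup that Alex's suggestion implies; the function and table names here are assumptions at this point in the thread (the later messages settle on trunc() and a macoui table created by createoui):

    -- look up the manufacturer for a given hardware address
    SELECT manufacturer
      FROM macoui
     WHERE oui = macaddr_trunc('00:01:a0:aa:bb:cc');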
{
"msg_contents": "> Does anyone want to do the coding we've talked about?\n\nOK, here is what I have so far, based on the work of Larry, Alex, and\nothers:\n\no Two new functions, trunc(macaddr) and text(macaddr), where the former\nreturns the mac address with the low (non-manufacturer) fields zeroed\nout, and the latter converts to a text string. The text conversion is\nnice because capabilities such as LIKE can be used transparently :) Will\nneed to add macaddr(text) for symmetry.\n\no Two utilities for contrib/mac, createoui and updateoui. The former\ncreates a table \"macoui\" with the fields oui and manufacturer. The\nlatter populates it with the contents of the file oui.txt, fetched from\nthe IEEE web site and processed by a slightly modified version of\nLarry's awk script.\n\no An sql definition file, manuf.sql, which defines a function\nmanuf(macaddr) along the lines Alex had suggested. It returns a text\nstring of the manufacturer's name, or NULL if none is matched. You can\nuse COALESCE() to return something other than NULL if you want.\n\nShould we have updateoui use wget to fetch oui.txt from the IEEE web\nsite? Or perhaps better we could have that in a separate utility?\n\nComments?\n\n - Thomas\n\nSome examples are\n\nmyst> ./createoui\nmyst> ./updateoui\n\nlockhart=# select trunc(macaddr '00:01:a0:aa:bb:cc');\n trunc\n-------------------\n 00:01:a0:00:00:00\n(1 row)\n\nlockhart=# select manuf('01:02:03:00:00:00');\n manuf\n-------\n\n(1 row)\n\nlockhart=# select manuf('00:01:a0:00:00:00');\n manuf\n------------------------\n Infinilink Corporation\n(1 row)\n\nlockhart=# select manuf('00:01:A0:00:00:00');\n manuf\n------------------------\n Infinilink Corporation\n(1 row)\n\nlockhart=# select manuf('00:01:A0:00:00:01');\n manuf\n------------------------\n Infinilink Corporation\n(1 row)\n\nlockhart=# select coalesce(manuf('01:02:03:00:00:00'), 'nada');\n case\n------\n nada\n(1 row)\n\nlockhart=# select * from macoui where oui like '00:aa%';\n oui | manufacturer \n-------------------+-------------------------------\n 00:aa:00:00:00:00 | INTEL CORPORATION\n 00:aa:01:00:00:00 | INTEL CORPORATION\n 00:aa:02:00:00:00 | INTEL CORPORATION\n 00:aa:3c:00:00:00 | OLIVETTI TELECOM SPA (OLTECO)\n(4 rows)\n\nlockhart=# select * from macoui where oui like '00:AA%';\n oui | manufacturer \n-----+--------------\n(0 rows)\n\nlockhart=# select * from macoui where oui ilike '00:AA%';\n oui | manufacturer \n-------------------+-------------------------------\n 00:aa:00:00:00:00 | INTEL CORPORATION\n 00:aa:01:00:00:00 | INTEL CORPORATION\n 00:aa:02:00:00:00 | INTEL CORPORATION\n 00:aa:3c:00:00:00 | OLIVETTI TELECOM SPA (OLTECO)\n(4 rows)\n",
"msg_date": "Sun, 20 Aug 2000 19:34:27 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mac.c"
},
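The manuf.sql file itself is not quoted in the message; a minimal sketch of what such a definition could look like in 7.x SQL, assuming the macoui table created by createoui and the trunc(macaddr) function described above:

    CREATE FUNCTION manuf(macaddr) RETURNS text AS '
        -- returns the manufacturer for the truncated (OUI-only) address,
        -- or NULL when no row matches
        SELECT manufacturer FROM macoui WHERE oui = trunc($1);
    ' LANGUAGE 'sql';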
{
"msg_contents": "Thanks Thomas!\n\nI just didn't want the ideas to die. \n\nLarry\n\n* Thomas Lockhart <[email protected]> [000820 14:28]:\n> > Does anyone want to do the coding we've talked about?\n> \n> OK, here is what I have so far, based on the work of Larry, Alex, and\n> others:\n> \n> o Two new functions, trunc(macaddr) and text(macaddr), where the former\n> returns the mac address with the low (non-manufacturer) fields zeroed\n> out, and the latter converts to a text string. The text conversion is\n> nice because capabilities such as LIKE can be used transparently :) Will\n> need to add macaddr(text) for symmetry.\n> \n> o Two utilities for contrib/mac, createoui and updateoui. The former\n> creates a table \"macoui\" with the fields oui and manufacturer. The\n> latter populates it with the contents of the file oui.txt, fetched from\n> the IEEE web site and processed by a slightly modified version of\n> Larry's awk script.\n> \n> o An sql definition file, manuf.sql, which defines a function\n> manuf(macaddr) along the lines Alex had suggested. It returns a text\n> string of the manufacturer's name, or NULL if none is matched. You can\n> use COALESCE() to return something other than NULL if you want.\n> \n> Should we have updateoui use wget to fetch oui.txt from the IEEE web\n> site? Or perhaps better we could have that in a separate utility?\n> \n> Comments?\n> \n> - Thomas\n> \n> Some examples are\n> \n> myst> ./createoui\n> myst> ./updateoui\n> \n> lockhart=# select trunc(macaddr '00:01:a0:aa:bb:cc');\n> trunc\n> -------------------\n> 00:01:a0:00:00:00\n> (1 row)\n> \n> lockhart=# select manuf('01:02:03:00:00:00');\n> manuf\n> -------\n> \n> (1 row)\n> \n> lockhart=# select manuf('00:01:a0:00:00:00');\n> manuf\n> ------------------------\n> Infinilink Corporation\n> (1 row)\n> \n> lockhart=# select manuf('00:01:A0:00:00:00');\n> manuf\n> ------------------------\n> Infinilink Corporation\n> (1 row)\n> \n> lockhart=# select manuf('00:01:A0:00:00:01');\n> manuf\n> ------------------------\n> Infinilink Corporation\n> (1 row)\n> \n> lockhart=# select coalesce(manuf('01:02:03:00:00:00'), 'nada');\n> case\n> ------\n> nada\n> (1 row)\n> \n> lockhart=# select * from macoui where oui like '00:aa%';\n> oui | manufacturer \n> -------------------+-------------------------------\n> 00:aa:00:00:00:00 | INTEL CORPORATION\n> 00:aa:01:00:00:00 | INTEL CORPORATION\n> 00:aa:02:00:00:00 | INTEL CORPORATION\n> 00:aa:3c:00:00:00 | OLIVETTI TELECOM SPA (OLTECO)\n> (4 rows)\n> \n> lockhart=# select * from macoui where oui like '00:AA%';\n> oui | manufacturer \n> -----+--------------\n> (0 rows)\n> \n> lockhart=# select * from macoui where oui ilike '00:AA%';\n> oui | manufacturer \n> -------------------+-------------------------------\n> 00:aa:00:00:00:00 | INTEL CORPORATION\n> 00:aa:01:00:00:00 | INTEL CORPORATION\n> 00:aa:02:00:00:00 | INTEL CORPORATION\n> 00:aa:3c:00:00:00 | OLIVETTI TELECOM SPA (OLTECO)\n> (4 rows)\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 20 Aug 2000 17:50:58 -0500",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: mac.c"
}
]
[
{
"msg_contents": "Hi!\n\n I've done an interface for GNU-Prolog/PostgreSQL. It's is still in alpha\nstages in terms of functionality, but what is implemented seems to be\nworking with no problems. If someone is interested in this please contact\nme.\n Probably in the next couple of weeks I'll make a first public release.\n\nBest regards,\nTiago Ant�o\n\n",
"msg_date": "Sun, 20 Aug 2000 14:56:16 +0100 (WEST)",
"msg_from": "=?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "A GNU-Prolog/PostgreSQL interface"
}
]
[
{
"msg_contents": "Hello Everybody,\n Sorry for mailing at both the addresses. The situation is a nightmare \nat our installation facility, hence the need to capture as much attention as \npossible.\n We are using PostgreSQL 7.0 along with Enhydra 3.0 application server \nto host a web site. It has been observed that sometimes (can't pinpoint when \nit starts) the postmaster instance 'hangs' and another starts. Then the new \none hangs and another starts. This happens until the max limit for backends \nis reached (32 in our case). Then the whole application crashes.\n After some debugging in our code, we have come to the conclusion that \nthis problem could be due to some internal locking problem in Postgres.\n This issue of the locking abilities of the postmaster has been \ndiscussed before (see the reference section below). However, it seems that \nit was dropped without any action plan, especially the part about point 3 : \n\"Two PID files will be necessary, one to prevent mulitple instances of \npostmasters from running against the same data base, and one to prevent \nmultiple instances from using the same port.\"\n Can anybody point us in the right direction? Thanks in advance.\n\nReferences:\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/1998-10/msg00295.html\nThis is the first mail in the thread by Bill Allie.\n\nThank You,\nSuchet Singh,\nIMRglobal Corp.\n\nP.S. Please mail any suggestions to [email protected]\n________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com\n\n",
"msg_date": "Mon, 21 Aug 2000 02:57:59 GMT",
"msg_from": "\"suchet singh khalsa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "postmaster locking issues"
},
{
"msg_contents": "\"suchet singh khalsa\" <[email protected]> writes:\n> This issue of the locking abilities of the postmaster has been \n> discussed before (see the reference section below). However, it seems that \n> it was dropped without any action plan, especially the part about point 3 : \n> \"Two PID files will be necessary, one to prevent mulitple instances of \n> postmasters from running against the same data base, and one to prevent \n> multiple instances from using the same port.\"\n\nNo, this was fixed long since. In 7.0 I see the following behavior:\n\nTry to start a postmaster on an already-in-use port number:\n\n\tFATAL: StreamServerPort: bind() failed: Address already in use\n\t Is another postmaster already running on that port?\n\t If not, wait a few seconds and retry.\n\tpostmaster: cannot create INET stream port\n\nTry to start a postmaster on a free port in an in-use data directory:\n\n\tCan't create pid file: /home/postgres/testversion/data/postmaster.pid\n\tIs another postmaster (pid: 3124) running?\n\nProper detection of port conflicts may be platform-dependent ...\nwhat platform are you running on?\n\nActually, given your stated observation:\n\n> We are using PostgreSQL 7.0 along with Enhydra 3.0 application server \n> to host a web site. It has been observed that sometimes (can't pinpoint when \n> it starts) the postmaster instance 'hangs' and another starts. Then the new \n> one hangs and another starts. This happens until the max limit for backends \n> is reached (32 in our case). Then the whole application crashes.\n\nI'll bet that what you are seeing is not multiple postmasters at all,\nbut multiple backends. Does \"Enhydra\" open up new database connections\nwithout bothering to close old ones? If so, that's where the problem\nlies. A backend will normally not quit until it sees a proper\ntermination message or connection closure from its client. We've heard\nof quite a number of broken apps that do not reliably close\nconnections...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Aug 2000 00:50:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postmaster locking issues "
},
{
"msg_contents": ">\n> Hi all,\n\nI am using postgres 7.0.2 and having some problems after adding new tables to\nan existing database. I initially created a database for a web application\nusing Perl, CGI and Apache webserver on Redhat Linux 6.2. After createing the\ndatabase I added few tables using \"Create table\" command. Using the psql\ncommands I can querry the newly created tables and I can insert into and delete\nrecords from the new tables. But For some reason, If I use the same querries in\nweb application, it does not return any data. In otherwords, the web\napplication sees only the tables created initially when the database was\ncreated and it does not see the tables added after the creation of the tables.\nAre there any commands to update the database server about the newly created\ntables. I restarted the postgres database server, restarted the Apache\nwebserver. It still does not recognise the new tables in my web application. I\nmade sure that all the querries are alright, because I copied and pasted the\nquerries from my web application onto the psql command line, they all worked. I\ndon't know if this is postgres related problem or not.\n\nThanks, I really appreciate your help,\nNataraj\n\n\n \nHi all,\nI am using postgres 7.0.2 and having some problems after adding new tables\nto an existing database. I initially created a database for a web application\nusing Perl, CGI and Apache webserver on Redhat Linux 6.2. After createing\nthe database I added few tables using \"Create table\" command. Using the\npsql commands I can querry the newly created tables and I can insert into\nand delete records from the new tables. But For some reason, If I use the\nsame querries in web application, it does not return any data. In otherwords,\nthe web application sees only the tables created initially when the database\nwas created and it does not see the tables added after the creation of\nthe tables. Are there any commands to update the database server about\nthe newly created tables. I restarted the postgres database server,\nrestarted the Apache webserver. It still does not recognise the new tables\nin my web application. I made sure that all the querries are alright, because\nI copied and pasted the querries from my web application onto the psql\ncommand line, they all worked. I don't know if this is postgres related\nproblem or not.\nThanks, I really appreciate your help,\nNataraj",
"msg_date": "Tue, 22 Aug 2000 09:41:17 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "postgres 7.0.2"
},
{
"msg_contents": "[email protected] writes:\n\nAre you sure you created the tables in the right database?\n\nMike.\n",
"msg_date": "22 Aug 2000 11:13:29 -0400",
"msg_from": "Michael Alan Dorman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.0.2"
},
{
"msg_contents": "[email protected] writes:\n> I added few tables using \"Create table\" command. Using the psql\n> commands I can querry the newly created tables and I can insert into\n> and delete records from the new tables. But For some reason, If I use\n> the same querries in web application, it does not return any data. In\n> otherwords, the web application sees only the tables created initially\n> when the database was created and it does not see the tables added\n> after the creation of the tables.\n\nOffhand I'm betting that your webserver is connecting to a different\ndatabase than the one you created the tables in. If you aren't\ncareful to specify, the default is to connect to a database named the\nsame as your username, so it's easy to see how the webserver might be\nconnecting to a different db. \"psql -l\" should list the databases\nthat your postmaster has, or you can do \"select * from pg_database\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Aug 2000 11:30:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.0.2 "
},
{
"msg_contents": "Tom Lane wrote:\n> same as your username, so it's easy to see how the webserver might be\n> connecting to a different db. \"psql -l\" should list the databases\n> that your postmaster has, or you can do \"select * from pg_database\".\n\nHi Tom - it was nice to meet you at the show last week.\n\nThis command you just showed here is extremely subtle and few people\nknow about it.\n\nWhat would it take for you to map \"show databases\" to get that same\noutput? Same for \"show tables\" and \"describe tbl_name\". I think these\nare standard SQL conventions which would be helpful for a lot of people.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Tue, 22 Aug 2000 08:49:49 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.0.2"
},
{
"msg_contents": "> What would it take for you to map \"show databases\" to get that same\n> output? Same for \"show tables\" and \"describe tbl_name\". I think these\n> are standard SQL conventions which would be helpful for a lot of people.\n\nI don't see anything about such commands in the spec ;-). These things\nstrike me as user-interface operations rather than something the backend\nought to provide on its own.\n\nI am a bit surprised to note that psql doesn't seem to have any\nbackslash command for listing databases. Peter, what do you think?\nMaybe \"\\db\", seeing that \\dd is already taken?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Aug 2000 11:54:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.0.2 "
},
{
"msg_contents": "On Tue, 22 Aug 2000, Tom Lane wrote:\n\n> I am a bit surprised to note that psql doesn't seem to have any\n> backslash command for listing databases. Peter, what do you think?\n> Maybe \"\\db\", seeing that \\dd is already taken?\n\nsauron=# \\l\n List of databases\n Database | Owner\n---------------+---------\n domains | sauron\n sauron | sauron\n spares | sauron\n template1 | sauron\n udmsearch | sauron\n infinite_test | sauron\n(6 rows)\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Tue, 22 Aug 2000 11:11:32 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.0.2 "
},
{
"msg_contents": "Tom Lane <[email protected]> el d�a Tue, 22 Aug 2000 11:54:49 -0400, \nescribi�:\n\n>I am a bit surprised to note that psql doesn't seem to have any\n>backslash command for listing databases. Peter, what do you think?\n>Maybe \"\\db\", seeing that \\dd is already taken?\n\nI don't know why postgres insist in using this cryptics escape commands ...\n(and as you see, this commands are running out of namespace)\n\nsergio\n\n",
"msg_date": "Tue, 22 Aug 2000 13:14:19 -0300",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.0.2"
},
{
"msg_contents": "At 8/22/2000 11:54 AM -0400, Tom Lane wrote:\n>I am a bit surprised to note that psql doesn't seem to have any\n>backslash command for listing databases. Peter, what do you think?\n>Maybe \"\\db\", seeing that \\dd is already taken?\n\n\\l lists all databases\n\n:)\n\n",
"msg_date": "Tue, 22 Aug 2000 11:16:50 -0500",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.0.2 "
},
{
"msg_contents": "At 8/22/2000 11:54 AM -0400, Tom Lane wrote:\n> > What would it take for you to map \"show databases\" to get that same\n> > output? Same for \"show tables\" and \"describe tbl_name\". I think these\n> > are standard SQL conventions which would be helpful for a lot of people.\n>\n>I don't see anything about such commands in the spec ;-). These things\n>strike me as user-interface operations rather than something the backend\n>ought to provide on its own.\n\nActually, I think I understand the question. The original person wants to \nbe able to do a query and get a result containing a list of \ndatabases. AFAIK, there isn't a way to do this using standard SQL-like \nstatements. Somebody correct me if I'm wrong.\n\nThomas\n\n",
"msg_date": "Tue, 22 Aug 2000 11:22:23 -0500",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.0.2 "
},
{
"msg_contents": "On Tue, 22 Aug 2000, Thomas Swan wrote:\n\n> Actually, I think I understand the question. The original person wants to \n> be able to do a query and get a result containing a list of \n> databases. AFAIK, there isn't a way to do this using standard SQL-like \n> statements. Somebody correct me if I'm wrong.\n\nSELECT pg_database.datname as \"Database\", pg_user.usename as \"Owner\" FROM\n pg_database, pg_user WHERE pg_database.datdba = pg_user.usesysid\nUNION\nSELECT pg_database.datname as \"Database\", NULL as \"Owner\" FROM pg_database\n WHERE pg_database.datdba NOT IN (SELECT usesysid FROM pg_user) \n ORDER BY \"Database\";\n\n(Which is what psql sends to the backend in response to the \"\\l\" command.)\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n\n",
"msg_date": "Tue, 22 Aug 2000 12:04:27 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.0.2 "
},
{
"msg_contents": "Thomas Swan <[email protected]> writes:\n> Actually, I think I understand the question. The original person wants to \n> be able to do a query and get a result containing a list of \n> databases. AFAIK, there isn't a way to do this using standard SQL-like \n> statements. Somebody correct me if I'm wrong.\n\nSELECT datname FROM pg_database;\n\nI think Tim's real gripe is about having to know enough about the\ncontents of the system tables to be able to construct such a query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Aug 2000 13:06:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.0.2 "
},
{
"msg_contents": "Tom Lane wrote:\n\n> I think Tim's real gripe is about having to know enough about the\n> contents of the system tables to be able to construct such a query.\n\nThat's exactly right. I've used this thing for quite a while now and\nstill couldn't tell you what the various system tables do or even how to\ntell what system tables exist.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Tue, 22 Aug 2000 10:19:21 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.0.2"
},
{
"msg_contents": "Yes, I am sure. But after creating tables if I use\n\"grant READ,WRITE on shipment_history TO user\";\nit gives the error:\nERROR: parser: parse error at or near \"read\". I thought the grant\ncommand is similar to ORACLE. Is it different?\n\nThanks,\nNataraj\n\nMichael Alan Dorman wrote:\n\n> [email protected] writes:\n>\n> Are you sure you created the tables in the right database?\n>\n> Mike.\n\n",
"msg_date": "Fri, 25 Aug 2000 15:46:35 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.0.2"
},
{
"msg_contents": "[email protected] writes:\n> Yes, I am sure. But after creating tables if I use\n> \"grant READ,WRITE on shipment_history TO user\";\n> it gives the error:\n> ERROR: parser: parse error at or near \"read\". I thought the grant\n> command is similar to ORACLE. Is it different?\n\nuse \\h grant in psql, or read the documentation. Yes, it's different.\n\nMike.\n",
"msg_date": "25 Aug 2000 17:02:53 -0400",
"msg_from": "Michael Alan Dorman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.0.2"
}
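For reference, the PostgreSQL 7.0 spelling of the attempted grant lists the individual privileges; "webuser" below is a placeholder, since "user" itself is a reserved word:

    -- grant read and write access on the table from the question above
    GRANT SELECT, INSERT, UPDATE, DELETE ON shipment_history TO webuser;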
]
[
{
"msg_contents": "> > o Two new functions, trunc(macaddr) and text(macaddr), where the former\n> > returns the mac address with the low (non-manufacturer) fields zeroed\n> > out, and the latter converts to a text string. The text conversion is\n> > nice because capabilities such as LIKE can be used transparently :) Will\n> > need to add macaddr(text) for symmetry.\n> A cast shouldn't alter the value of the, er, value. This is like\n> text(55) returning 'fifty-five'. What if I want to use substr() on the\n> macaddr?\n\n?? Just go ahead and use it! :)\n\nThere isn't anything new here, other than having the feature available\nfor macaddr as it is for other data types.\n\n - Thomas\n",
"msg_date": "Mon, 21 Aug 2000 04:24:25 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: mac.c"
}
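A short illustration of the point, assuming the text(macaddr) conversion from the earlier message is installed:

    -- convert to text, then any string function applies transparently
    SELECT substr(text(macaddr '00:01:a0:aa:bb:cc'), 1, 8);
    -- expected result: 00:01:a0 (the manufacturer portion as plain text)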
]
[
{
"msg_contents": "I have made a first cut at completing integration of Adriaan Joubert's\nBIT code into the backend. There are a couple little things left to\ndo (for example, scalarltsel doesn't know what to do with BIT values)\nas well as some not-so-little things:\n\n1. SQL92 mentions a bitwise position function, which we do not have.\n\n2. We don't handle <bit string> and <hex string> literals correctly;\nthe scanner converts them into integers which seems quite at variance\nwith the spec's semantics.\n\nWe could solve #2 fairly easily if we don't mind breaking backwards\ncompatibility with existing apps that expect B'101' or X'5' to be\nequivalent to 5. I'm not sure how to handle it without breaking that\ncompatibility. Thoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Aug 2000 01:05:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "BIT/BIT VARYING status"
},
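For context, the SQL92 position syntax already works for character strings; the bit-string variant is what the message says is missing, so the second statement below was hypothetical at this point:

    SELECT position('lo' in 'hello');   -- character form, returns 4
    SELECT position(B'10' in B'0010');  -- proposed bit form, would return 3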
{
"msg_contents": "> We could solve #2 fairly easily if we don't mind breaking backwards\n> compatibility with existing apps that expect B'101' or X'5' to be\n> equivalent to 5. I'm not sure how to handle it without breaking that\n> compatibility. Thoughts?\n\nBreak \"compatibility\". I implemented the syntax in the lexer so that we\ncould deal with it somehow (rather than just dying); but we should\nalways be willing to implement something the right way when we can. In\nthis case (and probably many others coming up ;) there is no great glory\nin the original implementation...\n\n - Thomas\n",
"msg_date": "Mon, 21 Aug 2000 05:51:11 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BIT/BIT VARYING status"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I have made a first cut at completing integration of Adriaan Joubert's\n> BIT code into the backend. There are a couple little things left to\n> do (for example, scalarltsel doesn't know what to do with BIT values)\n> as well as some not-so-little things:\n> \n> 1. SQL92 mentions a bitwise position function, which we do not have.\n\nSorry, I have been very busy, so only got down to implementing a\nposition function last night. It's a bit messy (lots of masks and\nbit-twiddling), but I feel fairly happy now that it is doing the right\nthing. I tested it with my own loadable types, as the integration into\npostgres proper stumped my somewhat. The next oid up for a bit function\nis in use already. Anyway, the patches are attached, and I'm hoping that\nsome friendly sole will integrate the new position function into\npostgres proper.\n \n> 2. We don't handle <bit string> and <hex string> literals correctly;\n> the scanner converts them into integers which seems quite at variance\n> with the spec's semantics.\n\nThis is still a problem that needs to be fixed. Also, it the parser did\nnot seem to be too happy about the 'position' syntax, but I may have it\nwrong of course. I don;t know how to attach the position function to a\npiece of syntax such as (position <substr> in <field>) either, so I'm\nhoping that somebody can pick this up.\n\nAlso, i have started putting together a file for regression testing. I\nnoticed that the substring syntax does not seem to work:\n\nSELECT SUBSTRING(b FROM 2 FOR 4)\n FROM ZPBIT_TABLE;\n\ngives:\n\nERROR: Function 'substr(bit, int4, int4)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\n\nand similar for a varying bit argument.\n\nIf somebody with better knowledge of postgres could do the integration,\nplease, I will finish off a regression test.\n\nThanks!\n\nAdriaan",
"msg_date": "Mon, 30 Oct 2000 08:32:53 +0000",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BIT/BIT VARYING status"
},
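While waiting for proper integration, the missing catalog entry could in principle be supplied from a loadable module, which matches how the message says the code was tested. A purely hypothetical sketch; the .so path and link symbol are assumptions, and the module would need to declare PG_FUNCTION_INFO_V1 for the symbol:

    -- register a compiled substring implementation under the name the
    -- parser looks for (hypothetical module path and symbol)
    CREATE FUNCTION substr(bit, int4, int4) RETURNS bit
        AS '/usr/local/pgsql/lib/varbit_test.so', 'bitsubstr'
        LANGUAGE 'C';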
{
"msg_contents": "Adriaan Joubert writes:\n\n> > 2. We don't handle <bit string> and <hex string> literals correctly;\n> > the scanner converts them into integers which seems quite at variance\n> > with the spec's semantics.\n> \n> This is still a problem that needs to be fixed.\n\nI have gotten the B'1001'-style syntax to work, but the zpbit_in function\nrejects the input. You need to change the *_in functions to accept input\nin the form of a string of only 1's and 0's. Also, the output functions\nshould print 1's and 0's.\n\nI'm somewhat confused about the <hex string>s; according to the standard\nthey might also be a BLOB literal. I'd say we get the binary version\nworking first, and then wonder about this.\n\n> Also, it the parser did not seem to be too happy about the 'position'\n> syntax,\n\nThe parser converted 'position(a in b)' into 'strpos(b, a)'. I changed it\nso it converts it into 'position(b, a)' and aliased the other functions\nappropriately. I changed the order of your arguments for that.\n\n> I noticed that the substring syntax does not seem to work:\n\nSimilar issue as above. Should work now.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 31 Oct 2000 11:27:23 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: BIT/BIT VARYING status"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Adriaan Joubert writes:\n> \n> > > 2. We don't handle <bit string> and <hex string> literals correctly;\n> > > the scanner converts them into integers which seems quite at variance\n> > > with the spec's semantics.\n> >\n> > This is still a problem that needs to be fixed.\n> \n> I have gotten the B'1001'-style syntax to work, but the zpbit_in function\n> rejects the input. You need to change the *_in functions to accept input\n> in the form of a string of only 1's and 0's. Also, the output functions\n> should print 1's and 0's.\n> \n> I'm somewhat confused about the <hex string>s; according to the standard\n> they might also be a BLOB literal. I'd say we get the binary version\n> working first, and then wonder about this.\n\nPeter, I think it is a problem if the B or X are dropped from the input,\nas that is the only way to determine whether it is a binary or hex\nstring. Isn't it possible to just remove the quotes, or even do nothing?\nThe current code expects a string of the form Bxxxxx or Xyyyyy. If the\nquotes are left in, I can easily modify the code, but guessing whether\nthe string 1001 is hex or binary is an issue, and I seem to recall that\nthe SQL standard requires both to be valid input.\n\nAlso, on output, shouldn't we poduce B'xxxx' and X'yyyyy' to conform\nwith the input strings?\n\nAdriaan\n",
"msg_date": "Tue, 31 Oct 2000 13:51:34 +0200",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: BIT/BIT VARYING status"
},
{
"msg_contents": "Adriaan Joubert writes:\n\n> Peter, I think it is a problem if the B or X are dropped from the input,\n> as that is the only way to determine whether it is a binary or hex\n> string.\n\nWell, you just assume it's a binary string, because it's unclear as of yet\nwhether you're going to get to handle hex strings at all. However, I\nchanged the scanner to include a leading 'b', so now it works:\n\npeter=# select B'1001';\n ?column?\n----------\n X9\n(1 row)\n \npeter=# select B'1001' | b'11';\n ?column?\n----------\n XC\n(1 row)\n\nThe output definitely ought to be in binary though (\"b1001\").\n\nYou also might want to make the leading 'b' optional because this seems\nconfusing:\n\npeter=# select cast ('1001' as bit);\nERROR: zpbit_in: 1001 is not a valid bitstring\n\n> Also, on output, shouldn't we poduce B'xxxx' and X'yyyyy' to conform\n> with the input strings?\n\nIf you did that, then your input function has to be prepared for values\nlike \"B'1001'\". (Think copy out/copy in.) I think the above plan should\nwork okay.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 31 Oct 2000 15:12:15 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: BIT/BIT VARYING status"
},
{
"msg_contents": "Thanks Peter. I will download tomorrow when the new snapshot is\navailable. So how do we find out whether hex needs to be supported? I\nsee what you mean with ('1001' as bit), but shouldn't that be (B'1001'\nas bit)? Certainly if hex values are allowed the first version is\nambiguous. I would have to make the error message a bit more sensible\nthough.\n\nAdriaan\n\n> \n> > Peter, I think it is a problem if the B or X are dropped from the input,\n> > as that is the only way to determine whether it is a binary or hex\n> > string.\n> \n> Well, you just assume it's a binary string, because it's unclear as of yet\n> whether you're going to get to handle hex strings at all. However, I\n> changed the scanner to include a leading 'b', so now it works:\n> \n> peter=# select B'1001';\n> ?column?\n> ----------\n> X9\n> (1 row)\n> \n> peter=# select B'1001' | b'11';\n> ?column?\n> ----------\n> XC\n> (1 row)\n> \n> The output definitely ought to be in binary though (\"b1001\").\n> \n> You also might want to make the leading 'b' optional because this seems\n> confusing:\n> \n> peter=# select cast ('1001' as bit);\n> ERROR: zpbit_in: 1001 is not a valid bitstring\n> \n> > Also, on output, shouldn't we poduce B'xxxx' and X'yyyyy' to conform\n> > with the input strings?\n> \n> If you did that, then your input function has to be prepared for values\n> like \"B'1001'\". (Think copy out/copy in.) I think the above plan should\n> work okay.\n> \n> --\n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n",
"msg_date": "Tue, 31 Oct 2000 16:47:59 +0200",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: BIT/BIT VARYING status"
},
{
"msg_contents": "Can someone tell me if this patch should be applied? Seems like it was\njust for testing, right?\n\n> Tom Lane wrote:\n> > \n> > I have made a first cut at completing integration of Adriaan Joubert's\n> > BIT code into the backend. There are a couple little things left to\n> > do (for example, scalarltsel doesn't know what to do with BIT values)\n> > as well as some not-so-little things:\n> > \n> > 1. SQL92 mentions a bitwise position function, which we do not have.\n> \n> Sorry, I have been very busy, so only got down to implementing a\n> position function last night. It's a bit messy (lots of masks and\n> bit-twiddling), but I feel fairly happy now that it is doing the right\n> thing. I tested it with my own loadable types, as the integration into\n> postgres proper stumped my somewhat. The next oid up for a bit function\n> is in use already. Anyway, the patches are attached, and I'm hoping that\n> some friendly sole will integrate the new position function into\n> postgres proper.\n> \n> > 2. We don't handle <bit string> and <hex string> literals correctly;\n> > the scanner converts them into integers which seems quite at variance\n> > with the spec's semantics.\n> \n> This is still a problem that needs to be fixed. Also, it the parser did\n> not seem to be too happy about the 'position' syntax, but I may have it\n> wrong of course. I don;t know how to attach the position function to a\n> piece of syntax such as (position <substr> in <field>) either, so I'm\n> hoping that somebody can pick this up.\n> \n> Also, i have started putting together a file for regression testing. I\n> noticed that the substring syntax does not seem to work:\n> \n> SELECT SUBSTRING(b FROM 2 FOR 4)\n> FROM ZPBIT_TABLE;\n> \n> gives:\n> \n> ERROR: Function 'substr(bit, int4, int4)' does not exist\n> Unable to identify a function that satisfies the given argument\n> types\n> You may need to add explicit typecasts\n> \n> and similar for a varying bit argument.\n> \n> If somebody with better knowledge of postgres could do the integration,\n> please, I will finish off a regression test.\n> \n> Thanks!\n> \n> Adriaan\n\n> *** src/backend/utils/adt/varbit.c.old\tSun Oct 29 11:05:11 2000\n> --- src/backend/utils/adt/varbit.c\tMon Oct 30 04:58:35 2000\n> ***************\n> *** 1053,1060 ****\n> \t/* Negative shift is a shift to the left */\n> \tif (shft < 0)\n> \t\tPG_RETURN_DATUM(DirectFunctionCall2(bitshiftleft,\n> ! \t\t\t\t\t\t\t\t\t\t\tVarBitPGetDatum(arg),\n> ! \t\t\t\t\t\t\t\t\t\t\tInt32GetDatum(-shft)));\n> \n> \tresult = (VarBit *) palloc(VARSIZE(arg));\n> \tVARATT_SIZEP(result) = VARSIZE(arg);\n> --- 1053,1060 ----\n> \t/* Negative shift is a shift to the left */\n> \tif (shft < 0)\n> \t\tPG_RETURN_DATUM(DirectFunctionCall2(bitshiftleft,\n> ! \t\t\t\t\t\t VarBitPGetDatum(arg),\n> ! 
\t\t\t\t\t\t Int32GetDatum(-shft)));\n> \n> \tresult = (VarBit *) palloc(VARSIZE(arg));\n> \tVARATT_SIZEP(result) = VARSIZE(arg);\n> ***************\n> *** 1145,1148 ****\n> --- 1145,1242 ----\n> \tresult >>= VARBITPAD(arg);\n> \n> \tPG_RETURN_INT32(result);\n> + }\n> + \n> + /* Determines the position of S1 in the bitstring S2 (1-based string).\n> + * If S1 does not appear in S2 this function returns 0.\n> + * If S1 is of length 0 this function returns 1.\n> + */\n> + Datum\n> + bitposition(PG_FUNCTION_ARGS)\n> + {\n> + \tVarBit\t\t*substr = PG_GETARG_VARBIT_P(0);\n> + \tVarBit\t\t*arg = PG_GETARG_VARBIT_P(1);\n> + \tint\t\t\tsubstr_length, \n> + \t\t\t\targ_length,\n> + \t\t\t\ti,\n> + \t\t\t\tis;\n> + \tbits8\t\t*s,\t\t\t\t/* pointer into substring */\n> + \t\t\t\t*p;\t\t\t\t/* pointer into arg */\n> + \tbits8\t\tcmp,\t\t\t/* shifted substring byte to compare */ \n> + \t\t\t\tmask1, /* mask for substring byte shifted right */\n> + \t\t\t\tmask2, /* mask for substring byte shifted left */\n> + \t\t\t\tend_mask, /* pad mask for last substring byte */\n> + \t\t\t\targ_mask;\t\t/* pad mask for last argument byte */\n> + \tbool\t\tis_match;\n> + \n> + \t/* Get the substring length */\n> + \tsubstr_length = VARBITLEN(substr);\n> + \targ_length = VARBITLEN(arg);\n> + \n> + \t/* Argument has 0 length or substring longer than argument, return 0 */\n> + \tif (arg_length == 0 || substr_length > arg_length)\n> + \t\tPG_RETURN_INT32(0);\t\n> + \t\n> + \t/* 0-length means return 1 */\n> + \tif (substr_length == 0)\n> + \t\tPG_RETURN_INT32(1);\n> + \n> + \t/* Initialise the padding masks */\n> + \tend_mask = BITMASK << VARBITPAD(substr);\n> + \targ_mask = BITMASK << VARBITPAD(arg);\n> + \tfor (i = 0; i < VARBITBYTES(arg) - VARBITBYTES(substr) + 1; i++) \n> + \t{\n> + \t\tfor (is = 0; is < BITS_PER_BYTE; is++) {\n> + \t\t\tis_match = true;\n> + \t\t\tp = VARBITS(arg) + i;\n> + \t\t\tmask1 = BITMASK >> is;\n> + \t\t\tmask2 = ~mask1;\n> + \t\t\tfor (s = VARBITS(substr); \n> + \t\t\t\t is_match && s < VARBITEND(substr); s++) \n> + \t\t\t{\n> + \t\t\t\tcmp = *s >> is;\n> + \t\t\t\tif (s == VARBITEND(substr) - 1) \n> + \t\t\t\t{\n> + \t\t\t\t\tmask1 &= end_mask >> is;\n> + \t\t\t\t\tif (p == VARBITEND(arg) - 1) {\n> + \t\t\t\t\t\t/* Check that there is enough of arg left */\n> + \t\t\t\t\t\tif (mask1 & ~arg_mask) {\n> + \t\t\t\t\t\t\tis_match = false;\n> + \t\t\t\t\t\t\tbreak;\n> + \t\t\t\t\t\t}\n> + \t\t\t\t\t\tmask1 &= arg_mask;\n> + \t\t\t\t\t}\n> + \t\t\t\t}\n> + \t\t\t\tis_match = ((cmp ^ *p) & mask1) == 0;\n> + \t\t\t\tif (!is_match)\n> + \t\t\t\t\tbreak;\n> + \t\t\t\t// Move on to the next byte\n> + \t\t\t\tp++;\n> + \t\t\t\tif (p == VARBITEND(arg)) {\n> + \t\t\t\t\tmask2 = end_mask << (BITS_PER_BYTE - is);\n> + \t\t\t\t\tis_match = mask2 == 0;\n> + \t\t\t\t\telog(NOTICE,\"S. 
%d %d em=%2x sm=%2x r=%d\",\n> + \t\t\t\t\t\t i,is,end_mask,mask2,is_match);\n> + \t\t\t\t\tbreak;\n> + \t\t\t\t}\n> + \t\t\t\tcmp = *s << (BITS_PER_BYTE - is);\n> + \t\t\t\tif (s == VARBITEND(substr) - 1) \n> + \t\t\t\t{\n> + \t\t\t\t\tmask2 &= end_mask << (BITS_PER_BYTE - is);\n> + \t\t\t\t\tif (p == VARBITEND(arg) - 1) {\n> + \t\t\t\t\t\tif (mask2 & ~arg_mask) {\n> + \t\t\t\t\t\t\tis_match = false;\n> + \t\t\t\t\t\t\tbreak;\n> + \t\t\t\t\t\t}\n> + \t\t\t\t\t\tmask2 &= arg_mask;\n> + \t\t\t\t\t}\n> + \t\t\t\t}\n> + \t\t\t\tis_match = ((cmp ^ *p) & mask2) == 0;\n> + \t\t\t}\n> + \t\t\t/* Have we found a match */\n> + \t\t\tif (is_match)\n> + \t\t\t\tPG_RETURN_INT32(i*BITS_PER_BYTE + is + 1);\n> + \t\t}\n> + \t}\n> + \tPG_RETURN_INT32(0);\n> }\n\n> *** src/include/utils/varbit.h.old\tSun Oct 29 11:04:58 2000\n> --- src/include/utils/varbit.h\tSun Oct 29 11:05:58 2000\n> ***************\n> *** 87,91 ****\n> --- 87,92 ----\n> extern Datum bitoctetlength(PG_FUNCTION_ARGS);\n> extern Datum bitfromint4(PG_FUNCTION_ARGS);\n> extern Datum bittoint4(PG_FUNCTION_ARGS);\n> + extern Datum bitposition(PG_FUNCTION_ARGS);\n> \n> #endif\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Nov 2000 21:39:58 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: BIT/BIT VARYING status"
},
{
"msg_contents": "Peter,\n\n\tI've looked at the current implementation of the bit types and still\nhave some doubts concerning the following issues:\n\n1. Constants. The current behaviour just seems somewhat strange, and I\nhave no idea where to fix it.\n\ntest=# select B'1001';\n ?column? \n----------\n X9\n(1 row)\n\ntest=# select B'1001'::bit;\nERROR: Cannot cast this expression to type 'bit'\ntest=# select B'1001'::varbit;\nERROR: Cannot cast this expression to type 'varbit'\ntest=# select 'B1001'::varbit;\n ?column? \n----------\n B1001\n(1 row)\n\ntest=# select 'B1001'::bit;\n ?column? \n----------\n X9\n(1 row)\n\ntest=# select X'1001'::varbit;\nERROR: varbit_in: The bit string 4097 must start with B or X\ntest=# select 'X1001'::varbit;\n ?column? \n-------------------\n B0001000000000001\n(1 row)\n\ntest=# select 'X1001'::bit;\n ?column? \n----------\n X1001\n(1 row)\n\ntest=# select X'1001'::bit;\nERROR: zpbit_in: The bit string 4097 must start with B or X\n\nAlso, I have two output routines, that have been renames to zpbit_out\nand varbit_out. In fact, both will work just fine for bot bit and\nvarbit, but the first prints as hex and the second as a bit string.\nPrinting as hex is more compact, so good for long strings, but printing\nas a bit string is much more intuitive. One solution would be to make\nthem both print to a bit string by default and define a function to\ngenerate a hex string. Another would be to have this under control of a\nvariable. Most people who contacted me about bit strings seemed to want\nto use them for flags, so I guess the default should be to print them as\na bit string.\n\nMore for my information, if a user does not know about varbit, how does\nhe cast to bit varying? \n\n2. This is not a problem, more a question. There is no default way to\ncompare bit to varbit, as in \n\ntest=# select 'b10'::bit='b10'::varbit;\nERROR: Unable to identify an operator '=' for types 'bit' and 'varbit'\n You will have to retype this query using an explicit cast\n\nThis may be a good thing, as the comparison does depend on the lenght of\nthe bit strings.\n\n3. The ^ operator seems to attempt to coerce the arguments to float8?\n\nselect 'B110011'::bit ^ 'B011101'::bit;\nERROR: Function 'float8(bit)' does not exist\n Unable to identify a function that satisfies the given argument\ntypes\n You may need to add explicit typecasts\n\n4. This is a policy question. When I use the bit shift operator, this\nalways shifts within the current string only. So if I do\n\nselect ('B010'::bit(6) >> 2)::varbit;\n ?column? \n-----------\n B000100\n\u0003\nI get what I would expect. But if I have a bit varying(6) field (in a\ntable, this is just an example), I only get\n\nselect ('B010'::varbit >> 2)::varbit;\n ?column? \n-----------\n B000\n\nwhich I find counter-intuitive. I have thus added 'zpshiftright' and\n'varbitshiftright' functions. The second extends the bitstring to the\nright, while the first is the old bitshiftright function. I find this\nmore intuitive at least. \n\nQuestion is what a shift left function should do? Should I shorten the\nstring in the case of a shift left, to keep it symmetrical to shift\nright? This seems a pure policy decision, as there are arguments for\nboth behaviours, although I am a great fan of symmetry. Let me know and\nI can implement a separate function.\n\n\nI have made a start on a file for regression tests, which I append with\nthe diffs for the varbit files. 
Please let me know what else is needed\nand where I can help.\n\u0003\n\nThanks!\n\nAdriaan",
"msg_date": "Sun, 05 Nov 2000 20:52:25 +0200",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: BIT/BIT VARYING status"
},
{
"msg_contents": "Adriaan Joubert writes:\n\n> 1. Constants. The current behaviour just seems somewhat strange, and I\n> have no idea where to fix it.\n> \n> test=# select B'1001';\n> ?column? \n> ----------\n> X9\n> (1 row)\n\nThat's because the zpbit output function chooses to represent values that\nway. Whether or not it's good to do that is another question, but it's\nlegal as it stands. Then again, I do think that a binary output format\nwould be preferred.\n\n> test=# select B'1001'::bit;\n> ERROR: Cannot cast this expression to type 'bit'\n> test=# select B'1001'::varbit;\n> ERROR: Cannot cast this expression to type 'varbit'\n\nI notice now that casting of constants is handled separately, but it'll\nget fixed.\n\n> test=# select X'1001'::varbit;\n> ERROR: varbit_in: The bit string 4097 must start with B or X\n\nThat's because X'1001'-style constants get converted to integer currently.\n\n> Also, I have two output routines, that have been renames to zpbit_out\n> and varbit_out. In fact, both will work just fine for bot bit and\n> varbit, but the first prints as hex and the second as a bit string.\n\nThat's obviously not good.\n\n> More for my information, if a user does not know about varbit, how does\n> he cast to bit varying? \n\nCAST(value to BIT VARYING)\n\n\n> test=# select 'b10'::bit='b10'::varbit;\n> ERROR: Unable to identify an operator '=' for types 'bit' and 'varbit'\n> You will have to retype this query using an explicit cast\n\nOuch. I thought these types where binary equivalent, but evidently that\ndoesn't help here.\n\n> 3. The ^ operator seems to attempt to coerce the arguments to float8?\n\nThe ^ operator is exponentiation. We changed the xor operator to '#',\nbecause we now also have bit-wise operators on integers, and making 'int ^\nint' to do xor would have been very confusing.\n\n> 4. This is a policy question. When I use the bit shift operator, this\n> always shifts within the current string only. So if I do\n> \n> select ('B010'::bit(6) >> 2)::varbit;\n> ?column? \n> -----------\n> B000100\n> \u0003\n> I get what I would expect.\n\nReally? I would expect 'b010'::bit(6) to be B'000010', thus shifting two\nto the right gives 0.\n\n\n> I have made a start on a file for regression tests, which I append with\n> the diffs for the varbit files. Please let me know what else is needed\n> and where I can help.\n\nA few notes here: Do not use 'B10001' as bit input, use B'10001'. The\nformer is a text constant (sort of); it's only through the goodness of\nheart that the system casts anything to just about anything else if\nthere's a way to get there, but I personally hope that we can stop it from\ndoing that sometime. The latter on the other hand is guaranteed to be a\nbit string only (zpbit, to be precise).\n\nThat also means you do not have to put casts everwhere. You'd only need a\ncast to varbit, but in that case write it as CAST(B'1001' AS BIT\nVARYING(x)). Better yet, also be sure to have a few cases where bit and\nbit varying are read from a table; that way you're really sure what type\nyou're dealing with.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 6 Nov 2000 19:58:07 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: BIT/BIT VARYING status"
},
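Spelled out as a complete statement (a sketch; the exact display depends on the output-format fixes discussed above):

    SELECT CAST(B'1001' AS BIT VARYING(8));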
{
"msg_contents": "Adriaan Joubert writes:\n\n> 1. Constants. The current behaviour just seems somewhat strange, and I\n> have no idea where to fix it.\n> \n> test=# select B'1001';\n> ?column? \n> ----------\n> X9\n> (1 row)\n\nFixed. (Prints '1001'.)\n\n> test=# select B'1001'::bit;\n> ERROR: Cannot cast this expression to type 'bit'\n> test=# select B'1001'::varbit;\n> ERROR: Cannot cast this expression to type 'varbit'\n\nWorks now.\n\n> test=# select X'1001'::varbit;\n> ERROR: varbit_in: The bit string 4097 must start with B or X\n\nNot sure what we'll do with this. X'1001' is currently an integer. \nAccording to SQL it may be a bit string or a binary object, but I don't\nhave a clue how they decide it. Maybe we should look at other\nimplementations.\n\n> Also, I have two output routines, that have been renames to zpbit_out\n> and varbit_out. In fact, both will work just fine for bot bit and\n> varbit, but the first prints as hex and the second as a bit string.\n\nBoth print binary now.\n\n> Printing as hex is more compact, so good for long strings, but printing\n> as a bit string is much more intuitive. One solution would be to make\n> them both print to a bit string by default and define a function to\n> generate a hex string.\n\nSounds okay.\n\n> test=# select 'b10'::bit='b10'::varbit;\n> ERROR: Unable to identify an operator '=' for types 'bit' and 'varbit'\n> You will have to retype this query using an explicit cast\n\nWorks now.\n\n> select ('B010'::varbit >> 2)::varbit;\n> ?column? \n> -----------\n> B000\n\n> Question is what a shift left function should do?\n\nI'd say that having shift left and shift right be symmetrical (although\nthey're obviously not strict inverses) is definitely important. I could\nlive both with shortening the string on shift left or with keeping the\nstring length fixed in shift right (as above). Your call.\n\n(I haven't installed your patch for this yet. Please submit one that\nimplements whatever you think it should do completely both ways.)\n\n> I have made a start on a file for regression tests,\n\nI've put a modified version at http://www.postgresql.org/~petere/bit.sql\n(and bit.out) for you to work with.\n\nThe substring tests are commented out because they cause the backend to\ncrash. bitsubstr() needs to be modified to handle -1 as its third\nargument, meaning \"the rest of the string\".\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 18 Nov 2000 20:20:39 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: BIT/BIT VARYING status"
},
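Once bitsubstr accepts -1 as "the rest of the string", the regression file could exercise both SQL92 substring forms; the expected results below are worked out by hand:

    SELECT SUBSTRING(B'101101' FROM 2 FOR 3);  -- bits 2..4: 011
    SELECT SUBSTRING(B'101101' FROM 3);        -- bits 3..6: 1101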
{
"msg_contents": "\nAre there any open items related to the BIT type?\n\n> Tom Lane wrote:\n> > \n> > I have made a first cut at completing integration of Adriaan Joubert's\n> > BIT code into the backend. There are a couple little things left to\n> > do (for example, scalarltsel doesn't know what to do with BIT values)\n> > as well as some not-so-little things:\n> > \n> > 1. SQL92 mentions a bitwise position function, which we do not have.\n> \n> Sorry, I have been very busy, so only got down to implementing a\n> position function last night. It's a bit messy (lots of masks and\n> bit-twiddling), but I feel fairly happy now that it is doing the right\n> thing. I tested it with my own loadable types, as the integration into\n> postgres proper stumped my somewhat. The next oid up for a bit function\n> is in use already. Anyway, the patches are attached, and I'm hoping that\n> some friendly sole will integrate the new position function into\n> postgres proper.\n> \n> > 2. We don't handle <bit string> and <hex string> literals correctly;\n> > the scanner converts them into integers which seems quite at variance\n> > with the spec's semantics.\n> \n> This is still a problem that needs to be fixed. Also, it the parser did\n> not seem to be too happy about the 'position' syntax, but I may have it\n> wrong of course. I don;t know how to attach the position function to a\n> piece of syntax such as (position <substr> in <field>) either, so I'm\n> hoping that somebody can pick this up.\n> \n> Also, i have started putting together a file for regression testing. I\n> noticed that the substring syntax does not seem to work:\n> \n> SELECT SUBSTRING(b FROM 2 FOR 4)\n> FROM ZPBIT_TABLE;\n> \n> gives:\n> \n> ERROR: Function 'substr(bit, int4, int4)' does not exist\n> Unable to identify a function that satisfies the given argument\n> types\n> You may need to add explicit typecasts\n> \n> and similar for a varying bit argument.\n> \n> If somebody with better knowledge of postgres could do the integration,\n> please, I will finish off a regression test.\n> \n> Thanks!\n> \n> Adriaan\n\n> *** src/backend/utils/adt/varbit.c.old\tSun Oct 29 11:05:11 2000\n> --- src/backend/utils/adt/varbit.c\tMon Oct 30 04:58:35 2000\n> ***************\n> *** 1053,1060 ****\n> \t/* Negative shift is a shift to the left */\n> \tif (shft < 0)\n> \t\tPG_RETURN_DATUM(DirectFunctionCall2(bitshiftleft,\n> ! \t\t\t\t\t\t\t\t\t\t\tVarBitPGetDatum(arg),\n> ! \t\t\t\t\t\t\t\t\t\t\tInt32GetDatum(-shft)));\n> \n> \tresult = (VarBit *) palloc(VARSIZE(arg));\n> \tVARATT_SIZEP(result) = VARSIZE(arg);\n> --- 1053,1060 ----\n> \t/* Negative shift is a shift to the left */\n> \tif (shft < 0)\n> \t\tPG_RETURN_DATUM(DirectFunctionCall2(bitshiftleft,\n> ! \t\t\t\t\t\t VarBitPGetDatum(arg),\n> ! 
\t\t\t\t\t\t Int32GetDatum(-shft)));\n> \n> \tresult = (VarBit *) palloc(VARSIZE(arg));\n> \tVARATT_SIZEP(result) = VARSIZE(arg);\n> ***************\n> *** 1145,1148 ****\n> --- 1145,1242 ----\n> \tresult >>= VARBITPAD(arg);\n> \n> \tPG_RETURN_INT32(result);\n> + }\n> + \n> + /* Determines the position of S1 in the bitstring S2 (1-based string).\n> + * If S1 does not appear in S2 this function returns 0.\n> + * If S1 is of length 0 this function returns 1.\n> + */\n> + Datum\n> + bitposition(PG_FUNCTION_ARGS)\n> + {\n> + \tVarBit\t\t*substr = PG_GETARG_VARBIT_P(0);\n> + \tVarBit\t\t*arg = PG_GETARG_VARBIT_P(1);\n> + \tint\t\t\tsubstr_length, \n> + \t\t\t\targ_length,\n> + \t\t\t\ti,\n> + \t\t\t\tis;\n> + \tbits8\t\t*s,\t\t\t\t/* pointer into substring */\n> + \t\t\t\t*p;\t\t\t\t/* pointer into arg */\n> + \tbits8\t\tcmp,\t\t\t/* shifted substring byte to compare */ \n> + \t\t\t\tmask1, /* mask for substring byte shifted right */\n> + \t\t\t\tmask2, /* mask for substring byte shifted left */\n> + \t\t\t\tend_mask, /* pad mask for last substring byte */\n> + \t\t\t\targ_mask;\t\t/* pad mask for last argument byte */\n> + \tbool\t\tis_match;\n> + \n> + \t/* Get the substring length */\n> + \tsubstr_length = VARBITLEN(substr);\n> + \targ_length = VARBITLEN(arg);\n> + \n> + \t/* Argument has 0 length or substring longer than argument, return 0 */\n> + \tif (arg_length == 0 || substr_length > arg_length)\n> + \t\tPG_RETURN_INT32(0);\t\n> + \t\n> + \t/* 0-length means return 1 */\n> + \tif (substr_length == 0)\n> + \t\tPG_RETURN_INT32(1);\n> + \n> + \t/* Initialise the padding masks */\n> + \tend_mask = BITMASK << VARBITPAD(substr);\n> + \targ_mask = BITMASK << VARBITPAD(arg);\n> + \tfor (i = 0; i < VARBITBYTES(arg) - VARBITBYTES(substr) + 1; i++) \n> + \t{\n> + \t\tfor (is = 0; is < BITS_PER_BYTE; is++) {\n> + \t\t\tis_match = true;\n> + \t\t\tp = VARBITS(arg) + i;\n> + \t\t\tmask1 = BITMASK >> is;\n> + \t\t\tmask2 = ~mask1;\n> + \t\t\tfor (s = VARBITS(substr); \n> + \t\t\t\t is_match && s < VARBITEND(substr); s++) \n> + \t\t\t{\n> + \t\t\t\tcmp = *s >> is;\n> + \t\t\t\tif (s == VARBITEND(substr) - 1) \n> + \t\t\t\t{\n> + \t\t\t\t\tmask1 &= end_mask >> is;\n> + \t\t\t\t\tif (p == VARBITEND(arg) - 1) {\n> + \t\t\t\t\t\t/* Check that there is enough of arg left */\n> + \t\t\t\t\t\tif (mask1 & ~arg_mask) {\n> + \t\t\t\t\t\t\tis_match = false;\n> + \t\t\t\t\t\t\tbreak;\n> + \t\t\t\t\t\t}\n> + \t\t\t\t\t\tmask1 &= arg_mask;\n> + \t\t\t\t\t}\n> + \t\t\t\t}\n> + \t\t\t\tis_match = ((cmp ^ *p) & mask1) == 0;\n> + \t\t\t\tif (!is_match)\n> + \t\t\t\t\tbreak;\n> + \t\t\t\t// Move on to the next byte\n> + \t\t\t\tp++;\n> + \t\t\t\tif (p == VARBITEND(arg)) {\n> + \t\t\t\t\tmask2 = end_mask << (BITS_PER_BYTE - is);\n> + \t\t\t\t\tis_match = mask2 == 0;\n> + \t\t\t\t\telog(NOTICE,\"S. 
%d %d em=%2x sm=%2x r=%d\",\n> + \t\t\t\t\t\t i,is,end_mask,mask2,is_match);\n> + \t\t\t\t\tbreak;\n> + \t\t\t\t}\n> + \t\t\t\tcmp = *s << (BITS_PER_BYTE - is);\n> + \t\t\t\tif (s == VARBITEND(substr) - 1) \n> + \t\t\t\t{\n> + \t\t\t\t\tmask2 &= end_mask << (BITS_PER_BYTE - is);\n> + \t\t\t\t\tif (p == VARBITEND(arg) - 1) {\n> + \t\t\t\t\t\tif (mask2 & ~arg_mask) {\n> + \t\t\t\t\t\t\tis_match = false;\n> + \t\t\t\t\t\t\tbreak;\n> + \t\t\t\t\t\t}\n> + \t\t\t\t\t\tmask2 &= arg_mask;\n> + \t\t\t\t\t}\n> + \t\t\t\t}\n> + \t\t\t\tis_match = ((cmp ^ *p) & mask2) == 0;\n> + \t\t\t}\n> + \t\t\t/* Have we found a match */\n> + \t\t\tif (is_match)\n> + \t\t\t\tPG_RETURN_INT32(i*BITS_PER_BYTE + is + 1);\n> + \t\t}\n> + \t}\n> + \tPG_RETURN_INT32(0);\n> }\n\n> *** src/include/utils/varbit.h.old\tSun Oct 29 11:04:58 2000\n> --- src/include/utils/varbit.h\tSun Oct 29 11:05:58 2000\n> ***************\n> *** 87,91 ****\n> --- 87,92 ----\n> extern Datum bitoctetlength(PG_FUNCTION_ARGS);\n> extern Datum bitfromint4(PG_FUNCTION_ARGS);\n> extern Datum bittoint4(PG_FUNCTION_ARGS);\n> + extern Datum bitposition(PG_FUNCTION_ARGS);\n> \n> #endif\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 19 Jan 2001 23:43:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: BIT/BIT VARYING status"
},
{
"msg_contents": "Main open item is the handling of hex strings: they are still converted\nto integers by default. They could be BLOBs or bit strings, and the SQL\nstandard gives no hint, so it is not clear what the solution should be.\n\nThe only other complaint has been on the output of bit strings. I\nbelieve the current policy is that they are always output in binary, but\nI have had a request for output in hex. We may have to add a function\nthat allows the user to use the hex output routine (a SET variable or\nsome type of conversion function). I agree with leaving the default\nbinary as most people seem to want to use it for bit masks anyway.\n\nAdriaan\n",
"msg_date": "Sun, 21 Jan 2001 08:43:36 +0200",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BIT/BIT VARYING status"
},
{
"msg_contents": "> Main open item is the handling of hex strings: they are still converted\n> to integers by default. They could be BLOBs or bit strings, and the SQL\n> standard gives no hint, so it is not clear what the solution should be.\n\nShould I add this to the TODO list?\n\n> \n> The only other complaint has been on the output of bit strings. I\n> believe the current policy is that they are always output in binary, but\n> I have had a request for output in hex. We may have to add a function\n> that allows the user to use the hex output routine (a SET variable or\n> some type of conversion function). I agree with leaving the default\n> binary as most people seem to want to use it for bit masks anyway.\n> \n> Adriaan\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 21 Jan 2001 08:57:51 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BIT/BIT VARYING status"
}
] |
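A sketch of the behaviour the bitposition() patch above aims for, once the function is wired into the SQL-standard POSITION syntax (the parser integration was still an open item in this thread, so the syntax shown is an assumption; the expected results follow from the comments in the patch):

    SELECT POSITION(B'10' IN B'001011');   -- 3: first match starts at bit 3 (1-based)
    SELECT POSITION(B'' IN B'001011');     -- 1: a zero-length substring returns 1
    SELECT POSITION(B'111' IN B'0010');    -- 0: substring does not appear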
[
{
"msg_contents": "As of latest commit:\n\tnew-style internal functions: 1061\n\told-style internal functions: 0\n\nMan, I'm glad to be done with that...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Aug 2000 01:42:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "fmgr rewrite milestone"
}
] |
[
{
"msg_contents": "There's several problems here.\n\n1. JDBC doesn't support more than one transaction per connection - the\nmethods for controlling them are in Connection.\n\n2. PostgreSQL doesn't (yet) support nested transactions (unless someones\ndone that without me noticing). This is a problem with some of the metadata\nmethods, as they issue queries, and if they fail then the user's transaction\ngets rolled back without any apparent warning.\n\n3. The proper way of speeding up things like inserts (where the SQL is\nvirtually identical) is to use PreparedStatement. However we currently\nemulate this behavour in Java so when we can prepare statements in the\nbackend, there will be a huge performance boost here.\n\n4. There may be some SQL problem that may be slowing things down. Double\ncheck that nothing simple is hampering things...\n\nPeter\n\n--\nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council\n\n\n-----Original Message-----\nFrom: Rini Dutta [mailto:[email protected]]\nSent: Friday, August 18, 2000 9:56 PM\nTo: [email protected]\nSubject: [HACKERS] multiple transactions\n\n\nHi,\n\nI'm using the postgresql 7.0.2., the JDBC interface. I\nneed to optimise the database storage speeds in my\napplication. In order to do this I considered having\nmore than one connection so that I can have separate\ntransactions for performing a group of inserts into a\nspecific table - 1 transaction/connection for one\ntable. But this seems to take the same time or even a\nlittle longer than having the transactions occur\nsequentially, contrary to my expectation especially\nconsidering that these are inserts into separate\ntables. \nCould you shed some light on this, and what I need to\ndo to make inserts using JDBC faster ?\n\nThanks,\nRini\n\n__________________________________________________\nDo You Yahoo!?\nSend instant messages & get email alerts with Yahoo! Messenger.\nhttp://im.yahoo.com/\n",
"msg_date": "Mon, 21 Aug 2000 07:34:42 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: multiple transactions"
}
] |
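The transaction grouping discussed above amounts to the following at the SQL level: many inserts share one commit instead of paying for a commit per row. A minimal sketch; the table and values are placeholders:

    BEGIN;
    INSERT INTO mytable VALUES (1, 'first row');
    INSERT INTO mytable VALUES (2, 'second row');
    -- ... many more inserts ...
    COMMIT;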
[
{
"msg_contents": "> > * I changed the meaning of \"-l\" from \"Listen to only SSL\" to\n> > \"Disable SSL\". It seems safe to me to do this since the \n> > previous function of \"-l\" never worked anyway.\n> > Using this switch, you can start the postmaster without\n> > having the secret key and the certificate file in place.\n> \n> I'd rather see SSL off by default and `-l' enabling it, but that's a\n> trivial change if we agree on it.\nNo problem with me :-)\nIt should just be to change the default of RequireSSL to false, and then set\nit to \"true\" when -l is specified.\n\n\n> > Right now, the only way to set \"requiressl\" for psql is to use\n> > an environment variable. I'd like it to be possible to do this \n> > using the commandline for example, probably using a \"psql \n> variable\". \n> \n> We need to think in terms of all client applications though. \n> Ideally we'd\n> use some sort of option letter, but we'd never find one that's\n> consistently available. What do people think about optionally \n> making the\n> host paramater URI style, e.g. \"pgsql://localhost\" or\n> \"pgsql-ssl://localhost\" or even \n> \"pgsql://user:[email protected]:6543\". A\n> bare host defaults to \"pgsql://name:5432\". Hmm, I think I \n> would like that\n> in terms of extensibility. Doesn't JDBC work like that already?\n\nI think I wasn't clear enough. :-) It can *already* be specified by any\nclient application as long as you use PQconnectdb(). For example:\nPQconnectdb(\"dbname='foo' host='localhost' requiressl=1\")\n\n(I just put it into the \"PQconninfoOptions\" array.)\n(Now that I think of it, I never really *tested* that part, though :-) But I\nthink it shuold work. [testing]. Yes, it works.)\n\n\n> > But that would require changing psql to use PQconnectDb() instead \n> > of PQsetdbLogin(), so I figured I should check first :-) [BTW, \n> > PQconnectDb() is the recommended way of doing it nowadays, right?]\n> \n> In theory yes, and this might well be a good reason to start doing so,\n> because you won't get away with changing the prototype of \n> PQsetdbLogin().\nExactly my thougts :-)\n\n\n> > Documentation is coming up,\n> \n> Nice...\n> \n> Any thoughts about client (and server) authentication via SSL?\nYup, I've been thinking about it. :-)\nI was thinking of adding a authentication type \"sslcert\" (in addition to the\nident, trust, password etc that exist today) only valid for \"sslhost\" lines.\nThen a map somewhere similar to the \"ident-map\" in concept mapping a SSL\ncertificate subject name to a postgres username. (Or maybe that should be\ndone similar to pg_shadow, modifyable from inside the db?)\n\n\n//Magnus\n",
"msg_date": "Mon, 21 Aug 2000 09:55:50 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: SSL Patch - again :-)"
}
] |
[
{
"msg_contents": "> > can't you just do a link test that checks that AF_UNIX is defined?\n> \n> That doesn't say anything about whether the Unix sockets \n> really work, as\n> systems where they don't work define this occasionally.\n\nMy idea was to create a test that will try to compile, link and run a small\nprogram. The program should contain \"int s = socket(AF_UNIX, ...) and a test\nfor succesful creation of 's'. Only when this program will compile and run\nOK then it should be defined HAVE_AF_UNIX_SOCKET.\n\n\t\tDan\n",
"msg_date": "Mon, 21 Aug 2000 11:11:58 +0200",
"msg_from": "=?iso-8859-1?Q?Hor=E1k_Daniel?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: autoconf check for AF_UNIX sockets"
}
] |
[
{
"msg_contents": "\n> foo=# \\d pg_attribute_relid_attnam_index\n> Index \"pg_attribute_relid_attnam_index\"\n> Attribute | Type\n> -----------+------\n> attrelid | oid\n> attname | name\n> unique btree\n> \n> foo=# \\d pg_attribute_relid_attnum_index\n> Index \"pg_attribute_relid_attnum_index\"\n> Attribute | Type\n> -----------+----------\n> attrelid | oid\n> attnum | smallint\n> unique btree\n> \n> Since table OIDs keep increasing, this formulation ensures that new\n> entries will always sort to the end of the index, and so space freed\n> internally in the indexes can never get re-used. Swapping the column\n> order may eliminate that problem --- but I'm not sure what if any\n> speed penalty would be incurred. Thoughts anyone?\n\nIsn't pg_attribute often accessed with a \"where oid=xxx\" restriction\nto get all cols for a given table ?\n\nAndreas\n",
"msg_date": "Mon, 21 Aug 2000 11:28:03 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: pg_attribute growing and growing and growing "
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n>> Since table OIDs keep increasing, this formulation ensures that new\n>> entries will always sort to the end of the index, and so space freed\n>> internally in the indexes can never get re-used. Swapping the column\n>> order may eliminate that problem --- but I'm not sure what if any\n>> speed penalty would be incurred. Thoughts anyone?\n\n> Isn't pg_attribute often accessed with a \"where oid=xxx\" restriction\n> to get all cols for a given table ?\n\nHmm, good point. I don't think the system itself does that --- AFAIR\nit just looks up specific rows by relid+name or relid+num --- but making\nthis change would make the indexes useless for applications that make\nthat kind of query.\n\nOh well, back to the drawing board...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Aug 2000 10:23:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: pg_attribute growing and growing and growing "
}
] |
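The access pattern Andreas describes -- all columns for one table -- looks like this against the catalog (attrelid, rather than oid, is the restricted column; attnum > 0 skips system columns). Swapping the index column order would leave this lookup without a usable index:

    SELECT attname, attnum
      FROM pg_attribute
     WHERE attrelid = 1259      -- OID of some table, here pg_class
     ORDER BY attnum;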
[
{
"msg_contents": "Mark Hollomon <[email protected]> writes:\n> Attached is a patch to allow\n> functinal indecies to use functions that\n> are for 'binary-compatible' types.\n\nUm, thanks for working on this, but I already fixed that...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Aug 2000 11:19:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] functional index arg matching patch "
},
{
"msg_contents": "Attached is a patch to allow\nfunctinal indecies to use functions that\nare for 'binary-compatible' types.\n\neg\n\ncreate function foobar(text) returns text as ....\n\ncreate table vc ( a varchar );\n\ncreate index on vc ( foobar(a) );\n\nshould now work.\n\n-- \nMark Hollomon\[email protected]",
"msg_date": "Mon, 21 Aug 2000 11:47:54 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": false,
"msg_subject": "functional index arg matching patch"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Mark Hollomon <[email protected]> writes:\n> > Attached is a patch to allow\n> > functinal indecies to use functions that\n> > are for 'binary-compatible' types.\n> \n> Um, thanks for working on this, but I already fixed that...\n> \n> regards, tom lane\n\nWhen? A message you sent on 8-11 indicated it wasn't.\n\nAnd I didn't see any code in CVS as of ~ 8-17\nwhen I started. But I may have missed it.\n\n\nI'll be interested in seeing how you\ndid it.\n\n-- \n\nMark Hollomon\[email protected]\n",
"msg_date": "Mon, 21 Aug 2000 11:59:42 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: functional index arg matching patch"
},
{
"msg_contents": "\"Mark Hollomon\" <[email protected]> writes:\n> Tom Lane wrote:\n>> Um, thanks for working on this, but I already fixed that...\n\n> When? A message you sent on 8-11 indicated it wasn't.\n\n(Checks CVS) ... yesterday, actually.\n\n> I'll be interested in seeing how you did it.\n\nI just called the ambiguous-function-name-resolution code in\nparse_func.c and then checked to make sure it hadn't selected\nsomething the executor wasn't prepared to cope with --- ie,\nfunctions requiring runtime conversions of input data types.\n\nIt looked like you had copied out a bunch of the parse_func.c code,\nwhich is OK in the short run but the duplicated code might be\na headache to keep in sync later on.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Aug 2000 23:45:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: functional index arg matching patch "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Mark Hollomon\" <[email protected]> writes:\n> > Tom Lane wrote:\n> >> Um, thanks for working on this, but I already fixed that...\n> \n> > When? A message you sent on 8-11 indicated it wasn't.\n> \n> (Checks CVS) ... yesterday, actually.\n\nYea, I saw it when a refreshed last night.\n\n> \n> > I'll be interested in seeing how you did it.\n> \n> I just called the ambiguous-function-name-resolution code in\n> parse_func.c and then checked to make sure it hadn't selected\n> something the executor wasn't prepared to cope with --- ie,\n> functions requiring runtime conversions of input data types.\n> \n> It looked like you had copied out a bunch of the parse_func.c code,\n> which is OK in the short run but the duplicated code might be\n> a headache to keep in sync later on.\n> \n\nI had thought about doing it the way you did, but didn't know the\nconsequences of some of the other coersions that parse_func.c\ntried to do. My guess that it wasn't harmless was correct judging from\nyour code.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Tue, 22 Aug 2000 08:43:03 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: functional index arg matching patch"
}
] |
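A self-contained version of Mark's example, with upper() standing in for the elided foobar() body and an explicit index name added (a sketch of the case the fix enables):

    CREATE TABLE vc (a varchar);
    -- upper() is declared on text, not varchar; before the fix this
    -- failed even though the two types are binary-compatible
    CREATE INDEX vc_upper_idx ON vc (upper(a));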
[
{
"msg_contents": "Is there anyway to do a query from one database that accesses tables in another database, like synonyms in oracle.\n\nThanks,\nAdam\n\n\n\n\n\n\n\n\n\nIs there anyway to do a query from one database \nthat accesses tables in another database, like synonyms in oracle.\n \nThanks,\nAdam",
"msg_date": "Mon, 21 Aug 2000 16:30:35 -0400",
"msg_from": "\"Adam Siegel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Synonyms"
}
] |
[
{
"msg_contents": "> Well, this is how it is supposed to work. \"select for update\" only\n> works within a transaction and holds the lock until the transaction\n> is complete.\n>\n> What exactly is it that you're trying to do?\n\nLet us suppose that we have 2 transactions, and attempt to block the\nsame row in the two transactions. One of them will wait until in the other\nit is commited or rollbacked.\n\nOracle has \"select for update nowait\", it does that instead of waiting\nthe conclusion of the other transaction, gives back an error to us saying\nthat the row already has been blocked.\n\nI am looking for something similar to this, or in its defect, knowledge\nif a row has been blocked, to avoid this waits, or information to make the\nparameter 'nowait' to realize this operation\n\nJuan Carlos Perez Vazquez\[email protected]\n\n\n______________________________________________\nFREE Personalized Email at Mail.com\nSign up at http://www.mail.com/?sr=signup\n\n",
"msg_date": "Mon, 21 Aug 2000 16:42:47 -0400 (EDT)",
"msg_from": "Juan Carlos Perez Vazquez <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Row Level Locking Problem"
}
] |
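The waiting behaviour in question, as a two-session sketch (the table and column names are placeholders):

    -- session 1
    BEGIN;
    SELECT * FROM accounts WHERE id = 1 FOR UPDATE;

    -- session 2: this statement blocks until session 1 commits or
    -- rolls back; Oracle's NOWAIT would return an error here instead
    BEGIN;
    SELECT * FROM accounts WHERE id = 1 FOR UPDATE;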
[
{
"msg_contents": "How nasty would dropping columns be?\n\nI've just now started going through the source and am trying to find where \nit would fit in... if it could be done without a nightmare.\n\nAny pointers?\n-\n- Thomas Swan\n- Graduate Student - Computer Science\n- The University of Mississippi\n-\n- \"People can be categorized into two fundamental\n- groups, those that divide people into two groups\n- and those that don't.\"\nHow nasty would dropping columns be?\n\nI've just now started going through the source and am trying to find\nwhere it would fit in... if it could be done without a nightmare.\n\nAny pointers?\n\n\n- \n- Thomas Swan\n \n- Graduate Student - Computer Science\n- The University of Mississippi\n- \n- \"People can be categorized into two fundamental \n- groups, those that divide people into two groups \n- and those that don't.\"",
"msg_date": "Mon, 21 Aug 2000 18:32:13 -0500",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dropping Columns"
},
{
"msg_contents": "On Mon, Aug 21, 2000 at 06:32:13PM -0500, Thomas Swan wrote:\n> \n> How nasty would dropping columns be?\n> \n> I've just now started going through the source and am trying to find where \n> it would fit in... if it could be done without a nightmare.\n> \n> Any pointers?\n\nLong discussion in February, and again in June in HACKERS. (Hmm, the\nnew User's Lounge seems to have gone in, but now we've no links to\nmail archives. Ah, here's a bookmark I can use to get in.)\n\nHere's one pointer:\n\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/2000-06/msg00324.html\n\nThe search feature on the front page will search the mailing lists for\nyou, it's just a bit slow.\n\nOh, look in <your pgsql source tree>/doc/TODO.detail/drop\n\nLooks like the thread from HACKERS is in there.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Tue, 22 Aug 2000 12:01:07 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dropping Columns"
},
{
"msg_contents": "On Tue, 22 Aug 2000, Ross J. Reedstrom wrote:\n\n> On Mon, Aug 21, 2000 at 06:32:13PM -0500, Thomas Swan wrote:\n> > \n> > How nasty would dropping columns be?\n> > \n> > I've just now started going through the source and am trying to find where \n> > it would fit in... if it could be done without a nightmare.\n> > \n> > Any pointers?\n> \n> Long discussion in February, and again in June in HACKERS. (Hmm, the\n> new User's Lounge seems to have gone in, but now we've no links to\n> mail archives. Ah, here's a bookmark I can use to get in.)\n> \n> Here's one pointer:\n> \n> http://www.postgresql.org/mhonarc/pgsql-hackers/2000-06/msg00324.html\n\nFor now you can get there from the general info page in the user's \nlounge. I'm working up the search page now.\n\n> \n> The search feature on the front page will search the mailing lists for\n> you, it's just a bit slow.\n> \n> Oh, look in <your pgsql source tree>/doc/TODO.detail/drop\n> \n> Looks like the thread from HACKERS is in there.\n> \n> Ross\n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 22 Aug 2000 13:22:39 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dropping Columns"
},
{
"msg_contents": "-----Original Message-----\nFrom: Thomas Swan\n\n> How nasty would dropping columns be?\n\nI have an solution which uses 2(logical/physical) attribute numbers.\nHowever I'm not satisfied with it.\n\nIt isn't so clean as I expected.\nIt breaks backward compatibility ... etc.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Wed, 23 Aug 2000 16:43:33 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Dropping Columns"
}
] |
[
{
"msg_contents": "\n\n---------- Forwarded message ----------\nDate: Mon, 21 Aug 2000 18:33:39 +0300 (EEST)\nFrom: sergiy grigoriev <[email protected]>\nTo: [email protected]\nSubject: postgresql-java (fwd)\n\n\n\n\n\t\tHi !\n\tWhile developing a Java GUI front-end to Postgresql I've faced the\nnext problem:\n\n\tI want my program to \n 1. list all the available tables in the database\t\t\t\t\t\n 2. Show the structure of the chosen table.\n \n 1.The first task is OK,it can easily be done by inspecting the PG_TABLES\n 2. Please tell me how can I determine the structure of the table\n (field names and types) using Postgres JDBC Driver in order to\ncomplete the second task.\n\n\tThank you in advance for your reply.\n \n\t\n\n\n\n",
"msg_date": "Tue, 22 Aug 2000 07:03:58 +0300 (EEST)",
"msg_from": "sergiy grigoriev <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql-java (fwd)"
},
{
"msg_contents": "> 1. list all the available tables in the database\n> 2. Show the structure of the chosen table.\n\nYou might want to inspect the code for psql to see how this is done.\n\"\\d\" and \"\\d tablename\" seem to do what you want.\n\n - Thomas\n",
"msg_date": "Tue, 22 Aug 2000 06:44:10 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql-java (fwd)"
},
{
"msg_contents": "sergiy grigoriev wrote:\n> \n> ---------- Forwarded message ----------\n> Date: Mon, 21 Aug 2000 18:33:39 +0300 (EEST)\n> From: sergiy grigoriev <[email protected]>\n> To: [email protected]\n> Subject: postgresql-java (fwd)\n> \n> Hi !\n> While developing a Java GUI front-end to Postgresql I've faced the\n> next problem:\n> \n> I want my program to\n> 1. list all the available tables in the database\n> 2. Show the structure of the chosen table.\n> \n> 1.The first task is OK,it can easily be done by inspecting the PG_TABLES\n> 2. Please tell me how can I determine the structure of the table\n> (field names and types) using Postgres JDBC Driver in order to\n> complete the second task.\n> \n> Thank you in advance for your reply.\n> \n> \nYou want to use the JDBC DatabaseMetaData class. In contains the the\nmethods getTables() and getColumns(). \nRefer to the JDBC API at\nhttp://java.sun.com/products//jdk/1.2/docs/guide/jdbc/index.html\n-- \nDave Smith\nCandata Systems Ltd.\n(416) 493-9020\[email protected]\n",
"msg_date": "Tue, 22 Aug 2000 08:25:03 -0400",
"msg_from": "Dave Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql-java (fwd)"
}
] |
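The catalog queries behind psql's \d (and behind the JDBC metadata methods) look roughly like this; a sketch rather than the driver's exact SQL, with 'mytable' as a placeholder:

    -- list user tables
    SELECT relname FROM pg_class
     WHERE relkind = 'r' AND relname !~ '^pg_';

    -- field names and types for one table
    SELECT a.attname, t.typname
      FROM pg_class c, pg_attribute a, pg_type t
     WHERE c.relname = 'mytable'
       AND a.attrelid = c.oid AND a.attnum > 0
       AND a.atttypid = t.oid
     ORDER BY a.attnum;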
[
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>Actually, the one that gets me is those that refer to it as Postgres\n>... postgres was a project out of Berkeley way back in the 80's, early\n>90's ... hell, it was based on a PostQuel language ... this ain't\n>postgres, its only based on it :(\n\nYes, and besides, if you call it Postgres, you are certain to offend the \nstruggling Postgres community. Those PostQuel language zealots have enough\nproblems keeping their technology alive as it is without people going around\ncreating gratuitous brand confusion.\n\n\t-Michael Robinson\n\n",
"msg_date": "Tue, 22 Aug 2000 21:33:50 +0800 (+0800)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How Do You Pronounce \"PostgreSQL\"?"
}
] |
[
{
"msg_contents": "> > Also, I propose to consolidate and eliminate README.fsync, which\n> > duplicates (or will duplicate) info available in the Admin Guide. The\n> > fact that it hasn't been touched since 1996, and still refers to\n> > Postgres'95, is a clue that some changes are in order.\n> A coupla weeks too late... :-)\n\nGreat! I was looking on my desktop system, which I haven't been keeping\nup to date since my development is happening on my laptop.\n\nDid you have to move info over to sgml, or was it already duplicated\nthere?\n\n - Thomas\n",
"msg_date": "Tue, 22 Aug 2000 14:13:37 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LIKE gripes (and charset docs)"
}
] |
[
{
"msg_contents": "A thanks to everyone on this list and especially; Jeffery Rhines, Chris\nKnight, Chris Bitmead, and Sevo Stille.\n\nThe solution turned out to be very simple. After catching a SCSI BUS\nspeed mismatch problem which caused a NT Backup 'Restore' failure I\ndiscovered that the actual data was in .mdb files! Copied the files to a\nsystem running MS Access (Office 97) and was able to export them to a\ndelimited format which went into PostgreSQL with very few problems.\nMostly there were split lines which the \\copy command didn't like. Hand\ncorrected them.\n\nI was able to get the table format by using MS Access. Only question left\nis what is the corresponding field type in PostgreSQL for a memo field in\nSQL Server/Access (varchar(nnnn))?\n\nAgain thanks for all the help,\nRod\n--\nRoderick A. Anderson\[email protected] Altoplanos Information Systems, Inc.\nVoice: 208.765.6149 212 S. 11th Street, Suite 5\nFAX: 208.664.5299 Coeur d'Alene, ID 83814\n\n",
"msg_date": "Tue, 22 Aug 2000 09:37:42 -0700 (PDT)",
"msg_from": "\"Roderick A. Anderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Solved] SQL Server to PostgreSQL"
},
{
"msg_contents": "Le 22.08.00 a 09:37, \"Roderick A. Anderson\" m'ecrivait :\n\n)I was able to get the table format by using MS Access. Only question left\n)is what is the corresponding field type in PostgreSQL for a memo field in\n)SQL Server/Access (varchar(nnnn))?\n\n'text' type perhaps ?\n\nLionel\n\n",
"msg_date": "Tue, 22 Aug 2000 18:50:29 +0200 (CEST)",
"msg_from": "Tressens Lionel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL"
},
{
"msg_contents": "I hate it when I do this. See an answer I want and run with it rather\nthan find the real answer.\n\nTurned out the data files (.mdb) _didn't_ belong to the database. They\nwere a piece of the database that was used for a report.\n\nBack to the old grind wheel.\n\n\nRod\n--\nRoderick A. Anderson\[email protected] Altoplanos Information Systems, Inc.\nVoice: 208.765.6149 212 S. 11th Street, Suite 5\nFAX: 208.664.5299 Coeur d'Alene, ID 83814\n\n",
"msg_date": "Tue, 22 Aug 2000 10:19:17 -0700 (PDT)",
"msg_from": "\"Roderick A. Anderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "I lied! [Solved] SQL Server to PostgreSQL"
},
{
"msg_contents": "Tressens Lionel <[email protected]> writes:\n> Le 22.08.00 a 09:37, \"Roderick A. Anderson\" m'ecrivait :\n> )I was able to get the table format by using MS Access. Only question left\n> )is what is the corresponding field type in PostgreSQL for a memo field in\n> )SQL Server/Access (varchar(nnnn))?\n\n> 'text' type perhaps ?\n\nUh ... what's wrong with varchar(n) ?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Aug 2000 13:52:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL "
},
{
"msg_contents": "On Tue, 22 Aug 2000, Tom Lane wrote:\n\n> Tressens Lionel <[email protected]> writes:\n> > Le 22.08.00 a 09:37, \"Roderick A. Anderson\" m'ecrivait :\n> > )I was able to get the table format by using MS Access. Only question left\n> > )is what is the corresponding field type in PostgreSQL for a memo field in\n> > )SQL Server/Access (varchar(nnnn))?\n> \n> > 'text' type perhaps ?\n> \n> Uh ... what's wrong with varchar(n) ?\n\nHow big can our n be for varchar? By looking at his description I'm\nthinking SQL Server allows a large n. \n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 22 Aug 2000 15:02:50 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL "
},
{
"msg_contents": "I've wondered that myself, actually. What are the benefits and\ndrawbacks to going with one over the other, besides the obvious 255-char\nfield length limit for varchar? The reason to stay away from \"memo\"\nfields in other serious RDBMSs are typically more difficult maintenance,\nsignificantly lower performance, and requiring special function calls to\nget the data out. Do any of those apply to PG?\n\nJeff\n\nTom Lane wrote:\n> \n> Tressens Lionel <[email protected]> writes:\n> > Le 22.08.00 a 09:37, \"Roderick A. Anderson\" m'ecrivait :\n> > )I was able to get the table format by using MS Access. Only question left\n> > )is what is the corresponding field type in PostgreSQL for a memo field in\n> > )SQL Server/Access (varchar(nnnn))?\n> \n> > 'text' type perhaps ?\n> \n> Uh ... what's wrong with varchar(n) ?\n> \n> regards, tom lane\n",
"msg_date": "Tue, 22 Aug 2000 14:06:09 -0500",
"msg_from": "\"Jeffrey A. Rhines\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL"
},
{
"msg_contents": "\"Jeffrey A. Rhines\" <[email protected]> writes:\n>> Uh ... what's wrong with varchar(n) ?\n>\n> I've wondered that myself, actually. What are the benefits and\n> drawbacks to going with one over the other, besides the obvious 255-char\n> field length limit for varchar?\n\nAFAIK there has *never* been a 255-char limit on char or varchar in\npgsql ... you must be thinking of Some Other DBMS.\n\nThe limit for these datatypes in 7.0 and before is BLCKSZ less some\noverhead --- ~8000 bytes in a default setup. Beginning in 7.1 it's\nan essentially arbitrary number. I set it at 10Mb in current sources,\nbut there's no strong reason for that number over any other. In theory\nit could be up to 1Gb, but as Jan Wieck points out in a nearby thread,\nyou probably wouldn't like the performance of shoving gigabyte-sized\ntext values around. We need to think about offering API functions that\nwill allow reading and writing huge field values in bite-sized chunks.\n\nThere's no essential performance difference between char(n), varchar(n),\nand text in Postgres, given the same-sized data value. char(n)\ntruncates or blank-pads to exactly n characters; varchar(n) truncates\nif more than n characters; text never truncates nor pads. Beyond that\nthey are completely identical in storage requirements. Pick one based\non the semantics you want for your application.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Aug 2000 23:11:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL "
},
{
"msg_contents": "I think the ODBC spec limits varchar to 255 bytes.\nSome ODBC drivers enforce that limit.\n\nTom Lane wrote:\n\n> \"Jeffrey A. Rhines\" <[email protected]> writes:\n> >> Uh ... what's wrong with varchar(n) ?\n> >\n> > I've wondered that myself, actually. What are the benefits and\n> > drawbacks to going with one over the other, besides the obvious 255-char\n> > field length limit for varchar?\n>\n> AFAIK there has *never* been a 255-char limit on char or varchar in\n> pgsql ... you must be thinking of Some Other DBMS.\n>\n> [snip]\n> regards, tom lane\n\n",
"msg_date": "Tue, 22 Aug 2000 22:06:26 -0700",
"msg_from": "Craig Johannsen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL"
},
{
"msg_contents": "*** Tom Lane <[email protected]> [Tuesday, 22.August.2000, 23:11 -0400]:\n> There's no essential performance difference between char(n), varchar(n),\n> and text in Postgres, given the same-sized data value. char(n)\n> truncates or blank-pads to exactly n characters; varchar(n) truncates\n> if more than n characters; text never truncates nor pads. Beyond that\n> they are completely identical in storage requirements. \n[.rs.]\n\nDoes varchar(188) takes 188 bytes (+ bytes for length storage) every\ntime, no matter if it contains 'my text' or 'my long 188 char text.....'\n?\n\n\n-- \nradoslaw.stachowiak.........................................http://alter.pl/\n",
"msg_date": "Fri, 1 Sep 2000 20:12:08 +0200",
"msg_from": "Radoslaw Stachowiak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL"
},
{
"msg_contents": ">>>>> \"Radoslaw\" == Radoslaw Stachowiak <[email protected]> writes:\nRadoslaw> Does varchar(188) takes 188 bytes (+ bytes for length\nRadoslaw> storage) every time, no matter if it contains 'my text' or\nRadoslaw> 'my long 188 char text.....' ?\n\nThe way I understand it varchar(n) is variable-length, while char(n)\nis fixed-lenght. Thus the behaviour you describe above is that of\nchar(n).\n\nMartin\n\n-- \nGPG public key: http://home1.stofanet.dk/factotum/gpgkey.txt\n",
"msg_date": "Sat, 02 Sep 2000 21:05:45 GMT",
"msg_from": "Martin Christensen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL"
},
{
"msg_contents": "Martin Christensen wrote:\n> >>>>> \"Radoslaw\" == Radoslaw Stachowiak <[email protected]> writes:\n> Radoslaw> Does varchar(188) takes 188 bytes (+ bytes for length\n> Radoslaw> storage) every time, no matter if it contains 'my text' or\n> Radoslaw> 'my long 188 char text.....' ?\n>\n> The way I understand it varchar(n) is variable-length, while char(n)\n> is fixed-lenght. Thus the behaviour you describe above is that of\n> char(n).\n\n Right for any pre-7.1 version.\n\n From 7.1 on the system will try to compress all types\n internally stored as variable length (char(), varchar(), text\n and some more). So the real amount of bytes for a char(188)\n will be \"at maximum 192 - probably less\".\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Sat, 2 Sep 2000 18:33:48 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> From 7.1 on the system will try to compress all types\n> internally stored as variable length (char(), varchar(), text\n> and some more). So the real amount of bytes for a char(188)\n> will be \"at maximum 192 - probably less\".\n\nDon't variable-length records incur a performance overhead? In this case,\nought I be able to specify the length for a record if I know ahead of time\nit will be the same in every case? :o\n\nIan Turner\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.1 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE5sabgfn9ub9ZE1xoRAhayAKCwMjh/5tYlg8zZiAimJlgFSfCLsQCghBce\nGxx6X8sSwIACIHvdbxBsgGQ=\n=bogc\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Sat, 2 Sep 2000 18:18:22 -0700 (PDT)",
"msg_from": "Ian Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL"
},
{
"msg_contents": "Ian Turner <[email protected]> writes:\n> Don't variable-length records incur a performance overhead?\n\nOnly to the extent that the system can't cache offset information for\nlater columns in that table. While someone evidently once thought that\nwas worthwhile, I've never seen the column-access code show up as a\nparticularly hot spot in any profile I've run. I doubt you could\nactually measure any difference, let alone show it to be important\nenough to be worth worrying about.\n\nIn any case, char(n) will still do what you want for reasonable-size\nrecords. The TOAST code only kicks in when the total tuple size exceeds\nBLCKSZ/4 ... and at that point, compression is a good idea in any case.\n\nNow that you mention it, though, doesn't TOAST break heapam's assumption\nthat char(n) is fixed length? Seems like we'd better either remove that\nassumption or mark char(n) nontoastable. Any opinions which is better?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Sep 2000 01:07:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL "
},
{
"msg_contents": "Tom Lane wrote:\n> Now that you mention it, though, doesn't TOAST break heapam's assumption\n> that char(n) is fixed length? Seems like we'd better either remove that\n> assumption or mark char(n) nontoastable. Any opinions which is better?\n\n Is the saved overhead from assuming char(n) is fixed really\n that big that it's worth NOT to gain the TOAST advantages?\n After the GB benchmarks we know that we have some spare\n performance to waste for such things :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Sun, 3 Sep 2000 04:03:31 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL"
},
{
"msg_contents": "Jan Wieck <[email protected]> writes:\n> Tom Lane wrote:\n>> Now that you mention it, though, doesn't TOAST break heapam's assumption\n>> that char(n) is fixed length? Seems like we'd better either remove that\n>> assumption or mark char(n) nontoastable. Any opinions which is better?\n\n> Is the saved overhead from assuming char(n) is fixed really\n> that big that it's worth NOT to gain the TOAST advantages?\n\nNo, I don't think so. Instead of pulling out the code entirely,\nhowever, we could extend the VARLENA_FIXED_SIZE macro to also check\nwhether attstorage = 'p' before reporting that a char(n) field is\nfixed-size. Then someone who's really intent on keeping the old\nbehavior could hack the attribute entry to make it so.\n\nI seem to recall that your original idea for TOAST included an ALTER\ncommand to allow adjustment of attstorage settings, but that didn't\nget done did it? Seems like it would be risky to change the setting\nexcept on an empty table.\n\nNot sure if any of this is worth keeping, or if we should just simplify\nthe code in heaptuple.c to get rid of the notion of \"fixed size\"\nvarlena attributes. It's certainly not going to be a mainstream case\nanymore, so I question whether the check has any hope of saving more\ncycles than it costs. Yet it seems a shame to wipe out this hack\nentirely...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Sep 2000 20:37:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Viability of VARLENA_FIXED_SIZE()"
},
{
"msg_contents": "> Not sure if any of this is worth keeping, or if we should just simplify\n> the code in heaptuple.c to get rid of the notion of \"fixed size\"\n> varlena attributes. It's certainly not going to be a mainstream case\n> anymore, so I question whether the check has any hope of saving more\n> cycles than it costs. Yet it seems a shame to wipe out this hack\n> entirely...\n\nNot sure if this is relevant (but when does that stop me ;):\n\nThe only truly \"fixed length\" string from a storage standpoint is for\nsingle-byte encodings (and Unicode, I suppose). Eventually, we will need\nthe notion of both \"octet length\" *and* \"character length\" in our\nbackend code, and for non-ASCII encodings nothing will be of fixed octet\nlength anyway.\n\n - Thomas\n",
"msg_date": "Mon, 04 Sep 2000 17:47:13 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Viability of VARLENA_FIXED_SIZE()"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Ian Turner <[email protected]> writes:\n> > Don't variable-length records incur a performance overhead?\n> \n> Only to the extent that the system can't cache offset information for\n> later columns in that table. While someone evidently once thought that\n> was worthwhile, I've never seen the column-access code show up as a\n> particularly hot spot in any profile I've run. I doubt you could\n> actually measure any difference, let alone show it to be important\n> enough to be worth worrying about.\n\nIt clearly is a hot-spot. That monster macro, fastgetattr(), in\nheapam.h is in there for a reason. It accounts for about 5% for straight\nsequential scan case, last I heard from someone who ran a test.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 14 Oct 2000 23:28:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL"
},
{
"msg_contents": "> Ian Turner <[email protected]> writes:\n> > Don't variable-length records incur a performance overhead?\n> \n> Only to the extent that the system can't cache offset information for\n> later columns in that table. While someone evidently once thought that\n> was worthwhile, I've never seen the column-access code show up as a\n> particularly hot spot in any profile I've run. I doubt you could\n> actually measure any difference, let alone show it to be important\n> enough to be worth worrying about.\n> \n> In any case, char(n) will still do what you want for reasonable-size\n> records. The TOAST code only kicks in when the total tuple size exceeds\n> BLCKSZ/4 ... and at that point, compression is a good idea in any case.\n\nMy logic is that I use char() when I want the length to be fixed, like\n2-letter state codes, and varchar() for others where I just want a\nmaximum allowed, like last name. I use text for arbitrary length stuff.\nTom is right that though there is a small performance difference, it is\nbetter just to use the right type.\n\n> \n> Now that you mention it, though, doesn't TOAST break heapam's assumption\n> that char(n) is fixed length? Seems like we'd better either remove that\n> assumption or mark char(n) nontoastable. Any opinions which is better?\n\nI am sure Jan handled that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 14 Oct 2000 23:32:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL"
},
{
"msg_contents": "> Tom Lane wrote:\n> > Now that you mention it, though, doesn't TOAST break heapam's assumption\n> > that char(n) is fixed length? Seems like we'd better either remove that\n> > assumption or mark char(n) nontoastable. Any opinions which is better?\n> \n> Is the saved overhead from assuming char(n) is fixed really\n> that big that it's worth NOT to gain the TOAST advantages?\n> After the GB benchmarks we know that we have some spare\n> performance to waste for such things :-)\n\nOh, now I get it. Some TOAST values may be out-of line. Can we really\nthrow char() into TOAST? I guess we can. We have to record somewhere\nthat we have toasted that tuple and disable the offset cache for it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 14 Oct 2000 23:33:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] SQL Server to PostgreSQL"
}
] |
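Tom's summary of the three character types earlier in this thread reduces to a quick sketch:

    SELECT 'ab'::char(4);         -- 'ab  '   blank-padded to exactly 4
    SELECT 'abcdef'::char(4);     -- 'abcd'   truncated to exactly 4
    SELECT 'abcdef'::varchar(4);  -- 'abcd'   truncated, never padded
    SELECT 'abcdef'::text;        -- 'abcdef' never truncated nor padded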
[
{
"msg_contents": "...\n>Actually, I think I understand the question. The original person wants to\n>be able to do a query and get a result containing a list of\n>databases. AFAIK, there isn't a way to do this using standard SQL-like\n>statements. Somebody correct me if I'm wrong.\n\n\nI cannot test this now, but I think\n\"select * from pg_database\" should do it.\n\nMario\n\n",
"msg_date": "Tue, 22 Aug 2000 18:57:23 +0200",
"msg_from": "\"Mario Weilguni\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 7.0.2 "
}
] |
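Spelled out, the lookup Mario suggests (datname holds the database name):

    SELECT datname FROM pg_database;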
[
{
"msg_contents": "I am trying to create a view and have run across a, to me, bizarre\noccurance. One CREATE VIEW statement creates the view fine; changing\nthe name of the view and repeating the same statement does not. This\nhas nothing to do with conflicting names as appropriate DROP commands\nare issued first.\n\nTo be specific, here are the queries and the results:\n\n -- precipitation_xxx_verify view created fine (see below)\n\n drop view precipitation_xxx_verify;\n create view precipitation_xxx_verify as\n select p.id, p.verified,\n\t w.name,\n\t w.country, w.state, w.county,\n\t p.date, p.precipitation / 2.54 as precipitation,\t\t-- 2.54 mm/inch\n\t p.inserted_by, p.inserted_on, p.verified_by, p.verified_on\n from precipitation_data p, weather_stations w\n where w.id = p.weather_station_id\n\t and verified != true;\n\n -- precipitation_english_verify view is not created as a view (see below)\n\n drop view precipitation_english_verify;\t\t-- XXX - fails because a view is not\n created (see below)\n drop table precipitation_english_verify;\t\t-- XXX - why not a view?\n create view precipitation_english_verify as\n select p.id, p.verified,\n\t w.name,\n\t w.country, w.state, w.county,\n\t p.date, p.precipitation / 2.54 as precipitation,\t\t-- 2.54 mm/inch\n\t p.inserted_by, p.inserted_on, p.verified_by, p.verified_on\n from precipitation_data p, weather_stations w\n where w.id = p.weather_station_id\n\t and verified != true;\n\n \\d precipitation_xxx_verify\n \\d precipitation_english_verify\n\n\tView \"precipitation_xxx_verify\"\n\tAttribute | Type | Modifier \n ---------------+-----------+----------\n id | integer | \n verified | boolean | \n name | text | \n country | text | \n state | text | \n county | text | \n date | timestamp | \n precipitation | float8 | \n inserted_by | name | \n inserted_on | timestamp | \n verified_by | name | \n verified_on | timestamp | \n View definition: SELECT p.id, p.verified, w.name, w.country, w.state, w.county, p.date, (p.precipitation / 2.54) AS precipitation, p.inserted_by, p.inserted_on, p.verified_by, p.verified_on FROM precipitation_data p, weather_stations w WHERE ((w.id = p.weather_station_id) AND (p.verified <> 't'::bool));\n\n View \"precipitation_english_verify\"\n\tAttribute | Type | Modifier \n ---------------+-----------+----------\n id | integer | \n verified | boolean | \n name | text | \n country | text | \n state | text | \n county | text | \n date | timestamp | \n precipitation | float8 | \n inserted_by | name | \n inserted_on | timestamp | \n verified_by | name | \n verified_on | timestamp | \n View definition: Not a view\n\nIt seems that the problem is with the word \"english\" as part of the\nview name. Variants of the name that lack it (e.g., replacing xxx\nabove with eng, englih, etc.) seem to work fine, but variants that\ninclude it (e.g., replacing xxx with english, eenglish, englishh)\nsuffer as above.\n\nIs there something special involved in handling view names that would\npreclude such names?\n\nAny explanations for this behavior are welcome.\n\nThanks for your help.\n\nCheers,\nBrook\n",
"msg_date": "Tue, 22 Aug 2000 12:40:21 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "when does CREATE VIEW not create a view?"
},
{
"msg_contents": " It seems that the problem is with the word \"english\" as part of the\n view name. Variants of the name that lack it (e.g., replacing xxx\n above with eng, englih, etc.) seem to work fine, but variants that\n include it (e.g., replacing xxx with english, eenglish, englishh)\n suffer as above.\n\nThe problem also seems to occur if xxx is replaced by british,\nimperial, and american. I haven't tried other location names, but\nthere seems to be a trend.\n\nCheers,\nBrook\n",
"msg_date": "Tue, 22 Aug 2000 12:55:29 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: when does CREATE VIEW not create a view?"
},
{
"msg_contents": "Brook Milligan wrote:\n> \n> It seems that the problem is with the word \"english\" as part of the\n> view name. Variants of the name that lack it (e.g., replacing xxx\n> above with eng, englih, etc.) seem to work fine, but variants that\n> include it (e.g., replacing xxx with english, eenglish, englishh)\n> suffer as above.\n> \n> The problem also seems to occur if xxx is replaced by british,\n> imperial, and american. I haven't tried other location names, but\n> there seems to be a trend.\n\nThis is probably wrong, but could it be the length of the name?\n\nTry replacing 'english' with some other seven letters e.g.\nprecipitation_abdefgh_verify\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Tue, 22 Aug 2000 15:10:19 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: when does CREATE VIEW not create a view?"
},
{
"msg_contents": "Brook, \nThis smells like a identifier length limit problem to me. Let's see:\n\nprecipitation_english_verify is 29 characters, default NAMEDATALEN is\n32. Creating a view creates a table, and attaches a SELCT DO INSTEAD\nrule to it, named _RET<tablename>, so that tacks 4 characters on, giving\nus 29+4 = 33, bingo, rule doesn't get made. All your other attemps were\nlonger, except for xxx. You'll find that replacing english with xxxxxxx\nwon't work, either (and it's not the vchip).\n\nSounds like a missing error check, or truncation, in the CREATE VIEW\nrule generation code.\n\nRoss\n\nOn Tue, Aug 22, 2000 at 12:40:21PM -0600, Brook Milligan wrote:\n> I am trying to create a view and have run across a, to me, bizarre\n> occurance. One CREATE VIEW statement creates the view fine; changing\n> the name of the view and repeating the same statement does not. This\n> has nothing to do with conflicting names as appropriate DROP commands\n> are issued first.\n> \n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Tue, 22 Aug 2000 15:16:01 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: when does CREATE VIEW not create a view?"
},
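Ross's arithmetic can be checked directly; with the stock NAMEDATALEN of 32 (31 usable characters), the view name plus the 4-character _RET prefix has to fit within 31 (a sketch; the view bodies are placeholders):

    -- 27-character name: 27 + 4 = 31, the rule name fits, view works
    CREATE VIEW precipitation_abcdef_verify AS SELECT 1 AS dummy;

    -- 28-character name: 28 + 4 = 32, the rule name no longer fits
    -- and \d reports 'Not a view'
    CREATE VIEW precipitation_english_verify AS SELECT 1 AS dummy;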
{
"msg_contents": " This is probably wrong, but could it be the length of the name?\n\n Try replacing 'english' with some other seven letters e.g.\n precipitation_abdefgh_verify\n\nGood guess, but I'm still confused. precipitation_abcdefgh_verify\ndoes not work; precipitation_abcdef_verify does. The latter is 27\ncharacters. I thought identifiers could be 32 before truncation\noccurred (and for tables the name is just truncated anyway but\notherwise unchanged).\n\nDoes the backend add something to a view identifier to push it over 32\ncharacters? Is that added as a prefix or a suffix? If the latter,\nperhaps it should be a prefix? Or is the problem with the select rule\nformed by CREATE VIEW? If the latter, should there be different\ntruncation rules for view names than for table names so that the\nassociated rule and table names have the appropriate relationship?\n\nCheers,\nBrook\n",
"msg_date": "Tue, 22 Aug 2000 14:21:04 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: when does CREATE VIEW not create a view?"
},
{
"msg_contents": "See my other reply about what gets added: the problem is the rewrite\nrule name, as you guessed.\n\nHere's a patch that silently truncates the generated rule name. Unlike\ntablename or generated sequence name truncation, there's no need in\nnormal operation for the DBA to know the name of this rule, so I didn't\nput in a NOTICE about the truncation.\n\nI found every accurance of _RET in the source that refered to a view rule,\nand patched them to do the right thing.\n\nRoss\n\nOn Tue, Aug 22, 2000 at 02:21:04PM -0600, Brook Milligan wrote:\n> \n> Does the backend add something to a view identifier to push it over 32\n> characters? Is that added as a prefix or a suffix? If the latter,\n> perhaps it should be a prefix? Or is the problem with the select rule\n> formed by CREATE VIEW? If the latter, should there be different\n> truncation rules for view names than for table names so that the\n> associated rule and table names have the appropriate relationship?\n> \n> Cheers,\n> Brook\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005",
"msg_date": "Tue, 22 Aug 2000 16:05:19 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] when does CREATE VIEW not create a view?"
},
{
"msg_contents": " See my other reply about what gets added: the problem is the rewrite\n rule name, as you guessed.\n\n Here's a patch that silently truncates the generated rule name.\n\nTHANKS!!! Once again, for all practical purposes _instant_ service from\nthe mailing list. Very impressive!\n\nCheers,\nBrook\n",
"msg_date": "Tue, 22 Aug 2000 15:20:36 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: when does CREATE VIEW not create a view?"
},
{
"msg_contents": "On Tue, Aug 22, 2000 at 04:05:19PM -0500, Ross J. Reedstrom wrote:\n> \n> I found every accurance of _RET in the source that refered to a view rule,\n> and patched them to do the right thing.\n\nSigh. 5 minutes after sending this, I find one last one, in pg_dump. Patch\nattached.\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005",
"msg_date": "Tue, 22 Aug 2000 16:24:15 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] when does CREATE VIEW not create a view?"
},
{
"msg_contents": "On Tue, Aug 22, 2000 at 03:20:36PM -0600, Brook Milligan wrote:\n> See my other reply about what gets added: the problem is the rewrite\n> rule name, as you guessed.\n> \n> Here's a patch that silently truncates the generated rule name.\n> \n> THANKS!!! Once again, for all practical purposes _instant_ service from\n> the mailing list. Very impressive!\n> \n\nWarning: these patches are against current source. Let me know if it doesn;t\npatch in for you. And be sure to get the extra bit, so pg_dump doesn't break.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Tue, 22 Aug 2000 16:33:23 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: when does CREATE VIEW not create a view?"
},
{
"msg_contents": "> See my other reply about what gets added: the problem is the rewrite\n> rule name, as you guessed.\n> \n> Here's a patch that silently truncates the generated rule name. Unlike\n> tablename or generated sequence name truncation, there's no need in\n> normal operation for the DBA to know the name of this rule, so I didn't\n> put in a NOTICE about the truncation.\n> \n> I found every accurance of _RET in the source that refered to a view rule,\n> and patched them to do the right thing.\n\nOh, the patch strikes me since it is not \"multibyte aware.\" Are you\ngoing to put it into the CVS? If so, please let me know after you do\nit so that I could add the multibyte awareness to that.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 23 Aug 2000 10:02:02 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] when does CREATE VIEW not create a view?"
},
{
"msg_contents": "On Wed, Aug 23, 2000 at 10:02:02AM +0900, Tatsuo Ishii wrote:\n> > See my other reply about what gets added: the problem is the rewrite\n> > rule name, as you guessed.\n> > \n> > Here's a patch that silently truncates the generated rule name. Unlike\n> > tablename or generated sequence name truncation, there's no need in\n> > normal operation for the DBA to know the name of this rule, so I didn't\n> > put in a NOTICE about the truncation.\n> > \n> > I found every accurance of _RET in the source that refered to a view rule,\n> > and patched them to do the right thing.\n> \n> Oh, the patch strikes me since it is not \"multibyte aware.\" Are you\n> going to put it into the CVS? If so, please let me know after you do\n> it so that I could add the multibyte awareness to that.\n\nWell, I meant it to go into CVS, if noone objected. I consider your raising\nthe multibyte issue sufficent objection to have it held off. No point\npatching and repatching.\n\nThe problem is that I just chop it off at NAMEDATALEN, which might be\nin the middle of a multibyte character, correct?\n\nAh, I see code in parser/scan.l that does the multibyte aware version\nof the chop. Should I just rewrite my patch with that code as a model?\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Wed, 23 Aug 2000 12:55:18 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] when does CREATE VIEW not create a view?"
},
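A sketch of the character-boundary clipping being discussed, assuming a helper in the style of the backend's MULTIBYTE support that reports how many bytes the character at a given position occupies. The stand-in below recognizes only ASCII versus a hypothetical two-byte lead range; real code would use the encoding-aware routine that scan.l uses:

#include <string.h>

/* Stand-in for an encoding-aware length routine: bytes in the character
 * starting at *s. Purely illustrative; not the backend's function. */
static int
mblen_stub(const unsigned char *s)
{
    return (*s < 0x80) ? 1 : 2;
}

/* Clip 'name' in place to at most 'limit' bytes without ever splitting a
 * multibyte character: advance one whole character at a time and cut at
 * the last boundary that still fits. */
static void
truncate_name_mb(char *name, int limit)
{
    int len = 0;

    while (name[len] != '\0')
    {
        int clen = mblen_stub((const unsigned char *) name + len);

        if (len + clen > limit)
            break;
        len += clen;
    }
    name[len] = '\0';
}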
{
"msg_contents": "Brook Milligan wrote:\n> \n> See my other reply about what gets added: the problem is the rewrite\n> rule name, as you guessed.\n> \n> Here's a patch that silently truncates the generated rule name.\n> \n\nWhat are the consequences of changing the NAMEDATALEN and recompiling?\nDoesn't that seem like a better solution then to truncate the view name? \n\n-- \nYou can hit reply if you want \"malcontent\" is a legit email.\n",
"msg_date": "Thu, 24 Aug 2000 00:30:00 -0600",
"msg_from": "Malcontent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: when does CREATE VIEW not create a view?"
},
{
"msg_contents": "On Thu, Aug 24, 2000 at 12:30:00AM -0600, Malcontent wrote:\n> Brook Milligan wrote:\n> > \n> > See my other reply about what gets added: the problem is the rewrite\n> > rule name, as you guessed.\n> > \n> > Here's a patch that silently truncates the generated rule name.\n> > \n> \n> What are the consequences of changing the NAMEDATALEN and recompiling?\n> Doesn't that seem like a better solution then to truncate the view name? \n\nIncreasing NAMEDATALEN is a relatively common customization, but\nit does cost you efficency of storage in the system tables: all the\nidentifiers take fixed NAMEDATALEN char fields, for speed of access. In\nthis particular case, the view name is not getting truncated (that will\nalready happen, if you try to create a view or table with a name longer\nthan NAMEDATALEN). The problem is that creation of a view involves\nthe backend creating a table with the supplied name, building an ON\nSELECT INSTEAD rule, whose (unique) name is created by prepending _RET\nto the supplied view name. Since this goes into a NAMEDATALEN field in a\nsystem table, it needs to be truncated. Current code fails by not creating\nthe rule if the supplied name is within 4 characters of NAMEDATALEN,\nbut leaving the underlying table around. Since end user code _never_\nneeds to manipulate the rule directly, truncating the name is not a\nproblem. \n\nThe patch I proposed has not been put in CVS, because I need to add\nmultibyte support. Hmm, anyone got any spare round tuits? (I've got\nplenty of square ones...)\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Mon, 28 Aug 2000 10:21:16 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: when does CREATE VIEW not create a view?"
},
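The storage cost Ross mentions comes from names being fixed-width in the catalogs. Roughly (this is a simplification of the backend's NameData, not its exact definition):

#include <stdio.h>

#define NAMEDATALEN 32          /* every catalog name column is this wide */

typedef struct NameData
{
    char data[NAMEDATALEN];     /* NUL-padded; at most 31 usable chars */
} NameData;

int main(void)
{
    /* Raising NAMEDATALEN grows this for every identifier in every
     * catalog row, whether or not the name needs the extra space. */
    printf("bytes per stored identifier: %zu\n", sizeof(NameData));
    return 0;
}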
{
"msg_contents": " > Here's a patch that silently truncates the generated rule name.\n\n What are the consequences of changing the NAMEDATALEN and recompiling?\n Doesn't that seem like a better solution then to truncate the view name? \n\nAll names are truncated. The bug arises from the fact that view names\nwere being incorrectly truncated by not taking into account the extra\ncharacters added to enforce the automatic \"on select\" rule. The point\nis to make the truncation rules internally consistent.\n\nCheers,\nBrook\n",
"msg_date": "Mon, 28 Aug 2000 09:26:13 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: when does CREATE VIEW not create a view?"
},
{
"msg_contents": "> > Oh, the patch strikes me since it is not \"multibyte aware.\" Are you\n> > going to put it into the CVS? If so, please let me know after you do\n> > it so that I could add the multibyte awareness to that.\n> \n> Well, I meant it to go into CVS, if noone objected. I consider your raising\n> the multibyte issue sufficent objection to have it held off. No point\n> patching and repatching.\n\nNo problem for repatching I think, since we are in the development\ncycle anyway.\n\n> The problem is that I just chop it off at NAMEDATALEN, which might be\n> in the middle of a multibyte character, correct?\n\nExactly.\n\n> Ah, I see code in parser/scan.l that does the multibyte aware version\n> of the chop. Should I just rewrite my patch with that code as a model?\n\nPlease do so. If you need any help, please let me know.\n--\nTatsuo Ishii\n\n",
"msg_date": "Tue, 29 Aug 2000 10:12:38 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] when does CREATE VIEW not create a view?"
},
{
"msg_contents": "On Tue, Aug 29, 2000 at 10:12:38AM +0900, [email protected] wrote:\n> \n> No problem for repatching I think, since we are in the development\n> cycle anyway.\n\nOh well.\n\n> \n> > The problem is that I just chop it off at NAMEDATALEN, which might be\n> > in the middle of a multibyte character, correct?\n> \n> Exactly.\n\nGood. Understanding the problem is critical to fixing it. ;-)\n\n> \n> > Ah, I see code in parser/scan.l that does the multibyte aware version\n> > of the chop. Should I just rewrite my patch with that code as a model?\n> \n> Please do so. If you need any help, please let me know.\n\nO.K.\n\nI'm just about done with it. Since there are three places in the code that\nseem to know about how to make a rulename from a viewname, and one of them\nis a function named MakeRetrieveViewRuleName(), I've put the #ifdef MULTIBYTE\nin there, and called this function from the other places that need it.\n\nOnly problem is in utils/adt/ruleutils.c\n\nThere's code in there that constructs potential rule names that start with\n'_ret' as well as '_RET', in order to use an SPI query to find the rule\nassociated with a view. This is the only occurance of the string '\"_ret'\nin the codebase, and I can't find a way a rule might get that name, nor an\nexample in either the 6.5.0 and 7.0.2 databases I've got here. \n\nSomeone when to the trouble of writing the query that way, but I'm not\nconvinced it's needed anymore. I'm guessing there was an extra tolower\nsomewhere that doesn't happen anymore (Tom Lane tracked down a bunch\nof these when I whined about MultiCase tablenames breaking, nigh on a\nyear ago)\n\nShould I trash it? Anyone have anything returned from\n\nSELECT rulename from pg_rewrite where rulename ~ '^_ret';\n\non any database with view defined?\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Tue, 29 Aug 2000 12:46:46 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: when does CREATE VIEW not create a view?"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> Only problem is in utils/adt/ruleutils.c\n\n> There's code in there that constructs potential rule names that start with\n> '_ret' as well as '_RET', in order to use an SPI query to find the rule\n> associated with a view. This is the only occurance of the string '\"_ret'\n> in the codebase, and I can't find a way a rule might get that name, nor an\n> example in either the 6.5.0 and 7.0.2 databases I've got here. \n\nMost likely it's dead code. I'd say simplify.\n\nMark Hollomon's question about adding a relisview column to pg_class\nspurs another possibility: add a column to pg_class, but instead of\njust a boolean, make it be 0 if not a view and the OID of the view rule\nif it is. That'd get rid of the dependency on rule names altogether\nfor code that needs to find the associated rule.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Aug 2000 18:38:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: when does CREATE VIEW not create a view? "
},
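Tom's suggestion might be pictured like this; the field name and struct are invented here to illustrate the idea, not code that exists in the tree:

typedef unsigned int Oid;

/* Imagined slice of a pg_class row under the proposal: 0 for a plain
 * table, otherwise the OID of the ON SELECT rule that makes it a view. */
typedef struct ClassTupleSketch
{
    /* ... existing pg_class fields ... */
    Oid relviewrule;
} ClassTupleSketch;

/* A view test then needs no rule-name construction or lookup at all. */
static int
rel_is_view(const ClassTupleSketch *rel)
{
    return rel->relviewrule != 0;
}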
{
"msg_contents": "On Tue, Aug 29, 2000 at 10:12:38AM +0900, [email protected] wrote:\n> > > Oh, the patch strikes me since it is not \"multibyte aware.\"\n\nO.K. - \nHere's the multibyte aware version of my patch to fix the truncation\nof the rulename autogenerated during a CREATE VIEW. I've modified all\nthe places in the backend that want to construct the rulename to use\nthe MakeRetrieveViewRuleName(), where I put the #ifdef MULTIBYTE, so\nthat's the only place that knows how to construct a view rulename. Except\npg_dump, where I replicated the code, since it's a standalone binary.\n\nThe only effect the enduser will see is that views with names len(name)\n> NAMEDATALEN-4 will fail to be created, if the derived rulename clases\nwith an existing rule: i.e. the user is trying to create two views with\nlong names whose first difference is past NAMEDATALEN-4 (but before\nNAMEDATALEN: that'll error out after the viewname truncation.) In no\ncase will the user get left with a table without a view rule, as the\ncurrent code does.\n\n>\n> Please do so. If you need any help, please let me know.\n> --\n> Tatsuo Ishii\n\nI haven't tested the MULTIBYTE part. Could you give it a quick once over?\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005",
"msg_date": "Wed, 30 Aug 2000 11:35:47 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] when does CREATE VIEW not create a view?"
},
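Putting the thread together, the single central routine might be shaped like the sketch below. The name MakeRetrieveViewRuleName comes from Ross's description; the signature and the pg_mblen-style helper are assumptions, and under MULTIBYTE the clip is re-done on a character boundary:

#include <stdio.h>

#define NAMEDATALEN 32

#ifdef MULTIBYTE
extern int pg_mblen(const unsigned char *s);   /* assumed backend helper */
#endif

/* The one place that knows a view's rule name is "_RET" + viewname,
 * clipped to fit a NAME field. */
static void
MakeRetrieveViewRuleName(char *rulename, const char *viewname)
{
    snprintf(rulename, NAMEDATALEN, "_RET%s", viewname);
#ifdef MULTIBYTE
    {
        /* Re-clip on a character boundary so no multibyte char is split. */
        int len = 0;

        while (rulename[len] != '\0')
        {
            int clen = pg_mblen((const unsigned char *) rulename + len);

            if (len + clen > NAMEDATALEN - 1)
                break;
            len += clen;
        }
        rulename[len] = '\0';
    }
#endif
}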
{
"msg_contents": "Applied.\n\n> On Tue, Aug 29, 2000 at 10:12:38AM +0900, [email protected] wrote:\n> > > > Oh, the patch strikes me since it is not \"multibyte aware.\"\n> \n> O.K. - \n> Here's the multibyte aware version of my patch to fix the truncation\n> of the rulename autogenerated during a CREATE VIEW. I've modified all\n> the places in the backend that want to construct the rulename to use\n> the MakeRetrieveViewRuleName(), where I put the #ifdef MULTIBYTE, so\n> that's the only place that knows how to construct a view rulename. Except\n> pg_dump, where I replicated the code, since it's a standalone binary.\n> \n> The only effect the enduser will see is that views with names len(name)\n> > NAMEDATALEN-4 will fail to be created, if the derived rulename clases\n> with an existing rule: i.e. the user is trying to create two views with\n> long names whose first difference is past NAMEDATALEN-4 (but before\n> NAMEDATALEN: that'll error out after the viewname truncation.) In no\n> case will the user get left with a table without a view rule, as the\n> current code does.\n> \n> >\n> > Please do so. If you need any help, please let me know.\n> > --\n> > Tatsuo Ishii\n> \n> I haven't tested the MULTIBYTE part. Could you give it a quick once over?\n> \n> Ross\n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Sep 2000 00:15:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] when does CREATE VIEW not create a view?"
},
{
"msg_contents": "I hate to say this, but this patch fails to apply on our current tree. \nCan you send me a version that applies? Thanks.\n\n\n> On Tue, Aug 29, 2000 at 10:12:38AM +0900, [email protected] wrote:\n> > > > Oh, the patch strikes me since it is not \"multibyte aware.\"\n> \n> O.K. - \n> Here's the multibyte aware version of my patch to fix the truncation\n> of the rulename autogenerated during a CREATE VIEW. I've modified all\n> the places in the backend that want to construct the rulename to use\n> the MakeRetrieveViewRuleName(), where I put the #ifdef MULTIBYTE, so\n> that's the only place that knows how to construct a view rulename. Except\n> pg_dump, where I replicated the code, since it's a standalone binary.\n> \n> The only effect the enduser will see is that views with names len(name)\n> > NAMEDATALEN-4 will fail to be created, if the derived rulename clases\n> with an existing rule: i.e. the user is trying to create two views with\n> long names whose first difference is past NAMEDATALEN-4 (but before\n> NAMEDATALEN: that'll error out after the viewname truncation.) In no\n> case will the user get left with a table without a view rule, as the\n> current code does.\n> \n> >\n> > Please do so. If you need any help, please let me know.\n> > --\n> > Tatsuo Ishii\n> \n> I haven't tested the MULTIBYTE part. Could you give it a quick once over?\n> \n> Ross\n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 14 Oct 2000 23:39:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: when does CREATE VIEW not create a view?"
},
{
"msg_contents": "OK, the bad news is that this does not apply to the current development\ntree. Ross, can you make a more corrent one? Sorry.\n\n\n> On Tue, Aug 29, 2000 at 10:12:38AM +0900, [email protected] wrote:\n> > > > Oh, the patch strikes me since it is not \"multibyte aware.\"\n> \n> O.K. - \n> Here's the multibyte aware version of my patch to fix the truncation\n> of the rulename autogenerated during a CREATE VIEW. I've modified all\n> the places in the backend that want to construct the rulename to use\n> the MakeRetrieveViewRuleName(), where I put the #ifdef MULTIBYTE, so\n> that's the only place that knows how to construct a view rulename. Except\n> pg_dump, where I replicated the code, since it's a standalone binary.\n> \n> The only effect the enduser will see is that views with names len(name)\n> > NAMEDATALEN-4 will fail to be created, if the derived rulename clases\n> with an existing rule: i.e. the user is trying to create two views with\n> long names whose first difference is past NAMEDATALEN-4 (but before\n> NAMEDATALEN: that'll error out after the viewname truncation.) In no\n> case will the user get left with a table without a view rule, as the\n> current code does.\n> \n> >\n> > Please do so. If you need any help, please let me know.\n> > --\n> > Tatsuo Ishii\n> \n> I haven't tested the MULTIBYTE part. Could you give it a quick once over?\n> \n> Ross\n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 12:22:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: when does CREATE VIEW not create a view?"
},
{
"msg_contents": "On Mon, Oct 16, 2000 at 12:22:23PM -0400, Bruce Momjian wrote:\n> OK, the bad news is that this does not apply to the current development\n> tree. Ross, can you make a more corrent one? Sorry.\n\nI think it won't apply because it's already in there. There were also\nsubsequent fixes to how pg_dump deals with views by Phil.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n",
"msg_date": "Mon, 16 Oct 2000 15:31:08 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: when does CREATE VIEW not create a view?"
},
{
"msg_contents": "On Mon, Oct 16, 2000 at 03:31:08PM -0500, Ross J. Reedstrom wrote:\n> On Mon, Oct 16, 2000 at 12:22:23PM -0400, Bruce Momjian wrote:\n> > OK, the bad news is that this does not apply to the current development\n> > tree. Ross, can you make a more corrent one? Sorry.\n> \n> I think it won't apply because it's already in there. There were also\n> subsequent fixes to how pg_dump deals with views by Phil.\n\nErr, I mean fixes by Philip to how pg_dump deals with views. AFAIK,\nthere's no special cases in the code for views created by Philip. ;->\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n",
"msg_date": "Mon, 16 Oct 2000 16:02:42 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Re: when does CREATE VIEW not create a view?"
}
] |
[
{
"msg_contents": "I am having some problems getting optimised queries when I use TEXT\nfields in records. It seems that PostgreSQL is assuming that these\nfields are 4 bytes wide so the record width calculation is wrong and\nthis means that all of the dependant calculations are wrong.\n\nWill it be a big deal to change teh width estimate for a record? I see\nthat vacuum effectively collects this statistic already, but is it\nsaved?\n\nFor example, from the following vacuum we can see that the average\nrecord size on my table is approximately:\n\t(1392*BLCKSZ)/24986 or ~ 456 bytes\n\nNOTICE: --Relation story--\nNOTICE: Pages 1392: Changed 0, reaped 528, Empty 0, New 0; Tup 24986:\nVac 0, Keep/VTL 0/0, Crash 0, UnUsed 18161, MinLen 76, MaxLen 574;\nRe-using: Free/Avail. Space 111804/104284; EndEmpty/Avail. Pages 0/376.\nCPU 0.31s/3.00u sec.\nNOTICE: Index story_pkey: Pages 201; Tuples 24986: Deleted 0. CPU\n0.04s/0.24u sec.\n\n\nOn the other hand, a basic query shows that the optimiser is estimating\nonly around 20% of that:\n\nnewsroom=# explain select * from story;\nNOTICE: QUERY PLAN:\n\nSeq Scan on story (cost=0.00..1641.86 rows=24986 width=91)\n\n\n\nSo the cost guesses are out by a factor of 5 and indexes are being used\na lot less often than I would like. I have a query which does a reverse\nindexscan when I use LIMIT 30, giving a seemingly instant response, but\nswitches to sequential scan and sort when I use LIMIT 35, taking around\n10 seconds to return (Pentium 233).\n\nAnyone have any thoughts on how to go about fixing this?\n\nRegards,\n\t\t\t\t\tAndrew McMillan\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n",
"msg_date": "Wed, 23 Aug 2000 09:24:06 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Optimisation and TEXT fields"
},
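Andrew's arithmetic, spelled out as a quick check one can compile, assuming the stock 8192-byte BLCKSZ; the page and tuple counts are from his VACUUM VERBOSE output and the 91-byte width from his EXPLAIN:

#include <stdio.h>

#define BLCKSZ 8192                /* assumed stock block size */

int main(void)
{
    double pages = 1392, tuples = 24986;      /* from VACUUM VERBOSE */
    double actual = pages * BLCKSZ / tuples;  /* ~456 bytes per row */
    double planned = 91;                      /* width=91 from EXPLAIN */

    printf("actual ~%.0f bytes/row, planner %.0f: off by %.1fx\n",
           actual, planned, actual / planned);
    return 0;
}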
{
"msg_contents": "Andrew McMillan <[email protected]> writes:\n> I am having some problems getting optimised queries when I use TEXT\n> fields in records. It seems that PostgreSQL is assuming that these\n> fields are 4 bytes wide so the record width calculation is wrong and\n> this means that all of the dependant calculations are wrong.\n\n4 bytes? I'd have expected 12 (see _DEFAULT_ATTRIBUTE_WIDTH_ as used\nin src/backend/optimizer/path/costsize.c). While this is obviously\npretty brain-dead, I have not seen many cases in which that particular\nbogosity was the limiting factor in the accuracy of the optimizer's\ncalculations. Usually it's the row count rather than row width that\nwe're hopelessly lost on :-(\n\nAt some point it might be useful for VACUUM to calculate a real\naverage-field-width value for varlena columns and store same in\npg_statistic. I can't get excited about it quite yet though.\nIf you dig into costsize.c you'll see that the estimated row width\nis just a minor factor in the estimates. In particular, it has no\nrelevance whatever for seqscan-vs-indexscan choices.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Aug 2000 23:20:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Optimisation and TEXT fields "
}
] |
[
{
"msg_contents": "Allright, I'm running 7.0.2 with Tom Lane's backwards index scan patch\napplied.\n\nI'm attempting to select out of a large table (10GB) with about 4\nmillion rows, and it winds up just sitting and doing \"nothing\" forever.\nIf I check the process list, I see it using about 9% of the CPU.\n\nThis table is vacuum analyzed nightly - here's a description and EXPLAIN\nfrom the query I'm trying to run.\n\nAny ideas? I haven't been able to run the admin pages on Geocrawler ever\nsince I upgraded to 7.0.2\n\nTim\n\n\ndb_geocrawler=# \\d tbl_mail_archive\n Table \"tbl_mail_archive\"\n Attribute | Type | \nModifier \n----------------------+----------+----------------------------------------------\n fld_mailid | integer | not null default\nnextval('seq_mailid'::text)\n fld_mail_list | integer | \n fld_mail_date | char(14) | \n fld_mail_is_followup | integer | \n fld_mail_from | text | \n fld_mail_subject | text | \n fld_mail_body | text | \n fld_mail_email | text | \n fld_mail_year | integer | \n fld_mail_month | integer | \nIndices: idx_archive_list,\n idx_archive_list_date,\n idx_archive_year,\n idx_mail_archive_list_yr_mo,\n tbl_mail_archive_pkey\n\n\nI'm manually deleting the rows without knowing what they are - and\nthat's bad - this query shows that the rows do exist, but for some\nreason you can't select them out of the db.\n\ndb_geocrawler=# begin;\nBEGIN\ndb_geocrawler=# delete from tbl_mail_archive where fld_mail_list=0;\nDELETE 1032\ndb_geocrawler=# delete from tbl_mail_chunks where fld_mail_list=0;\nDELETE 39\ndb_geocrawler=# commit;\nCOMMIT\n\n\ndb_geocrawler=# explain SELECT * FROM tbl_mail_archive WHERE\nfld_mail_list=0 ORDER BY fld_mailid ASC LIMIT 10 OFFSET 0;\nNOTICE: QUERY PLAN:\n\nIndex Scan using tbl_mail_archive_pkey on tbl_mail_archive \n(cost=0.00..6402391.68 rows=19357 width=80)\n\nEXPLAIN\n\n\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Tue, 22 Aug 2000 18:17:55 -0500",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Interesting new bug?"
},
{
"msg_contents": "Tim Perdue <[email protected]> writes:\n> I'm attempting to select out of a large table (10GB) with about 4\n> million rows, and it winds up just sitting and doing \"nothing\" forever.\n\n> db_geocrawler=# explain SELECT * FROM tbl_mail_archive WHERE\n> fld_mail_list=0 ORDER BY fld_mailid ASC LIMIT 10 OFFSET 0;\n> NOTICE: QUERY PLAN:\n\n> Index Scan using tbl_mail_archive_pkey on tbl_mail_archive \n> (cost=0.00..6402391.68 rows=19357 width=80)\n\nInteresting. Since there's no explicit sort in the plan, I infer that\nindex tbl_mail_archive_pkey is on fld_mailid, meaning that the indexscan\nyields data already sorted by fld_mailid --- otherwise a sort step would\nbe needed. Evidently the optimizer is guessing that \"scan in fld_mailid\norder until you have 10 rows where fld_mail_list=0\" is faster than\n\"find all rows with fld_mail_list=0 and then sort by fld_mailid\".\n\nSince you're complaining, I guess that this is not so :-( ... but I'm\nnot sure how the optimizer might be taught to guess that. What exactly\nare the indexes *on* here; how many rows are in the table; and how many\nrows satisfy fld_mail_list=0?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Aug 2000 23:35:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Interesting new bug? "
},
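A back-of-the-envelope sketch of the trade-off Tom describes, under the planner-style assumption that matching rows are spread uniformly through the index order. The row counts are the ones quoted in this thread (the planner believed rows=19357; the real match count turns out to be 1093):

#include <stdio.h>

/* Uniform-spread assumption: rows touched in index order before
 * accumulating 'limit' matches. */
static double
expected_scan(double total, double matches, double limit)
{
    return limit * total / matches;
}

int main(void)
{
    double total = 4100000;        /* ~rows in tbl_mail_archive */

    printf("planner's belief (19357 matches): ~%.0f rows\n",
           expected_scan(total, 19357, 10));
    printf("actual          (1093 matches):  ~%.0f rows\n",
           expected_scan(total, 1093, 10));
    /* And if the matches cluster at the far end of fld_mailid order,
     * the scan degenerates toward reading all 4.1M rows, which is why
     * the find-all-then-sort plan wins in practice. */
    return 0;
}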
{
"msg_contents": "Tom Lane wrote:\n> \n> Tim Perdue <[email protected]> writes:\n> > I'm attempting to select out of a large table (10GB) with about 4\n> > million rows, and it winds up just sitting and doing \"nothing\" forever.\n> \n> > db_geocrawler=# explain SELECT * FROM tbl_mail_archive WHERE\n> > fld_mail_list=0 ORDER BY fld_mailid ASC LIMIT 10 OFFSET 0;\n> > NOTICE: QUERY PLAN:\n> \n> > Index Scan using tbl_mail_archive_pkey on tbl_mail_archive\n> > (cost=0.00..6402391.68 rows=19357 width=80)\n> \n> Interesting. Since there's no explicit sort in the plan, I infer that\n> index tbl_mail_archive_pkey is on fld_mailid, meaning that the indexscan\n> yields data already sorted by fld_mailid --- otherwise a sort step would\n> be needed. Evidently the optimizer is guessing that \"scan in fld_mailid\n> order until you have 10 rows where fld_mail_list=0\" is faster than\n> \"find all rows with fld_mail_list=0 and then sort by fld_mailid\".\n> \n> Since you're complaining, I guess that this is not so :-( ... but I'm\n> not sure how the optimizer might be taught to guess that. What exactly\n> are the indexes *on* here; how many rows are in the table; and how many\n> rows satisfy fld_mail_list=0?\n\nThere is an index on fld_mail_list and there were 1093 rows that matched\nout of about 4.1 million.\n\nI wonder if this is the same problem we had before where I need to order\nby fld_mail_list, fld_mailid instead of just on fld_mailid. If so, you\nneed to get that fixed in the optimizer.\n\ndb_geocrawler=# explain\ndb_geocrawler-# SELECT * FROM tbl_mail_archive WHERE\ndb_geocrawler-# fld_mail_list=0 ORDER BY fld_mail_list ASC,fld_mailid\nASC LIMIT 10 OFFSET 0;\nNOTICE: QUERY PLAN:\n\nSort (cost=78282.54..78282.54 rows=19357 width=80)\n -> Index Scan using idx_archive_list on tbl_mail_archive \n(cost=0.00..76904.24 rows=19357 width=80)\n\nEXPLAIN\n\nNotice how it is now using the right index, because I am doing a sort on\nfld_mail_list first.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Tue, 22 Aug 2000 21:56:18 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Interesting new bug?"
},
{
"msg_contents": "What did you think of this? I fixed my problem by changing my query -\nbut I shouldn't have had to. This looks like a weakness in your\noptimizer, having to first sort on criteria that you don't care about.\n\nTim\n\n\n\nTim Perdue wrote:\n> \n> Tom Lane wrote:\n> >\n> > Tim Perdue <[email protected]> writes:\n> > > I'm attempting to select out of a large table (10GB) with about 4\n> > > million rows, and it winds up just sitting and doing \"nothing\" forever.\n> >\n> > > db_geocrawler=# explain SELECT * FROM tbl_mail_archive WHERE\n> > > fld_mail_list=0 ORDER BY fld_mailid ASC LIMIT 10 OFFSET 0;\n> > > NOTICE: QUERY PLAN:\n> >\n> > > Index Scan using tbl_mail_archive_pkey on tbl_mail_archive\n> > > (cost=0.00..6402391.68 rows=19357 width=80)\n> >\n> > Interesting. Since there's no explicit sort in the plan, I infer that\n> > index tbl_mail_archive_pkey is on fld_mailid, meaning that the indexscan\n> > yields data already sorted by fld_mailid --- otherwise a sort step would\n> > be needed. Evidently the optimizer is guessing that \"scan in fld_mailid\n> > order until you have 10 rows where fld_mail_list=0\" is faster than\n> > \"find all rows with fld_mail_list=0 and then sort by fld_mailid\".\n> >\n> > Since you're complaining, I guess that this is not so :-( ... but I'm\n> > not sure how the optimizer might be taught to guess that. What exactly\n> > are the indexes *on* here; how many rows are in the table; and how many\n> > rows satisfy fld_mail_list=0?\n> \n> There is an index on fld_mail_list and there were 1093 rows that matched\n> out of about 4.1 million.\n> \n> I wonder if this is the same problem we had before where I need to order\n> by fld_mail_list, fld_mailid instead of just on fld_mailid. If so, you\n> need to get that fixed in the optimizer.\n> \n> db_geocrawler=# explain\n> db_geocrawler-# SELECT * FROM tbl_mail_archive WHERE\n> db_geocrawler-# fld_mail_list=0 ORDER BY fld_mail_list ASC,fld_mailid\n> ASC LIMIT 10 OFFSET 0;\n> NOTICE: QUERY PLAN:\n> \n> Sort (cost=78282.54..78282.54 rows=19357 width=80)\n> -> Index Scan using idx_archive_list on tbl_mail_archive\n> (cost=0.00..76904.24 rows=19357 width=80)\n> \n> EXPLAIN\n> \n> Notice how it is now using the right index, because I am doing a sort on\n> fld_mail_list first.\n\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Thu, 24 Aug 2000 08:43:25 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Interesting new bug?"
}
] |
[
{
"msg_contents": "I've committed changes to the main tree which update the pg_proc system\ncatalog, so initdb is required.\n\nAs discussed recently, I've added some contrib/mac/ routines to support\ngenerating a table, macoui, which contains current manufacturers'\nidentification fields for hardware MAC addresses. I've made a few other\nchanges, including dropping the macaddr_manuf() built-in function.\n\nNote that I did *not* add a current copy of the oui.txt file from IEEE,\nsince it is over half a meg uncompressed. But we could add it to cvs if\nit is advisable.\n\nThe contrib/mac directory does not yet have a README, but should.\n\nMore details below...\n\n - Thomas\n\nThe CVS log:\n\nAdd functions to convert to and from text, and to truncate to MAC OUI.\nRemove hardcoded macaddr_manuf(), which had really old, obsolete info.\n Replace this with some contrib/mac/ code to maniag OUI info from IEEE.\n",
"msg_date": "Wed, 23 Aug 2000 06:17:40 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "New MAC OUI capabilities"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Note that I did *not* add a current copy of the oui.txt file from IEEE,\n> since it is over half a meg uncompressed. But we could add it to cvs if\n> it is advisable.\n\nThen we'd have to worry about keeping it up to date. That's also\nmore distribution-bloat than I think is advisable for a relatively\nlittle-used feature. I vote for just providing a README that tells\nwhere to get the current oui.txt file.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Aug 2000 10:03:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New MAC OUI capabilities "
},
{
"msg_contents": "> Then we'd have to worry about keeping it up to date. That's also\n> more distribution-bloat than I think is advisable for a relatively\n> little-used feature. I vote for just providing a README that tells\n> where to get the current oui.txt file.\n\nWe do even better than that: there is an \"updateoui\" routine which uses\nwget to go out and fetch the file for you.\n\n - Thomas\n",
"msg_date": "Wed, 23 Aug 2000 16:01:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New MAC OUI capabilities"
}
] |
[
{
"msg_contents": "Around here the primary product is Postgres. The version of the RDBM\nbeing used is PostgreSQL and the ODBC driver is Postdrv... etc. Kind of\nlike using Windows and the version is NT.... So any plans for trade\nmarking Postgres?\n\nAllan in Belgium\n\n",
"msg_date": "Wed, 23 Aug 2000 08:41:32 +0200",
"msg_from": "\"Allan Huffman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How Do You Pronounce \"PostgreSQL\""
}
] |
[
{
"msg_contents": "Hi!\n\n About analyze.c:\n If taken out vacuum, couldn't it be completly taken out of pg? Say,\nto an external program? What's the big reason not to do that? I know that\nthere is some code in analyze.c (like comparing) that uses other parts of\npg, but that seems to be easily fixed.\n\n I'm leaning toward the implementation of end-biased histograms. There is\nan introductory reference in the IEEE Data Engineering Bulletin, september\n1995 (available on microsoft research site).\n\nBest Regards,\nTiago\n\n\n",
"msg_date": "Wed, 23 Aug 2000 12:18:19 +0100 (WEST)",
"msg_from": "=?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "analyze.c"
},
{
"msg_contents": "=?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]> writes:\n> About analyze.c:\n> If taken out vacuum, couldn't it be completly taken out of pg? Say,\n> to an external program?\n\nNot if you want to do anything useful with it --- direct access to the\ndatabase is only possible within the context of a backend, because of\nall the locking, buffering, etc behavior that you must adhere to.\n\n> What's the big reason not to do that? I know that\n> there is some code in analyze.c (like comparing) that uses other parts of\n> pg, but that seems to be easily fixed.\n\nAre you proposing not to do any comparisons? It will be interesting to\nsee how you can compute a histogram without any idea of equality or\nordering. But if you want that, then you still need the function-call\nmanager as well as the type-specific comparison routines for every\ndatatype that you might be asked to operate on (don't forget\nuser-defined types here).\n\nIn short, I doubt you can build a useful analyze-engine that's\nsignificantly smaller than a full backend. Besides, having ANALYZE\navailable as a regular SQL command is just too useful to want to see\nit moved out to some outside program that would have to be run\nseparately.\n\n> I'm leaning toward the implementation of end-biased histograms. There is\n> an introductory reference in the IEEE Data Engineering Bulletin, september\n> 1995 (available on microsoft research site).\n\nSounds interesting. Can you give us an exact URL?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Aug 2000 10:46:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: analyze.c "
},
{
"msg_contents": "\n\nOn Wed, 23 Aug 2000, Tom Lane wrote:\n\n> > What's the big reason not to do that? I know that\n> > there is some code in analyze.c (like comparing) that uses other parts of\n> > pg, but that seems to be easily fixed.\n> \n> Are you proposing not to do any comparisons? It will be interesting to\n> see how you can compute a histogram without any idea of equality or\n> ordering. But if you want that, then you still need the function-call\n> manager as well as the type-specific comparison routines for every\n> datatype that you might be asked to operate on (don't forget\n> user-defined types here).\n\n I forgot user defined data types :-(, but regarding histograms I think\nthe code can be made external (at least for testing purposes):\n 1. I was not suggesting not to do any comparisons, but I think the only\ncomparison I need is equality, I don't need order as I don't need to\ncalculate mins or maxs (I just need mins and maxes on frequencies, NOT on \ndat itself) to make a histogram.\n 2. The mapping to text guarantees that I have (PQgetvalue returns\nalways char* and pg_statistics keeps a \"text\" anyway) a way of knowing\nabout equality regardless of type.\n\n But at least anything relating to order has to be in.\n\n> > I'm leaning toward the implementation of end-biased histograms. There is\n> > an introductory reference in the IEEE Data Engineering Bulletin, september\n> > 1995 (available on microsoft research site).\n> \n> Sounds interesting. Can you give us an exact URL?\n\nhttp://www.research.microsoft.com/research/db/debull/default.htm\n\nBTW, you can get access to SIGMOD CDs with lots of goodies for a very low\nprice (at least in 1999 it was a bargain), check out ACM membership for\nsigmod.\n\nI've been reading something about implementation of histograms, and,\nAFAIK, in practice histograms is just a cool name for no more than:\n 1. top ten with frequency for each\n 2. the same for top ten worse\n 3. average for the rest\n\nI'm writing code get this info (outside pg for now - for testing\npurposes).\n\nBest Regards,\nTiago\nPS - again: I'm starting, so, some of my comments can be completly dumb.\n\n",
"msg_date": "Wed, 23 Aug 2000 18:22:40 +0100 (WEST)",
"msg_from": "=?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: analyze.c "
},
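A small sketch of the summary Tiago describes: exact frequencies for the most and least common values, one average for everything in between. The fixed K and brute-force sort are simplifications for illustration (K is 2 here just to keep the demo readable; his sketch uses ten):

#include <stdio.h>
#include <stdlib.h>

#define K 2   /* most/least frequent values kept exactly */

typedef struct { const char *value; int freq; } Bucket;

static int
by_freq_desc(const void *a, const void *b)
{
    return ((const Bucket *) b)->freq - ((const Bucket *) a)->freq;
}

/* Report top-K, bottom-K, and the average frequency of the rest. */
static void
end_biased_summary(Bucket *counts, int n)
{
    int i, start = (n - K > K) ? n - K : K;
    long rest = 0;

    qsort(counts, n, sizeof(Bucket), by_freq_desc);
    for (i = 0; i < n && i < K; i++)
        printf("top    %-6s %d\n", counts[i].value, counts[i].freq);
    for (i = start; i < n; i++)
        printf("bottom %-6s %d\n", counts[i].value, counts[i].freq);
    for (i = K; i < n - K; i++)
        rest += counts[i].freq;
    if (n > 2 * K)
        printf("middle: avg %.1f per value\n", (double) rest / (n - 2 * K));
}

int main(void)
{
    Bucket counts[] = {
        {"red", 500}, {"blue", 120}, {"green", 7}, {"teal", 3}, {"gray", 1}
    };

    end_biased_summary(counts, 5);
    return 0;
}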
{
"msg_contents": "> Hi!\n> \n> About analyze.c:\n> If taken out vacuum, couldn't it be completly taken out of pg? Say,\n> to an external program? What's the big reason not to do that? I know that\n> there is some code in analyze.c (like comparing) that uses other parts of\n> pg, but that seems to be easily fixed.\n> \n> I'm leaning toward the implementation of end-biased histograms. There is\n> an introductory reference in the IEEE Data Engineering Bulletin, september\n> 1995 (available on microsoft research site).\n\nWhy take it out of the backend? Seems like a real pain, especially when\nyou realize what functions it would have to call. \n\nAlso, keep in mind that the current analyze generates perfect estimates for\ncolumns containing only two unique values, and columns containing only\nunique values. All other cases generate imperfect statistics.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 14:20:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: analyze.c"
},
{
"msg_contents": "> > > I'm leaning toward the implementation of end-biased histograms. There is\n> > > an introductory reference in the IEEE Data Engineering Bulletin, september\n> > > 1995 (available on microsoft research site).\n> > \n> > Sounds interesting. Can you give us an exact URL?\n> \n> http://www.research.microsoft.com/research/db/debull/default.htm\n> \n> BTW, you can get access to SIGMOD CDs with lots of goodies for a very low\n> price (at least in 1999 it was a bargain), check out ACM membership for\n> sigmod.\n\nThanks. I will look into that. SIGMOD has some real valuable stuff.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 14:22:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: analyze.ct"
},
{
"msg_contents": "> BTW, you can get access to SIGMOD CDs with lots of goodies for a very low\n> price (at least in 1999 it was a bargain), check out ACM membership for\n> sigmod.\n> \n> I've been reading something about implementation of histograms, and,\n> AFAIK, in practice histograms is just a cool name for no more than:\n> 1. top ten with frequency for each\n> 2. the same for top ten worse\n> 3. average for the rest\n\nI wonder if just increasing the number of buckets in analyze.c would\nhelp?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 14:23:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: analyze.c"
}
] |
[
{
"msg_contents": "Hi,\nI am doing a \"Vaccum verbose analyze;\" on the database. The first time\nI did the vacuum it stucked at a table and I canceled it ctrl z. Then I\ndeleted the file pg_Vlock and I tried to do it again, but this time it\ndoesn't do anything. It is just hangs there doing nothing.\n\nI use PostgreSQL 6.5.2.\n\n\nThanks\n\n\n",
"msg_date": "Wed, 23 Aug 2000 15:30:38 +0300",
"msg_from": "Antonio Antoniou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuum dooes not respond"
}
] |
[
{
"msg_contents": "Here's a quick patch for contrib/mac/ouiparse.awk to fix some commenting\nchanges due to Thomas' updates.\n\n\n*** ouiparse.awk.old\tWed Aug 23 01:02:23 2000\n--- ouiparse.awk\tWed Aug 23 07:44:09 2000\n***************\n*** 10,19 ****\n # manufacturer text);\n # the table name is set by setting the AWK variable TABLE\n # \n! # we translate the character apostrophe (') to space inside the company name\n! # to avoid SQL errors.\n #\n- # match ONLY lines that begin with 2 hex numbers, -, and another hex number\n \n BEGIN {\n \tTABLE=\"macoui\";\n--- 10,18 ----\n # manufacturer text);\n # the table name is set by setting the AWK variable TABLE\n # \n! # we translate the character apostrophe (') to double apostrophe ('') inside \n! # the company name to avoid SQL errors.\n #\n \n BEGIN {\n \tTABLE=\"macoui\";\n***************\n*** 27,32 ****\n--- 26,32 ----\n \tprintf \"COMMIT TRANSACTION;\";\n }\n \n+ # match ONLY lines that begin with 2 hex numbers, -, and another hex number\n /^[0-9a-fA-F][0-9a-fA-F]-[0-9a-fA-F]/ { \n #\tif (nrec >= 100) {\n #\t\tprintf \"COMMIT TRANSACTION;\";\n***************\n*** 47,53 ****\n \t\tCompany=Company \" \" $i;\n \t# Modify any apostrophes (') to avoid grief below.\n \tgsub(\"'\",\"''\",Company);\n! \t# Print out for the 'C' structure in mac.c\n \tprintf \"INSERT INTO %s (addr, name) VALUES (trunc(macaddr \\'%s\\'),\\'%s\\');\\n\",\n \t\tTABLE,OUI,Company;\n }\n--- 47,53 ----\n \t\tCompany=Company \" \" $i;\n \t# Modify any apostrophes (') to avoid grief below.\n \tgsub(\"'\",\"''\",Company);\n! \t# Print out for the SQL table insert\n \tprintf \"INSERT INTO %s (addr, name) VALUES (trunc(macaddr \\'%s\\'),\\'%s\\');\\n\",\n \t\tTABLE,OUI,Company;\n }\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Wed, 23 Aug 2000 07:45:59 -0500",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "minor comment fixes for ouiparse.awk"
},
{
"msg_contents": "> Here's a quick patch for contrib/mac/ouiparse.awk to fix some commenting\n> changes due to Thomas' updates.\n\nThanks. Got it...\n\n - Thomas\n",
"msg_date": "Wed, 23 Aug 2000 13:36:43 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: minor comment fixes for ouiparse.awk"
}
] |
[
{
"msg_contents": "The query:\n\nINSERT INTO table_resultat( origine, service, noeud, rubrique,\nnb_passage, temps, date) \n\tSELECT DISTINCT temp2.origine, temp2.service, temp2.noeud,\ntemp2.rubrique, temp2.nb_passage, temp2.temps, temp2.date FROM temp2\nWHERE not exists \n\t\t( SELECT table_resultat.origine, table_resultat.service,\ntable_resultat.noeud, table_resultat.rubrique, table_resultat.date FROM\ntable_brut WHERE table_resultat.origine=temp2.origine AND\ntable_resultat.service=temp2.service AND\ntable_resultat.noeud=temp2.noeud AND\ntable_resultat.rubrique=temp2.rubrique AND\ntable_resultat.date=temp2.date )\n\n\nproduces the error :\nERROR: replace_vars_with_subplan_refs: variable not in subplan target\nlist\n\n\nanyone can explain me ?\n\nThanks. Jerome.\n",
"msg_date": "Wed, 23 Aug 2000 14:51:00 +0200",
"msg_from": "Jerome Raupach <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with insert"
},
{
"msg_contents": "Jerome Raupach <[email protected]> writes:\n> The query:\n> INSERT INTO table_resultat( origine, service, noeud, rubrique,\n> nb_passage, temps, date) \n> \tSELECT DISTINCT temp2.origine, temp2.service, temp2.noeud,\n> temp2.rubrique, temp2.nb_passage, temp2.temps, temp2.date FROM temp2\n> WHERE not exists \n> \t\t( SELECT table_resultat.origine, table_resultat.service,\n> table_resultat.noeud, table_resultat.rubrique, table_resultat.date FROM\n> table_brut WHERE table_resultat.origine=temp2.origine AND\n> table_resultat.service=temp2.service AND\n> table_resultat.noeud=temp2.noeud AND\n> table_resultat.rubrique=temp2.rubrique AND\n> table_resultat.date=temp2.date )\n\n> produces the error :\n> ERROR: replace_vars_with_subplan_refs: variable not in subplan target\n> list\n\nThat's pretty interesting. I was not able to reproduce this failure\nusing stripped-down table definitions --- I tried\n\ncreate table foo (f1 int);\ncreate table bar (f1 int);\ncreate table baz (f1 int);\n\ninsert into foo(f1)\n select distinct f1 from bar\n where not exists (select foo.f1 from baz where\n foo.f1 = bar.f1);\n\nSo I think there must be some special feature of your tables that you\nhaven't shown us. Could we see a schema dump (pg_dump -s) for these\ntables?\n\nBTW the inner select seems pretty weird --- what is the point of joining\nagainst table_brut when you're not using it? But that doesn't look like\nit could provoke this error.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Aug 2000 11:20:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with insert "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Jerome Raupach <[email protected]> writes:\n> > The query:\n> > INSERT INTO table_resultat( origine, service, noeud, rubrique,\n> > nb_passage, temps, date)\n> > SELECT DISTINCT temp2.origine, temp2.service, temp2.noeud,\n> > temp2.rubrique, temp2.nb_passage, temp2.temps, temp2.date FROM temp2\n> > WHERE not exists\n> > ( SELECT table_resultat.origine, table_resultat.service,\n> > table_resultat.noeud, table_resultat.rubrique, table_resultat.date FROM\n> > table_brut WHERE table_resultat.origine=temp2.origine AND\n> > table_resultat.service=temp2.service AND\n> > table_resultat.noeud=temp2.noeud AND\n> > table_resultat.rubrique=temp2.rubrique AND\n> > table_resultat.date=temp2.date )\n> \n> > produces the error :\n> > ERROR: replace_vars_with_subplan_refs: variable not in subplan target\n> > list\n> \n> That's pretty interesting. I was not able to reproduce this failure\n> using stripped-down table definitions --- I tried\n> \n> create table foo (f1 int);\n> create table bar (f1 int);\n> create table baz (f1 int);\n> \n> insert into foo(f1)\n> select distinct f1 from bar\n> where not exists (select foo.f1 from baz where\n> foo.f1 = bar.f1);\n> \n> So I think there must be some special feature of your tables that you\n> haven't shown us. Could we see a schema dump (pg_dump -s) for these\n> tables?\n> \n> BTW the inner select seems pretty weird --- what is the point of joining\n> against table_brut when you're not using it? But that doesn't look like\n> it could provoke this error.\n> \n> regards, tom lane\n\n\nthe error is produced if temp2 is a view. If temp2 is a table, there is\nno problem.\n\n?\n\nThanks. Jerome.\n",
"msg_date": "Wed, 23 Aug 2000 17:49:43 +0200",
"msg_from": "Jerome Raupach <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problem with insert"
},
{
"msg_contents": "Jerome Raupach <[email protected]> writes:\n>> So I think there must be some special feature of your tables that you\n>> haven't shown us. Could we see a schema dump (pg_dump -s) for these\n>> tables?\n\n> the error is produced if temp2 is a view.\n\nI had suspected there might be a view involved. But if you want this\nfixed, you're going to need to be more forthcoming about providing a\ncomplete, reproducible example. I have other things to do than guess\nwhat your view and table definitions are...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Aug 2000 12:07:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with insert "
}
] |
[
{
"msg_contents": "> > I think I wasn't clear enough. :-) It can *already* be \n> specified by any\n> > client application as long as you use PQconnectdb(). For example:\n> > PQconnectdb(\"dbname='foo' host='localhost' requiressl=1\")\n> \n> I understand but this setting needs to be made available externally in\n> some cases like psql and pg_dump and I was afraid of option letter\n> inflation.\nI was thinking we could use a \"psql variable\" in the case of psql, if we\nwanted. For example:\npsql -h localhost template1 -v \"requiressl=1\"\nor something like that?\n\nOh, and it's still available by\nPGREQUIRE_SSL=1 pgdump <whatever>\n\n\n> Actually, isn't there a trichotomy here: 1. require SSL, 2. use SSL if\n> available, 3. refuse SSL. The server side already handles all \n> cases: 1 -\n> \"hostssl\" in pg_hba.conf, 2 - `postmaster -l', 3 - default. The client\n> side should perhaps also have these choices, not sure.\nGood point. The reason for the client to not do SSL when both client and\nserver supports it could be performance, I guess.\nPerhaps we shuold replace PGREQUIRE_SSL with \"PGSSLMODE\", being:\n0 - Refuse SSL\n1 - Negotiate, Prefer non-SSL\n2 - Negotiate, Prefer SSL (default)\n3 - Require SSL\n\n\n\nAnything else you guys will need on this patch before it's fine? :-) No\nrush, but just so I know what to work on...\n\n//Magnus\n",
"msg_date": "Wed, 23 Aug 2000 16:11:33 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: RE: SSL Patch - again :-)"
}
] |
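Magnus's four-way PGSSLMODE proposal, restated as a sketch; the enum and helper below are invented for illustration and are not the patch's actual code:

/* Client-side SSL policy, one value per proposed PGSSLMODE setting. */
typedef enum SslMode
{
    SSLMODE_REFUSE = 0,         /* never use SSL */
    SSLMODE_PREFER_PLAIN = 1,   /* negotiate, prefer non-SSL */
    SSLMODE_PREFER_SSL = 2,     /* negotiate, prefer SSL (default) */
    SSLMODE_REQUIRE = 3         /* fail unless SSL is established */
} SslMode;

/* Whether the first connection attempt should open with an SSL request;
 * REFUSE never retries with SSL, REQUIRE never falls back to plain. */
static int
want_ssl_first(SslMode mode)
{
    return mode == SSLMODE_PREFER_SSL || mode == SSLMODE_REQUIRE;
}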
[
{
"msg_contents": "\nWe had a crash this morning of our server ... not the machine, just the\npostmaster processes ... all three of them spread across three seperate\nports ...\n\nJust looking through logs now, I'm finding:\n\nFATAL: s_lock(20048065) at spin.c:127, stuck spinlock. Aborting.\n\nIn one of them ...\n\nI've got a load of core files all from about the same time, from the\nvarious processes running on teh different ports:\n\n714266 9936 -rw------- 1 pgsql pgsql 5074944 Aug 23 10:32 ./data/base/petpostings/postgres.core\n944391 9984 -rw------- 1 pgsql pgsql 5099520 Aug 23 10:32 ./data/base/trends_acctng/postgres.core\n1015889 9984 -rw------- 1 pgsql pgsql 5099520 Aug 23 10:33 ./data/base/pg_banners/postgres.core\n1055605 10768 -rw------- 1 pgsql pgsql 5505024 Aug 23 10:34 ./data/base/rockwell/postgres.core\n904800 10032 -rw------- 1 pgsql pgsql 5124096 Aug 23 10:32 ./data/base/area902/postgres.core\n619085 74720 -rw------- 1 pgsql pgsql 38219776 Jun 7 21:09 ./data/base/thtphone/postgres.core\n1944360 9936 -rw------- 1 pgsql pgsql 5074944 Aug 23 10:32 ./data/base/counter/postgres.core\n896891 9808 -rw------- 1 pgsql pgsql 5009408 Aug 23 10:32 ./data/base/hub_traf_stats/postgres.core\n1849088 20656 -rw------- 1 pgsql pgsql 10567680 Aug 23 09:56 ./data/base/horde/postgres.core\n849311 7136 -rw------- 1 pgsql pgsql 3645440 Aug 23 12:36 ./special/mukesh/base/archies/postgres.core\n857377 7104 -rw------- 1 pgsql pgsql 3629056 Aug 23 12:36 ./special/mukesh/base/water/postgres.core\n 8009 44176 -rw------- 1 pgsql pgsql 22589440 Aug 23 10:39 ./data2/udmsearch/postgres.core\n\nif any of those help any? \n\nlooking in /var/log/messages, I'm seeing the following just before the\ncrashes:\n\nAug 23 12:33:47 pgsql syslogd: /dev/console: Too many open files in system: Too many open files in system\nAug 23 12:33:47 pgsql syslogd: /var/run/utmp: Too many open files in system\nAug 23 12:33:47 pgsql /kernel: file: table is full\n\nwould this be the cause? if so, I have to raise some limits, no problem\nthere, but just want to confirm ...\n\nthanks ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 23 Aug 2000 14:21:24 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "[7.0.2] problems with spinlock under FreeBSD?"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> looking in /var/log/messages, I'm seeing the following just before the\n> crashes:\n\n> Aug 23 12:33:47 pgsql syslogd: /dev/console: Too many open files in system: Too many open files in system\n> Aug 23 12:33:47 pgsql syslogd: /var/run/utmp: Too many open files in system\n> Aug 23 12:33:47 pgsql /kernel: file: table is full\n\nThat sure looks like you'd better tweak your kernel settings ... but\noffhand I don't see how it could lead to \"stuck spinlock\" errors.\nWhat do you get from gdb backtraces on the corefiles?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Aug 2000 00:05:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] problems with spinlock under FreeBSD? "
},
{
"msg_contents": "On Wed, 23 Aug 2000, The Hermit Hacker wrote:\n\n> \n> We had a crash this morning of our server ... not the machine, just the\n> postmaster processes ... all three of them spread across three seperate\n> ports ...\n> \n> Just looking through logs now, I'm finding:\n\n[snip]\n\n> \n> looking in /var/log/messages, I'm seeing the following just before the\n> crashes:\n> \n> Aug 23 12:33:47 pgsql syslogd: /dev/console: Too many open files in system: Too many open files in system\n> Aug 23 12:33:47 pgsql syslogd: /var/run/utmp: Too many open files in system\n> Aug 23 12:33:47 pgsql /kernel: file: table is full\n> \n> would this be the cause? if so, I have to raise some limits, no problem\n> there, but just want to confirm ...\n\nWhat's maxusers set to in the kernel? If you want to try raising it on\nthe fly, try \n\n# sysctl -w kern.maxfiles=abiggernumberthanitisnow\n\nand set it to a bigger number than it is now. \n\n# sysctl kern.maxfiles\n\nwill tell you what it's currently set to. Whether or not that has\nanything to do with the spinlock problem, no idea.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 24 Aug 2000 06:04:23 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] problems with spinlock under FreeBSD?"
},
{
"msg_contents": "On Thu, 24 Aug 2000, Vince Vielhaber wrote:\n\n> On Wed, 23 Aug 2000, The Hermit Hacker wrote:\n> \n> > \n> > We had a crash this morning of our server ... not the machine, just the\n> > postmaster processes ... all three of them spread across three seperate\n> > ports ...\n> > \n> > Just looking through logs now, I'm finding:\n> \n> [snip]\n> \n> > \n> > looking in /var/log/messages, I'm seeing the following just before the\n> > crashes:\n> > \n> > Aug 23 12:33:47 pgsql syslogd: /dev/console: Too many open files in system: Too many open files in system\n> > Aug 23 12:33:47 pgsql syslogd: /var/run/utmp: Too many open files in system\n> > Aug 23 12:33:47 pgsql /kernel: file: table is full\n> > \n> > would this be the cause? if so, I have to raise some limits, no problem\n> > there, but just want to confirm ...\n> \n> What's maxusers set to in the kernel? If you want to try raising it on\n\nmaxusers 128\n\n\n> the fly, try \n> \n> # sysctl -w kern.maxfiles=abiggernumberthanitisnow\n> \n> and set it to a bigger number than it is now. \n> \n> # sysctl kern.maxfiles\n\njust up'd her to 8192 from 4136 ... thanks, forgot about the sysctl value\nfor this, was dreading having to recompile :(\n\n\n",
"msg_date": "Thu, 24 Aug 2000 09:32:07 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [7.0.2] problems with spinlock under FreeBSD?"
},
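
The sysctl(8) one-liners above have a direct programmatic equivalent. A minimal sketch using FreeBSD's sysctlbyname(3) to compare current file-table usage against the limit; it assumes the kern.openfiles MIB is available on this FreeBSD release:

/* fdcheck.c -- report kernel file-table usage on FreeBSD.
 * Sketch only; assumes the kern.openfiles sysctl exists here. */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
    int         maxfiles,
                openfiles;
    size_t      len;

    len = sizeof(maxfiles);
    if (sysctlbyname("kern.maxfiles", &maxfiles, &len, NULL, 0) == -1)
    {
        perror("kern.maxfiles");
        return 1;
    }
    len = sizeof(openfiles);
    if (sysctlbyname("kern.openfiles", &openfiles, &len, NULL, 0) == -1)
    {
        perror("kern.openfiles");
        return 1;
    }
    printf("%d of %d file table slots in use\n", openfiles, maxfiles);
    return 0;
}

Raising kern.maxfiles from C works the same way: pass the new value through the fourth and fifth arguments (root required), which is exactly what sysctl -w does.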
{
"msg_contents": "Attached ...\n\nOn Thu, 24 Aug 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > looking in /var/log/messages, I'm seeing the following just before the\n> > crashes:\n> \n> > Aug 23 12:33:47 pgsql syslogd: /dev/console: Too many open files in system: Too many open files in system\n> > Aug 23 12:33:47 pgsql syslogd: /var/run/utmp: Too many open files in system\n> > Aug 23 12:33:47 pgsql /kernel: file: table is full\n> \n> That sure looks like you'd better tweak your kernel settings ... but\n> offhand I don't see how it could lead to \"stuck spinlock\" errors.\n> What do you get from gdb backtraces on the corefiles?\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org",
"msg_date": "Thu, 24 Aug 2000 09:40:21 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [7.0.2] problems with spinlock under FreeBSD? "
},
{
"msg_contents": "On Thu, 24 Aug 2000, The Hermit Hacker wrote:\n\n> > the fly, try \n> > \n> > # sysctl -w kern.maxfiles=abiggernumberthanitisnow\n> > \n> > and set it to a bigger number than it is now. \n> > \n> > # sysctl kern.maxfiles\n> \n> just up'd her to 8192 from 4136 ... thanks, forgot about the sysctl value\n> for this, was dreading having to recompile :(\n\nI had to do it a couple of months ago on one of my machines. It was \nalmost at a year of uptime and I didn't want to reboot if I didn't\nhave to!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 24 Aug 2000 09:35:26 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] problems with spinlock under FreeBSD?"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> What do you get from gdb backtraces on the corefiles?\n\n> #2 0x80ee847 in s_lock_stuck (lock=0x20048065 \"\\001\", file=0x816723c \"spin.c\", line=127) at s_lock.c:51\n> #3 0x80ee8c3 in s_lock (lock=0x20048065 \"\\001\", file=0x816723c \"spin.c\", line=127) at s_lock.c:80\n> #4 0x80f1580 in SpinAcquire (lockid=7) at spin.c:127\n> #5 0x80f3903 in LockRelease (lockmethod=1, locktag=0xbfbfe674, lockmode=1) at lock.c:1044\n> #6 0x80f2af9 in UnlockRelation (relation=0x82063f0, lockmode=1) at lmgr.c:178\n> #7 0x806f25e in index_endscan (scan=0x8208780) at indexam.c:284\n\nThat's interesting ... someone failing to release lock.c's master\nspinlock, it looks like. Do you have anything in the postmaster log\nfrom just before the crashes?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Aug 2000 10:22:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] problems with spinlock under FreeBSD? "
},
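
For readers following the backtrace: frames #2/#3 come from the spinlock timeout machinery. Roughly what 7.0-era s_lock() does, simplified down to the shape of the logic -- the real code in src/backend/storage/buffer/s_lock.c uses select()-based backoff and tuned limits, and the constants and the non-atomic TAS stand-in here are made up for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

typedef volatile unsigned char slock_t;

#define MAX_SPINS 600000        /* hypothetical give-up threshold */

static int
tas(slock_t *lock)
{
    /* stand-in only: the real TAS() is an atomic hardware instruction */
    if (*lock)
        return 1;
    *lock = 1;
    return 0;
}

static void
s_lock_sketch(slock_t *lock, const char *file, int line)
{
    int         spins = 0;

    while (tas(lock))
    {
        if (++spins > MAX_SPINS)
        {
            fprintf(stderr,
                    "FATAL: s_lock(%p) at %s:%d, stuck spinlock. Aborting.\n",
                    (void *) lock, file, line);
            abort();            /* the abort() seen in frame #1 of the cores */
        }
        usleep(10);             /* back off briefly before retrying */
    }
}

So "stuck spinlock" means some other process acquired the lock and never released it within the timeout, which is why the interesting question is what the lock holder was doing.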
{
"msg_contents": "On Thu, 24 Aug 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> >> What do you get from gdb backtraces on the corefiles?\n> \n> > #2 0x80ee847 in s_lock_stuck (lock=0x20048065 \"\\001\", file=0x816723c \"spin.c\", line=127) at s_lock.c:51\n> > #3 0x80ee8c3 in s_lock (lock=0x20048065 \"\\001\", file=0x816723c \"spin.c\", line=127) at s_lock.c:80\n> > #4 0x80f1580 in SpinAcquire (lockid=7) at spin.c:127\n> > #5 0x80f3903 in LockRelease (lockmethod=1, locktag=0xbfbfe674, lockmode=1) at lock.c:1044\n> > #6 0x80f2af9 in UnlockRelation (relation=0x82063f0, lockmode=1) at lmgr.c:178\n> > #7 0x806f25e in index_endscan (scan=0x8208780) at indexam.c:284\n> \n> That's interesting ... someone failing to release lock.c's master\n> spinlock, it looks like. Do you have anything in the postmaster log\n> from just before the crashes?\n\nokay, nothing that I can see that is 'unusual' in the log files, but as\nshown below, at ~10:30am today, the same thing appears to have happened\n...\n\n%ls -lt */*.core\n-rw------- 1 pgsql pgsql 22589440 Aug 23 10:39 udmsearch/postgres.core\n-rw------- 1 pgsql pgsql 5505024 Aug 23 10:34 rockwell/postgres.core\n-rw------- 1 pgsql pgsql 5099520 Aug 23 10:33 pg_banners/postgres.core\n-rw------- 1 pgsql pgsql 5009408 Aug 23 10:32 hub_traf_stats/postgres.core\n-rw------- 1 pgsql pgsql 5099520 Aug 23 10:32 trends_acctng/postgres.core\n-rw------- 1 pgsql pgsql 5124096 Aug 23 10:32 area902/postgres.core\n-rw------- 1 pgsql pgsql 5074944 Aug 23 10:32 petpostings/postgres.core\n-rw------- 1 pgsql pgsql 5074944 Aug 23 10:32 counter/postgres.core\n-rw------- 1 pgsql pgsql 10567680 Aug 23 09:56 horde/postgres.core\n\nCheck the gdb on a couple of them:\n\n(gdb) where\n#0 0x18271d90 in kill () from /usr/lib/libc.so.4\n#1 0x182b2e09 in abort () from /usr/lib/libc.so.4\n#2 0x80ee847 in s_lock_stuck (lock=0x20048065 \"\\001\", file=0x816723c \"spin.c\", line=127) at s_lock.c:51\n#3 0x80ee8c3 in s_lock (lock=0x20048065 \"\\001\", file=0x816723c \"spin.c\", line=127) at s_lock.c:80\n#4 0x80f1580 in SpinAcquire (lockid=7) at spin.c:127\n\n(gdb) where\n#0 0x18271d90 in kill () from /usr/lib/libc.so.4\n#1 0x182b2e09 in abort () from /usr/lib/libc.so.4\n#2 0x80ee847 in s_lock_stuck (lock=0x20048065 \"\\001\", file=0x816723c \"spin.c\", line=127) at s_lock.c:51\n#3 0x80ee8c3 in s_lock (lock=0x20048065 \"\\001\", file=0x816723c \"spin.c\", line=127) at s_lock.c:80\n#4 0x80f1580 in SpinAcquire (lockid=7) at spin.c:127\n\nthey all appear to be in the same place ...\n\nnow, I'm running 4 seperate postmaster daemons, with seperate data\ndirectories, as:\n\nps ux | grep postmaster | grep 543\npgsql 50554 0.0 0.1 6904 556 p0- I 1:12PM 0:04.88 /pgsql/bin/postmaster -D/pgsql/special/sales.org -i -p 5434 (postgres)\npgsql 61821 0.0 0.1 7080 636 p6- S 4:38PM 3:03.86 /pgsql/bin/postmaster -B 256 -N 128 -o -F -o /pgsql/logs/5432.61820 -S 32768 -i -p 5432 -D/pgsql/data (postgres)\npgsql 62268 0.0 0.0 5488 0 p4- IW - 0:00.00 /pgsql/bin/postmaster -d 1 -N 16 -o -F -o /pgsql/logs/5433.62267 -S 32768 -i -p 5433 -D/pgsql/special/lee (postgres)\npgsql 27084 0.0 0.1 5496 596 p4- S 8:25AM 0:54.11 /pgsql/bin/postmaster -d 1 -N 16 -o -F -o /pgsql/logs/5437.27083 -S 32768 -i -p 5437 -D/pgsql/special/mukesh (postgres)\n\nand the above core files are from the one running on 5432 ...\n\nyou still have your account on that machine if you want to take a quick\nlook around ... else, anything else I should be looking at?\n\n",
"msg_date": "Thu, 24 Aug 2000 13:44:38 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [7.0.2] problems with spinlock under FreeBSD? "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> you still have your account on that machine if you want to take a quick\n> look around ... else, anything else I should be looking at?\n\nI poked around and couldn't learn much of anything --- the logfiles from\nyesterday are already gone, apparently. I did find some interesting\nentries in today's logfiles:\n\n%grep Lru *\n5432.61820:FATAL 1: ReleaseLruFile: No open files available to be closed\npostmaster.5437.36290:FATAL 1: ReleaseLruFile: No open files available to be closed\npostmaster.5437.62218:FATAL 1: ReleaseLruFile: No open files available to be closed\n\nWhat we see here are backends choking because there are no free kernel\nfile descriptor slots, even after they've closed *all* of their own\ndiscretionary FDs. So you've definitely got a serious problem with\ninsufficient FD slots. Time to tweak those kernel parameters.\n\nI still don't see a linkage between too few FDs and the stuck-spinlock\ncrashes, but maybe there is one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Aug 2000 15:18:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] problems with spinlock under FreeBSD? "
},
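
The ReleaseLruFile message comes from fd.c's virtual file descriptor layer: when open() fails because the kernel table is full, the backend closes one of its own least-recently-used files and retries. A simplified, self-contained sketch of the idea (not the actual PostgreSQL code; LRU is approximated here by open order):

#include <sys/types.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

#define MAX_TRACKED 64

static int  tracked[MAX_TRACKED];
static int  ntracked = 0;

/* close our oldest tracked fd; returns -1 if we have none left */
static int
release_lru_fd(void)
{
    int         i;

    if (ntracked == 0)
        return -1;              /* nothing of ours left to close */
    close(tracked[0]);
    for (i = 1; i < ntracked; i++)
        tracked[i - 1] = tracked[i];
    ntracked--;
    return 0;
}

int
open_with_lru_fallback(const char *path, int flags, mode_t mode)
{
    int         fd;

    while ((fd = open(path, flags, mode)) < 0)
    {
        if (errno != ENFILE && errno != EMFILE)
            return -1;          /* some unrelated failure; give up */
        if (release_lru_fd() < 0)
            return -1;
    }
    if (ntracked < MAX_TRACKED)
        tracked[ntracked++] = fd;
    return fd;
}

When release_lru_fd() has nothing left to give back, open() keeps failing with ENFILE -- the "No open files available to be closed" condition in the logs above.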
{
"msg_contents": "\nokay, I just doubled my FDs to 8192 from 4136 and will watch things\n... anyone know of a way of telling how many are currently in use, and\nwhere they peaked? somethign similar to 'netstat -m' showing mbufs?\n\nthanks tom ...\n\nOn Thu, 24 Aug 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > you still have your account on that machine if you want to take a quick\n> > look around ... else, anything else I should be looking at?\n> \n> I poked around and couldn't learn much of anything --- the logfiles from\n> yesterday are already gone, apparently. I did find some interesting\n> entries in today's logfiles:\n> \n> %grep Lru *\n> 5432.61820:FATAL 1: ReleaseLruFile: No open files available to be closed\n> postmaster.5437.36290:FATAL 1: ReleaseLruFile: No open files available to be closed\n> postmaster.5437.62218:FATAL 1: ReleaseLruFile: No open files available to be closed\n> \n> What we see here are backends choking because there are no free kernel\n> file descriptor slots, even after they've closed *all* of their own\n> discretionary FDs. So you've definitely got a serious problem with\n> insufficient FD slots. Time to tweak those kernel parameters.\n> \n> I still don't see a linkage between too few FDs and the stuck-spinlock\n> crashes, but maybe there is one.\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 24 Aug 2000 16:26:07 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [7.0.2] problems with spinlock under FreeBSD? "
},
{
"msg_contents": "On Thu, 24 Aug 2000, The Hermit Hacker wrote:\n\n> \n> okay, I just doubled my FDs to 8192 from 4136 and will watch things\n> ... anyone know of a way of telling how many are currently in use, and\n> where they peaked? somethign similar to 'netstat -m' showing mbufs?\n\nPossibly pstat will do it. It'll give current but I don't know about\npeak.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 24 Aug 2000 15:53:17 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] problems with spinlock under FreeBSD? "
},
{
"msg_contents": "On Thu, 24 Aug 2000, The Hermit Hacker wrote:\n\n> Date: Thu, 24 Aug 2000 16:26:07 -0300 (ADT)\n> From: The Hermit Hacker <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] [7.0.2] problems with spinlock under FreeBSD? \n> \n> \n> okay, I just doubled my FDs to 8192 from 4136 and will watch things\n> ... anyone know of a way of telling how many are currently in use, and\n> where they peaked? somethign similar to 'netstat -m' showing mbufs?\n\n\nI have on rather busy site running FreeBSD\n12:57:38[info]:/home/megera$ sysctl kern.maxfiles\nkern.maxfiles: 16424\n\nDid you try systat -vm ?\n\n\tRegards,\n\n\t\tOleg\n\n> \n> thanks tom ...\n> \n> On Thu, 24 Aug 2000, Tom Lane wrote:\n> \n> > The Hermit Hacker <[email protected]> writes:\n> > > you still have your account on that machine if you want to take a quick\n> > > look around ... else, anything else I should be looking at?\n> > \n> > I poked around and couldn't learn much of anything --- the logfiles from\n> > yesterday are already gone, apparently. I did find some interesting\n> > entries in today's logfiles:\n> > \n> > %grep Lru *\n> > 5432.61820:FATAL 1: ReleaseLruFile: No open files available to be closed\n> > postmaster.5437.36290:FATAL 1: ReleaseLruFile: No open files available to be closed\n> > postmaster.5437.62218:FATAL 1: ReleaseLruFile: No open files available to be closed\n> > \n> > What we see here are backends choking because there are no free kernel\n> > file descriptor slots, even after they've closed *all* of their own\n> > discretionary FDs. So you've definitely got a serious problem with\n> > insufficient FD slots. Time to tweak those kernel parameters.\n> > \n> > I still don't see a linkage between too few FDs and the stuck-spinlock\n> > crashes, but maybe there is one.\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 25 Aug 2000 11:59:50 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] problems with spinlock under FreeBSD? "
}
] |
[
{
"msg_contents": "Hi!\n\n I've made a small (2 hours work) program to make \"histograms\" on data.\n Giving a table and column he tries to put in buckets special values\n(special means the most/least used values). It tries to be a little smart,\nie, if there are lots of guys with most/least values it will not put them\nin buckets.\n\n Example:\n bucket size=10\n$ ./a.out \"dbname=teste\" d_pags uid\nDistinct values: 1028\nNumber of tuples: 6880\nOn a uniform distribution: 6.692607 tuples/value\n\n# of values with more references\n1 - 1\n2 - 1\n3 - 1\n[...]\nThis means that there is only one value as the most referenced (110 times)\n# of values with less references\n1 - 253\n2 - 153\n3 - 109\n[..]\nThis means that there ara 253 values that have the least references (once)\nBest case buckets\n1 - u805156 (110) \n2 - u1503927 (103) \n3 - u110525 (82) \n4 - u91106009 (78) \n5 - u1106837 (60) \n6 - u1714112 (55) \n7 - u1414335 (53) \n8 - u1105732 (50) \n9 - u302719 (49) \nWorst case buckets\n[there are so many guys with only one ref... nothing can be put in\nbuckets]\nRemoved values: 9\nRemoved tuples: 640\nExpected tuples for each of unclassified values: 6.123651 tuples/value\n\nIf all values have equal prob of being chosen:\nError in normal case: 5.464981\nError in hist case: 4.905326\n\nIf my calculations are right:\nFor 9 of 1028 values (9.5% of relation) there is a very precise idea,\nbut in general (for a random value being selected) that's not a great\nadvance. At least for this case ...\n\n\nTiago\n\n",
"msg_date": "Wed, 23 Aug 2000 21:13:55 +0100 (WEST)",
"msg_from": "=?iso-8859-1?Q?Tiago_Ant=E3o?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "statistics"
}
] |
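
The raw per-value counts that Tiago's bucket-picker works from can be pulled with a single GROUP BY. A minimal libpq sketch, using the table and column from his example run (dbname=teste, d_pags.uid); error handling is trimmed for brevity:

/* valfreq.c -- print per-value reference counts for a column, the raw
 * input for the kind of "best case buckets" shown above. */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn;
    PGresult   *res;
    int         i;

    conn = PQconnectdb("dbname=teste");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    res = PQexec(conn,
                 "SELECT uid, count(*) AS refs FROM d_pags "
                 "GROUP BY uid ORDER BY refs DESC");
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }

    /* most-referenced values first; the top few are bucket candidates */
    for (i = 0; i < PQntuples(res); i++)
        printf("%s - %s\n", PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));

    PQclear(res);
    PQfinish(conn);
    return 0;
}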
[
{
"msg_contents": "\nHow do pronounce PostgreSQL - the final word (pun intended). Listen to \nthe wav file to know for sure how it's pronounced.\n\n http://www.postgresql.org/postgresql.wav\n\nAnd no, it's not my voice.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 23 Aug 2000 16:47:47 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "How do pronounce PostgreSQL - the final word."
},
{
"msg_contents": "> \n> How do pronounce PostgreSQL - the final word (pun intended). Listen to \n> the wav file to know for sure how it's pronounced.\n> \n> http://www.postgresql.org/postgresql.wav\n> \n> And no, it's not my voice.\n\nWho's voice is it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 14:30:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How do pronounce PostgreSQL - the final word."
},
{
"msg_contents": "On Sun, 15 Oct 2000, Bruce Momjian wrote:\n\n> > \n> > How do pronounce PostgreSQL - the final word (pun intended). Listen to \n> > the wav file to know for sure how it's pronounced.\n> > \n> > http://www.postgresql.org/postgresql.wav\n> > \n> > And no, it's not my voice.\n> \n> Who's voice is it?\n\nMy partner. He's a long time broadcaster. He's had his own radio \nshows over the years, also a former radio DJ. We're fixin to do \nsome voiceover demos and music-on-hold CDs with customized messages.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 15 Oct 2000 15:26:07 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How do pronounce PostgreSQL - the final word."
}
] |
[
{
"msg_contents": "At 8/23/2000 04:47 PM -0400, Vince Vielhaber wrote:\n\n>How do pronounce PostgreSQL - the final word (pun intended). Listen to\n>the wav file to know for sure how it's pronounced.\n>\n> http://www.postgresql.org/postgresql.wav\n\nYou couldn't ship it as an mp3 stream? :)\n\n",
"msg_date": "Wed, 23 Aug 2000 16:29:44 -0500",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How do pronounce PostgreSQL - the final word."
},
{
"msg_contents": "On Wed, 23 Aug 2000, Thomas Swan wrote:\n\n> At 8/23/2000 04:47 PM -0400, Vince Vielhaber wrote:\n> \n> >How do pronounce PostgreSQL - the final word (pun intended). Listen to\n> >the wav file to know for sure how it's pronounced.\n> >\n> > http://www.postgresql.org/postgresql.wav\n> \n> You couldn't ship it as an mp3 stream? :)\n\nHmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm.........................\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 23 Aug 2000 18:26:22 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How do pronounce PostgreSQL - the final word."
}
] |
[
{
"msg_contents": "I take it the \"PostgreSQL BugTool Submission\" posts that've been\nappearing on pgsql-bugs for the last day or so are output from our\nmuch-discussed new bug tracking system. I have a couple of major\nproblems with these reports:\n\t1. Utterly useless Subject: line. Why does the form ask\n\t for a \"short description\" if not to use as a subject?\n\t2. From: line is [email protected], ie, the mail\n\t list address, not the address of the bug submitter.\n\t This makes it hard to reply to the submitter.\n\t3. Webform does not ask for platform etc. info requested\n\t by our standard bug report form. Indeed, webform\n\t seems designed to discourage any sort of complete report.\n\t A 70x10 input window and no way to attach files :-(\n\nI may be feeling a tad crabby tonight, but as far as I can see we\nhave learned nothing whatsoever from the failure of the Keystone\ngo-round. It won't take too many more of these annoyances before\nI set my mail filters to bit-bucket all traffic with \"PostgreSQL\nBugTool\" in the subject line.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Aug 2000 00:54:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Some gripes about BugTool"
},
{
"msg_contents": "On Thu, 24 Aug 2000, Tom Lane wrote:\n\n> I take it the \"PostgreSQL BugTool Submission\" posts that've been\n> appearing on pgsql-bugs for the last day or so are output from our\n> much-discussed new bug tracking system. I have a couple of major\n> problems with these reports:\n> \t1. Utterly useless Subject: line. Why does the form ask\n> \t for a \"short description\" if not to use as a subject?\n\neasily changed.\n\n> \t2. From: line is [email protected], ie, the mail\n> \t list address, not the address of the bug submitter.\n> \t This makes it hard to reply to the submitter.\n\nYou have to be on the list to send mail to it. Since every user isn't\non the list they can't be in the From line. Complain to Marc.\n\n> \t3. Webform does not ask for platform etc. info requested\n> \t by our standard bug report form. Indeed, webform\n> \t seems designed to discourage any sort of complete report.\n> \t A 70x10 input window and no way to attach files :-(\n\nThe upload files function is commented out since I had no place to put\nthe file when uploaded. \n\n> I may be feeling a tad crabby tonight, but as far as I can see we\n> have learned nothing whatsoever from the failure of the Keystone\n> go-round. It won't take too many more of these annoyances before\n> I set my mail filters to bit-bucket all traffic with \"PostgreSQL\n> BugTool\" in the subject line.\n\nI can change the subject line easy enuf but don't plan on any further\ndevelopment on the tool since Ben is supposed to be presenting us with\nthe ultimate bug tool. If it's going to be more than a month or so \nbefore we'll see this thing then I'll add/enable some functionality,\notherwise I can't see the point.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 24 Aug 2000 07:03:14 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some gripes about BugTool"
},
{
"msg_contents": "On Thu, 24 Aug 2000, Vince Vielhaber wrote:\n\n> On Thu, 24 Aug 2000, Tom Lane wrote:\n> \n> > I take it the \"PostgreSQL BugTool Submission\" posts that've been\n> > appearing on pgsql-bugs for the last day or so are output from our\n> > much-discussed new bug tracking system. I have a couple of major\n> > problems with these reports:\n> > \t1. Utterly useless Subject: line. Why does the form ask\n> > \t for a \"short description\" if not to use as a subject?\n> \n> easily changed.\n> \n> > \t2. From: line is [email protected], ie, the mail\n> > \t list address, not the address of the bug submitter.\n> > \t This makes it hard to reply to the submitter.\n> \n> You have to be on the list to send mail to it. Since every user isn't\n> on the list they can't be in the From line. Complain to Marc.\n\nhrmm, how about setting a Reply-To: address that contains both the\nsubmitter email and the list?\n\n\n",
"msg_date": "Thu, 24 Aug 2000 09:26:24 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some gripes about BugTool"
},
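
Concretely, the generated report mail would then carry something like (the submitter address here is purely illustrative):

From: [email protected]
Reply-To: [email protected], [email protected]

so an ordinary reply reaches both the list and the original submitter.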
{
"msg_contents": "On Thu, 24 Aug 2000, The Hermit Hacker wrote:\n\n> On Thu, 24 Aug 2000, Vince Vielhaber wrote:\n> \n> > On Thu, 24 Aug 2000, Tom Lane wrote:\n> > \n> > > I take it the \"PostgreSQL BugTool Submission\" posts that've been\n> > > appearing on pgsql-bugs for the last day or so are output from our\n> > > much-discussed new bug tracking system. I have a couple of major\n> > > problems with these reports:\n> > > \t1. Utterly useless Subject: line. Why does the form ask\n> > > \t for a \"short description\" if not to use as a subject?\n> > \n> > easily changed.\n> > \n> > > \t2. From: line is [email protected], ie, the mail\n> > > \t list address, not the address of the bug submitter.\n> > > \t This makes it hard to reply to the submitter.\n> > \n> > You have to be on the list to send mail to it. Since every user isn't\n> > on the list they can't be in the From line. Complain to Marc.\n> \n> hrmm, how about setting a Reply-To: address that contains both the\n> submitter email and the list?\n\nMade the change but haven't seen any responses yet - I'm not on that \nlist so I can't see for myself.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 24 Aug 2000 09:33:51 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some gripes about BugTool"
}
] |
[
{
"msg_contents": "\n> The Hermit Hacker <[email protected]> writes:\n> >Actually, the one that gets me is those that refer to it as Postgres\n> >... postgres was a project out of Berkeley way back in the 80's, early\n> >90's ... hell, it was based on a PostQuel language ... this ain't\n> >postgres, its only based on it :(\n> \n> Yes, and besides, if you call it Postgres, you are certain to offend the \n> struggling Postgres community.\n\nIs there anybody out there to offend ?\n\n> Those PostQuel language zealots have enough\n> problems keeping their technology alive as it is without people going\naround\n> creating gratuitous brand confusion.\n\nHmm? Are you suggesting that the PostQuel Postgres is still\nunder development, and those zealots exist ? \nI thought it is dead since Version 4.2.\nCould you give me a pointer ?\n\nThanks\nAndreas\n",
"msg_date": "Thu, 24 Aug 2000 10:29:50 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Re: How Do You Pronounce \"PostgreSQL\"?"
}
] |
[
{
"msg_contents": "\n> In my personal experience, out in the real world, people refer to it as\n> \"Postgres\". The QL being a mouthful, and contrary to the common practice\n> of pronouncing SQL as SEQUEL. While Marc points out that technically\n> Postgres died when it left Berkeley, that discontinuity is really only\n> something we choose to acknowledge. As Henry points out, SQL \n> is only one\n> feature that happened to be added. Apart from not owning the domain\n> name, why shouldn't it just be \"Postgres\"?\n\nEverybody I know also still sais \"Postgres\", leaving out the Q L\nbecaus it is too long. In german we would not have the \"sequel\" problem, \nsince we pronounce it \"ess ku ell\". I think they all know that they are\nreally \nreferring to PostgreSQL.\n\nGuess what you find under www.postgres.com ? Yes, it is Great Bridge.\npostgresql.com is taken by some Korean domain grabber.\n\nBTW Marc, I would make www.postgres.org point to postgresql.\n\nAndreas\n",
"msg_date": "Thu, 24 Aug 2000 10:55:11 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: How Do You Pronounce \"PostgreSQL\"?"
},
{
"msg_contents": "On Thu, 24 Aug 2000, Zeugswetter Andreas SB wrote:\n\n> \n> > In my personal experience, out in the real world, people refer to it as\n> > \"Postgres\". The QL being a mouthful, and contrary to the common practice\n> > of pronouncing SQL as SEQUEL. While Marc points out that technically\n> > Postgres died when it left Berkeley, that discontinuity is really only\n> > something we choose to acknowledge. As Henry points out, SQL \n> > is only one\n> > feature that happened to be added. Apart from not owning the domain\n> > name, why shouldn't it just be \"Postgres\"?\n> \n> Everybody I know also still sais \"Postgres\", leaving out the Q L\n> becaus it is too long. In german we would not have the \"sequel\" problem, \n> since we pronounce it \"ess ku ell\". I think they all know that they are\n> really \n> referring to PostgreSQL.\n> \n> Guess what you find under www.postgres.com ? Yes, it is Great Bridge.\n> postgresql.com is taken by some Korean domain grabber.\n> \n> BTW Marc, I would make www.postgres.org point to postgresql.\n\nNot our domain to point ... it too belongs to Great Bridge ...\n\n",
"msg_date": "Thu, 24 Aug 2000 09:29:06 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: How Do You Pronounce \"PostgreSQL\"?"
},
{
"msg_contents": "The Hermit Hacker wrote:\n\n> > Guess what you find under www.postgres.com ? Yes, it is Great Bridge.\n> > postgresql.com is taken by some Korean domain grabber.\n> >\n> > BTW Marc, I would make www.postgres.org point to postgresql.\n> \n> Not our domain to point ... it too belongs to Great Bridge ...\n\nThat's interesting, when did GreatBridge acquire them?\n\nIn my opinion, we should change the name to Postgres, and get\nGreatBridge to donate the .org domain to the opensource project. That's\ngood for Greatbridge because they own the .com which would actually then\nbecome useful. It's good for the free project because people are calling\nit postgres anyway and it's a better brand name.\n",
"msg_date": "Fri, 25 Aug 2000 09:33:15 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: How Do You Pronounce \"PostgreSQL\"?"
},
{
"msg_contents": "On Fri, 25 Aug 2000, Chris Bitmead wrote:\n\n> The Hermit Hacker wrote:\n> \n> > > Guess what you find under www.postgres.com ? Yes, it is Great Bridge.\n> > > postgresql.com is taken by some Korean domain grabber.\n> > >\n> > > BTW Marc, I would make www.postgres.org point to postgresql.\n> > \n> > Not our domain to point ... it too belongs to Great Bridge ...\n> \n> That's interesting, when did GreatBridge acquire them?\n> \n> In my opinion, we should change the name to Postgres, and get\n> GreatBridge to donate the .org domain to the opensource project. That's\n> good for Greatbridge because they own the .com which would actually then\n> become useful. It's good for the free project because people are calling\n> it postgres anyway and it's a better brand name.\n\nJust because ppl are referring to the project by the wrong name doesn't\nmake it right ... just because MySQL changes their name to MaxSQL, are you\ngoing to accept them any differently? \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 24 Aug 2000 23:46:00 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: How Do You Pronounce \"PostgreSQL\"?"
},
{
"msg_contents": "The Hermit Hacker wrote:\n\n> Just because ppl are referring to the project by the wrong name doesn't\n> make it right...\n\nWhich do you prefer? To be the only one who is right, when everyone else\nis wrong. Or to change the definition of right so that the software is\nuniversally called by its correct name?\n",
"msg_date": "Fri, 25 Aug 2000 13:58:02 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: How Do You Pronounce \"PostgreSQL\"?"
},
{
"msg_contents": "On Fri, 25 Aug 2000, Chris Bitmead wrote:\n\n> The Hermit Hacker wrote:\n> \n> > Just because ppl are referring to the project by the wrong name doesn't\n> > make it right...\n> \n> Which do you prefer? To be the only one who is right, when everyone else\n> is wrong. Or to change the definition of right so that the software is\n> universally called by its correct name?\n> \n\nHmmm. Let's make a little table, shall we?\n\nProper name\t\tNickname\n-----------\t\t--------\nChevrolet\t\tChevy\nPostgreSQL\t\tPostgres\nChristopher\t\tChris\nMarcus\t\t\tMarc\nRobert\t\t\tBob\nRichard\t\t\tRick\n\nSince I don't see Chevrolet changing their name to Chevy just because\neveryone calls it that, or Christopher Chris, Marcus Marc, Robert Bob,\netc. why do you feel it necessary to change PostgreSQL to Postgres? \n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 25 Aug 2000 11:18:09 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: How Do You Pronounce \"PostgreSQL\"?"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> Hmmm. Let's make a little table, shall we?\n> \n> Proper name Nickname\n> ----------- --------\n> Chevrolet Chevy\n> PostgreSQL Postgres\n> Christopher Chris\n> Marcus Marc\n> Robert Bob\n> Richard Rick\n> \n> Since I don't see Chevrolet changing their name to Chevy just because\n> everyone calls it that, or Christopher Chris, Marcus Marc, Robert Bob,\n> etc. why do you feel it necessary to change PostgreSQL to Postgres?\n\nI would be far overstating my case to say it is \"necessary\". Only\nputting forward an opinion that it is desirable. \n\nThe current release of PostgreSQL is 7.0. In reality it is release 3.0,\nthe four releases prior to that were known as Postgres. I don't see the\nname change as having been desirable.\n\nFor Christopher -> Chris, I must say that this is one aspect I dislike\nabout my own name. I have deliberately chosen names for my own children\nwhich are unlikely to require an abbreviation.\n\nI could also draw up my own table of abbreviations which have become so\nubiquitous that the original names are all but forgotten. In fact one\ncould go through the dictionary and point out many of the words as\nhaving arisen from an abbreviation of a longer expression.\n",
"msg_date": "Sat, 26 Aug 2000 23:43:34 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: How Do You Pronounce \"PostgreSQL\"?"
},
{
"msg_contents": "On Sat, 26 Aug 2000, Chris wrote:\n\n> Vince Vielhaber wrote:\n> > Hmmm. Let's make a little table, shall we?\n> > \n> > Proper name Nickname\n> > ----------- --------\n> > Chevrolet Chevy\n> > PostgreSQL Postgres\n> > Christopher Chris\n> > Marcus Marc\n> > Robert Bob\n> > Richard Rick\n> > \n> > Since I don't see Chevrolet changing their name to Chevy just because\n> > everyone calls it that, or Christopher Chris, Marcus Marc, Robert Bob,\n> > etc. why do you feel it necessary to change PostgreSQL to Postgres?\n> \n> I would be far overstating my case to say it is \"necessary\". Only\n> putting forward an opinion that it is desirable. \n> \n> The current release of PostgreSQL is 7.0. In reality it is release 3.0,\n> the four releases prior to that were known as Postgres. I don't see the\n> name change as having been desirable.\n\nHuh? v1.09 was Postgres95, v2.x (our v6.x) was PostgreSQL, and that was\nover 4 years ago ...\n\n",
"msg_date": "Sat, 26 Aug 2000 12:21:21 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: How Do You Pronounce \"PostgreSQL\"?"
},
{
"msg_contents": "\n> > The current release of PostgreSQL is 7.0. In reality it is release 3.0,\n> > the four releases prior to that were known as Postgres. I don't see the\n> > name change as having been desirable.\n> \n> Huh? v1.09 was Postgres95, v2.x (our v6.x) was PostgreSQL, and that was\n> over 4 years ago ...\n\nI stand corrected. The current release should be 2.0 of PostgreSQL.\n",
"msg_date": "Sun, 27 Aug 2000 03:17:56 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: How Do You Pronounce \"PostgreSQL\"?"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> > In my personal experience, out in the real world, people refer to it as\n> > \"Postgres\". The QL being a mouthful, and contrary to the common practice\n> > of pronouncing SQL as SEQUEL. While Marc points out that technically\n> > Postgres died when it left Berkeley, that discontinuity is really only\n> > something we choose to acknowledge. As Henry points out, SQL \n> > is only one\n> > feature that happened to be added. Apart from not owning the domain\n> > name, why shouldn't it just be \"Postgres\"?\n> \n> Everybody I know also still sais \"Postgres\", leaving out the Q L\n> becaus it is too long. In german we would not have the \"sequel\" problem, \n> since we pronounce it \"ess ku ell\". I think they all know that they are\n> really \n> referring to PostgreSQL.\n\nSomeone once described our name as anti-marketing. That point hit home\nwith me.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 14:38:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: How Do You Pronounce \"PostgreSQL\"?"
},
{
"msg_contents": "> On Fri, 25 Aug 2000, Chris Bitmead wrote:\n> \n> > The Hermit Hacker wrote:\n> > \n> > > Just because ppl are referring to the project by the wrong name doesn't\n> > > make it right...\n> > \n> > Which do you prefer? To be the only one who is right, when everyone else\n> > is wrong. Or to change the definition of right so that the software is\n> > universally called by its correct name?\n> > \n> \n> Hmmm. Let's make a little table, shall we?\n> \n> Proper name\t\tNickname\n> -----------\t\t--------\n> Chevrolet\t\tChevy\n> PostgreSQL\t\tPostgres\n> Christopher\t\tChris\n> Marcus\t\t\tMarc\n> Robert\t\t\tBob\n> Richard\t\t\tRick\n> \n> Since I don't see Chevrolet changing their name to Chevy just because\n> everyone calls it that, or Christopher Chris, Marcus Marc, Robert Bob,\n> etc. why do you feel it necessary to change PostgreSQL to Postgres? \n\nThis is unfair because Vince(Vincent?) works for Chrysler. :-)\n\nAlso, I never realized Marc was short for Marcus. I thought it was just\na funny (Canuk/Canadian) spelling of Mark. :-) Shows you how stupid I\nam in some things.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 14:42:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: How Do You Pronounce \"PostgreSQL\"?"
}
] |
[
{
"msg_contents": "[Meta: Please advise if this is the wrong list. I think, since this\nobservation relates to Pg internals, this might be the right one, but\nfeel free to move it back to -general if I'm wrong; I subscribe to\nboth in any case]\n\nAs some of you will have inferred if you've read my last couple of\nposts, I'm working on a database with a structure like this (I'm\nabstracting; I'm not unfortunately allowed to show you what my client\nis really doing here)\n\nA table called 'things' with two columns 'name' and 'category'. The\npair ('name','category') is a primary key.\n\nThere are ~ 10 000 000 rows in the table, and category takes values\nfrom a more-or-less fixed set of ~1000 possibilities. As previously\ndescribed the most popular category holds around half the rows, the\nnext most popular holds nearly half of those left, and most categories\noccur very rarely indeed. The median is probably around 1000 (which is\nless than the 10 000 you'd expect).\n\nAnyhow, this question isn't about speeding up queries --- we already\nhave that in the histogram thread. This question is about speeding up\ninserts.\n\nMy standard configuration is to have a unique index on (name,category)\nand a non-unique index on (category). The main table is ~ 2G on disk,\nthe index on (name,cat) is about the same size, the index on (cat) is\naround 0.6G.\n\nIn this set-up inserts have dropped to the terrifyingly slow rate of\nseveral hours per 10 000 insertions. This is not adequate to my needs,\nI occasionally have to process 1 000 000 insertions or more!\n\nI have several ideas for speeding this up at the SQL level (including\ninserting into a temp table and then using INSERT ... SELECT to remove\nthe overhead of separate inserts) but that's not what I want to talk\nabout either...\n\nWhat I did to day, which made a staggering difference, is dropping the\nnon-unique index on (category). Suddenly I can insert at approx 40 000\ninsertions per minute, which is fine for my needs!\n\nSo why is updating the huge (2G) unique index on (name,cat) not too\nmuch of a problem, but updating the small (600M) non-unique index on\n(cat) sufficient to increase speed by around two orders of magnitude?\n\nA possible reason has occurred to me:\n\nThe real slow-down is noticeable when I'm doing a bulk insert where\nall new rows belong to the most popular category. I know that some\nbtree implementations don't behave particularly sanely with several\nmillion rows in a single key.. is the btree implementation used too\nslow in this case?\n\nI haven't collected all the performance statistics I'd like to have,\ndue to external time pressures, but I can say that under the new\nfaster configuration, the insertion process is CPU bound, with disk\naccess far below the levels the machine is capable of. If I have a chance\nI'll collect these stats for the old method too.\n\nAny ideas as to what's going on here appreciated (if not, perhaps it\nwill point you towards an optimisation you ought to make for 7.1)\n\nJules\n",
"msg_date": "Thu, 24 Aug 2000 11:04:25 +0100",
"msg_from": "Jules Bean <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance on inserts"
}
] |
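
One way to package the workaround Jules found -- dropping the hot non-unique index for the duration of the load and rebuilding it afterwards -- sketched with libpq. The index name is hypothetical (the schema above is abstracted), the per-row INSERTs are elided, and in practice COPY would be faster still:

/* bulkload.c -- drop the (category) index around a bulk load and
 * rebuild it afterwards.  Sketch only; object names are placeholders. */
#include <stdio.h>
#include <libpq-fe.h>

static int
run(PGconn *conn, const char *sql)
{
    PGresult   *res = PQexec(conn, sql);
    int         ok = (PQresultStatus(res) == PGRES_COMMAND_OK);

    if (!ok)
        fprintf(stderr, "%s: %s", sql, PQerrorMessage(conn));
    PQclear(res);
    return ok;
}

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=things_db");   /* placeholder */

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    run(conn, "DROP INDEX things_category_idx");          /* hypothetical name */
    run(conn, "BEGIN");
    /* ... issue the INSERTs (or better, a single COPY) here ... */
    run(conn, "COMMIT");
    run(conn, "CREATE INDEX things_category_idx ON things (category)");

    PQfinish(conn);
    return 0;
}

The unique index on (name,category) stays in place throughout, since it is still doing useful duplicate checking and, per the report above, is not the bottleneck.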
[
{
"msg_contents": "At 10:29 AM 8/24/00 +0200, Zeugswetter Andreas SB wrote:\n>\n>> The Hermit Hacker <[email protected]> writes:\n>> >Actually, the one that gets me is those that refer to it as Postgres\n>> >... postgres was a project out of Berkeley way back in the 80's, early\n>> >90's ... hell, it was based on a PostQuel language ... this ain't\n>> >postgres, its only based on it :(\n>> \n>> Yes, and besides, if you call it Postgres, you are certain to offend the \n>> struggling Postgres community.\n>\n>Is there anybody out there to offend ?\n>\n>> Those PostQuel language zealots have enough\n>> problems keeping their technology alive as it is without people going\n>around\n>> creating gratuitous brand confusion.\n>\n>Hmm? Are you suggesting that the PostQuel Postgres is still\n>under development, and those zealots exist ? \n>I thought it is dead since Version 4.2.\n>Could you give me a pointer ?\n\nUhhh...satire alert! satire alert!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 24 Aug 2000 06:41:06 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: Re: How Do You Pronounce \"PostgreSQL\"?"
}
] |
[
{
"msg_contents": "All,\n\nAs I mentioned the other day, we're working on a free project\nhosting site for tools and applications that work with PostgreSQL.\n\nIs anyone familiar with the tigris.org hosting infrastructure?\nThey're doing a lot of what we're going to be doing, and we'd be\ninterested in people's opinions of their design choices, etc.\n\nPlease reply to me off-list, as this is somewhat OT, but I wanted to\nthrow it out to the -hackers group.\n\nThanks,\n\nNed Lilly\nVP Hacker Relations\nGreat Bridge\n\n",
"msg_date": "Thu, 24 Aug 2000 10:19:58 -0400",
"msg_from": "Ned Lilly <[email protected]>",
"msg_from_op": true,
"msg_subject": "somewhat OT: tigris.org"
}
] |
[
{
"msg_contents": "> Hello.\n> \n> Has anybody got any experience using libpq.dll from Visual Basic\n> (currently using Visual Studio 6), and if so, do you have the\n> declarations handy. I have a little trouble finding out how the PGconn\n> and PGresult should look like, and as far as I know these are required\n> to use the connect and execute functions in the library.\n> \n> Yours faithfully.\n> Finn Kettner.\n> PS. If anybody know a program to automatically extract informations\n> from a dll and create a api.txt file for Visual Basic, then please let\n> me know.\n\nI don't know exactly which format VB expects, but you can get a list of\nexports from the DLL using:\ndumpbin /exports libpq.dll\n\nIf you also need the function definitions, check libpq-fe.h for C style\nsyntax. (It's in src/interfaces/libpq)\n\nAs a sidenote, you may be much better off using ADO with the ODBC driver -\nit's definitly move VB-friendly.\n\n//Magnus\n",
"msg_date": "Thu, 24 Aug 2000 16:51:28 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: libpq.dll and VB"
},
{
"msg_contents": "On 24 Aug 00 at 16:51, Magnus Hagander wrote about RE: [HACKERS] \nlibpq.dll and VB:\n\n> > Has anybody got any experience using libpq.dll from Visual Basic\n> > (currently using Visual Studio 6), and if so, do you have the\n> > declarations handy. I have a little trouble finding out how the PGconn\n> > and PGresult should look like, and as far as I know these are required\n> > to use the connect and execute functions in the library.\n\n> > PS. If anybody know a program to automatically extract informations\n> > from a dll and create a api.txt file for Visual Basic, then please let\n> > me know.\n\n> I don't know exactly which format VB expects, but you can get a list\n> of exports from the DLL using: dumpbin /exports libpq.dll\n\nYes, I've tried that, but unfortunately, that is not exactly the \nformat that VB need, but I have considered using it as a starting \npoint.\n\n> If you also need the function definitions, check libpq-fe.h for C\n> style syntax. (It's in src/interfaces/libpq)\n\nI need function definitions and structure (called Type in VB) \ndefinitions, and yes I have looked in libpq-fe.h, actually this is \nthe placed where the dll is build :-). But as mentioned earlier, the \nstructures for PGconn and PGresult is not in this file (they are \ntypedef'ed directly from pg_conn and pg_result, which I can't find in \nany of the included files, so I actually wonder how the dll is build \nin the first place???).\n\n> As a sidenote, you may be much better off using ADO with the ODBC\n> driver - it's definitly move VB-friendly.\n\nYes, but what I forgot to tell you, is that I'm trying to create a \nactivex control, which is to be placed on a (intranet) web page, \nusing the Esker plugin, so ODBC is not the best way to go, as that \nwould need a ODBC-connection be set up on each client machine, which \nare to use the activex control, so that's why I need to go directly \nto the dll file (which can be fetched from the page). To set up an \nODBC connection you would need to install the psqlodbc.dll anyway, so \nwhy not take the direct way.\n\nYours faithfully.\nFinn Kettner.\n",
"msg_date": "Fri, 25 Aug 2000 12:58:43 +0100",
"msg_from": "\"Finn Kettner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: libpq.dll and VB"
}
] |
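
The answer to the structure question is that there is genuinely nothing to find: PGconn and PGresult are opaque by design. Paraphrasing the relevant 7.0-era libpq-fe.h declarations (the real struct definitions live in libpq-int.h, private to libpq, which is how the dll builds):

/* Paraphrase of the public libpq declarations: callers only ever hold
 * pointers, never the structs themselves. */
typedef struct pg_conn PGconn;          /* defined in libpq-int.h */
typedef struct pg_result PGresult;      /* defined in libpq-int.h */

extern PGconn *PQconnectdb(const char *conninfo);
extern void PQfinish(PGconn *conn);
extern PGresult *PQexec(PGconn *conn, const char *query);
extern char *PQgetvalue(const PGresult *res, int tup_num, int field_num);

From VB, then, there is no Type to build: the DLL only ever hands back pointers, which can be carried as 32-bit Long values and passed unmodified into the other libpq entry points.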
[
{
"msg_contents": "Hello.\n\nHas anybody got any experience using libpq.dll from Visual Basic\n(currently using Visual Studio 6), and if so, do you have the\ndeclarations handy. I have a little trouble finding out how the PGconn\nand PGresult should look like, and as far as I know these are required\nto use the connect and execute functions in the library.\n\nYours faithfully.\nFinn Kettner.\nPS. If anybody know a program to automatically extract informations\nfrom a dll and create a api.txt file for Visual Basic, then please let\nme know.\n",
"msg_date": "Thu, 24 Aug 2000 16:09:54 +0100",
"msg_from": "\"Finn Kettner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "libpq.dll and VB"
}
] |
[
{
"msg_contents": "",
"msg_date": "Thu, 24 Aug 2000 10:37:41 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Mainframe access"
},
{
"msg_contents": "I've looked through the docs and didn't see reference to connecting to\nPostgreSQL from the \"outside\" -- via a port.\n\nI'm writing a program to access a PostgreSQL database from the mainframe. I\ncan hit an IP address:port -- the question what is the protocol to establish\ncommunications with the DB?\n\nTed\n\n\n\n\n\nMainframe access\n\n\nI've looked through the docs and didn't see reference to connecting to PostgreSQL from the \"outside\" -- via a port.\n\nI'm writing a program to access a PostgreSQL database from the mainframe. I can hit an IP address:port -- the question what is the protocol to establish communications with the DB?\nTed",
"msg_date": "Thu, 24 Aug 2000 10:41:27 -0700",
"msg_from": "\"Rolle, Ted\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Mainframe access"
}
] |
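
The protocol in question is PostgreSQL's own frontend/backend protocol, documented in the Developer's Guide. In the 7.0-era version (protocol 2.0) the client opens a TCP connection to the postmaster's port and sends a fixed-layout startup packet. A sketch from memory of the layout in src/include/libpq/pqcomm.h -- the field widths and length semantics should be verified against the headers before use, and note that the wire format is ASCII, so a mainframe client must translate from EBCDIC:

/* Sketch of the 7.0-era (protocol 2.0) handshake: a 4-byte big-endian
 * length word (which counts itself) followed by a fixed-layout packet.
 * Field sizes are quoted from memory and should be double-checked. */
#include <sys/types.h>
#include <string.h>
#include <arpa/inet.h>          /* htonl() */
#include <unistd.h>

#define SM_DATABASE 64
#define SM_USER     32
#define SM_OPTIONS  64
#define SM_UNUSED   64
#define SM_TTY      64

typedef struct StartupPacket
{
    unsigned int protoVersion;  /* (major << 16) | minor, network order */
    char        database[SM_DATABASE];
    char        user[SM_USER];
    char        options[SM_OPTIONS];
    char        unused[SM_UNUSED];
    char        tty[SM_TTY];
} StartupPacket;

int
send_startup(int sock, const char *db, const char *user)
{
    StartupPacket sp;
    unsigned int len = htonl(4 + sizeof(sp));   /* assumes 32-bit int */

    memset(&sp, 0, sizeof(sp));
    sp.protoVersion = htonl((2 << 16) | 0);     /* protocol 2.0 */
    strncpy(sp.database, db, SM_DATABASE - 1);
    strncpy(sp.user, user, SM_USER - 1);

    if (write(sock, &len, 4) != 4)
        return -1;
    if (write(sock, &sp, sizeof(sp)) != (ssize_t) sizeof(sp))
        return -1;
    return 0;                   /* server replies with an auth request next */
}

After authentication, queries and results flow as simple tagged messages; the easiest reference is the protocol chapter in the docs, or libpq's fe-connect.c, which is the canonical client implementation.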
[
{
"msg_contents": "Has anyone else seen this sort of behaviour? After my CLUSTER command\nmy other index has vanished! I know someone mentioned they had done a\nlot of work on CLUSTER, so perhaps this has been fixed.\n\nI'm running Debian 7.0.2-4, on i386.\n\nRegards,\n\t\t\t\t\tAndrew.\n\nnewsroom=# \\d codetable\n Table \"codetable\"\n Attribute | Type | Modifier \n-------------+---------+----------\n table_id | text | not null\n code | text | not null\n seq | integer | \n description | text | \n misc | text | \nIndices: codetable_pkey,\n xak1_codetable\n\nnewsroom=# CLUSTER xak1_codetable ON codetable;\nCLUSTER\nnewsroom=# \\d codetable\n Table \"codetable\"\n Attribute | Type | Modifier \n-------------+---------+----------\n table_id | text | \n code | text | \n seq | integer | \n description | text | \n misc | text | \nIndex: xak1_codetable\n\n--\n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n",
"msg_date": "Fri, 25 Aug 2000 12:49:10 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": true,
"msg_subject": "CLUSTER removing index from table?"
},
{
"msg_contents": "Known problem with CLUSTER. See CLUSTER manual page.\n\n> Has anyone else seen this sort of behaviour? After my CLUSTER command\n> my other index has vanished! I know someone mentioned they had done a\n> lot of work on CLUSTER, so perhaps this has been fixed.\n> \n> I'm running Debian 7.0.2-4, on i386.\n> \n> Regards,\n> \t\t\t\t\tAndrew.\n> \n> newsroom=# \\d codetable\n> Table \"codetable\"\n> Attribute | Type | Modifier \n> -------------+---------+----------\n> table_id | text | not null\n> code | text | not null\n> seq | integer | \n> description | text | \n> misc | text | \n> Indices: codetable_pkey,\n> xak1_codetable\n> \n> newsroom=# CLUSTER xak1_codetable ON codetable;\n> CLUSTER\n> newsroom=# \\d codetable\n> Table \"codetable\"\n> Attribute | Type | Modifier \n> -------------+---------+----------\n> table_id | text | \n> code | text | \n> seq | integer | \n> description | text | \n> misc | text | \n> Index: xak1_codetable\n> \n> --\n> _____________________________________________________________________\n> Andrew McMillan, e-mail: [email protected]\n> Catalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\n> Me: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Aug 2000 22:15:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER removing index from table?"
}
] |
[
{
"msg_contents": "Weird. When I sent this 24 hours ago, it didn't go through to the\nlist, although I successfully sent a message to the list only an hour\nor so before that.... as far as my mail-server can tell it went off in\nthe right direcion...\n\n\n----- Forwarded message from Jules Bean <[email protected]> -----\n\nDate: Thu, 24 Aug 2000 11:04:25 +0100\nFrom: Jules Bean <[email protected]>\nTo: [email protected]\nSubject: Performance on inserts\nUser-Agent: Mutt/1.2i\n\n[Meta: Please advise if this is the wrong list. I think, since this\nobservation relates to Pg internals, this might be the right one, but\nfeel free to move it back to -general if I'm wrong; I subscribe to\nboth in any case]\n\nAs some of you will have inferred if you've read my last couple of\nposts, I'm working on a database with a structure like this (I'm\nabstracting; I'm not unfortunately allowed to show you what my client\nis really doing here)\n\nA table called 'things' with two columns 'name' and 'category'. The\npair ('name','category') is a primary key.\n\nThere are ~ 10 000 000 rows in the table, and category takes values\nfrom a more-or-less fixed set of ~1000 possibilities. As previously\ndescribed the most popular category holds around half the rows, the\nnext most popular holds nearly half of those left, and most categories\noccur very rarely indeed. The median is probably around 1000 (which is\nless than the 10 000 you'd expect).\n\nAnyhow, this question isn't about speeding up queries --- we already\nhave that in the histogram thread. This question is about speeding up\ninserts.\n\nMy standard configuration is to have a unique index on (name,category)\nand a non-unique index on (category). The main table is ~ 2G on disk,\nthe index on (name,cat) is about the same size, the index on (cat) is\naround 0.6G.\n\nIn this set-up inserts have dropped to the terrifyingly slow rate of\nseveral hours per 10 000 insertions. This is not adequate to my needs,\nI occasionally have to process 1 000 000 insertions or more!\n\nI have several ideas for speeding this up at the SQL level (including\ninserting into a temp table and then using INSERT ... SELECT to remove\nthe overhead of separate inserts) but that's not what I want to talk\nabout either...\n\nWhat I did to day, which made a staggering difference, is dropping the\nnon-unique index on (category). Suddenly I can insert at approx 40 000\ninsertions per minute, which is fine for my needs!\n\nSo why is updating the huge (2G) unique index on (name,cat) not too\nmuch of a problem, but updating the small (600M) non-unique index on\n(cat) sufficient to increase speed by around two orders of magnitude?\n\nA possible reason has occurred to me:\n\nThe real slow-down is noticeable when I'm doing a bulk insert where\nall new rows belong to the most popular category. I know that some\nbtree implementations don't behave particularly sanely with several\nmillion rows in a single key.. is the btree implementation used too\nslow in this case?\n\nI haven't collected all the performance statistics I'd like to have,\ndue to external time pressures, but I can say that under the new\nfaster configuration, the insertion process is CPU bound, with disk\naccess far below the levels the machine is capable of. 
If I have a chance\nI'll collect these stats for the old method too.\n\nAny ideas as to what's going on here appreciated (if not, perhaps it\nwill point you towards an optimisation you ought to make for 7.1)\n\nJules\n\n----- End forwarded message -----\n\n-- \nJules Bean | Any sufficiently advanced \[email protected] | technology is indistinguishable\[email protected] | from a perl script\n",
"msg_date": "Fri, 25 Aug 2000 12:19:04 +0100",
"msg_from": "Jules Bean <[email protected]>",
"msg_from_op": true,
"msg_subject": "[[email protected]: Performance on inserts]"
},
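For concreteness, here is a minimal SQL sketch of the schema Jules describes above. He is deliberately abstracting, so every name here ('things', 'name', 'category', and the index name) is assumed rather than taken from his real application:

```sql
-- Hypothetical reconstruction of the abstracted schema in the message above.
-- The composite primary key provides the unique index on (name, category).
CREATE TABLE things (
    name     text NOT NULL,
    category text NOT NULL,
    PRIMARY KEY (name, category)
);

-- The non-unique index whose removal made bulk inserts dramatically faster.
CREATE INDEX things_category_idx ON things (category);
```

Later sketches in this thread reuse this hypothetical table.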
{
"msg_contents": "Jules Bean <[email protected]> writes:\n> So why is updating the huge (2G) unique index on (name,cat) not too\n> much of a problem, but updating the small (600M) non-unique index on\n> (cat) sufficient to increase speed by around two orders of magnitude?\n\n> The real slow-down is noticeable when I'm doing a bulk insert where\n> all new rows belong to the most popular category. I know that some\n> btree implementations don't behave particularly sanely with several\n> million rows in a single key.. is the btree implementation used too\n> slow in this case?\n\nIndeed, the btree code used in 7.0 and before is pretty awful in the\nface of large numbers of equal keys. I have improved it somewhat,\nbut it's still got a problem once there are enough equal keys to span\nmany index pages. I just did this experiment:\n\ncreate table fill(f1 text);\ncreate index filli on fill(f1);\ninsert into fill values('foo');\ninsert into fill values('bar');\ninsert into fill values('baz');\n\ninsert into fill select * from fill;\t-- repeat this many times\n\nThe runtime of the insert/select scales in a pretty horrid fashion:\n\n# tuples inserted\t\t6.5\t\tcurrent\n\n1536\t\t\t\t<1sec\t\t<1sec\n3072\t\t\t\t1.56\t\t<1sec\n6144\t\t\t\t3.70\t\t1.84\n12288\t\t\t\t9.73\t\t4.07\n24576\t\t\t\t93.26\t\t38.72\n49152\t\t\t\t363.23\t\t231.56\n\nAt the end of this process we have about 100 pages of index entries\nfor each of the three distinct key values, with about 330 items per page.\nThe fixes that I applied a month or two back improve behavior for\nmultiple equal keys within a page, but they do little for multiple\npages full of the same key value. Here's the problem: the code for\nINSERT starts by finding the first occurrence of the target key (using\nbtree descent, so that's fast enough). But then it does this:\n\n /*\n * If we will need to split the page to put the item here,\n * check whether we can put the tuple somewhere to the right,\n * instead. Keep scanning until we find enough free space or\n * reach the last page where the tuple can legally go.\n */\n while (PageGetFreeSpace(page) < itemsz &&\n !P_RIGHTMOST(lpageop) &&\n _bt_compare(rel, keysz, scankey, page, P_HIKEY) == 0)\n {\n /* step right one page */\n BlockNumber rblkno = lpageop->btpo_next;\n\n _bt_relbuf(rel, buf, BT_WRITE);\n buf = _bt_getbuf(rel, rblkno, BT_WRITE);\n page = BufferGetPage(buf);\n lpageop = (BTPageOpaque) PageGetSpecialPointer(page);\n }\n\nIn other words, we do a linear scan over all the pages containing equal\nkey values, in the hope of finding one where we can shoehorn in the new\nentry. If the keys are roughly equal-sized, that hope is usually vain,\nand we end up splitting the rightmost page that contains any keys\nequal to the target key. Subsequent inserts of the same key value fill\nfirst the left-half split page, then the right-half, then split the\nright-half page again and repeat --- but for each insert we first had\nto scan over all the preceding pages containing equal keys. That means\ninserting N equal keys takes O(N^2) time, for large enough N.\n\nAfter thinking about this for a while, I propose a probabilistic\nsolution. 
Suppose that we add to the above loop a random-number check\nthat causes us to fall out of the loop with probability 1/3 or so;\nthat is, if the current page is full of the target key, then we move\nright to check the next page with probability 2/3, but with probability\n1/3 we give up and split the current page to insert the new key.\n\nWith this approach we'd hardly ever move more than, say, 10 pages before\nsplitting, so the time to insert a new key is bounded no matter how many\nduplicates it has.\n\nThe reason for using a random decision is that if we always stopped\nafter scanning a fixed number of pages, then the space freed up by the\nprevious decision to split would always be just out of reach, and we'd\nend up with lots of half-full pages of the same key. With a random\ndecision we will visit and fill both the left and right halves of a\npreviously split page.\n\nComments? Is there a better way? What's the best probability to use?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Aug 2000 11:13:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "* Tom Lane <[email protected]> [000825 08:25] wrote:\n> \n> In other words, we do a linear scan over all the pages containing equal\n> key values, in the hope of finding one where we can shoehorn in the new\n> entry. If the keys are roughly equal-sized, that hope is usually vain,\n> and we end up splitting the rightmost page that contains any keys\n> equal to the target key. Subsequent inserts of the same key value fill\n> first the left-half split page, then the right-half, then split the\n> right-half page again and repeat --- but for each insert we first had\n> to scan over all the preceding pages containing equal keys. That means\n> inserting N equal keys takes O(N^2) time, for large enough N.\n> \n> After thinking about this for a while, I propose a probabilistic\n> solution. Suppose that we add to the above loop a random-number check\n> that causes us to fall out of the loop with probability 1/3 or so;\n> that is, if the current page is full of the target key, then we move\n> right to check the next page with probability 2/3, but with probability\n> 1/3 we give up and split the current page to insert the new key.\n> \n> With this approach we'd hardly ever move more than, say, 10 pages before\n> splitting, so the time to insert a new key is bounded no matter how many\n> duplicates it has.\n> \n> The reason for using a random decision is that if we always stopped\n> after scanning a fixed number of pages, then the space freed up by the\n> previous decision to split would always be just out of reach, and we'd\n> end up with lots of half-full pages of the same key. With a random\n> decision we will visit and fill both the left and right halves of a\n> previously split page.\n> \n> Comments? Is there a better way? What's the best probability to use?\n\n\nI'm unsure if it's possible, but somehow storing the last place one\n'gave up' and decided to split the page could offer a useful next-start\nfor the next insert. Sort of attempting the split work amongst multiple\nrequests. For some reason it looks like your algorithm might cause\nproblems because it plain gives up after 10 pages?\n \nhope this helps,\n-Alfred\n",
"msg_date": "Fri, 25 Aug 2000 08:52:19 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> I'm unsure if it's possible, but somehow storing the last place one\n> 'gave up' and decided to split the page could offer a useful next-start\n> for the next insert.\n\nI think that that would create problems with concurrency --- the\nLehman-Yao btree algorithm is designed around the assumption that\nwriters only move right and never want to go back left to change\na prior page. So once we've moved right we don't get to go back to\nthe start of the chain of duplicates.\n\n> For some reason it looks like your algorithm might cause\n> problems because it plain gives up after 10 pages?\n\n\"Give up\" just means \"stop looking for already-existing free space,\nand make some the hard way\". The steady-state average space utilization\nof this way would be somewhat worse than the existing code, probably,\nbut I don't see that as a big problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Aug 2000 14:25:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts "
},
{
"msg_contents": "* Tom Lane <[email protected]> [000825 11:30] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > I'm unsure if it's possible, but somehow storing the last place one\n> > 'gave up' and decided to split the page could offer a useful next-start\n> > for the next insert.\n> \n> I think that that would create problems with concurrency --- the\n> Lehman-Yao btree algorithm is designed around the assumption that\n> writers only move right and never want to go back left to change\n> a prior page. So once we've moved right we don't get to go back to\n> the start of the chain of duplicates.\n\nYes, but inconsistancies about the starting page should be handled\nbecause i'm sure the code takes care of multiple scans from the\nstart. A slightly incorrect startscan point is better than starting\nfrom the beginning every time. It's a hack, but no more than a random\nguess where in my situation (if it's do-able) will eventually get\npast the first bunch of full pages.\n\n> > For some reason it looks like your algorithm might cause\n> > problems because it plain gives up after 10 pages?\n> \n> \"Give up\" just means \"stop looking for already-existing free space,\n> and make some the hard way\". The steady-state average space utilization\n> of this way would be somewhat worse than the existing code, probably,\n> but I don't see that as a big problem.\n\nWell there's a possibility of the end of the sequence containing\nfree space, allowing previous failures to be accounted for can make\na difference.\n\nBut it's just a suggestion from someone who really ought to be\nstudying the internals a bit more. :)\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Fri, 25 Aug 2000 11:35:49 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> A slightly incorrect startscan point is better than starting\n> from the beginning every time.\n\nNot for lookups, it isn't ;-).\n\nI did think about making the btree-descending code act differently\nfor inserts than lookups, but AFAICS that wouldn't buy anything for\nthis problem, since until you get down to the leaf level you can't\nsee where there's free space anyway. You'd simply be exchanging the\nproblem of missing free space that's too far to the right for the\nproblem of missing free space that's to the left of where you chose\nto start looking.\n\nThe algorithm does seem to work quite nicely just the way I described\nit, although it turns out I was off base about a good probability\nsetting. I find that something up around 0.99 seems to be good.\nUsing the same (perhaps overly simplistic) test case:\n\n# tuples inserted\t\t6.5\t\tcurrent+random hack @ 0.99\n\t\t\tTime\tindex size\tTime\tindex size\n1536\t\t\t<1sec\t90112\t\t<1sec\t106496\n3072\t\t\t1.56\t163840\t\t<1sec\t188416\n6144\t\t\t3.70\t286720\t\t1.40\t376832\n12288\t\t\t9.73\t532480\t\t2.65\t688128\n24576\t\t\t93.26\t1024000\t\t5.22\t1368064\n49152\t\t\t363.23\t2007040\t\t10.34\t2727936\n98304\t\t\t\t\t\t22.07\t5545984\n196608\t\t\t\t\t\t45.60\t11141120\n393216\t\t\t\t\t\t92.53\t22290432\n\nI tried probabilities from 0.67 to 0.999 and found that runtimes didn't\nvary a whole lot (though this is near the minimum), while index size\nconsistently got larger as the probability of moving right decreased.\nThe runtime is nicely linear throughout the range.\n\nThe index size increase might look like a bit of a jump, but actually\nthis is right where we want it to be. The old code was effectively\npacking each page as full as it could be under these conditions. That's\nnot what you want for a btree. Steady-state load factor for btrees is\nusually quoted as somewhere around 70%, and this method manages to\napproximate that pretty well with a move-right probability of 0.99 or\na bit less.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Aug 2000 19:00:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts "
},
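Tom's experiment above is easy to rerun verbatim; here is a sketch of the full psql session, using the \timing meta-command (a psql feature from later releases) in place of external timing:

```sql
-- Duplicate-key insert benchmark, following Tom's recipe above.
CREATE TABLE fill (f1 text);
CREATE INDEX filli ON fill (f1);
INSERT INTO fill VALUES ('foo');
INSERT INTO fill VALUES ('bar');
INSERT INTO fill VALUES ('baz');

\timing
-- Each repetition doubles the table. With the old code the time per
-- doubling grows quadratically; with the randomized split it stays linear.
INSERT INTO fill SELECT * FROM fill;
INSERT INTO fill SELECT * FROM fill;
INSERT INTO fill SELECT * FROM fill;
```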
{
"msg_contents": "> Comments? Is there a better way? What's the best probability to use?\n\nFor this particular example, \"partial indices\" seems to be the best fit.\nThe index can be chosen to omit the most common value(s), since those\nwould indicate a sequential scan anyway.\n\nOther DBs allow a parameter to set the \"fill ratio\" of index pages,\nwhich might also help. But probably not as much as you might like when\none is doing a large number of inserts at a time.\n\nYour \"randomized\" algorithm looks very promising. What is the status of\npartial indices? Are they functional now, or have they been broken\nforever (I'm not recalling)?\n\n - Thomas\n",
"msg_date": "Sat, 26 Aug 2000 01:36:57 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> What is the status of\n> partial indices? Are they functional now, or have they been broken\n> forever (I'm not recalling)?\n\nThey've been diked out of gram.y's syntax for CREATE INDEX at least\nsince Postgres95. No way to tell who did that, why or when, AFAIK.\nThere is still an awful lot of support code for them, however.\n\nI have no good way to guess how much bit-rot has occurred in all that\nunexercised code ... but it'd be interesting to try to get it going\nagain.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Aug 2000 21:44:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Thomas Lockhart <[email protected]> writes:\n> > What is the status of\n> > partial indices? Are they functional now, or have they been broken\n> > forever (I'm not recalling)?\n> \n> They've been diked out of gram.y's syntax for CREATE INDEX at least\n> since Postgres95. No way to tell who did that, why or when, AFAIK.\n> There is still an awful lot of support code for them, however.\n\nI suspect that current indexes don't store nulls (and are thereby \npartial indexes in relation to nulls ;)\n\nAt least the following suggests it:\n---8<------------8<------------8<------------8<---------\nhannu=> explain select * from test1 where i=777;\nNOTICE: QUERY PLAN:\n\nIndex Scan using test1_i_ndx on test1 (cost=2.05 rows=2 width=8)\n\nEXPLAIN\nhannu=> explain select * from test1 where i is null;\nNOTICE: QUERY PLAN:\n\nSeq Scan on test1 (cost=3144.36 rows=27307 width=8)\n---8<------------8<------------8<------------8<---------\n\nAs the logic to include or not include something in index seems to be\nthere \nfor nulls (and thus can't be very badly bit-rotten) it should be\npossible to \nextend it for simpler =x or in(x,y,z) conditions.\n\n> I have no good way to guess how much bit-rot has occurred in all that\n> unexercised code ... but it'd be interesting to try to get it going\n> again.\n\nOf course the IS NULL case may just be hard-wired in the optimiser in\nwhich \ncase there may be not much use of current code. \n\nIIRC Postgres95 was the first postgres with SQL, so if already that did\nnot \nhave partial indexes then no SQL-grammar for postgreSQL had. (Or maybe \nIllustra did but it was a separate effort anyway ;)\n\n---------------\nHannu\n",
"msg_date": "Sat, 26 Aug 2000 10:26:26 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "> I have no good way to guess how much bit-rot has occurred in all that\n> unexercised code ... but it'd be interesting to try to get it going\n> again.\n\nYes, it is a *great* feature, since it directly addresses the problems\nassociates with one of the most common non-random data distributions\n(the index can be constructed to completely ignore those most common\nvalues, and hence be smaller, less often updated, and holding only those\nvalues which might actually be used in an index scan). If we don't get\nto outer joins, this would be a good second choice for 7.1 ;)\n\n - Thomas\n",
"msg_date": "Sat, 26 Aug 2000 07:27:28 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "On Fri, Aug 25, 2000 at 07:00:22PM -0400, Tom Lane wrote:\n> The algorithm does seem to work quite nicely just the way I described\n> it, although it turns out I was off base about a good probability\n> setting. I find that something up around 0.99 seems to be good.\n> Using the same (perhaps overly simplistic) test case:\n> \n> # tuples inserted\t\t6.5\t\tcurrent+random hack @ 0.99\n> \t\t\tTime\tindex size\tTime\tindex size\n> 1536\t\t\t<1sec\t90112\t\t<1sec\t106496\n> 3072\t\t\t1.56\t163840\t\t<1sec\t188416\n> 6144\t\t\t3.70\t286720\t\t1.40\t376832\n> 12288\t\t\t9.73\t532480\t\t2.65\t688128\n> 24576\t\t\t93.26\t1024000\t\t5.22\t1368064\n> 49152\t\t\t363.23\t2007040\t\t10.34\t2727936\n> 98304\t\t\t\t\t\t22.07\t5545984\n> 196608\t\t\t\t\t\t45.60\t11141120\n> 393216\t\t\t\t\t\t92.53\t22290432\n> \n> I tried probabilities from 0.67 to 0.999 and found that runtimes didn't\n> vary a whole lot (though this is near the minimum), while index size\n> consistently got larger as the probability of moving right decreased.\n> The runtime is nicely linear throughout the range.\n\nThat looks brilliant!! (Bearing in mind that I have over 10 million\ntuples in my table, you can imagine what performance was like for me!)\nIs there any chance you could generate a patch against released 7.0.2\nto add just this functionality... It would be the kiss of life for my\ncode!\n\n(Not in a hurry, I'm not back in work until Wednesday, as it happens)\n\nAnd, of course, what would /really/ get my code going speedily would\nbe the partial indices mentioned elsewhere in this thread. If the\nbackend could automagically drop keys containing > 10% (tunable) of\nthe rows from the index, then my index would be (a) about 70% smaller!\nand (b) only used when it's faster. [This means it would have to\nupdate some simple histogram data. However, I can't see that being\nmuch of an overhead]\n\nFor the short term, if I can get a working version of the above\nrandomisation patch, I think I shall 'fake' a partial index by\nmanually setting 'enable_seqscan=off' for all but the 4 or 5 most\ncommon categories. Those two factors combined will speed up my bulk\ninserts a lot.\n\nOne other idea, though:\n\nIs there any simple way for Pg to combine inserts into one bulk?\nSpecifically, their effect on the index files. It has always seemed\nto me to be one of the (many) glaring flaws in SQL that the INSERT\nstatement only takes one row at a time. But, using INSERT ... SELECT,\nI can imagine that it might be possible to do 'bulk' index\nupdating. so that scanning process is done once per 'batch'.\n\nIf I can make an analogy with sorted files (which indices are rather\nlike), if I wanted to add another 100 lines to a 1000 line sorted\nfile, I'd sort the 100 first, and then 'merge' them in. Whilst I\nrealise that indices aren't stored sorted (no need), I think it ought\nto be possible to construct an efficient algorithm for merging two\nbtrees?\n\nJules\n\n-- \nJules Bean | Any sufficiently advanced \[email protected] | technology is indistinguishable\[email protected] | from a perl script\n",
"msg_date": "Sat, 26 Aug 2000 11:48:58 +0100",
"msg_from": "Jules Bean <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "On Sat, 26 Aug 2000, Jules Bean wrote:\n\n> Is there any simple way for Pg to combine inserts into one bulk?\n> Specifically, their effect on the index files. It has always seemed\n> to me to be one of the (many) glaring flaws in SQL that the INSERT\n> statement only takes one row at a time.\n\nOne of MySQL's little syntax abuses allows:\n\nINSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..);\n\nwhich is nice for avoiding database round trips. It's one\nof the reasons that mysql can do a bulk import so quickly.\n\n> But, using INSERT ... SELECT, I can imagine that it might be possible\n> to do 'bulk' index updating. so that scanning process is done once per\n> 'batch'.\n\nLogic for these two cases would be excellent.\n\nMatthew.\n\n",
"msg_date": "Sat, 26 Aug 2000 12:14:06 +0100 (BST)",
"msg_from": "Matthew Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
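For reference, the multi-row VALUES form under discussion looks like this; as Tom notes further down the thread, it is standard SQL92, and PostgreSQL itself only gained it in a much later release:

```sql
-- One statement, one round trip, several rows (the hypothetical
-- 'things' table from the earlier sketch).
INSERT INTO things (name, category)
VALUES ('alpha', 'common'),
       ('beta',  'common'),
       ('gamma', 'rare');
```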
{
"msg_contents": "* Matthew Kirkwood <[email protected]> [000826 04:22] wrote:\n> On Sat, 26 Aug 2000, Jules Bean wrote:\n> \n> > Is there any simple way for Pg to combine inserts into one bulk?\n> > Specifically, their effect on the index files. It has always seemed\n> > to me to be one of the (many) glaring flaws in SQL that the INSERT\n> > statement only takes one row at a time.\n> \n> One of MySQL's little syntax abuses allows:\n> \n> INSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..);\n> \n> which is nice for avoiding database round trips. It's one\n> of the reasons that mysql can do a bulk import so quickly.\n\nThat would be an _extremely_ useful feature if it made a difference\nin postgresql's insert speed.\n\n> \n> > But, using INSERT ... SELECT, I can imagine that it might be possible\n> > to do 'bulk' index updating. so that scanning process is done once per\n> > 'batch'.\n> \n> Logic for these two cases would be excellent.\n\nWe do this sometimes, works pretty nicely.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Sat, 26 Aug 2000 04:32:51 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "On Sat, Aug 26, 2000 at 12:14:06PM +0100, Matthew Kirkwood wrote:\n> On Sat, 26 Aug 2000, Jules Bean wrote:\n> \n> > Is there any simple way for Pg to combine inserts into one bulk?\n> > Specifically, their effect on the index files. It has always seemed\n> > to me to be one of the (many) glaring flaws in SQL that the INSERT\n> > statement only takes one row at a time.\n> \n> One of MySQL's little syntax abuses allows:\n> \n> INSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..);\n> \n> which is nice for avoiding database round trips. It's one\n> of the reasons that mysql can do a bulk import so quickly.\n>\ncopy seems to be very fast ... i dont know why all people use\nmultiple inserts ??? just use copy ...\n\nyours, oliver teuber\n \n",
"msg_date": "Sat, 26 Aug 2000 13:52:01 +0200",
"msg_from": "Oliver Teuber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
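A sketch of the COPY idiom Oliver recommends, again against the hypothetical 'things' table: columns are tab-separated, and a line containing only \. ends the data:

```sql
-- Bulk load via COPY; each data line is one row, fields tab-separated.
COPY things FROM stdin;
alpha	common
beta	common
gamma	rare
\.
```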
{
"msg_contents": "Matthew Kirkwood <[email protected]> writes:\n> One of MySQL's little syntax abuses allows:\n> INSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..);\n\nActually, that's perfectly standard SQL92, just an item we haven't\ngot round to supporting yet. (Until we do the fabled querytree\nrestructuring, it seems a lot harder than it is worth.)\n\nCOPY FROM stdin is definitely the fastest way of inserting data,\nhowever, since you avoid a ton of parse/plan overhead that way.\nOf course you also lose the ability to have column defaults\ncomputed for you, etc ... there's no free lunch ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 Aug 2000 11:45:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts "
},
{
"msg_contents": "Jules Bean <[email protected]> writes:\n> Is there any chance you could generate a patch against released 7.0.2\n> to add just this functionality... It would be the kiss of life for my\n> code!\n\nWill look at it. Are you brave enough to want to try the rest of the\n7.1 rewrite of the btree code, or do you just want this one hack?\n\n> And, of course, what would /really/ get my code going speedily would\n> be the partial indices mentioned elsewhere in this thread. If the\n> backend could automagically drop keys containing > 10% (tunable) of\n> the rows from the index, then my index would be (a) about 70% smaller!\n\nI don't think anyone was envisioning \"automagic\" drop of most common\nvalues. The partial-index support that's partially there ;-) is\ndesigned around manual specification of a predicate, ie, you'd say\n\n\tCREATE INDEX mypartialindex ON table (column)\n\t\tWHERE column != 42 AND column != 1066\n\nif you wanted a partial index omitting values 42 and 1066. The backend\nwould then consider using the index to process queries wherein it can\nprove that the query's WHERE implies the index predicate. For example\n\n\tSELECT * FROM table WHERE column = 11\n\nwould be able to use this index but\n\n\tSELECT * FROM table WHERE column < 100\n\nwould not.\n\nYou could certainly write a little periodic-maintenance script to\ndetermine the most common values in your tables and recreate your\npartial indexes accordingly ... but I doubt it'd make sense to try\nto get the system to do that automatically on-the-fly.\n\n> For the short term, if I can get a working version of the above\n> randomisation patch, I think I shall 'fake' a partial index by\n> manually setting 'enable_seqscan=off' for all but the 4 or 5 most\n> common categories. Those two factors combined will speed up my bulk\n> inserts a lot.\n\nUh, enable_seqscan has nothing to do with how inserts are handled...\n\n> Is there any simple way for Pg to combine inserts into one bulk?\n\nCOPY.\n\n> Specifically, their effect on the index files.\n\nThis particular problem couldn't be cured by batching inserts anyway.\nThe performance problem was coming from the actual act of inserting\na key (or more specifically, making room for the key) and that's just\ngot to be done for each key AFAICS.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 Aug 2000 12:32:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts "
},
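Applying Tom's predicate example to Jules's situation might look like the following. This is a sketch of the syntax Tom describes, not something that worked in 7.0 (the feature was disabled there), and the category names are placeholders:

```sql
-- Partial index that omits the few very common categories.
CREATE INDEX things_rare_cat_idx ON things (category)
    WHERE category != 'most_common' AND category != 'next_common';

-- Can use the index: the WHERE clause implies the index predicate.
SELECT * FROM things WHERE category = 'some_rare_category';

-- Cannot use the index: nothing implies the predicate.
SELECT * FROM things WHERE category = 'most_common';
```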
{
"msg_contents": "On Sat, Aug 26, 2000 at 12:32:20PM -0400, Tom Lane wrote:\n> Jules Bean <[email protected]> writes:\n> > Is there any chance you could generate a patch against released 7.0.2\n> > to add just this functionality... It would be the kiss of life for my\n> > code!\n> \n> Will look at it. Are you brave enough to want to try the rest of the\n> 7.1 rewrite of the btree code, or do you just want this one hack?\n\nHmm. I don't know :-) because I don't really know the scope of the\nchanges... I do need this database to be stable, but OTOH, changes to\nthe index code are unlikely to corrupt my data.\n\n> \n> I don't think anyone was envisioning \"automagic\" drop of most common\n> values. The partial-index support that's partially there ;-) is\n> designed around manual specification of a predicate, ie, you'd say\n> \n> \tCREATE INDEX mypartialindex ON table (column)\n> \t\tWHERE column != 42 AND column != 1066\n\nFair enough.\n\n> \n> > For the short term, if I can get a working version of the above\n> > randomisation patch, I think I shall 'fake' a partial index by\n> > manually setting 'enable_seqscan=off' for all but the 4 or 5 most\n> > common categories. Those two factors combined will speed up my bulk\n> > inserts a lot.\n> \n> Uh, enable_seqscan has nothing to do with how inserts are handled...\n\nUm, that was a thinko! What I was trying to say is: the randomisation\nwill speed up my bulk inserts, whereas enable_seqscan=off will speed\nup the other slow queries in my application, namely \"SELECT * where\ncategory='foo'\" type queries.\n\n> > Specifically, their effect on the index files.\n> \n> This particular problem couldn't be cured by batching inserts anyway.\n> The performance problem was coming from the actual act of inserting\n> a key (or more specifically, making room for the key) and that's just\n> got to be done for each key AFAICS.\n\nIn principle, it could be helpful to know you're inserting 10000\ntuples with category='xyz'. Then you only make the left-to-right\nscan once, and you fill up every hole you see, before finally\nsplitting the last page and inserting approx (10000)/(num_per_page)\nnew pages all at once. This is surely quicker that doing the\nleft-to-right scan for each tuple (even though the difference would be\nfar less noticeable in the presence of your probablistic patch).\n\nJules\n\n-- \nJules Bean | Any sufficiently advanced \[email protected] | technology is indistinguishable\[email protected] | from a perl script\n",
"msg_date": "Sat, 26 Aug 2000 17:57:38 +0100",
"msg_from": "Jules Bean <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance on inserts"
},
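The planner toggle Jules mentions is a session-level setting; a sketch follows (the exact SET spelling has varied slightly across releases):

```sql
-- Discourage sequential scans for the rare-category lookups...
SET enable_seqscan = off;
SELECT * FROM things WHERE category = 'some_rare_category';
-- ...and restore the default afterwards.
SET enable_seqscan = on;
```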
{
"msg_contents": "Tom Lane <[email protected]> writes:\n> Jules Bean <[email protected]> writes:\n>> Is there any chance you could generate a patch against released 7.0.2\n>> to add just this functionality... It would be the kiss of life for my\n>> code!\n\n> Will look at it.\n\nI looked at it and decided that I don't want to mess with it. The\nBTP_CHAIN logic in 7.0 is so weird and fragile that it's hard to tell\nwhat will happen if we try to split anything but the last page of a\nchain of duplicates. The idea I'd had of just dropping in the whole\nbtree module from current sources doesn't look very practical either;\na lot of changes that span module boundaries would have to be backed out\nof it.\n\nMy recommendation is to try out current sources (from a nightly snapshot\nor CVS). I realize that running a non-released version might make you\nuncomfortable, and rightly so; but I believe that current sources are in\npretty good shape right now. In any case, working out any bugs lurking\nin current strikes me as a much more profitable activity than trying to\nback-patch the old btree code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 Aug 2000 17:09:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts "
},
{
"msg_contents": "Matthew Kirkwood wrote:\n> \n> On Sat, 26 Aug 2000, Jules Bean wrote:\n> \n> > Is there any simple way for Pg to combine inserts into one bulk?\n> > Specifically, their effect on the index files. It has always seemed\n> > to me to be one of the (many) glaring flaws in SQL that the INSERT\n> > statement only takes one row at a time.\n> \n> One of MySQL's little syntax abuses allows:\n> \n> INSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..);\n\nWouldn't....\n\nINSERT INTO tab (col1, ..); INSERT INTO tab VALUES (val1, ..); INSERT\nINTO tab (val2, ..);\n\nalso avoid a database round trip??\n",
"msg_date": "Mon, 28 Aug 2000 10:11:49 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
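Whether separate INSERTs save round trips depends on how the client sends them; what reliably helps, regardless of syntax, is batching them into a single transaction so there is only one commit. A sketch:

```sql
-- Several single-row INSERTs under one COMMIT: each statement is still
-- parsed and planned separately, but per-row commit overhead disappears.
BEGIN;
INSERT INTO things (name, category) VALUES ('alpha', 'common');
INSERT INTO things (name, category) VALUES ('beta', 'common');
INSERT INTO things (name, category) VALUES ('gamma', 'rare');
COMMIT;
```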
{
"msg_contents": "Oliver Teuber wrote:\n> \n> On Sat, Aug 26, 2000 at 12:14:06PM +0100, Matthew Kirkwood wrote:\n> > On Sat, 26 Aug 2000, Jules Bean wrote:\n> >\n> > > Is there any simple way for Pg to combine inserts into one bulk?\n> > > Specifically, their effect on the index files. It has always seemed\n> > > to me to be one of the (many) glaring flaws in SQL that the INSERT\n> > > statement only takes one row at a time.\n> >\n> > One of MySQL's little syntax abuses allows:\n> >\n> > INSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..);\n> >\n> > which is nice for avoiding database round trips. It's one\n> > of the reasons that mysql can do a bulk import so quickly.\n> >\n> copy seems to be very fast ... i dont know why all people use\n> multiple inserts ??? just use copy ...\n>\n\nCould copy be extended to support a more SQL-friendly syntax. like\n\nCOPY tablename FROM VALUES(\n (x1,y1,z1),\n (x2,y2,z2),\n (x3,y3,z3)\n);\n\nThe main reason I have not used COPY very much is it's non-SQL syntax \nfor field values.\n\nExtending the COPY command would probably be much easier than speeding up\nINSERTS.\n\n-------------\nHannu\n",
"msg_date": "Mon, 28 Aug 2000 13:05:19 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "SQL COPY syntax extension (was: Performance on inserts)"
},
{
"msg_contents": "> Could copy be extended to support a more SQL-friendly syntax. like\n> COPY tablename FROM VALUES(\n> (x1,y1,z1),\n> (x2,y2,z2),\n> (x3,y3,z3)\n> );\n> Extending the COPY command would probably be much easier than speeding up\n> INSERTS.\n\nThat syntax is a lot like a real SQL9x INSERT. Supporting multiple rows\nfor inserts is probably not that difficult; but since INSERT is used so\nmuch we would have to make sure we don't screw up something else. At the\nmoment, expanding a line of SQL into multiple internal querytree nodes\nis a bit crufty but is possible.\n\n - Thomas\n",
"msg_date": "Mon, 28 Aug 2000 15:39:53 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL COPY syntax extension (was: Performance on inserts)"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> That syntax is a lot like a real SQL9x INSERT.\n\nMultiple row constructors in INSERT is one of my to-do items for the\nplanned querytree redesign. I have not thought it was worth messing\nwith until we're ready to bite that bullet, however.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Aug 2000 13:48:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL COPY syntax extension (was: Performance on inserts) "
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > Could copy be extended to support a more SQL-friendly syntax. like\n> > COPY tablename FROM VALUES(\n> > (x1,y1,z1),\n> > (x2,y2,z2),\n> > (x3,y3,z3)\n> > );\n> > Extending the COPY command would probably be much easier than speeding up\n> > INSERTS.\n> \n> That syntax is a lot like a real SQL9x INSERT. Supporting multiple rows\n> for inserts is probably not that difficult; but since INSERT is used so\n> much we would have to make sure we don't screw up something else. At the\n> moment, expanding a line of SQL into multiple internal querytree nodes\n> is a bit crufty but is possible.\n\nWhat I actually had in mind was a more SQL-like syntax for copy, i.e. no \ndefault arguments, all fields required etc. that would we easy to bolt\non \ncurrent copy machinery but still use 'SQL' syntax (no . or \\. or \\\\. for\nEOD, \nNULL for NULL values, quotes around strings ...)\n\n------------\nHannu\n",
"msg_date": "Mon, 28 Aug 2000 21:20:27 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL COPY syntax extension (was: Performance on inserts)"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Thomas Lockhart <[email protected]> writes:\n> > That syntax is a lot like a real SQL9x INSERT.\n> \n> Multiple row constructors in INSERT is one of my to-do items for the\n> planned querytree redesign. I have not thought it was worth messing\n> with until we're ready to bite that bullet, however.\n\nWhat is the status of this querytree redesign ?\n\nI'v spent some time trying to get a grip ofthe _exact_ meaning of the\nWITH RECURSIVE syntax in SQL3/99 as I badly need it in a project of\nmine.\n\n(I know the _general_ meaning - it is for querying tree-structured data\n;)\n\nThe things the new querytree should address sould be (at least ;) - \n\n1. OUTER JOINS\n2. WITH RECURSIVE\n3. Support for positioned UPDATE & DELETE (requires probably lot more\nthan\n just querytree redesign)\n4. Easyer special-casing of optimisations (like using an index on x for \n 'select max(x) from t', but not for 'select max(x) from t where n=7'\n\nIs the special mailing-list for querytree redesing active ?\n\n--------------\nHannu\n",
"msg_date": "Mon, 28 Aug 2000 21:29:17 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL COPY syntax extension (was: Performance on inserts)"
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> What is the status of this querytree redesign ?\n\nWaiting for 7.2 cycle, as far as I know.\n\n> The things the new querytree should address sould be (at least ;) - \n> 2. WITH RECURSIVE\n\nI don't think RECURSIVE is a querytree issue --- it looks like a much\nbigger problem than that :-(\n\nThe things I'm concerned about fixing with querytree redesign are\n\t* full SQL92 joins\n\t* subselects in FROM\n\t* view bugs (grouping and aggregates in views)\n\t* INSERT ... SELECT bugs\n\t* reimplement UNION/INTERSECT/EXCEPT in a less hacky way,\n\t make cases like SELECT ... UNION ... ORDER BY work.\n\t Not to mention UNION etc in a subselect or in INSERT/SELECT.\n\t* convert WHERE x IN (subselect) to a join-like representation\n\nThese are all things that have gone unfixed for years because they're\nessentially unfixable with the current single-level representation of\na query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Aug 2000 17:05:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL COPY syntax extension (was: Performance on inserts) "
},
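To make Tom's list concrete, here are sketches of two constructs that the single-level querytree could not represent at the time (both are routine SQL today):

```sql
-- Subselect in FROM:
SELECT t.category, t.n
FROM (SELECT category, count(*) AS n
      FROM things
      GROUP BY category) AS t;

-- UNION combined with ORDER BY:
SELECT name FROM things WHERE category = 'a'
UNION
SELECT name FROM things WHERE category = 'b'
ORDER BY name;
```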
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> What I actually had in mind was a more SQL-like syntax for copy,\n> i.e. no default arguments, all fields required etc. that would we easy\n> to bolt on current copy machinery but still use 'SQL' syntax (no . or\n> \\. or \\\\. for EOD, NULL for NULL values, quotes around strings ...)\n\nSeems like a reasonable idea, although I'd recommend sticking to the\nconvention that \\. on a line means EOD, to avoid having to hack the\nclient-side libraries. As long as you leave that alone, libpq,\nlibpgtcl, etc etc should be transparent to the copy data format.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Aug 2000 17:14:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL COPY syntax extension (was: Performance on inserts) "
},
{
"msg_contents": "> The things I'm concerned about fixing with querytree redesign are\n...\n\nAnd it will be a building block for\n\n *distributed databases\n *replication\n *CORBA/alternate protocols\n *other cool stuff ;)\n\n - Thomas\n",
"msg_date": "Tue, 29 Aug 2000 04:10:51 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL COPY syntax extension (was: Performance on inserts)"
},
{
"msg_contents": "On Sat, Aug 26, 2000 at 05:09:53PM -0400, Tom Lane wrote:\n> Tom Lane <[email protected]> writes:\n> > Jules Bean <[email protected]> writes:\n> >> Is there any chance you could generate a patch against released 7.0.2\n> >> to add just this functionality... It would be the kiss of life for my\n> >> code!\n> \n> > Will look at it.\n> \n> I looked at it and decided that I don't want to mess with it. The\n> BTP_CHAIN logic in 7.0 is so weird and fragile that it's hard to tell\n> what will happen if we try to split anything but the last page of a\n> chain of duplicates. The idea I'd had of just dropping in the whole\n> btree module from current sources doesn't look very practical either;\n> a lot of changes that span module boundaries would have to be backed out\n> of it.\n> \n> My recommendation is to try out current sources (from a nightly snapshot\n> or CVS). I realize that running a non-released version might make you\n> uncomfortable, and rightly so; but I believe that current sources are in\n> pretty good shape right now. In any case, working out any bugs lurking\n> in current strikes me as a much more profitable activity than trying to\n> back-patch the old btree code.\n\nOK. Thanks very much for going through this with me.\n\nI'm actually going to simply do without the index for this release of\nmy software -- it's an internal project, and the speed problems of\nhaving no index aren't disastrous. However, I'd rather not fight any\ninstabilities, this project runs long-term and mostly unattended.\n\nI'll certainly keenly upgrade to 7.1 when it comes out, though, with\nthe new btree logic.\n\nJules\n",
"msg_date": "Thu, 31 Aug 2000 12:40:42 +0100",
"msg_from": "Jules Bean <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "> Thomas Lockhart <[email protected]> writes:\n> > What is the status of\n> > partial indices? Are they functional now, or have they been broken\n> > forever (I'm not recalling)?\n> \n> They've been diked out of gram.y's syntax for CREATE INDEX at least\n> since Postgres95. No way to tell who did that, why or when, AFAIK.\n> There is still an awful lot of support code for them, however.\n\nIt may have been me. Not sure. At the time, no one was sure what they\ndid, or if we even wanted them. They were broken, I think.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 17:37:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "> > 98304\t\t\t\t\t\t22.07\t5545984\n> > 196608\t\t\t\t\t\t45.60\t11141120\n> > 393216\t\t\t\t\t\t92.53\t22290432\n> > \n> > I tried probabilities from 0.67 to 0.999 and found that runtimes didn't\n> > vary a whole lot (though this is near the minimum), while index size\n> > consistently got larger as the probability of moving right decreased.\n> > The runtime is nicely linear throughout the range.\n> \n> That looks brilliant!! (Bearing in mind that I have over 10 million\n> tuples in my table, you can imagine what performance was like for me!)\n> Is there any chance you could generate a patch against released 7.0.2\n> to add just this functionality... It would be the kiss of life for my\n> code!\n> \n> (Not in a hurry, I'm not back in work until Wednesday, as it happens)\n> \n> And, of course, what would /really/ get my code going speedily would\n> be the partial indices mentioned elsewhere in this thread. If the\n> backend could automagically drop keys containing > 10% (tunable) of\n> the rows from the index, then my index would be (a) about 70% smaller!\n> and (b) only used when it's faster. [This means it would have to\n> update some simple histogram data. However, I can't see that being\n> much of an overhead]\n> \n> For the short term, if I can get a working version of the above\n> randomisation patch, I think I shall 'fake' a partial index by\n> manually setting 'enable_seqscan=off' for all but the 4 or 5 most\n> common categories. Those two factors combined will speed up my bulk\n> inserts a lot.\n\nWhat would be really nifty is to take the most common value found by\nVACUUM ANALYZE, and cause sequential scans if that value represents more\nthan 50% of the entries in the table.\n\nAdded to TODO:\n\n* Prevent index lookups (or index entries using partial index) on most\n common values; instead use sequential scan \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 17:44:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "> Matthew Kirkwood <[email protected]> writes:\n> > One of MySQL's little syntax abuses allows:\n> > INSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..);\n> \n> Actually, that's perfectly standard SQL92, just an item we haven't\n> got round to supporting yet. (Until we do the fabled querytree\n> restructuring, it seems a lot harder than it is worth.)\n\nAdded to TODO:\n\n\tINSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..);\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 17:50:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "> * Prevent index lookups (or index entries using partial index) on most\n> common values; instead use sequential scan \n\nThis behavior already exists for the most common value, and would\nexist for any additional values that we had stats for. Don't see\nwhy you think a separate TODO item is needed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Oct 2000 17:51:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts "
},
{
"msg_contents": "> > * Prevent index lookups (or index entries using partial index) on most\n> > common values; instead use sequential scan \n> \n> This behavior already exists for the most common value, and would\n> exist for any additional values that we had stats for. Don't see\n> why you think a separate TODO item is needed.\n\nYou mean the optimizer already skips an index lookup for the most common\nvalue, and instead does a sequential scan? Seems you are way ahead of\nme.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 17:51:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Thomas Lockhart <[email protected]> writes:\n>>>> What is the status of partial indices?\n>>\n>> They've been diked out of gram.y's syntax for CREATE INDEX at least\n>> since Postgres95. No way to tell who did that, why or when, AFAIK.\n>> There is still an awful lot of support code for them, however.\n\n> It may have been me. Not sure. At the time, no one was sure what they\n> did, or if we even wanted them. They were broken, I think.\n\nWasn't you unless you were hacking the code before it entered our CVS\nsystem. I checked the oldest CVS version (Postgres95 release) and saw\nthat partial indexes were disabled in gram.y in that version.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Oct 2000 17:52:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> * Prevent index lookups (or index entries using partial index) on most\n>>>> common values; instead use sequential scan \n>> \n>> This behavior already exists for the most common value, and would\n>> exist for any additional values that we had stats for. Don't see\n>> why you think a separate TODO item is needed.\n\n> You mean the optimizer already skips an index lookup for the most common\n> value, and instead does a sequential scan?\n\nNo, it goes for the sequential scan if it estimates the cost of the\nindexscan as more than sequential. Indexscan cost depends on estimated\nnumber of retrieved rows --- which it can estimate from pg_statistic\nif the query is WHERE column = mostcommonvalue. So which plan you get\ndepends on just how common the most common value is.\n\nHard-wiring either choice of plan for the most common value would be\ninferior to what the code already does, AFAICS. But for values other\nthan the-most-common, we don't have adequate stats in pg_statistic,\nand so you may or may not get a good estimated row count and hence\na good choice of plan. That's what needs to be fixed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Oct 2000 17:59:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts "
},
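The estimate-driven plan choice Tom describes is visible with EXPLAIN. A sketch, with the plan lines shown as comments since the exact output is hypothetical:

```sql
EXPLAIN SELECT * FROM things WHERE category = 'most_common';
-- expected: Seq Scan on things  (estimated rows: a large fraction of the table)

EXPLAIN SELECT * FROM things WHERE category = 'some_rare_category';
-- expected: Index Scan using things_category_idx  (estimated rows: small)
```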
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >>>> * Prevent index lookups (or index entries using partial index) on most\n> >>>> common values; instead use sequential scan \n> >> \n> >> This behavior already exists for the most common value, and would\n> >> exist for any additional values that we had stats for. Don't see\n> >> why you think a separate TODO item is needed.\n> \n> > You mean the optimizer already skips an index lookup for the most common\n> > value, and instead does a sequential scan?\n> \n> No, it goes for the sequential scan if it estimates the cost of the\n> indexscan as more than sequential. Indexscan cost depends on estimated\n> number of retrieved rows --- which it can estimate from pg_statistic\n> if the query is WHERE column = mostcommonvalue. So which plan you get\n> depends on just how common the most common value is.\n> \n> Hard-wiring either choice of plan for the most common value would be\n> inferior to what the code already does, AFAICS. But for values other\n> than the-most-common, we don't have adequate stats in pg_statistic,\n> and so you may or may not get a good estimated row count and hence\n> a good choice of plan. That's what needs to be fixed.\n\nOK, I remember now. If the most common value is used as a constant, it\nuses the value from pg_statistic for most common, rather than use\nthe dispersion value. That is great.\n\nWhat I am more concerned about is a join that uses the most common\nvalue. We do an index scan in that case. I wonder of we could get\nsomething into the executor that would switch to sequential scan when\nthe most common value is hit. Is that worth persuing?\n\n\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 18:22:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> What I am more concerned about is a join that uses the most common\n> value. We do an index scan in that case.\n\nNo, we do whichever plan looks cheapest. Again, it's all about\nstatistics.\n\nRight now, eqjoinsel() is just a stub that returns a constant\nselectivity estimate. It might be useful to compute some more\nsophisticated value based on pg_statistic entries for the two\ncolumns, but right now I doubt you could tell much. Should keep\nthe join case in mind when we extend the statistics...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Oct 2000 18:30:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > What I am more concerned about is a join that uses the most common\n> > value. We do an index scan in that case.\n> \n> No, we do whichever plan looks cheapest. Again, it's all about\n> statistics.\n> \n> Right now, eqjoinsel() is just a stub that returns a constant\n> selectivity estimate. It might be useful to compute some more\n> sophisticated value based on pg_statistic entries for the two\n> columns, but right now I doubt you could tell much. Should keep\n> the join case in mind when we extend the statistics...\n\nOK, let me be more specific. Suppose the most common value in a column\nis 3. For a query \"col = 3\", we know 3 is most common, and use the most\ncommon statistics rather than the dispersion statistic, right?\n\nOK, let's assume use of the most common statistic causes a sequential\nscan, but use of dispersion causes an index scan.\n\nThe query \"col = 3\" uses sequential scan. In the query \"col = tab2.col2\",\nthe dispersion statistic is used, causing an index scan. \n\nHowever, assume tab2.col2 equals 3. I assume this would cause an index\nscan because the executor doesn't know about the most common value,\nright? Is it worth trying to improve that?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 19:20:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> However, assume tab2.col2 equals 3. I assume this would cause an index\n> scan because the executor doesn't know about the most common value,\n> right? Is it worth trying to improve that?\n\nOh, I see: you are assuming that a nestloop join is being done, and\nwondering if it's worthwhile to switch dynamically between seqscan\nand indexscan for each scan of the inner relation, depending on exactly\nwhat value is being supplied from the outer relation for that scan.\nHmm.\n\nNot sure if it's worth the trouble or not. Nestloop is usually a\nlast-resort join strategy anyway, and is unlikely to be picked when the\ntables are large enough to make performance be a big issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Oct 2000 19:33:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > However, assume tab2.col2 equals 3. I assume this would cause an index\n> > scan because the executor doesn't know about the most common value,\n> > right? Is it worth trying to improve that?\n> \n> Oh, I see: you are assuming that a nestloop join is being done, and\n> wondering if it's worthwhile to switch dynamically between seqscan\n> and indexscan for each scan of the inner relation, depending on exactly\n> what value is being supplied from the outer relation for that scan.\n> Hmm.\n> \n> Not sure if it's worth the trouble or not. Nestloop is usually a\n> last-resort join strategy anyway, and is unlikely to be picked when the\n> tables are large enough to make performance be a big issue.\n\nYes, I realize only nested loop has this problem. Mergejoin and\nHashjoin actually would grab the whole table via sequential scan, so the\nindex is not involved, right?\n\nLet me ask, if I do the query, \"tab1.col = tab2.col and tab2.col = 3\",\nthe system would use an index to get tab2.col, but then what method\nwould it use to join to tab1? Nested loop because it thinks it is going\nto get only one row from tab1.col1. If 3 is the most common value in\ntab1.col, it is going to get lots more than one row, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 23:18:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yes, I realize only nested loop has this problem. Mergejoin and\n> Hashjoin actually would grab the whole table via sequential scan, so the\n> index is not involved, right?\n\nThey'd grab the whole table after applying restriction clauses. An\nindexscan might be used if there's an appropriate restriction clause\nfor either table, or to sort a table for merge join...\n\n> Let me ask, if I do the query, \"tab1.col = tab2.col and tab2.col = 3\",\n> the system would use an index to get tab2.col, but then what method\n> would it use to join to tab1? Nested loop because it thinks it is going\n> to get only one row from tab1.col1.\n\nI don't think it'd think that. The optimizer is not presently smart\nenough to make the transitive deduction that tab1.col = 3 (it does\nrecognize transitive equality of Vars, but doesn't extend that to\nnon-variable values). So it won't see any restriction clause for\ntab1 here.\n\nIf it thinks that tab2.col = 3 will yield one row, it might well choose\na nested loop with tab2 as the outer, rather than merge or hash join.\nSo an inner indexscan for tab1 is definitely a possible plan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 00:12:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts "
},
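For reference, the exact case being discussed, using the thread's hypothetical tab1/tab2 names:

```sql
-- Only tab2 carries a visible restriction; without transitive deduction
-- of tab1.col = 3, the planner may pick a nested loop driven by tab2
-- with an inner indexscan on tab1.
EXPLAIN SELECT *
FROM tab1, tab2
WHERE tab1.col = tab2.col
  AND tab2.col = 3;
```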
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Yes, I realize only nested loop has this problem. Mergejoin and\n> > Hashjoin actually would grab the whole table via sequential scan, so the\n> > index is not involved, right?\n> \n> They'd grab the whole table after applying restriction clauses. An\n> indexscan might be used if there's an appropriate restriction clause\n> for either table, or to sort a table for merge join...\n> \n> > Let me ask, if I do the query, \"tab1.col = tab2.col and tab2.col = 3\",\n> > the system would use an index to get tab2.col, but then what method\n> > would it use to join to tab1? Nested loop because it thinks it is going\n> > to get only one row from tab1.col1.\n> \n> I don't think it'd think that. The optimizer is not presently smart\n> enough to make the transitive deduction that tab1.col = 3 (it does\n> recognize transitive equality of Vars, but doesn't extend that to\n> non-variable values). So it won't see any restriction clause for\n> tab1 here.\n> \n> If it thinks that tab2.col = 3 will yield one row, it might well choose\n> a nested loop with tab2 as the outer, rather than merge or hash join.\n> So an inner indexscan for tab1 is definitely a possible plan.\n\nYes, that was my point, that a nested loop could easily be involved if\nthe joined table has a restriction. Is there a TODO item here?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 00:41:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> So an inner indexscan for tab1 is definitely a possible plan.\n\n> Yes, that was my point, that a nested loop could easily be involved if\n> the joined table has a restriction. Is there a TODO item here?\n\nMore like a \"to investigate\" --- I'm not sold on the idea that a\ndynamic switch in plan types would be a win. Maybe it would be,\nbut...\n\nOne thing to think about is that it'd be critically dependent on having\naccurate statistics. Currently, the planner only places bets on the\naverage behavior over a whole join. If you make a separate bet on each\nscan, then you open up the risk of betting wrong every time, should\nyour stats be out-of-date or otherwise misleading.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 00:48:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> So an inner indexscan for tab1 is definitely a possible plan.\n> \n> > Yes, that was my point, that a nested loop could easily be involved if\n> > the joined table has a restriction. Is there a TODO item here?\n> \n> More like a \"to investigate\" --- I'm not sold on the idea that a\n> dynamic switch in plan types would be a win. Maybe it would be,\n> but...\n> \n> One thing to think about is that it'd be critically dependent on having\n> accurate statistics. Currently, the planner only places bets on the\n> average behavior over a whole join. If you make a separate bet on each\n> scan, then you open up the risk of betting wrong every time, should\n> your stats be out-of-date or otherwise misleading.\n\nI agree. Not sure how to approach this, but I am sure it is dealt with\nby most database systems. Can someone find out how other db's handle\nthis? Is there any research on it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 00:59:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on inserts"
}
] |
[
{
"msg_contents": "> > If you also need the function definitions, check libpq-fe.h for C\n> > style syntax. (It's in src/interfaces/libpq)\n> \n> I need function definitions and structure (called Type in VB) \n> definitions, and yes I have looked in libpq-fe.h, actually this is \n> the placed where the dll is build :-). But as mentioned earlier, the \n> structures for PGconn and PGresult is not in this file (they are \n> typedef'ed directly from pg_conn and pg_result, which I can't find in \n> any of the included files, so I actually wonder how the dll is build \n> in the first place???).\nAhh. Ok. Those are defined in 'libpq-int.h', in the same directory. But be\ncareful with possibly accessing any of the members directly - use the\naccessor functions instead. \nThey have been moved into the -int file in order to make it harder to find\nthem directly for just that reason :-)\n\n\n> > As a sidenote, you may be much better off using ADO with the ODBC\n> > driver - it's definitly move VB-friendly.\n> \n> Yes, but what I forgot to tell you, is that I'm trying to create a \n> activex control, which is to be placed on a (intranet) web page, \n> using the Esker plugin, so ODBC is not the best way to go, as that \n> would need a ODBC-connection be set up on each client machine, which \n> are to use the activex control, so that's why I need to go directly \n> to the dll file (which can be fetched from the page). To set up an \n> ODBC connection you would need to install the psqlodbc.dll anyway, so \n> why not take the direct way.\nOk, that puts it in different light. You could always construct the\nconnection string on-the-fly so you don't need to create an actual\ndatasource (specifying the driver manually), but you would still ned to get\nthe DLL file onto the client, possibly along with some registry entries.\n(For libpq, you definitly need only the DLL, and nothing in the registry).\n\n\n//Magnus\n",
"msg_date": "Fri, 25 Aug 2000 13:31:51 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: libpq.dll and VB"
}
] |
[
{
"msg_contents": "I'm in the process of extending PostgreSQL and need a bit of advice\nthat I don't seem to get from the manuals.\n\n- When dynamically linking functions must there be at most one\n function per shared object module or can there be multiple external\n entry points within a single shared object?\n\n- If the latter, will multiple copies of the file be loaded (e.g., one\n per function invoked) or will the same copy be used to resolve all\n the multiple external entry points?\n\n- I am writing some functions to handle some new types. These\n functions logically should share code. How should the shared object\n modules be structured in order to allow code sharing among\n functions? (This question is obviously related to the previous\n two.)\n\n- Is it possible to write functions to automatically convert one\n extended type into another? If so, how should this be done?\n\n- Some of my types will be complex and so it makes sense to have\n functions extract components of the types (an analogy is what\n datepart() does). Should such functions return character strings or\n some other type? If they return an appropriate built-in (or\n extended?) type will the needed conversion be provided\n automatically depending on context?\n\nBy the way, the new web pages seem to be missing some links. For\nexample, the \"users-lounge/documentation\" link just goes to a\ndirectory; shouldn't there be an actual page with links to the various\ncomponents of the documentation?\n\nAlso, in the docs for Chapter 5. Extending SQL: Types there is a code\nexample with the following:\n\n\telog(WARN, ...);\n\nelog.h does not define WARN. Should this be changed to NOTICE in the\ndocs?\n\nThanks for your help.\n\nCheers,\nBrook\n",
"msg_date": "Fri, 25 Aug 2000 08:08:50 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "advice on extensions needed"
},
{
"msg_contents": "> - When dynamically linking functions must there be at most one\n> function per shared object module or can there be multiple external\n> entry points within a single shared object?\n\nmultiple entry points are fine.\n\n> - If the latter, will multiple copies of the file be loaded (e.g., one\n> per function invoked) or will the same copy be used to resolve all\n> the multiple external entry points?\n\nafaik the same file is used.\n\n> - I am writing some functions to handle some new types. These\n> functions logically should share code. How should the shared object\n> modules be structured in order to allow code sharing among\n> functions? (This question is obviously related to the previous\n> two.)\n\nYou want *multiple* loadable modules to share code between them? afaik\nyou will have to make direct calls to the dynamic linker to get this to\nhappen. Usually, I resolve all symbols within the loadable module since\nit is self-contained. However, it may be that the dynamic linker is\nsmart enough to find symbols from previously-loaded modules; try it out\nand then check src/backend/port/dynloader/... for details.\n\n> - Is it possible to write functions to automatically convert one\n> extended type into another? If so, how should this be done?\n\nIs \"extended type\" the same as a \"user defined type\"? Or something else?\nIf it is a UDT then sure, write away. And if you provide a function with\nthe target type as the name and taking one argument having the source\ntype, Postgres will know how to convert it automatically when required.\n\n> - Some of my types will be complex and so it makes sense to have\n> functions extract components of the types (an analogy is what\n> datepart() does). Should such functions return character strings or\n> some other type? If they return an appropriate built-in (or\n> extended?) type will the needed conversion be provided\n> automatically depending on context?\n\nSure, as long as you have the right conversion functions defined and as\nlong as the conversion can be chosen without ambiguity.\n\n> elog.h does not define WARN. Should this be changed to NOTICE in the\n> docs?\n\nThat seems to already be fixed.\n\n - Thomas\n",
"msg_date": "Fri, 25 Aug 2000 15:12:39 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on extensions needed"
},
{
"msg_contents": "Brook Milligan <[email protected]> writes:\n> - When dynamically linking functions must there be at most one\n> function per shared object module or can there be multiple external\n> entry points within a single shared object?\n\nMulti functions per shared object file is fine (in fact normal, I'd\nsay). See the example in src/tutorial/.\n\n> - If the latter, will multiple copies of the file be loaded (e.g., one\n> per function invoked) or will the same copy be used to resolve all\n> the multiple external entry points?\n\nJust one copy --- see src/backend/utils/fmgr/dfmgr.c\n\n> - Is it possible to write functions to automatically convert one\n> extended type into another? If so, how should this be done?\n\nA function named the same as a type, with one argument of some other\ntype, is treated as an implicit type conversion rule by the parser.\n\n> Also, in the docs for Chapter 5. Extending SQL: Types there is a code\n> example with the following:\n> \telog(WARN, ...);\n> elog.h does not define WARN. Should this be changed to NOTICE in the\n> docs?\n\nelog(ERROR) is the correct equivalent. Looks like someone fixed that\nalready; I can't find elog WARN anywhere in the current docs, except for\n\n/home/postgres/pgsql/HISTORY: Change elog(WARN) to elog(ERROR)(Bruce)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Aug 2000 11:25:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on extensions needed "
},
{
"msg_contents": "Thanks for the quick responses. They are very helpful.\n\n elog(ERROR) is the correct equivalent. Looks like someone fixed that\n already; I can't find elog WARN anywhere in the current docs, except for\n\n /home/postgres/pgsql/HISTORY: Change elog(WARN) to elog(ERROR)(Bruce)\n\nI was going by the web page\n(http://www.postgresql.org/users-lounge/docs/7.0/programmer/xtypes.htm),\nwhich still by the way mentions WARN. Perhaps they are out of date.\n\nCheers,\nBrook\n",
"msg_date": "Fri, 25 Aug 2000 11:30:53 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: advice on extensions needed"
},
{
"msg_contents": " > - Is it possible to write functions to automatically convert one\n > extended type into another? If so, how should this be done?\n\n A function named the same as a type, with one argument of some other\n type, is treated as an implicit type conversion rule by the parser.\n\nJust to make sure I understand. Suppose I create two user-defined\ntypes A and B and want interconversions. I will need the following\nfunctions, right?\n\n\t/* I/O */\nA * A_in (const char *);\nchar * A_out (const A *);\nB * B_in (const char *);\nchar * B_out (const B *);\n\n\t/* conversions */\nA * A (const B *);\nB * B (const A *);\n\nThanks again.\n\nCheers,\nBrook\n",
"msg_date": "Fri, 25 Aug 2000 12:19:30 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: advice on extensions needed"
},
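Those C-level functions also need to be registered before the parser can use them. A sketch with the 7.0-era CREATE FUNCTION syntax; the shared-object path and link symbols here are placeholders, not anything given in the thread:

-- Hypothetical registration of the conversion functions for types a and b.
CREATE FUNCTION b(a) RETURNS b
    AS '/path/to/funcs.so', 'B_from_A' LANGUAGE 'c';
CREATE FUNCTION a(b) RETURNS a
    AS '/path/to/funcs.so', 'A_from_B' LANGUAGE 'c';
-- Each function is named after its return type and takes one argument of
-- the other type, so the parser can apply it as an implicit conversion.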
{
"msg_contents": "> Just to make sure I understand. Suppose I create two user-defined\n> types A and B and want interconversions. I will need the following\n> functions, right?\n> /* conversions */\n> A * A (const B *);\n> B * B (const A *);\n\nRight.\n\n - Thomas\n",
"msg_date": "Sat, 26 Aug 2000 07:00:11 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on extensions needed"
}
] |
[
{
"msg_contents": "We have a serious problem with vacuum locking up our tables for\ntoo long, (large amount of data + large number of updates == long\nvacuum)\n\nAs a hack I'm thinking about using the RULE system to forward select\nqueries and alternate between two backing data tables.\n\nthe concept is:\n\n front\n |\n CREATE RULE \"_RETfront\" AS ON SELECT TO front DO INSTEAD\n SELECT * FROM back1;\n / x\nback1 back2\n\nThe idea is that after several large updates, instead of vacuuming,\nwe do this:\n\n-- suspend updating back1\ntruncate back2;\nselect * into table back2 from back1; -- is there a quicker way?\nvacuum verbose analyze back2;\n\nbegin; -- rule update needs a lock to prevent falling through into 'front'\nlock front; -- stops all queries to front\n -- is this really needed?\n -- will the next action (rule drop/recreate) happen atomically?\ndrop rule _RETfront;\nCREATE RULE \"_RETfront\" AS ON SELECT TO front DO INSTEAD SELECT * FROM back2;\nupdate active_table set active = '2'; -- remeber who's the active table\nend;\n-- resume normal updating however we now update back2\n\nafter several updates repeate the same process except swap back2\nwith back1 and vice versa.\n\nOk, now I know this is evil, but will it work? Will queries on\n'front' suffer any performance problems? The docs seem to indicate\nthat it won't however I just wanted to put this up and see if any\nof the developers could offer insight as to whether I'm apt to\nshoot myself in the foot doing this.\n\nWe really don't mind lagging the data updates, but stoping queries\nfor the 5 or so minutes it takes to vacuum is not an option. We need\nto vacuum every twenty minutes or so otherwise the table becomes nearly\nunusable.\n\nIs there a faster way to duplicate tables under postgresql than\nSELECT INTO?\n\nAre we going to have problems dropping and adding rules in the middle\nof a transaction?\n\nthanks!\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n",
"msg_date": "Fri, 25 Aug 2000 07:15:42 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuuming in the background"
}
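One detail worth flagging in the recipe above: SELECT ... INTO creates its target table and fails if the table already exists, so the TRUNCATE step would break the copy. A sketch of the copy step without that problem, using the table names from the message:

-- Drop and recreate instead of truncating, since SELECT INTO
-- creates the target table itself.
DROP TABLE back2;
SELECT * INTO TABLE back2 FROM back1;
-- equivalently: CREATE TABLE back2 AS SELECT * FROM back1;
VACUUM VERBOSE ANALYZE back2;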
] |
[
{
"msg_contents": "Is anybody considering this? So that people can write program which access\na database via the Internet. What I'm getting at is that we have\napplications which run on our Intranet. They query and update\ndatabases. There is interest in a work at home solution. Since the company\nhas multiple T1 Internet connections, they are interested in allowing\npeople to use their home ISP to connect. We are looking at a VPN solution\nas well, but they all seem to have a \"per seat\" or \"concurrent\nuse\" restriction. The more users, the higher the cost. Also, some ISPs\nhave stated that using a VPN over their facility is forbidden and will\nresult in termination of the service. Another possibility is to simply\nuse a secure Web server and rewrite the applications as CGI's or something\nsimiliar.\n\nMore of a curiousity question at present,\nJohn\n\n",
"msg_date": "Fri, 25 Aug 2000 12:56:24 -0500 (CDT)",
"msg_from": "John McKown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Access PostgreSQL server via SSL/Internet"
},
{
"msg_contents": "> Is anybody considering this? So that people can write program which access\n> a database via the Internet. What I'm getting at is that we have\n> applications which run on our Intranet. They query and update\n> databases. There is interest in a work at home solution. Since the company\n> has multiple T1 Internet connections, they are interested in allowing\n> people to use their home ISP to connect. We are looking at a VPN solution\n> as well, but they all seem to have a \"per seat\" or \"concurrent\n> use\" restriction. The more users, the higher the cost. Also, some ISPs\n> have stated that using a VPN over their facility is forbidden and will\n> result in termination of the service. Another possibility is to simply\n> use a secure Web server and rewrite the applications as CGI's or something\n> similiar.\n\nIt is trivial to connect clients and servers across an ssh-piped\nconnection. I'm not sure of the details as far as getting things set up\nto be automated for turnkey installations.\n\n - Thomas\n",
"msg_date": "Sat, 26 Aug 2000 07:04:44 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Access PostgreSQL server via SSL/Internet"
},
{
"msg_contents": "> \n> It is trivial to connect clients and servers across an ssh-piped\n> connection. I'm not sure of the details as far as getting things set up\n> to be automated for turnkey installations.\n> \n\nOTOH, people using ssh-piped connections need actual accounts on \nthe database server, opposed to just database accounts. That's\nsomething that isn't necessarily a good idea. Also, ssh-piped \nconnections are decent to setup, but you must always ssh in before\nyou want to do anything else.\n\n",
"msg_date": "Sat, 26 Aug 2000 09:39:37 -0500",
"msg_from": "Andrew Selle <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Access PostgreSQL server via SSL/Internet"
},
{
"msg_contents": "On Sat, 26 Aug 2000, Thomas Lockhart wrote:\n\n> > have stated that using a VPN over their facility is forbidden and will\n> > result in termination of the service. Another possibility is to simply\n> > use a secure Web server and rewrite the applications as CGI's or something\n> > similiar.\n> \n> It is trivial to connect clients and servers across an ssh-piped\n> connection. I'm not sure of the details as far as getting things set up\n> to be automated for turnkey installations.\n> \n\nThomas,\n\nThanks for the thought. I just found something called \"stunnel\" which may\ndo the trick.\n\nJohn\n\n",
"msg_date": "Sat, 26 Aug 2000 09:50:10 -0500 (CDT)",
"msg_from": "John McKown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Access PostgreSQL server via SSL/Internet"
},
{
"msg_contents": "On Sat, 26 Aug 2000, John McKown wrote:\n\n> On Sat, 26 Aug 2000, Thomas Lockhart wrote:\n> \n> > It is trivial to connect clients and servers across an ssh-piped\n> > connection. I'm not sure of the details as far as getting things set up\n> > to be automated for turnkey installations.\n> > \n> \n> Thomas,\n> \n> Thanks for the thought. I just found something called \"stunnel\" which may\n> do the trick.\n\nAlso look into \"vpnd\" - we're using it for a project for a client until I\ncan get the SSL connection stuff working properly... (Hint, hint... It\nwould be nice if it was better documented :)\n\nhttp://sunsite.auc.dk/vpnd/\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Sat, 26 Aug 2000 10:39:55 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Access PostgreSQL server via SSL/Internet"
},
{
"msg_contents": "* Andrew Selle <[email protected]> [000826 07:50] wrote:\n> > \n> > It is trivial to connect clients and servers across an ssh-piped\n> > connection. I'm not sure of the details as far as getting things set up\n> > to be automated for turnkey installations.\n> > \n> \n> OTOH, people using ssh-piped connections need actual accounts on \n> the database server, opposed to just database accounts. That's\n> something that isn't necessarily a good idea. Also, ssh-piped \n> connections are decent to setup, but you must always ssh in before\n> you want to do anything else.\n\nActually I'm pretty sure you can get around this problem with \nhost keys, but I haven't tried that.\n\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Sat, 26 Aug 2000 14:01:51 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Access PostgreSQL server via SSL/Internet"
}
] |
[
{
"msg_contents": "Hi,\n\nI am interested in how to speed up storage. About 1000\nor more inserts may need to be performed at a time ,\nand before each insert I need to look up its key from\nthe reference table. So each insert is actually a\nquery followed by an insert.\n \nThe tables concerned are :\nCREATE TABLE referencetable(idx serial, rcol1 int4 NOT\nNULL, rcol2 int4 NOT NULL, rcol3 varchar(20) NOT\nNULL, rcol4 varchar(20), PRIMARY KEY(idx) ...\nCREATE INDEX index_referencetable on\nreferencetable(rcol1, rcol2, rcol3, rcol4);\n\nCREATE TABLE datatable ( ref_idx int4,\nstart_date_offset int4 NOT NULL, stop_date_offset int4\nNOT NULL, dcol4 float NOT NULL, dcol5 float NOT NULL,\nPRIMARY KEY(ref_idx, start_date_offset), CONSTRAINT c1\nFOREIGN KEY(ref_idx) REFERENCES referencetable(idx) );\n\nI need to do the following sequence n number of times\n- \n1. select idx (as key) from referencetable where\ncol1=c1 and col2=c2 and col3=c3 and col4=c4; (Would an\ninitial 'select into temptable' help here since for a\nlarge number of these queries 'c1' and 'c2'\ncomnbinations would remain constant ?)\n2. insert into datatable values(key, ....);\n\nI am using JDBC interface of postgresql-7.0.2 on\nLinux. 'referencetable' has about 1000 records, it can\nkeep growing. 'datatable' has about 3 million records,\nit would grow at a very fast rate. Storing 2000\nrecords takes around 75 seconds after I vacuum\nanalyze. (before that it took around 40 seconds - ???)\n. I am performing all the inserts ( including the\nlookup) as one transaction.\n\nThanks,\nRini\n\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! Mail - Free email you can access from anywhere!\nhttp://mail.yahoo.com/\n",
"msg_date": "Fri, 25 Aug 2000 12:20:59 -0700 (PDT)",
"msg_from": "Rini Dutta <[email protected]>",
"msg_from_op": true,
"msg_subject": "queries and inserts "
},
{
"msg_contents": "Removing indexes will speed up the INSERT portion but slow down the SELECT\nportion.\n\nJust an FYI, you can INSERT into table (select whatever from another\ntable) -- you could probably do what you need in a single query (but would\nalso probably still have the speed problem).\n\nHave you EXPLAINed the SELECT query to see if index scans are being used\nwhere possible?\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Rini Dutta\" <[email protected]>\nTo: <[email protected]>\nCc: <[email protected]>\nSent: Friday, August 25, 2000 12:20 PM\nSubject: [SQL] queries and inserts\n\n\n> Hi,\n>\n> I am interested in how to speed up storage. About 1000\n> or more inserts may need to be performed at a time ,\n> and before each insert I need to look up its key from\n> the reference table. So each insert is actually a\n> query followed by an insert.\n>\n> The tables concerned are :\n> CREATE TABLE referencetable(idx serial, rcol1 int4 NOT\n> NULL, rcol2 int4 NOT NULL, rcol3 varchar(20) NOT\n> NULL, rcol4 varchar(20), PRIMARY KEY(idx) ...\n> CREATE INDEX index_referencetable on\n> referencetable(rcol1, rcol2, rcol3, rcol4);\n>\n> CREATE TABLE datatable ( ref_idx int4,\n> start_date_offset int4 NOT NULL, stop_date_offset int4\n> NOT NULL, dcol4 float NOT NULL, dcol5 float NOT NULL,\n> PRIMARY KEY(ref_idx, start_date_offset), CONSTRAINT c1\n> FOREIGN KEY(ref_idx) REFERENCES referencetable(idx) );\n>\n> I need to do the following sequence n number of times\n> -\n> 1. select idx (as key) from referencetable where\n> col1=c1 and col2=c2 and col3=c3 and col4=c4; (Would an\n> initial 'select into temptable' help here since for a\n> large number of these queries 'c1' and 'c2'\n> comnbinations would remain constant ?)\n> 2. insert into datatable values(key, ....);\n>\n> I am using JDBC interface of postgresql-7.0.2 on\n> Linux. 'referencetable' has about 1000 records, it can\n> keep growing. 'datatable' has about 3 million records,\n> it would grow at a very fast rate. Storing 2000\n> records takes around 75 seconds after I vacuum\n> analyze. (before that it took around 40 seconds - ???)\n> . I am performing all the inserts ( including the\n> lookup) as one transaction.\n>\n> Thanks,\n> Rini\n>\n>\n> __________________________________________________\n> Do You Yahoo!?\n> Yahoo! Mail - Free email you can access from anywhere!\n> http://mail.yahoo.com/\n>\n\n",
"msg_date": "Sun, 27 Aug 2000 15:01:02 -0700",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries and inserts "
}
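The INSERT ... SELECT form mentioned above folds each lookup/insert pair into a single statement. A sketch against the tables from the original message; the literal values are placeholders for the per-record parameters:

-- One statement instead of a SELECT followed by an INSERT;
-- the literals stand in for c1..c4 and the data columns.
INSERT INTO datatable (ref_idx, start_date_offset, stop_date_offset, dcol4, dcol5)
SELECT idx, 100, 200, 1.5, 2.5
  FROM referencetable
 WHERE rcol1 = 1 AND rcol2 = 2 AND rcol3 = 'c3' AND rcol4 = 'c4';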
] |
[
{
"msg_contents": "I would like to see if we can't get some amount of OUTER JOIN functionality\ninto the 7.1 release. It seems that the executor implementation should be\npretty simple, at least for the left-join case (which is what people appear\nto need the most). What we are mostly missing is a suitable parsetree\nrepresentation plus appropriate handling in the planner.\n\nRight now, the rangetable associated with a query serves two purposes.\nFirst, it provides a list of all relations used in the query for Var nodes\nto refer to (a Var's \"varno\" field is just an index into the rangetable\nlist). Second, it drives the planner's construction of the join tree for\nthe query (all entries that are marked inJoinSet will be added to the join\ntree in some order). I think what we want to do is separate these two\nfunctions into two data structures. The rangetable list is just fine for\nlooking up Var references, but it's not sufficient information for\nrepresenting multiple types of joins. A ready-to-plan Query node should\nhave an additional subsidiary data structure that describes the required\njoin tree of relations.\n\nThe additional data structure could really be pretty close to the raw parse\nstructure that the grammar emits for FROM clauses. I envision it as a tree\nwhose leaf nodes are references to rangetable entries and whose upper-level\nnodes correspond to JOIN clauses (each JOIN node has pointers to its child\nnodes plus information about the specified join type and any supplied\nconstraint conditions). If the FROM clause has more than one item, the\ntop level of the join tree is a CROSS JOIN node with all the FROM-clause\nitems as children. (Note that at least for this case, a join node needs\nto be able to have more than two children.)\n\nIf anything, this structure should be easier for the parser/analyzer to emit\nthan what it's doing now. Notice in particular that ON/USING qual clauses\nassociated with joins are to be left in the join tree, *not* merged into the\nmain WHERE clause. (The planner would just have to separate them out again\nand discover which join they should go with, so what's the point of merging\nthem into WHERE?)\n\nI envision the planner as operating by walking this join tree and\nconsidering plans that perform each specified join. At nodes with more than\ntwo children (such as the top-level CROSS JOIN from a many-element FROM\nclause) the planner will consider all possible orders for joining the\nchildren together, the same as it does now. Note that this implementation\nimplies that the user can guide/constrain the planner by writing an\nappropriate FROM clause. For example:\n\nSELECT ... FROM a,b,c\n\tPlanner will consider all three possible join orders:\n\t(a cross join b) cross join c\n\t(a cross join c) cross join b\n\t(b cross join c) cross join a\n\nSELECT ... FROM (a cross join b) cross join c\n\tPlanner will only consider joining a with b and then adding c.\n\nThis might be construed as either a feature or a bug depending on your\nviews about how much responsibility to put on the planner vs. the user.\nIn any case it seems like a reasonable first-cut implementation; we can\nthink about improvements later.\n\nThoughts, objections, better ideas?\n\nOne thing I'm not very clear on is how much has been done in the parser\nabout the ISO rules for visibility and aliasing of table columns in\nexplicit JOINs. Thomas, can you sketch out where we stand in that area?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Aug 2000 16:07:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proposal for supporting outer joins in 7.1"
},
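For readers less familiar with the terminology: the null-filling behavior at issue is the defining feature of an outer join. A hedged illustration with made-up tables a(id) and b(a_id, val):

-- A left join preserves every row of a; where no b row matches,
-- b's columns are returned as NULL instead of the row being dropped.
SELECT a.id, b.val
  FROM a LEFT OUTER JOIN b ON a.id = b.a_id;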
{
"msg_contents": "> I would like to see if we can't get some amount of OUTER JOIN functionality\n> into the 7.1 release.\n\nGreat!\n\n> It seems that the executor implementation should be\n> pretty simple, at least for the left-join case (which is what people appear\n> to need the most). What we are mostly missing is a suitable parsetree\n> representation plus appropriate handling in the planner.\n\nI've already got the merge-join code walking left, right, and full joins\n(look for #ifdef code in the appropriate routines). Just didn't implment\nthe \"null column filling\", and didn't figure out how to push the\nleft/right/full flags into the executor. I haven't looked at the other\njoin techniques to see how easy those would be.\n\n> Right now, the rangetable associated with a query serves two purposes.\n> First, it provides a list of all relations used in the query for Var nodes\n> to refer to (a Var's \"varno\" field is just an index into the rangetable\n> list). Second, it drives the planner's construction of the join tree for\n> the query (all entries that are marked inJoinSet will be added to the join\n> tree in some order). I think what we want to do is separate these two\n> functions into two data structures. The rangetable list is just fine for\n> looking up Var references, but it's not sufficient information for\n> representing multiple types of joins. A ready-to-plan Query node should\n> have an additional subsidiary data structure that describes the required\n> join tree of relations.\n> \n> The additional data structure could really be pretty close to the raw parse\n> structure that the grammar emits for FROM clauses. I envision it as a tree\n> whose leaf nodes are references to rangetable entries and whose upper-level\n> nodes correspond to JOIN clauses (each JOIN node has pointers to its child\n> nodes plus information about the specified join type and any supplied\n> constraint conditions). If the FROM clause has more than one item, the\n> top level of the join tree is a CROSS JOIN node with all the FROM-clause\n> items as children. (Note that at least for this case, a join node needs\n> to be able to have more than two children.)\n> \n> If anything, this structure should be easier for the parser/analyzer to emit\n> than what it's doing now. Notice in particular that ON/USING qual clauses\n> associated with joins are to be left in the join tree, *not* merged into the\n> main WHERE clause. (The planner would just have to separate them out again\n> and discover which join they should go with, so what's the point of merging\n> them into WHERE?)\n> \n> I envision the planner as operating by walking this join tree and\n> considering plans that perform each specified join. At nodes with more than\n> two children (such as the top-level CROSS JOIN from a many-element FROM\n> clause) the planner will consider all possible orders for joining the\n> children together, the same as it does now. Note that this implementation\n> implies that the user can guide/constrain the planner by writing an\n> appropriate FROM clause. For example:\n> \n> SELECT ... FROM a,b,c\n> Planner will consider all three possible join orders:\n> (a cross join b) cross join c\n> (a cross join c) cross join b\n> (b cross join c) cross join a\n> \n> SELECT ... FROM (a cross join b) cross join c\n> Planner will only consider joining a with b and then adding c.\n> \n> This might be construed as either a feature or a bug depending on your\n> views about how much responsibility to put on the planner vs. 
the user.\n> In any case it seems like a reasonable first-cut implementation; we can\n> think about improvements later.\n> \n> Thoughts, objections, better ideas?\n> \n> One thing I'm not very clear on is how much has been done in the parser\n> about the ISO rules for visibility and aliasing of table columns in\n> explicit JOINs. Thomas, can you sketch out where we stand in that area?\n\nThe current code does not do the scoping rules quite right, but it isn't\nvery far wrong either. I think that the scoping and aliasing can be\ncontained within the parser code, swallowing the intermediate scopes by\nresolving back to the original names. However, if we continue to do it\nthis way (losing intermediate alias names) then we will be making it\nharder for the planner/optimizer to jiggle up the plans, since the query\ntree will have already assumed some references. For example:\n\n select * from (a join b on (i)) join c using (i = j);\n\nis equivalent to\n\n select * from (a join b on (i)) as t1 (i, ...) join c using (t1.i =\nj);\n\nbut the parser may already recast it as\n\n select a.i, a..., b.x, b..., c.j, c... from a, b, c\n where (a.i = b.i) and (a.i = c.j);\n\nso the optimizer would have to figure out for itself that a.i is\ninterchangable with b.i.\n\n(Also, you'll remember that it currently barfs on three-way joins; I\nhaven't had a chance to look at that yet).\n\nGetting outer joins for 7.1 would be great. Count me in to help...\n\n - Thomas\n",
"msg_date": "Sat, 26 Aug 2000 07:21:35 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for supporting outer joins in 7.1"
},
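The scoping point is easier to see with an explicit alias on the join, following the form used in the message above. Under SQL92 the alias renames the join result and hides the underlying table names outside the parentheses; a sketch, with hypothetical tables a and b sharing column i, and c having column j:

-- t1 names the result of the inner join; outside the parentheses,
-- its columns must be referenced through t1, not through a or b.
SELECT t1.i, c.j
  FROM (a JOIN b USING (i)) AS t1
  JOIN c ON t1.i = c.j;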
{
"msg_contents": "> I would like to see if we can't get some amount of OUTER JOIN functionality\n> into the 7.1 release. It seems that the executor implementation should be\n\nJust a word of support. Outer Joins are one of the few things that PostgreSQL\nmisses to really make it big time. If you can help 90% of all cases until the\n\"real\" solution comes in 7.2, there's no reason to hesitate.\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2582\nHowitzvej 75 �ben 14.00-18.00 Email: [email protected]\n2000 Frederiksberg L�rdag 11.00-17.00 Web: www.suse.dk\n",
"msg_date": "Sat, 26 Aug 2000 14:53:28 +0200",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for supporting outer joins in 7.1"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I've already got the merge-join code walking left, right, and full joins\n> (look for #ifdef code in the appropriate routines).\n\nOK, will look.\n\n> Just didn't implment\n> the \"null column filling\", and didn't figure out how to push the\n> left/right/full flags into the executor.\n\nI can deal with the null tuple insertion. As for the flags, we just\nneed to add those to the Plan nodes for joins ... but first the info\nhas to get to the planner, thus we need to fix the parser output.\n\n> I haven't looked at the other\n> join techniques to see how easy those would be.\n\nOffhand it seems that left join is trivial in all three join types.\nRight join can be handled by swapping the two tables (make the inner\nouter), but full join only seems possible with a merge join. I am also\nconcerned about whether even a merge join can do right/full joins\ncorrectly, if the merge qual (ordering expression) does not include\n*all* the relevant WHERE conditions.\n\n> The current code does not do the scoping rules quite right, but it isn't\n> very far wrong either. I think that the scoping and aliasing can be\n> contained within the parser code, swallowing the intermediate scopes by\n> resolving back to the original names. However, if we continue to do it\n> this way (losing intermediate alias names) then we will be making it\n> harder for the planner/optimizer to jiggle up the plans, since the query\n> tree will have already assumed some references.\n\nHow so? The planner only sees Var nodes, which are unambiguous by\ndefinition: rangetable index and attribute number don't have any\ndependency on aliases. I think the only real problem here is whether\nanalyze.c resolves column names correctly, ie searching the right set\nof aliases, in each part of the query text. (If I understand the SQL\nspec correctly, the set of aliases visible in a JOIN's ON/USING clauses\nis different from what's visible elsewhere in the query --- do we do\nthat right, or not?)\n\n> but the parser may already recast it as\n> select a.i, a..., b.x, b..., c.j, c... from a, b, c\n> where (a.i = b.i) and (a.i = c.j);\n\nThat is what the parser currently does, but I think it's actually easier\nall round if we leave the ON/USING conditions attached to the relevant\nJoinExpr node, instead of pushing them over to the main WHERE clause.\n\n> (Also, you'll remember that it currently barfs on three-way joins; I\n> haven't had a chance to look at that yet).\n\nIIRC the barf is directly related to the lack of suitable parsetree\nrepresentation for this stuff, so I suspect that we need not worry too\nmuch about fixing that problem. It should go away by itself once we\nnail down a better representation.\n\n> Getting outer joins for 7.1 would be great. Count me in to help...\n\nOK, good. I think I have enough of a handle on the issues downstream\nfrom the parser. Are you interested in dealing with the column lookup/\naliasing questions?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 Aug 2000 12:05:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for supporting outer joins in 7.1 "
},
{
"msg_contents": "> OK, good. I think I have enough of a handle on the issues downstream\n> from the parser. Are you interested in dealing with the column lookup/\n> aliasing questions?\n\nSure. That's a fun part for me...\n\n - Thomas\n",
"msg_date": "Sat, 26 Aug 2000 16:33:02 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for supporting outer joins in 7.1"
},
{
"msg_contents": "At 16:07 25/08/00 -0400, Tom Lane wrote:\n> For example:\n>\n>SELECT ... FROM a,b,c\n>\tPlanner will consider all three possible join orders:\n>\t(a cross join b) cross join c\n>\t(a cross join c) cross join b\n>\t(b cross join c) cross join a\n>\n>SELECT ... FROM (a cross join b) cross join c\n>\tPlanner will only consider joining a with b and then adding c.\n>\n\nI'm not sure whether the above is the syntax you plan to use, but it looks\na little too much like:\n\nSELECT ... FROM (select * from a cross join b) as z cross join c\n\nwhich has a quite different meaning to any kind of outer join, and the\nparenthesis, in this case, should not affect the planner choices. I don;t\nthink that support for this kind of query is implemented, but it could\nconfuse things a little.\n\nAs an aside, while you are in there, I don't suppose you would be able to\nimplement that above syntax as well? Then, maybe, have a go at the common\ncold, too.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 27 Aug 2000 19:56:57 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for supporting outer joins in 7.1"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n>> SELECT ... FROM (a cross join b) cross join c\n> I'm not sure whether the above is the syntax you plan to use, but it looks\n> a little too much like:\n> SELECT ... FROM (select * from a cross join b) as z cross join c\n> which has a quite different meaning to any kind of outer join,\n\nHuh? AFAIK they mean exactly the same thing, modulo a few issues about\nvisibility of columns. In any case you'll have to take your complaint\nto ISO, because that's what the spec says the syntaxes are.\n\n> I don;t think that support for this kind of query is implemented,\n\nNot yet, but it's certainly on the TODO list. I'm not seriously\nthinking about getting subselect-in-FROM done in this go-round,\nthough.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 27 Aug 2000 11:38:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for supporting outer joins in 7.1 "
},
{
"msg_contents": "At 11:38 27/08/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>>> SELECT ... FROM (a cross join b) cross join c\n>> I'm not sure whether the above is the syntax you plan to use, but it looks\n>> a little too much like:\n>> SELECT ... FROM (select * from a cross join b) as z cross join c\n>> which has a quite different meaning to any kind of outer join,\n>\n>Huh? AFAIK they mean exactly the same thing, modulo a few issues about\n>visibility of columns. In any case you'll have to take your complaint\n>to ISO, because that's what the spec says the syntaxes are.\n\nSorry, I should have written:\n\n SELECT ... FROM (select * from a,b) as z, c\n\nwhich does not do an outer join, and does not imply any ordering.\n\n\n>> I don;t think that support for this kind of query is implemented,\n>\n>Not yet, but it's certainly on the TODO list. I'm not seriously\n>thinking about getting subselect-in-FROM done in this go-round,\n>though.\n\nGreat! If by Subselect-in-from you mean something like what I wrote above,\nthen it is a major win.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 28 Aug 2000 03:38:25 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for supporting outer joins in 7.1 "
}
] |
[
{
"msg_contents": "\nForwarded from the webmaster mailbox, please be sure to include \[email protected] in your responses.\n\n---------- Forwarded message ----------\nDate: Fri, 25 Aug 2000 17:38:23 GMT\nFrom: Wout de Jong <[email protected]>\nTo: \"[email protected]\" <[email protected]>\nSubject: Pure ODBMS\n\nHi,\n\nI appreciate your effort in building this very nice piece of software.\nHowever, I'm building an app and not interested in translating all my\nobjects into SQL. I think I found one solution called Pure Object\nDBMS. The only problem is that there is no open source variant\navailable. Do you know of any initiatives in this direction from\npostgresql.org or other opensource groups? TIA for answering...\n\nGreetz, Wout\n\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n\n",
"msg_date": "Fri, 25 Aug 2000 18:50:02 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pure ODBMS (fwd)"
},
{
"msg_contents": "\nMy background project is to turn postgresql into a ODMG compliant object\ndatabase. But more work needs to be done!\n\nVince Vielhaber wrote:\n> \n> Forwarded from the webmaster mailbox, please be sure to include\n> [email protected] in your responses.\n> \n> ---------- Forwarded message ----------\n> Date: Fri, 25 Aug 2000 17:38:23 GMT\n> From: Wout de Jong <[email protected]>\n> To: \"[email protected]\" <[email protected]>\n> Subject: Pure ODBMS\n> \n> Hi,\n> \n> I appreciate your effort in building this very nice piece of software.\n> However, I'm building an app and not interested in translating all my\n> objects into SQL. I think I found one solution called Pure Object\n> DBMS. The only problem is that there is no open source variant\n> available. Do you know of any initiatives in this direction from\n> postgresql.org or other opensource groups? TIA for answering...\n> \n> Greetz, Wout\n> \n> _________________________________________________________\n> Do You Yahoo!?\n> Get your free @yahoo.com address at http://mail.yahoo.com\n",
"msg_date": "Mon, 28 Aug 2000 09:53:09 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Pure ODBMS (fwd)"
}
] |
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\nHi!\n\nI'm currently checking how difficult it would be to create something like \nOracle TNS (I think that what it's called) service for the frontend library. \nThe idea is:\n\n1. Modify conninfo_parse() to accept parameters like \"service=myservice\"\n\n2. e.g. you put as connection string \"user=foo password=bar \nservice=schlumpf\", this will look into a special file (e.g. pg_services.conf) \nfor a section like this:\n[schlumpf]\nhost=194.121.121.121\nport=5433\ndatabase=mydb\noptions=....\n\nThis will allow migration of database services much easier, especially if you \nhave a lot of different applications (like at ISP's or ASP's), so when you \nchange host, port or something like this, you won't have to touch \napplications (where faults are likely to occur).\n\nIs something like this already possible in Postgres, or available with a \npatch?\n\n- -- \nWhy is it always Segmentation's fault?\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.3i\nCharset: noconv\n\niQCVAwUBOaeI+wotfkegMgnVAQGZyAQAtBIOt3uk+fSq+otCuWQXSReh2evhwsaj\npDPCSe4OKFK6oIp65IIfgrLDxzmtBCl4xOI2A5wpv7G81V28WYbnZUG5YEI9rEk0\nxej6JozKL1pkGsfbPhnermIniVn81NCjhnBcSSonPG3Q4DGT+lbmY7aOx2CAO4Eu\nD1LhFQrQnls=\n=2t79\n-----END PGP SIGNATURE-----\n",
"msg_date": "Sat, 26 Aug 2000 11:08:10 +0200",
"msg_from": "Mario Weilguni <[email protected]>",
"msg_from_op": true,
"msg_subject": "TNS Services like Oracle?"
}
] |
[
{
"msg_contents": "I hate these discussions. Endless back and forth -- much ado about nothing. \n Yet...\n\n\"PostgresSQL\" does seem kinda unwieldy and awkward. And the \"SQL\" part does \nseem superfluous since most databases now support SQL in some way shape or \nform. Adding \"SQL\" now seems almost as arbitrary as the previous \"95\".\n\nYes, not all databases can claim full SQL compliance, and Postgres does it \nbetter than MySQL but so what? PostgreSQL's \"competition\" is much broader: \nOracle, SQL Server, DB2, etc. -- all of whom have excellent SQL support.\n\nIf it is deemed important to tack on defining features of a \"next \ngeneration\" database then prehaps \"OR\" (object-relational) should be tacked \non instead of, or in addition to \"SQL\"? But in five years this will again \nseem an unnecessary addition to \"Postgre(s)\" and the name would still be \ncumbersome.\n\nPostgreSQLOR?, PostgreSQL-OR?, PostgreORSQL? Postgre-ORSQL?\n\nI really don't know what the best name would be, and keeping it the way it \nis is just fine with me. I use PostgreSQL for what it is, not what it is \ncalled. Returning to the core descriptive term: \"Postgre(s)\" would be fine \nalso.\n\nJohn\n\n\n________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com\n\n",
"msg_date": "Sat, 26 Aug 2000 13:39:39 EDT",
"msg_from": "\"John Daniels\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: How Do You Pronounce \"PostgreSQL\"?"
}
] |
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\nI've checked the sources and found out that the implementation is quite \ntrivial. I've modified fe-connect.c and will deploy this modification on our \nsite. Is there any interest in a patch?\n\nThis will allow to have important connection parameters in a separate file, \nand thus changing the database host/port/dbname without having to touch \n(maybe a lot) of applications. For PHP/Apache this is ideal, because as a \ndatabase administrator you're not aware where database are referenced from, \nand so you've only to modify one file.\n\n- -- \nWhy is it always Segmentation's fault?\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.3i\nCharset: noconv\n\niQCVAwUBOafYZgotfkegMgnVAQEUhQQAmCRrEpkqxhfFlTYrDzp6MSNjkUGUUFjL\nEJba8bVGz35JjqBaIF98Xh/av8Jt7pzqTcul1tK5oqCv3VpxDriscSTPfslm2FgR\npWuCzKegLoOWGpiB0MOhcF5YOLXW7LlKHSnUjdYSibnemB3kdSoGM96rpIonQO6h\ngyKxlELtQj0=\n=pv+S\n-----END PGP SIGNATURE-----\n",
"msg_date": "Sat, 26 Aug 2000 16:47:02 +0200",
"msg_from": "Mario Weilguni <[email protected]>",
"msg_from_op": true,
"msg_subject": "TNS like service for Postgres / Update"
}
] |
[
{
"msg_contents": "I was a little startled to find that current sources refuse to compile\non a rather old redhat release (4.2). The observed failure is\n\npqcomm.c: In function `StreamConnection':\npqcomm.c:314: `socklen_t' undeclared (first use this function)\n\nwhich is pretty reasonable seeing that socklen_t is in fact not defined\nanywhere in /usr/include on this system. What's not reasonable is that\nconfigure is selecting socklen_t as the value of ACCEPT_TYPE_ARG3.\nLooking into it, I find that AC_FUNC_ACCEPT_ARGTYPES is in fact failing\ncompletely: every single combination it tries fails with errors like\n\nconfigure:4749: gcc -c -O2 conftest.c 1>&5\nconfigure:4743: conflicting types for `accept'\n/usr/include/sys/socket.h:375: previous declaration of `accept'\n\nand it then defaults to \n\n ac_cv_func_accept_arg1=int\n ac_cv_func_accept_arg2='struct sockaddr *'\n ac_cv_func_accept_arg3='socklen_t'\n\nwhich is not too bright considering that it has just proven that\nthat combination will not work.\n\nThe proximate cause of the failure is that this platform actually\ndeclares accept as\n\nint accept __P ((int __sockfd, __const struct sockaddr *__peer,\n int *__paddrlen));\n\nso it would seem that you need to try \"const struct sockaddr *\"\n(and perhaps \"const void *\"?) as one of the possibilities for\nac_cv_func_accept_arg2. However I also suggest that\n\n1. If we fail to get any combination to compile, configure should\nabort, not press on with a default combination that is guaranteed\nnot to work.\n\n2. If sys/socket.h doesn't exist or doesn't provide a prototype for\naccept(), then the first combination tried will \"succeed\". As coded,\nthis is int / struct sockaddr * / socklen_t, which may be POSIX-standard\nbut I wonder how likely it is to work on platforms that are too old to\nhave prototypes in sys/socket.h. Perhaps size_t would be a safer thing\nto try first.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 Aug 2000 14:51:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "AC_FUNC_ACCEPT_ARGTYPES is falling down on the job"
}
] |
[
{
"msg_contents": "There is a critical PostgreSQL server at SMU (Southern Methodist\nUniversity) that is down. Rebooting does not help and pg_dump fails. \nThey are running 6.5.3. The site is down, and the administrator is\nstuck. The previous pg_dump is four days old.\n\nIf someone is interested in assisting them, please reply to this e-mail\nto me. I have the contact information and telnet passwords.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 26 Aug 2000 18:19:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Critical server is down, user needs help"
},
{
"msg_contents": "\nI can take a quick look around, if they want ... if nothing else, can act\nas \"the lackey\" and provide more detailed bug reports to the -hackers list\n..\n\n\nOn Sat, 26 Aug 2000, Bruce Momjian wrote:\n\n> There is a critical PostgreSQL server at SMU (Southern Methodist\n> University) that is down. Rebooting does not help and pg_dump fails. \n> They are running 6.5.3. The site is down, and the administrator is\n> stuck. The previous pg_dump is four days old.\n> \n> If someone is interested in assisting them, please reply to this e-mail\n> to me. I have the contact information and telnet passwords.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 26 Aug 2000 20:16:10 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Critical server is down, user needs help"
},
{
"msg_contents": "I have bounced you the email with the info. Can you call me so I can\ngive you the password?\n\n> There is a critical PostgreSQL server at SMU (Southern Methodist\n> University) that is down. Rebooting does not help and pg_dump fails. \n> They are running 6.5.3. The site is down, and the administrator is\n> stuck. The previous pg_dump is four days old.\n> \n> If someone is interested in assisting them, please reply to this e-mail\n> to me. I have the contact information and telnet passwords.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 26 Aug 2000 19:24:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Critical server is down, user needs help"
}
] |
[
{
"msg_contents": "I noticed you've got some really ugly stuff in gram.y to handle\n\tSELECT * FROM foo UNION JOIN bar\nwhich has a shift/reduce conflict with\n\tSELECT * FROM foo UNION SELECT * FROM bar\nLooks like you resolved this by requiring parens around a UNION JOIN\nconstruct. So, aside from being ugly, this fails to meet the SQL92\nspec (nothing about parens there...).\n\nThis is another case where a one-token lookahead between the lexer\nand parser would make life a lot easier: we could replace UNION JOIN\nwith a single UNIONJOIN token and thereby eliminate the shift-reduce\nconflict.\n\nYou'll probably recall that the ambiguity between NOT NULL and NOT\nDEFERRABLE gave us similar problems. We were able to get around that\nby pretending NOT DEFERRABLE is an independent clause and leaving some\nof the parsing work to be done by analyze.c, but I don't think that\ntrick will work here.\n\nI seem to recall a third case where a lookahead would have helped,\nbut can't find the details in the archives right now.\n\nI think it's time to bite the bullet and put in a lookahead filter.\nWhat say you?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 Aug 2000 19:36:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "UNION JOIN vs UNION SELECT"
},
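For context, SQL92's UNION JOIN is unrelated to a UNION of SELECTs, which is why the collision is purely syntactic: it pairs no rows at all, null-extending each input instead. A sketch of the usual equivalence (assuming FULL OUTER JOIN were available):

-- Every row of foo extended with NULLs for bar's columns, plus every
-- row of bar extended with NULLs for foo's columns:
SELECT * FROM foo UNION JOIN bar;
-- behaves like a full join whose condition never matches:
SELECT * FROM foo FULL OUTER JOIN bar ON 1 = 0;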
{
"msg_contents": "> I noticed you've got some really ugly stuff in gram.y to handle\n> SELECT * FROM foo UNION JOIN bar\n> I think it's time to bite the bullet and put in a lookahead filter.\n> What say you?\n\n*sigh* Probably right. The UNION vs UNION JOIN stuff illustrates it\npretty well. I haven't tried assigning precedence levels to these tokens\nor to those subclauses; would that help to resolve the conflicts?\n\n - Thomas\n",
"msg_date": "Mon, 28 Aug 2000 15:51:50 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UNION JOIN vs UNION SELECT"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> I think it's time to bite the bullet and put in a lookahead filter.\n>> What say you?\n\n> *sigh* Probably right. The UNION vs UNION JOIN stuff illustrates it\n> pretty well. I haven't tried assigning precedence levels to these tokens\n> or to those subclauses; would that help to resolve the conflicts?\n\nI don't see how. The real problem is that given\n\n\tSELECT * FROM foo UNION ...\n ^ parsing here\n\nyou don't know whether to reduce what you have to select_clause\n(as you must if what follows is UNION SELECT) or shift (as you must\nif you want to parse \"foo UNION JOIN bar\" as part of the FROM-clause).\nPrecedence will not help: the grammar is just plain not LR(1) unless you\ncount UNION JOIN as a single token. It's barely possible that we could\nredesign our grammar to avoid needing to make a shift-reduce decision\nhere, but it would be so ugly and nonintuitive that I can't see that as\nbeing a better answer than a lookahead filter.\n\nWe should use precedence to implement ISO's distinction in the\nprecedence of UNION, INTERSECT, and EXCEPT (we get that wrong\ncurrently), but I don't see how it helps for the UNION vs UNION JOIN\nissue.\n\nQuite apropos of this: now that we are committed to assuming our lexer\nis flex, does anyone object to using flex's -P option to customize the\nyyfoo() names emitted by flex? That seems cleaner to me than the\nsed-script kluges we currently rely on.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Aug 2000 14:03:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: UNION JOIN vs UNION SELECT "
},
{
"msg_contents": "> the grammar is just plain not LR(1) unless you\n> count UNION JOIN as a single token. \n\nWould it be bad to make UNION JOIN as a single token?\n",
"msg_date": "Tue, 29 Aug 2000 11:07:39 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: UNION JOIN vs UNION SELECT"
},
{
"msg_contents": "Chris <[email protected]> writes:\n>> the grammar is just plain not LR(1) unless you\n>> count UNION JOIN as a single token. \n\n> Would it be bad to make UNION JOIN as a single token?\n\nThat's exactly the solution I'm proposing. However, it's pretty painful\nto make the lexer do it directly (consider intervening comments, for\nexample) so what I have in mind is a filter between the parser and lexer\nthat does one-token lookahead when it finds a UNION token. If next\ntoken is JOIN, pass back just one UNIONJOIN token, else stash away the\nsecond token to be returned on next call from parser.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Aug 2000 20:27:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: UNION JOIN vs UNION SELECT "
},
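For readers following along, here is a minimal sketch of the kind of lexer/parser filter being described. All names (filtered_yylex, base_yylex) and token codes are illustrative stand-ins, not PostgreSQL's actual identifiers, and a real filter must also stash yylval alongside the token code.

/* Token codes would normally come from the bison-generated header;
 * they are defined here only so the sketch stands alone. */
#define UNION     1001
#define JOIN      1002
#define UNIONJOIN 1003

extern int base_yylex(void);    /* the flex-generated scanner */

static int stashed_token;       /* token saved by a previous peek */
static int have_stashed = 0;

int
filtered_yylex(void)
{
    int cur;

    if (have_stashed)
    {
        have_stashed = 0;
        return stashed_token;   /* hand back the token peeked at last time */
    }

    cur = base_yylex();
    if (cur == UNION)
    {
        int next = base_yylex();        /* one-token lookahead */

        if (next == JOIN)
            return UNIONJOIN;   /* collapse UNION JOIN into a single token */

        stashed_token = next;   /* not JOIN: return it on the next call */
        have_stashed = 1;       /* (a real filter must save yylval too) */
    }
    return cur;
}

The parser simply calls filtered_yylex() wherever it would have called yylex(), so the per-token cost is one extra function call and an if-test.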
{
"msg_contents": "Tom Lane wrote:\n> \n> Chris <[email protected]> writes:\n> >> the grammar is just plain not LR(1) unless you\n> >> count UNION JOIN as a single token.\n> \n> > Would it be bad to make UNION JOIN as a single token?\n> \n> That's exactly the solution I'm proposing. However, it's pretty painful\n> to make the lexer do it directly (consider intervening comments, for\n> example)\n\nComments are a pain in the parser. What if something prior to the lexer\nfiltered out comments before either the lexer or parser could see them?\nWould it be as easy as s/--.*// before the lexer?\n",
"msg_date": "Tue, 29 Aug 2000 11:58:23 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: UNION JOIN vs UNION SELECT"
},
{
"msg_contents": "\nTo answer my own question, of course that's no good because there are\nconstants and other stuff. Another suggestion, could we take the SQL\nstandards group out the back and have them flogged? :-)\n\n> \n> Tom Lane wrote:\n> >\n> > Chris <[email protected]> writes:\n> > >> the grammar is just plain not LR(1) unless you\n> > >> count UNION JOIN as a single token.\n> >\n> > > Would it be bad to make UNION JOIN as a single token?\n> >\n> > That's exactly the solution I'm proposing. However, it's pretty painful\n> > to make the lexer do it directly (consider intervening comments, for\n> > example)\n> \n> Comments are a pain in the parser. What if something prior to the lexer\n> filtered out comments before either the lexer or parser could see them?\n> Would it be as easy as s/--.*// before the lexer?\n",
"msg_date": "Tue, 29 Aug 2000 12:07:25 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: UNION JOIN vs UNION SELECT"
},
{
"msg_contents": "> > \n> > That's exactly the solution I'm proposing. However, it's pretty painful\n> > to make the lexer do it directly (consider intervening comments, for\n> > example)\n> \n> Comments are a pain in the parser. What if something prior to the lexer\n> filtered out comments before either the lexer or parser could see them?\n> Would it be as easy as s/--.*// before the lexer?\n\nIt probably wouldn't be that simple, but I do think that the solution\nis sound. Such a design is recommended by the Dragon book and has\nthe benefit of simplifying both the lexer and the parser.\n\n-Andy\n",
"msg_date": "Mon, 28 Aug 2000 22:09:45 -0500",
"msg_from": "Andrew Selle <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: UNION JOIN vs UNION SELECT"
},
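A minimal sketch of such a comment-stripping pass, purely illustrative (nothing like it exists in the PostgreSQL tree): it has to track quoted literals, which is exactly why the simple s/--.*// substitution is not enough. It is still naive — it ignores /* */ comments, doubled quotes, and backslash escapes, for instance.

#include <string.h>

/* Strip "--" comments in place, but only outside quoted literals. */
void
strip_sql_comments(char *query)
{
    char *p;
    char quote = 0;             /* 0, or the quote char we are inside */

    for (p = query; *p; p++)
    {
        if (quote)
        {
            if (*p == quote)
                quote = 0;      /* leaving the literal */
        }
        else if (*p == '\'' || *p == '"')
            quote = *p;         /* entering a literal */
        else if (p[0] == '-' && p[1] == '-')
        {
            char *eol = strchr(p, '\n');

            if (eol)
                memmove(p, eol, strlen(eol) + 1);  /* drop comment, keep newline */
            else
                *p = '\0';      /* comment runs to end of string */
        }
    }
}

Whether such a pass pays for itself is debatable, since the scanner must understand literals anyway — which is part of why the token-filter approach above ended up being the route taken.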
{
"msg_contents": "> You'll probably recall that the ambiguity between NOT NULL and NOT\n> DEFERRABLE gave us similar problems. We were able to get around that\n> by pretending NOT DEFERRABLE is an independent clause and leaving some\n> of the parsing work to be done by analyze.c, but I don't think that\n> trick will work here.\n> \n> I seem to recall a third case where a lookahead would have helped,\n> but can't find the details in the archives right now.\n> \n> I think it's time to bite the bullet and put in a lookahead filter.\n> What say you?\n\nHmmm. Not real excited about that for performance reasons. Other options?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 23:51:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UNION JOIN vs UNION SELECT"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I think it's time to bite the bullet and put in a lookahead filter.\n>> What say you?\n\n> Hmmm. Not real excited about that for performance reasons. Other options?\n\nIt's been in there for a month. I'll bet lunch you will be unable to\nmeasure any performance cost --- one extra function call and if-test\nper token lexed is just not going to show on the radar screen.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Oct 2000 23:59:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UNION JOIN vs UNION SELECT "
}
] |
[
{
"msg_contents": "CREATE FUNCTION CHANGE_PASSWORD (text, text) RETURNS bool AS '\nBEGIN\nALTER USER $1 WITH PASSWORD ''$2'';\nRETURN ''t'';\nEND;\n' LANGUAGE 'plpgsql';\n\nselect CHANGE_PASSWORD('USER','PASS');\n\nthis get me \"ERROR: parser: parse error at or near \"$1\"\n\ncould I make a function that not returns values?\n\[email protected]\n\n\n______________________________________________\nFREE Personalized Email at Mail.com\nSign up at http://www.mail.com/?sr=signup\n\n",
"msg_date": "Sat, 26 Aug 2000 19:38:00 -0400 (EDT)",
"msg_from": "Juan Carlos Perez Vazquez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Where is the problem?"
}
] |
[
{
"msg_contents": "It seems current cvs has a problem with --enable-multibyte\ngcc -c -I../../../include -O2 -Wall -Wmissing-prototypes -Wmissing-declarations mbutils.c -o mbutils.o\nmbutils.c:178: conflicting types for \u0010g_mb2wchar'\n../../../include/mb/pg_wchar.h:98: previous declaration of \u0010g_mb2wchar'\nmbutils.c:185: conflicting types for \u0010g_mb2wchar_with_len'\n../../../include/mb/pg_wchar.h:99: previous declaration of \u0010g_mb2wchar_with_len'\nmake[4]: *** [mbutils.o] Error 1\nmake[4]: Leaving directory /home/postgres/cvs/pgsql/src/backend/utils/mb'\nThis problem appears about week ago.\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sun, 27 Aug 2000 12:20:44 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "current cvs is broken with --enable-multibyte"
},
{
"msg_contents": "Sorry, forgot to check in some files. Should be fixed now.\n\n> It seems current cvs has a problem with --enable-multibyte\n> gcc -c -I../../../include -O2 -Wall -Wmissing-prototypes -Wmissing-declarations mbutils.c -o mbutils.o\n> mbutils.c:178: conflicting types for \u0010g_mb2wchar'\n> ../../../include/mb/pg_wchar.h:98: previous declaration of \u0010g_mb2wchar'\n> mbutils.c:185: conflicting types for \u0010g_mb2wchar_with_len'\n> ../../../include/mb/pg_wchar.h:99: previous declaration of \u0010g_mb2wchar_with_len'\n> make[4]: *** [mbutils.o] Error 1\n> make[4]: Leaving directory /home/postgres/cvs/pgsql/src/backend/utils/mb'\n> This problem appears about week ago.\n> \n> \tRegards,\n> \n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n",
"msg_date": "Sun, 27 Aug 2000 20:03:32 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: current cvs is broken with --enable-multibyte"
},
{
"msg_contents": "Thanks, it compiles now\n\n\tRegards,\n\t\t\n\t\tOleg\nOn Sun, 27 Aug 2000, Tatsuo Ishii wrote:\n\n> Date: Sun, 27 Aug 2000 20:03:32 +0900\n> From: Tatsuo Ishii <[email protected]>\n> To: [email protected]\n> Cc: [email protected]\n> Subject: Re: [HACKERS] current cvs is broken with --enable-multibyte\n> \n> Sorry, forgot to check in some files. Should be fixed now.\n> \n> > It seems current cvs has a problem with --enable-multibyte\n> > gcc -c -I../../../include -O2 -Wall -Wmissing-prototypes -Wmissing-declarations mbutils.c -o mbutils.o\n> > mbutils.c:178: conflicting types for \u0010g_mb2wchar'\n> > ../../../include/mb/pg_wchar.h:98: previous declaration of \u0010g_mb2wchar'\n> > mbutils.c:185: conflicting types for \u0010g_mb2wchar_with_len'\n> > ../../../include/mb/pg_wchar.h:99: previous declaration of \u0010g_mb2wchar_with_len'\n> > make[4]: *** [mbutils.o] Error 1\n> > make[4]: Leaving directory /home/postgres/cvs/pgsql/src/backend/utils/mb'\n> > This problem appears about week ago.\n> > \n> > \tRegards,\n> > \n> > \t\tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: [email protected], http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sun, 27 Aug 2000 17:55:41 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: current cvs is broken with --enable-multibyte"
}
] |
[
{
"msg_contents": "> > > It is trivial to connect clients and servers across an ssh-piped\n> > > connection. I'm not sure of the details as far as getting \n> things set up\n> > > to be automated for turnkey installations.\n> > > \n> > \n> > Thomas,\n> > \n> > Thanks for the thought. I just found something called \n> \"stunnel\" which may\n> > do the trick.\n> \n> Also look into \"vpnd\" - we're using it for a project for a \n> client until I\n> can get the SSL connection stuff working properly... (Hint, hint... It\n> would be nice if it was better documented :)\n\nDocs of the SSL stuff is coming up as soon as I get \"final approval\" of \nthe patch that brings SSL up to working (e.g. either applying or \nrejectnig :-). I have a very rough outline so far, but I don't want \nto put down too much work into it until I know I am documenting the \nright thing (the version that will eventually go in, that is).\n\nBut it's on it's way.\n\n//Magnus\n",
"msg_date": "Sun, 27 Aug 2000 11:57:50 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Access PostgreSQL server via SSL/Internet"
}
] |
[
{
"msg_contents": "Can someone comment on this?\n\n$ pg_dump -o regression >/dev/null\nCan not create pg_dump_oid table. Explanation from backend: 'ERROR: Illegal class name 'pg_dump_oid'\n\tThe 'pg_' name prefix is reserved for system catalogs\n'.\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 27 Aug 2000 20:11:42 +0900",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "pg_dump -o does not work on current"
},
{
"msg_contents": "At 20:11 27/08/00 +0900, [email protected] wrote:\n>Can someone comment on this?\n>\n>$ pg_dump -o regression >/dev/null\n>Can not create pg_dump_oid table. Explanation from backend: 'ERROR:\nIllegal class name 'pg_dump_oid'\n>\tThe 'pg_' name prefix is reserved for system catalogs\n\nI can fix this one...\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 27 Aug 2000 21:47:58 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump -o does not work on current"
}
] |
[
{
"msg_contents": "Hi!\n\nWhy does not work this?\n\nCREATE FUNCTION PWDCHG () RETURNS OPAQUE AS '\nBEGIN\nALTER USER utest WITH PASSWORD ''ptest'';\nEND;\n' LANGUAGE 'plpgsql';\n\nselect PWDCHG();\nERROR: typeidTypeRelid: Invalid type - oid = 0\n\n\nand this other?\n\nCREATE FUNCTION PWDCHG () RETURNS bool AS '\nBEGIN\nALTER USER utest WITH PASSWORD ''ptest'';\nRETURN ''t'';\nEND;\n' LANGUAGE 'plpgsql';\n\nselect PWDCHG();\nERROR: copyObject: don't know how to copy 646\n\nRegards,\nJuan Carlos.\n\n\n______________________________________________\nFREE Personalized Email at Mail.com\nSign up at http://www.mail.com/?sr=signup\n\n",
"msg_date": "Sun, 27 Aug 2000 07:19:44 -0400 (EDT)",
"msg_from": "Juan Carlos Perez Vazquez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why?"
},
{
"msg_contents": "\nOn Sun, 27 Aug 2000, Juan Carlos Perez Vazquez wrote:\n\n> Hi!\n> \n> Why does not work this?\n> \n> CREATE FUNCTION PWDCHG () RETURNS OPAQUE AS '\n> BEGIN\n> ALTER USER utest WITH PASSWORD ''ptest'';\n> END;\n> ' LANGUAGE 'plpgsql';\n> \n> select PWDCHG();\n> ERROR: typeidTypeRelid: Invalid type - oid = 0\n\nYou don't select functions that have return type\nOPAQUE. Think of functions returning opaque\nas procedures that cannot be called in a context\nwhere their return value is used.\nPlus, the utility commands aren't fully implemented\nin plpgsql in 7.0 (more below)\n\n> and this other?\n> \n> CREATE FUNCTION PWDCHG () RETURNS bool AS '\n> BEGIN\n> ALTER USER utest WITH PASSWORD ''ptest'';\n> RETURN ''t'';\n> END;\n> ' LANGUAGE 'plpgsql';\n> \n> select PWDCHG();\n> ERROR: copyObject: don't know how to copy 646\n\nIn 7.0 most of the utility commands are not \navailable in plpgsql. They should be available\nin 7.1. I don't know if any of the other pl\nlanguages had utility commands that worked in\n7.0 (I don't know tcl and didn't compile pl/perl),\nbut that's another possibility for how to do it.\n\n",
"msg_date": "Sun, 27 Aug 2000 13:25:19 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why?"
}
] |
[
{
"msg_contents": "> > Docs of the SSL stuff is coming up as soon as I get \"final \n> approval\" of \n> > the patch that brings SSL up to working (e.g. either applying or \n> > rejectnig :-). I have a very rough outline so far, but I don't want \n> > to put down too much work into it until I know I am documenting the \n> > right thing (the version that will eventually go in, that is).\n> \n> Your patch looked fine to me, the details can be hammered out later.\nOk. Great. That's what I needed to hear.\n\n\n> What I'd like to see is some at least informal documentation \n> on how to use\n> this at all. We can't put in any patches that we don't know \n> how to use.\n\nHere is a patch against the same cvs tree as the SSL patch (Aug 20). \nI hope I didn't mess the SGML up too bad, but somebody should definitly\nlook that over. I tried to steal as much as I could from around :-)\n\nThis patch updates:\n* Installation instructions (paragraph on how to compile with openssl)\n* Documentation of pg_hba.conf (added \"hostssl\" record docs)\n* Libpq documentation (added connection option, documentation of\n PQgetssl() function)\n* Add section on SSL to \"Server Runtime Environment\"\n\nIf you beleive any particular area needs more attention, please let me know.\n\n//Magnus",
"msg_date": "Sun, 27 Aug 2000 15:52:18 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Access PostgreSQL server via SSL/Internet"
},
{
"msg_contents": "Applied. Thanks.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> > > Docs of the SSL stuff is coming up as soon as I get \"final \n> > approval\" of \n> > > the patch that brings SSL up to working (e.g. either applying or \n> > > rejectnig :-). I have a very rough outline so far, but I don't want \n> > > to put down too much work into it until I know I am documenting the \n> > > right thing (the version that will eventually go in, that is).\n> > \n> > Your patch looked fine to me, the details can be hammered out later.\n> Ok. Great. That's what I needed to hear.\n> \n> \n> > What I'd like to see is some at least informal documentation \n> > on how to use\n> > this at all. We can't put in any patches that we don't know \n> > how to use.\n> \n> Here is a patch against the same cvs tree as the SSL patch (Aug 20). \n> I hope I didn't mess the SGML up too bad, but somebody should definitly\n> look that over. I tried to steal as much as I could from around :-)\n> \n> This patch updates:\n> * Installation instructions (paragraph on how to compile with openssl)\n> * Documentation of pg_hba.conf (added \"hostssl\" record docs)\n> * Libpq documentation (added connection option, documentation of\n> PQgetssl() function)\n> * Add section on SSL to \"Server Runtime Environment\"\n> \n> If you beleive any particular area needs more attention, please let me know.\n> \n> //Magnus\n> \n> \n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 29 Aug 2000 00:15:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] RE: Access PostgreSQL server via SSL/Internet"
},
{
"msg_contents": "Applied. Thanks. I always love doc patches.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> > > Docs of the SSL stuff is coming up as soon as I get \"final \n> > approval\" of \n> > > the patch that brings SSL up to working (e.g. either applying or \n> > > rejectnig :-). I have a very rough outline so far, but I don't want \n> > > to put down too much work into it until I know I am documenting the \n> > > right thing (the version that will eventually go in, that is).\n> > \n> > Your patch looked fine to me, the details can be hammered out later.\n> Ok. Great. That's what I needed to hear.\n> \n> \n> > What I'd like to see is some at least informal documentation \n> > on how to use\n> > this at all. We can't put in any patches that we don't know \n> > how to use.\n> \n> Here is a patch against the same cvs tree as the SSL patch (Aug 20). \n> I hope I didn't mess the SGML up too bad, but somebody should definitly\n> look that over. I tried to steal as much as I could from around :-)\n> \n> This patch updates:\n> * Installation instructions (paragraph on how to compile with openssl)\n> * Documentation of pg_hba.conf (added \"hostssl\" record docs)\n> * Libpq documentation (added connection option, documentation of\n> PQgetssl() function)\n> * Add section on SSL to \"Server Runtime Environment\"\n> \n> If you beleive any particular area needs more attention, please let me know.\n> \n> //Magnus\n> \n> \n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Oct 2000 23:24:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Access PostgreSQL server via SSL/Internet"
}
] |
[
{
"msg_contents": "\nEarlier this week, I reported getting core dumps with the following bt:\n\n(gdb) where\n#0 0x18271d90 in kill () from /usr/lib/libc.so.4\n#1 0x182b2e09 in abort () from /usr/lib/libc.so.4\n#2 0x80ee847 in s_lock_stuck (lock=0x20048065 \"\\001\", file=0x816723c \"spin.c\", line=127) at s_lock.c:51\n#3 0x80ee8c3 in s_lock (lock=0x20048065 \"\\001\", file=0x816723c \"spin.c\", line=127) at s_lock.c:80\n#4 0x80f1580 in SpinAcquire (lockid=7) at spin.c:127\n#5 0x80f3903 in LockRelease (lockmethod=1, locktag=0xbfbfe968, lockmode=1) at lock.c:1044\n\nI've been monitoring 'open files' on that machine, and after raising them\nto 8192, saw it hit \"Open Files Peak: 8179\" this morning and once more\nhave a dead database ...\n\nTom, you stated \"That sure looks like you'd better tweak your kernel\nsettings ... but offhand I don't see how it could lead to \"stuck spinlock\"\nerrors.\", so I'm wondering if maybe there is a bug, in that it should be\nhandling running out of FDs better?\n\nI just raised mine to 32k so that it *hopefully* never happens again, if I\nhit *that* many open files I'll be surprised ...\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n",
"msg_date": "Sun, 27 Aug 2000 12:53:14 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "[7.0.2] spinlock problems reported earlier ..."
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> I've been monitoring 'open files' on that machine, and after raising them\n> to 8192, saw it hit \"Open Files Peak: 8179\" this morning and once more\n> have a dead database ...\n\n> Tom, you stated \"That sure looks like you'd better tweak your kernel\n> settings ... but offhand I don't see how it could lead to \"stuck spinlock\"\n> errors.\", so I'm wondering if maybe there is a bug, in that it should be\n> handling running out of FDs better?\n\nAh-hah, now that I get to see the log file before it vanished, I have\na theory about how no FDs leads to stuck spinlock. The postmaster's own\nlog has\n\npostmaster: StreamConnection: accept: Too many open files in system\npostmaster: StreamConnection: accept: Too many open files in system\nFATAL 1: ReleaseLruFile: No open files available to be closed\n\nFATAL: s_lock(20048065) at spin.c:127, stuck spinlock. Aborting.\n\nFATAL: s_lock(20048065) at spin.c:127, stuck spinlock. Aborting.\n\n(more of same)\n\nwhile the backend log has a bunch of\n\nIpcSemaphoreLock: semop failed (Identifier removed) id=524288\nIpcSemaphoreLock: semop failed (Identifier removed) id=524288\nIpcSemaphoreLock: semop failed (Identifier removed) id=524288\nIpcSemaphoreLock: semop failed (Identifier removed) id=524288\n\n*followed by* the spinlock gripes.\n\nHere's my theory:\n\n1. Postmaster gets a connection, tries to read pg_hba.conf, which it\ndoes via AllocateFile(). On EMFILE failure that calls ReleaseLruFile,\nwhich elog()'s because in the postmaster environment there are not\ngoing to be any open virtual FDs to close.\n\n2. elog() inside the postmaster causes the postmaster to shut down.\nWhich it does faithfully, including cleaning up after itself, which\nincludes removing the semaphores it owns.\n\n3. Backends start falling over with semaphore-operation failures.\nThis is treated as a system-restart event (backend does proc_exit(255))\nbut there's no postmaster to kill the other backends and start a new\ncycle of life.\n\n4. At least one dying backend leaves the lock manager's spinlock locked\n(which it should not), so by and by we start to see stuck-spinlock\ngripes from backends that haven't yet tried to do a semop. But that's\npretty far down the cause-and-effect chain.\n\nIt looks to me like we have several things we want to do here.\n\n1. ReleaseLruFile() should not immediately elog() but should return \na failure code instead, allowing AllocateFile() to return NULL, which\nthe postmaster can handle more gracefully than it does an elog().\n\n2. ProcReleaseSpins() ought to be done by proc_exit(). Someone was lazy\nand hard-coded it into elog() instead.\n\n3. I think the real problem here is that the backends are opening too\ndamn many files. IIRC, FreeBSD is one of the platforms where\nsysconf(_SC_OPEN_MAX) will return a large number, which means that fd.c\nwill have no useful limit on the number of open files it eats up.\nIncreasing your kernel NFILES setting will just allow Postgres to eat\nup more FDs, and eventually (if you allow enough backends to run)\nyou'll be up against it again. Even if we manage to make Postgres\nitself fairly bulletproof against EMFILE failures, much of the rest\nof your system will be kayoed when PG is eating up every available\nkernel FD, so that is not the path to true happiness.\n\n(You might care to use lsof or some such to see just how many open\nfiles you have per backend. 
I bet it's a lot.)\n\nHmm, this is interesting: on HPUX, man sysconf(2) says that\nsysconf(_SC_OPEN_MAX) returns the max number of open files per process\n--- which is what fd.c assumes it means. But I see that on your FreeBSD\nbox, the sysconf man page defines it as\n\n _SC_OPEN_MAX\n The maximum number of open files per user id.\n\nwhich suggests that *on that platform* we need to divide by MAXBACKENDS.\nDoes anyone know of a more portable way to determine the appropriate\nnumber of open files per backend?\n\nOtherwise, we'll have to put some kind of a-priori sanity check on\nwhat we will believe from sysconf(). I don't much care for the idea of\nputting a hard-wired limit on max files per backend, but that might be\nthe quick-and-dirty answer.\n\nAnother possibility is to add a postmaster parameter \"max open files\nfor whole installation\", which we'd then divide by MAXBACKENDS to\ndetermine max files per backend, rather than trying to discover a\nsafe value on-the-fly.\n\nIn any case, I think we want something quick and dirty for a 7.0.*\nback-patch. Maybe just limiting what we believe from sysconf() to\n100 or so would be OK for a patch.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 27 Aug 2000 15:42:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Too many open files (was Re: spinlock problems reported earlier)"
},
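A rough sketch of fix (1) above, illustrative only (the real fd.c differs in detail): ReleaseLruFile() reports failure instead of elog()ing, and AllocateFile() turns FD exhaustion into a NULL return that the postmaster can survive. The nfile counter here merely stands in for fd.c's count of open virtual file descriptors.

#include <stdio.h>
#include <errno.h>

static int nfile = 0;           /* stand-in for fd.c's open-VFD count */

static int
ReleaseLruFile(void)
{
    if (nfile <= 0)
        return 0;               /* nothing open that we could close */
    /* ... close the least-recently-used virtual file descriptor ... */
    nfile--;
    return 1;
}

FILE *
AllocateFile(const char *name, const char *mode)
{
    FILE *file;

    while ((file = fopen(name, mode)) == NULL)
    {
        if (errno != EMFILE && errno != ENFILE)
            break;              /* failure not caused by FD exhaustion */
        if (!ReleaseLruFile())
            break;              /* no VFDs to free: return NULL, no elog() */
    }
    return file;
}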
{
"msg_contents": "On Sun, 27 Aug 2000, Tom Lane wrote:\n\n> Hmm, this is interesting: on HPUX, man sysconf(2) says that\n> sysconf(_SC_OPEN_MAX) returns the max number of open files per process\n> --- which is what fd.c assumes it means. But I see that on your FreeBSD\n> box, the sysconf man page defines it as\n> \n> _SC_OPEN_MAX\n> The maximum number of open files per user id.\n> \n> which suggests that *on that platform* we need to divide by MAXBACKENDS.\n> Does anyone know of a more portable way to determine the appropriate\n> number of open files per backend?\n\nOkay, I just checked out Solaris 8/x86, and it confirms what HP/ux thinks:\n\n _SC_OPEN_MAX OPEN_MAX Max open files per\n process\n\nI'm curious as to whether FreeBSD is the only one that doesn't follow this\n\"convention\"? I'm CCng in the FreeBSD Hackers mailing list to see if\nsomeone there might be able to shed some light on this ... my first\nthought, personally, would be to throw in some sort of:\n\n#ifdef __FreeBSD__\n max_files_per_backend = sysconf(_SC_OPEN_MAX) / num_of_backends;\n#else\n max_files_per_backend = sysconf(_SC_OPEN_MAX);\n#endif\n\n\n",
"msg_date": "Sun, 27 Aug 2000 20:31:43 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Too many open files (was Re: spinlock problems reported earlier)"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Okay, I just checked out Solaris 8/x86, and it confirms what HP/ux thinks:\n> _SC_OPEN_MAX OPEN_MAX Max open files per\n> process\n> I'm curious as to whether FreeBSD is the only one that doesn't follow this\n> \"convention\"?\n\nI've also confirmed that SunOS 4.1.4 (about as old-line BSD as it gets\nthese days) says _SC_OPEN_MAX is max per process. Furthermore,\nI notice that FreeBSD's description of sysctl(3) refers to a\nmax-files-per-process kernel parameter, but no max-files-per-userid\nparameter. Perhaps the entry in the FreeBSD sysconf(2) man page is\nmerely a typo?\n\nIf so, I still consider that FreeBSD returns an unreasonably large\nfraction of the kernel FD table size as the number of files one\nprocess is allowed to open.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Aug 2000 00:57:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Too many open files (was Re: spinlock problems reported earlier) "
},
{
"msg_contents": " The Hermit Hacker <[email protected]> writes:\n > Okay, I just checked out Solaris 8/x86, and it confirms what HP/ux thinks:\n > _SC_OPEN_MAX OPEN_MAX Max open files per\n > process\n > I'm curious as to whether FreeBSD is the only one that doesn't follow this\n > \"convention\"?\n\n From part of the NetBSD manpage for sysconf(3):\n\nDESCRIPTION\n This interface is defined by IEEE Std1003.1-1988 (``POSIX''). A far more\n complete interface is available using sysctl(3).\n\n _SC_OPEN_MAX\n The maximum number of open files per user id.\n\n _SC_STREAM_MAX\n The minimum maximum number of streams that a process may have\n open at any one time.\n\nBUGS\n The value for _SC_STREAM_MAX is a minimum maximum, and required to be the\n same as ANSI C's FOPEN_MAX, so the returned value is a ridiculously small\n and misleading number.\n\nSTANDARDS\n The sysconf() function conforms to IEEE Std1003.1-1990 (``POSIX'').\n\nHISTORY\n The sysconf function first appeared in 4.4BSD.\n\nThis suggests that _SC_STREAM_MAX might be a better value to use. On\none of my NetBSD boxes I have the following:\n\n_SC_OPEN_MAX: 64\n_SC_STREAM_MAX: 20\n\nIn any case, if this really follows the POSIX standard, perhaps\nPostgreSQL code should assume these semantics and work around other\ncases that don't follow the standard (instead of work around the POSIX\ncases).\n\nCheers,\nBrook\n",
"msg_date": "Mon, 28 Aug 2000 09:24:00 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems reported\n\tearlier)"
},
{
"msg_contents": "Brook Milligan <[email protected]> writes:\n> In any case, if this really follows the POSIX standard, perhaps\n> PostgreSQL code should assume these semantics and work around other\n> cases that don't follow the standard (instead of work around the POSIX\n> cases).\n\nHP asserts that *they* follow the POSIX standard, and in this case\nI'm more inclined to believe them than the *BSD camp. A per-process\nlimit on open files has existed in most Unices I've heard of; I had\nnever heard of a per-userid limit until yesterday. (And I'm not yet\nconvinced that that's actually what *BSD implements; are we sure it's\nnot just a typo in the man page?)\n\n64 or so for _SC_OPEN_MAX is not really what I'm worried about anyway.\nIIRC, we've heard reports that some platforms return values in the\nthousands, ie, essentially telling each process it can have the whole\nkernel FD table, and it's that behavior that I'm speculating is causing\nMarc's problem.\n\nMarc, could you check what is returned by sysconf(_SC_OPEN_MAX) on your\nbox? And/or check to see how many files each backend is actually\nholding open?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Aug 2000 13:32:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems reported\n\tearlier)"
},
{
"msg_contents": "On Mon, 28 Aug 2000, Tom Lane wrote:\n\n> Brook Milligan <[email protected]> writes:\n> > In any case, if this really follows the POSIX standard, perhaps\n> > PostgreSQL code should assume these semantics and work around other\n> > cases that don't follow the standard (instead of work around the POSIX\n> > cases).\n> \n> HP asserts that *they* follow the POSIX standard, and in this case\n> I'm more inclined to believe them than the *BSD camp. A per-process\n> limit on open files has existed in most Unices I've heard of; I had\n> never heard of a per-userid limit until yesterday. (And I'm not yet\n> convinced that that's actually what *BSD implements; are we sure it's\n> not just a typo in the man page?)\n> \n> 64 or so for _SC_OPEN_MAX is not really what I'm worried about anyway.\n> IIRC, we've heard reports that some platforms return values in the\n> thousands, ie, essentially telling each process it can have the whole\n> kernel FD table, and it's that behavior that I'm speculating is causing\n> Marc's problem.\n> \n> Marc, could you check what is returned by sysconf(_SC_OPEN_MAX) on your\n> box? And/or check to see how many files each backend is actually\n> holding open?\n\n> ./t\n4136\n\n\n> sysctl kern.maxfiles\nkern.maxfiles: 32768\n\n\n> cat t.c\n#include <stdio.h>\n#include <unistd.h>\n\nmain()\n{\n printf(\"%ld\\n\", sysconf(_SC_OPEN_MAX));\n}\n\nokay, slightly difficult since they come and go, but using the database\nthat is used for the search engine, with just a psql session:\n\npgsql# lsof -p 85333\nCOMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME\npostgres 85333 pgsql cwd VDIR 13,131088 3072 7936 /pgsql/data2/udmsearch\npostgres 85333 pgsql rtd VDIR 13,131072 512 2 /\npostgres 85333 pgsql txt VREG 13,131084 4651486 103175 /pgsql/bin/postgres\npostgres 85333 pgsql txt VREG 13,131076 77648 212924 /usr/libexec/ld-elf.so.1\npostgres 85333 pgsql txt VREG 13,131076 11860 56504 /usr/lib/libdescrypt.so.2\npostgres 85333 pgsql txt VREG 13,131076 120736 56525 /usr/lib/libm.so.2\npostgres 85333 pgsql txt VREG 13,131076 34336 56677 /usr/lib/libutil.so.3\npostgres 85333 pgsql txt VREG 13,131076 154128 57068 /usr/lib/libreadline.so.4\npostgres 85333 pgsql txt VREG 13,131076 270100 56532 /usr/lib/libncurses.so.5\npostgres 85333 pgsql txt VREG 13,131076 570064 56679 /usr/lib/libc.so.4\npostgres 85333 pgsql 0r VCHR 2,2 0t0 7967 /dev/null\npostgres 85333 pgsql 1w VREG 13,131084 995 762037 /pgsql/logs/postmaster.5432.61308\npostgres 85333 pgsql 2w VREG 13,131084 316488878 762038 /pgsql/logs/5432.61308\npostgres 85333 pgsql 3r VREG 13,131088 1752 8011 /pgsql/data2/udmsearch/pg_internal.init\npostgres 85333 pgsql 4u VREG 13,131084 22757376 15922 /pgsql/data/pg_log\npostgres 85333 pgsql 5u unix 0xd46a3300 0t0 ->0xd469a540\npostgres 85333 pgsql 6u VREG 13,131084 8192 15874 /pgsql/data/pg_variable\npostgres 85333 pgsql 7u VREG 13,131088 16384 7982 /pgsql/data2/udmsearch/pg_class\npostgres 85333 pgsql 8u VREG 13,131088 32768 7980 /pgsql/data2/udmsearch/pg_class_relname_index\npostgres 85333 pgsql 9u VREG 13,131088 81920 7985 /pgsql/data2/udmsearch/pg_attribute\npostgres 85333 pgsql 10u VREG 13,131088 65536 7983 /pgsql/data2/udmsearch/pg_attribute_relid_attnum_index\npostgres 85333 pgsql 11u VREG 13,131088 8192 7945 /pgsql/data2/udmsearch/pg_trigger\npostgres 85333 pgsql 12u VREG 13,131088 8192 7993 /pgsql/data2/udmsearch/pg_am\npostgres 85333 pgsql 13u VREG 13,131088 16384 7977 /pgsql/data2/udmsearch/pg_index\npostgres 85333 pgsql 14u VREG 13,131088 8192 7988 
/pgsql/data2/udmsearch/pg_amproc\npostgres 85333 pgsql 15u VREG 13,131088 16384 7991 /pgsql/data2/udmsearch/pg_amop\npostgres 85333 pgsql 16u VREG 13,131088 73728 7961 /pgsql/data2/udmsearch/pg_operator\npostgres 85333 pgsql 17u VREG 13,131088 16384 7976 /pgsql/data2/udmsearch/pg_index_indexrelid_index\npostgres 85333 pgsql 18u VREG 13,131088 32768 7960 /pgsql/data2/udmsearch/pg_operator_oid_index\npostgres 85333 pgsql 19u VREG 13,131088 16384 7976 /pgsql/data2/udmsearch/pg_index_indexrelid_index\npostgres 85333 pgsql 20u VREG 13,131088 16384 7942 /pgsql/data2/udmsearch/pg_trigger_tgrelid_index\npostgres 85333 pgsql 21u VREG 13,131084 8192 15921 /pgsql/data/pg_shadow\npostgres 85333 pgsql 22u VREG 13,131084 8192 15918 /pgsql/data/pg_database\npostgres 85333 pgsql 23u VREG 13,131088 8192 7952 /pgsql/data2/udmsearch/pg_rewrite\npostgres 85333 pgsql 24u VREG 13,131088 16384 7941 /pgsql/data2/udmsearch/pg_type\npostgres 85333 pgsql 25u VREG 13,131088 16384 7940 /pgsql/data2/udmsearch/pg_type_oid_index\npostgres 85333 pgsql 26u VREG 13,131088 0 7938 /pgsql/data2/udmsearch/pg_user\npostgres 85333 pgsql 27u VREG 13,131088 188416 7984 /pgsql/data2/udmsearch/pg_attribute_relid_attnam_index\npostgres 85333 pgsql 28u VREG 13,131088 65536 7959 /pgsql/data2/udmsearch/pg_operator_oprname_l_r_k_index\npostgres 85333 pgsql 29u VREG 13,131088 16384 7981 /pgsql/data2/udmsearch/pg_class_oid_index\npostgres 85333 pgsql 30u VREG 13,131088 40960 7948 /pgsql/data2/udmsearch/pg_statistic\npostgres 85333 pgsql 31u VREG 13,131088 32768 7947 /pgsql/data2/udmsearch/pg_statistic_relid_att_index\npostgres 85333 pgsql 32u VREG 13,131088 212992 7958 /pgsql/data2/udmsearch/pg_proc\npostgres 85333 pgsql 33u VREG 13,131088 49152 7957 /pgsql/data2/udmsearch/pg_proc_oid_index\n\n\nwhen running a vacuum on the database, the only changes appear to be\nadding (and removing when done) those tables that are currently being\nvacuumed ... so, it appears, ~48 or so files opened ...\n\n\n",
"msg_date": "Mon, 28 Aug 2000 14:56:28 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems\n\treported earlier)"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> cat t.c\n> #include <stdio.h>\n> #include <unistd.h>\n\n> main()\n> {\n> printf(\"%ld\\n\", sysconf(_SC_OPEN_MAX));\n> }\n\n>> ./t\n> 4136\n\nYup, there's our problem. Each backend will feel entitled to open up to\nabout 4100 files, assuming it manages to hit that many distinct tables/\nindexes during its run. You probably haven't got that many, but even\nseveral hundred files times a couple dozen backends would start pushing\nyour (previous) kernel FD limit.\n\nSo, at least on FreeBSD, we can't trust sysconf(_SC_OPEN_MAX) to tell us\nthe number we need.\n\nAn explicit parameter to the postmaster, setting the installation-wide\nopen file count (with default maybe about 50 * MaxBackends) is starting\nto look like a good answer to me. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Aug 2000 14:30:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems reported\n\tearlier)"
},
{
"msg_contents": "On Mon, 28 Aug 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> >> cat t.c\n> > #include <stdio.h>\n> > #include <unistd.h>\n> \n> > main()\n> > {\n> > printf(\"%ld\\n\", sysconf(_SC_OPEN_MAX));\n> > }\n> \n> >> ./t\n> > 4136\n> \n> Yup, there's our problem. Each backend will feel entitled to open up to\n> about 4100 files, assuming it manages to hit that many distinct tables/\n> indexes during its run. You probably haven't got that many, but even\n> several hundred files times a couple dozen backends would start pushing\n> your (previous) kernel FD limit.\n> \n> So, at least on FreeBSD, we can't trust sysconf(_SC_OPEN_MAX) to tell us\n> the number we need.\n> \n> An explicit parameter to the postmaster, setting the installation-wide\n> open file count (with default maybe about 50 * MaxBackends) is starting\n> to look like a good answer to me. Comments?\n\nOkay, if I understand correctly, this would just result in more I/O as far\nas having to close off \"unused files\" once that 50 limit is reached?\n\nWould it be installation-wide, or per-process? Ie. if I have 100 as\nmaxbackends, and set it to 1000, could one backend suck up all 1000, or\nwould each max out at 10? (note. I'm running with 192 backends right now,\nand have actually pushed it to run 188 simultaneously *grin*) ...\n\n\n\n",
"msg_date": "Mon, 28 Aug 2000 15:48:51 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems\n\treported earlier)"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> An explicit parameter to the postmaster, setting the installation-wide\n>> open file count (with default maybe about 50 * MaxBackends) is starting\n>> to look like a good answer to me. Comments?\n\n> Okay, if I understand correctly, this would just result in more I/O as far\n> as having to close off \"unused files\" once that 50 limit is reached?\n\nRight, the cost is extra close() and open() kernel calls to release FDs\ntemporarily.\n\n> Would it be installation-wide, or per-process? Ie. if I have 100 as\n> maxbackends, and set it to 1000, could one backend suck up all 1000, or\n> would each max out at 10?\n\nThe only straightforward implementation is to take the parameter, divide\nby MaxBackends, and allow each backend to have no more than that many\nfiles open. Any sort of dynamic allocation would require inter-backend\ncommunication, which is probably more trouble than it's worth to avoid\na few kernel calls.\n\n> (note. I'm running with 192 backends right now,\n> and have actually pushed it to run 188 simultaneously *grin*) ...\n\nLessee, 8192 FDs / 192 backends = 42 per backend. No wonder you were\nrunning out.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Aug 2000 15:04:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems reported\n\tearlier)"
},
{
"msg_contents": "On Mon, 28 Aug 2000, Tom Lane wrote:\n\n> The only straightforward implementation is to take the parameter,\n> divide by MaxBackends, and allow each backend to have no more than\n> that many files open. Any sort of dynamic allocation would require\n> inter-backend communication, which is probably more trouble than it's\n> worth to avoid a few kernel calls.\n\nagreed, just wanted to make sure ... sound great to me ...\n\n> > (note. I'm running with 192 backends right now,\n> > and have actually pushed it to run 188 simultaneously *grin*) ...\n> \n> Lessee, 8192 FDs / 192 backends = 42 per backend. No wonder you were\n> running out.\n\n*grin* I up'd it to 32k ... so far its max'd out at around 7175used ...\n\n\n",
"msg_date": "Mon, 28 Aug 2000 16:16:25 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems\n\treported earlier)"
},
{
"msg_contents": "Department of Things that Fell Through the Cracks:\n\nBack in August we had concluded that it is a bad idea to trust\n\"sysconf(_SC_OPEN_MAX)\" as an indicator of how many files each backend\ncan safely open. FreeBSD was reported to return 4136, and I have\nsince noticed that LinuxPPC returns 1024. Both of those are\nunreasonably large fractions of the actual kernel file table size.\nA few dozen backends opening hundreds of files apiece will fill the\nkernel file table on most Unix platforms.\n\nI'm not sure why this didn't get dealt with, but I think it's a \"must\nfix\" kind of problem for 7.1. The dbadmin has *got* to be able to\nlimit Postgres' appetite for open file descriptors.\n\nI propose we add a new configuration parameter, MAX_FILES_PER_PROCESS,\nwith a default value of about 100. A new backend would set its\nmax-files setting to the smaller of this parameter or\nsysconf(_SC_OPEN_MAX).\n\nAn alternative approach would be to make the parameter be total open files\nacross the whole installation, and divide it by MaxBackends to arrive at\nthe per-backend limit. However, it'd be much harder to pick a reasonable\ndefault value if we did it that way.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Dec 2000 17:11:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems reported\n\tearlier)"
},
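A sketch of the per-backend computation being proposed — the parameter name comes from the message above, but the function name (set_backend_fd_limit) and surrounding code are hypothetical, not actual fd.c source:

#include <unistd.h>

#define MAX_FILES_PER_PROCESS 100   /* proposed default for the new parameter */

static int max_files_per_backend;

/* Hypothetical startup hook: believe sysconf() only when it is
 * smaller than our own configured ceiling. */
static void
set_backend_fd_limit(void)
{
    long sys_max = sysconf(_SC_OPEN_MAX);

    max_files_per_backend = MAX_FILES_PER_PROCESS;
    if (sys_max > 0 && sys_max < (long) max_files_per_backend)
        max_files_per_backend = (int) sys_max;
}

The alternative scheme mentioned — an installation-wide total — would instead compute total_files / MaxBackends here, which is what makes a reasonable default harder to choose.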
{
"msg_contents": "Tom Lane wrote:\n> \n> Department of Things that Fell Through the Cracks:\n> \n> Back in August we had concluded that it is a bad idea to trust\n> \"sysconf(_SC_OPEN_MAX)\" as an indicator of how many files each backend\n> can safely open. FreeBSD was reported to return 4136, and I have\n> since noticed that LinuxPPC returns 1024. Both of those are\n> unreasonably large fractions of the actual kernel file table size.\n> A few dozen backends opening hundreds of files apiece will fill the\n> kernel file table on most Unix platforms.\n> \n> I'm not sure why this didn't get dealt with, but I think it's a \"must\n> fix\" kind of problem for 7.1. The dbadmin has *got* to be able to\n> limit Postgres' appetite for open file descriptors.\n> \n> I propose we add a new configuration parameter, MAX_FILES_PER_PROCESS,\n> with a default value of about 100. A new backend would set its\n> max-files setting to the smaller of this parameter or\n> sysconf(_SC_OPEN_MAX).\n> \n> An alternative approach would be to make the parameter be total open files\n> across the whole installation, and divide it by MaxBackends to arrive at\n> the per-backend limit. However, it'd be much harder to pick a reasonable\n> default value if we did it that way.\n> \n> Comments?\n\nOn Linux, at least, the 1024 file limit is a per process limit, the\nsystem wide limit defaults to 4096 and can be easily changed by \n\necho 16384 > /proc/sys/fs/file-max\n\n(16384 is arbitrary and can be much larger)\n\nI am all for having the ability to tune behavior over the system\nreported values, but I think it should be an option which defaults to\nthe previous behavior.\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Sat, 23 Dec 2000 17:38:24 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Too many open files (was Re: spinlock problems reported earlier)"
},
{
"msg_contents": "Tom Lane writes:\n\n> I'm not sure why this didn't get dealt with, but I think it's a \"must\n> fix\" kind of problem for 7.1. The dbadmin has *got* to be able to\n> limit Postgres' appetite for open file descriptors.\n\nUse ulimit.\n\n> I propose we add a new configuration parameter, MAX_FILES_PER_PROCESS,\n> with a default value of about 100. A new backend would set its\n> max-files setting to the smaller of this parameter or\n> sysconf(_SC_OPEN_MAX).\n\nI think this is an unreasonable interference with the customary operating\nsystem interfaces (e.g., ulimit). The last thing I want to hear is\n\"Postgres is slow and it only opens 100 files per process even though I\n<did something> to allow 32 million.\"\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 24 Dec 2000 00:06:29 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems\n\treported earlier)"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> I'm not sure why this didn't get dealt with, but I think it's a \"must\n>> fix\" kind of problem for 7.1. The dbadmin has *got* to be able to\n>> limit Postgres' appetite for open file descriptors.\n\n> Use ulimit.\n\nEven if ulimit exists and is able to control that parameter on a given\nplatform (highly unportable assumptions), it's not really a workable\nanswer. fd.c has to stop short of using up all of the actual nfile\nlimit, or else stuff like the dynamic loader is likely to fail.\n\n> I think this is an unreasonable interference with the customary operating\n> system interfaces (e.g., ulimit). The last thing I want to hear is\n> \"Postgres is slow and it only opens 100 files per process even though I\n> <did something> to allow 32 million.\"\n\n(1) A dbadmin who hasn't read the run-time configuration doc page (that\nyou did such a nice job with) is going to have lots of performance\nissues besides this one.\n\n(2) The last thing *I* want to hear is stories of a default Postgres\ninstallation causing system-wide instability. But if we don't insert\nan open-files limit that's tighter than the \"customary operating system\nlimit\", that's exactly the situation we have, at least on several\npopular platforms.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Dec 2000 18:12:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems reported\n\tearlier)"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Maybe a setting that controls the total number of files that postmaster\n> plus backends can allocate among them would be useful.\n\nThat'd be nice if we could do it, but I don't see any inexpensive way\nto get one backend to release an open FD when another one needs one.\nSo, divvying up the limit on an N-per-backend basis seems like the\nmost workable approach.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Dec 2000 18:42:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems reported\n\tearlier)"
},
{
"msg_contents": "Maybe a setting that controls the total number of files that postmaster\nplus backends can allocate among them would be useful. If you have a per\nbackend setting then that sort of assumes lots of clients with relatively\nlittle usage. Which is probably true in many cases, but not in all.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 24 Dec 2000 00:42:53 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems\n\treported earlier)"
},
{
"msg_contents": "> (1) A dbadmin who hasn't read the run-time configuration doc page (that\n> you did such a nice job with) is going to have lots of performance\n> issues besides this one.\n> \n> (2) The last thing *I* want to hear is stories of a default Postgres\n> installation causing system-wide instability. But if we don't insert\n> an open-files limit that's tighter than the \"customary operating system\n> limit\", that's exactly the situation we have, at least on several\n> popular platforms.\n\nIMHO, let's remember we keep a cache of file descriptors open for\nperformance. How many file do we really need open in the cache? I can't\nimagine any performance reason to have hundreds of open file descriptors\ncached. A file open is not that big a deal.\n\nJust because the OS says we can open 1000 files doesn't mean we should\nopen them just to keep a nice cache.\n\nWe are keeping them open just for performance reasons, not because we\nactually need them to get work done.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 23 Dec 2000 19:05:03 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems reported\n\tearlier)"
},
{
"msg_contents": "* Tom Lane <[email protected]> [001223 14:16] wrote:\n> Department of Things that Fell Through the Cracks:\n> \n> Back in August we had concluded that it is a bad idea to trust\n> \"sysconf(_SC_OPEN_MAX)\" as an indicator of how many files each backend\n> can safely open. FreeBSD was reported to return 4136, and I have\n> since noticed that LinuxPPC returns 1024. Both of those are\n> unreasonably large fractions of the actual kernel file table size.\n> A few dozen backends opening hundreds of files apiece will fill the\n> kernel file table on most Unix platforms.\n\ngetdtablesize(2) on BSD should tell you the per-process limit.\nsysconf on FreeBSD shouldn't lie to you.\n\ngetdtablesize should take into account limits in place.\n\nlater versions of FreeBSD have a sysctl 'kern.openfiles' which\ncan be checked to see if the system is approaching the systemwide\nlimit.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Sat, 23 Dec 2000 16:24:17 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems reported\n\tearlier)"
},
{
"msg_contents": "> Department of Things that Fell Through the Cracks:\n> \n> Back in August we had concluded that it is a bad idea to trust\n> \"sysconf(_SC_OPEN_MAX)\" as an indicator of how many files each backend\n> can safely open. FreeBSD was reported to return 4136, and I have\n> since noticed that LinuxPPC returns 1024. Both of those are\n> unreasonably large fractions of the actual kernel file table size.\n> A few dozen backends opening hundreds of files apiece will fill the\n> kernel file table on most Unix platforms.\n> \n> I'm not sure why this didn't get dealt with, but I think it's a \"must\n> fix\" kind of problem for 7.1. The dbadmin has *got* to be able to\n> limit Postgres' appetite for open file descriptors.\n> \n> I propose we add a new configuration parameter, MAX_FILES_PER_PROCESS,\n> with a default value of about 100. A new backend would set its\n> max-files setting to the smaller of this parameter or\n> sysconf(_SC_OPEN_MAX).\n\nSeems nice idea. We have been heard lots of problem reports caused by\nruuning out of the file table.\n\nHowever it would be even nicer, if it could be configurable at runtime\n(at the postmaster starting up time) like -N option. Maybe\nMAX_FILES_PER_PROCESS can be a hard limit?\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 24 Dec 2000 11:42:45 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems\n\treported earlier)"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> I propose we add a new configuration parameter, MAX_FILES_PER_PROCESS,\n>> with a default value of about 100. A new backend would set its\n>> max-files setting to the smaller of this parameter or\n>> sysconf(_SC_OPEN_MAX).\n\n> Seems nice idea. We have been heard lots of problem reports caused by\n> ruuning out of the file table.\n\n> However it would be even nicer, if it could be configurable at runtime\n> (at the postmaster starting up time) like -N option.\n\nYes, what I meant was a GUC parameter named MAX_FILES_PER_PROCESS.\nYou could set it via postmaster.opts or postmaster command line switch.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Dec 2000 22:58:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Too many open files (was Re: spinlock problems reported\n\tearlier)"
}
] |
[
{
"msg_contents": " Date: Sunday, August 27, 2000 @ 15:00:22\nAuthor: petere\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql\n from hub.org:/home/projects/pgsql/tmp/cvs-serv76663\n\nModified Files:\n\taclocal.m4 configure configure.in \n\n----------------------------- Log Message -----------------------------\n\nRemove configure tests for `signed', `volatile', and signal handler args;\nthe harm potential outweighs the possible benefits.\n\n",
"msg_date": "Sun, 27 Aug 2000 15:00:22 -0400 (EDT)",
"msg_from": "Peter Eisentraut - PostgreSQL <petere>",
"msg_from_op": true,
"msg_subject": "pgsql (aclocal.m4 configure configure.in)"
},
{
"msg_contents": "Peter Eisentraut - PostgreSQL <[email protected]> writes:\n> Remove configure tests for `signed', `volatile', and signal handler args;\n> the harm potential outweighs the possible benefits.\n\nAhem. Should this change not have been discussed *before* making it?\nI am sure you have just broken some ports to older systems. You need\nto justify doing that. Where is the \"harm potential\"?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 27 Aug 2000 16:13:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "signed, volatile, etc"
}
] |
[
{
"msg_contents": "Those of you with long memories may recall a benchmark that Edmund Mergl\ndrew our attention to back in May '99. That test showed extremely slow\nperformance for updating a table with many indexes (about 20). At the\ntime, it seemed the problem was due to bad performance of btree with\nmany equal keys, so I thought I'd go back and retry the benchmark after\nthis latest round of btree hackery.\n\nThe good news is that btree itself seems to be pretty well fixed; the\nbad news is that the benchmark is still slow for large numbers of rows.\nThe problem is I/O: the CPU mostly sits idle waiting for the disk.\nAs best I can tell, the difficulty is that the working set of pages\nneeded to update this many indexes is too large compared to the number\nof disk buffers Postgres is using. (I was running with -B 1000 and\nlooking at behavior for a 100000-row test table. This gave me a table\nsize of 3876 pages, plus 11526 pages in 20 indexes.)\n\nOf course, there's only so much we can do when the number of buffers\nis too small, but I still started to wonder if we are using the buffers\nas effectively as we can. Some tracing showed that most of the pages\nof the indexes were being read and written multiple times within a\nsingle UPDATE query, while most of the pages of the table proper were\nfetched and written only once. That says we're not using the buffers\nas well as we could; the index pages are not being kept in memory when\nthey should be. In a query like this, we should displace main-table\npages sooner to allow keeping more index pages in cache --- but with\nthe simple LRU replacement method we use, once a page has been loaded\nit will stay in cache for at least the next NBuffers (-B) page\nreferences, no matter what. With a large NBuffers that's a long time.\n\nI've come across an interesting article:\n\tThe LRU-K Page Replacement Algorithm For Database Disk Buffering\n\tElizabeth J. O'Neil, Patrick E. O'Neil, Gerhard Weikum\n\tProceedings of the 1993 ACM SIGMOD international conference\n\ton Management of Data, May 1993\n(If you subscribe to the ACM digital library, you can get a PDF of this\nfrom there.) This article argues that standard LRU buffer management is\ninherently not great for database caches, and that it's much better to\nreplace pages on the basis of time since the K'th most recent reference,\nnot just time since the most recent one. K=2 is enough to get most of\nthe benefit. The big win is that you are measuring an actual page\ninterreference time (between the last two references) and not just\ndealing with a lower-bound guess on the interreference time. Frequently\nused pages are thus much more likely to stay in cache.\n\nIt looks like it wouldn't take too much work to replace shared buffers\non the basis of LRU-2 instead of LRU, so I'm thinking about trying it.\n\nHas anyone looked into this area? Is there a better method to try?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 27 Aug 2000 20:05:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Possible performance improvement: buffer replacement policy"
},
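For concreteness, here is a minimal C sketch of the LRU-2 victim selection described in the message above, under simplifying assumptions: a fixed array of buffer headers, a global logical clock bumped on every reference, and no pinning, locking, or free list. The names (BufDesc, note_reference, choose_victim) are illustrative rather than actual bufmgr code, and the paper's "correlated reference period" refinement is omitted.

#define NBUFFERS 1000

typedef struct BufDesc
{
    unsigned long last_ref;    /* time of most recent reference, 0 = never */
    unsigned long prev_ref;    /* time of 2nd most recent reference, 0 = never */
} BufDesc;

static BufDesc buffers[NBUFFERS];
static unsigned long clock_now = 0;

/* record a reference to buffer i */
static void
note_reference(int i)
{
    buffers[i].prev_ref = buffers[i].last_ref;
    buffers[i].last_ref = ++clock_now;
}

/*
 * Pick the replacement victim: the buffer whose SECOND most recent
 * reference is oldest.  A buffer touched only once has prev_ref == 0,
 * i.e. an infinitely old second reference, so pages read a single time
 * (like the main-table pages above) get evicted before frequently
 * referenced index pages.
 */
static int
choose_victim(void)
{
    unsigned long oldest = ~0UL;
    int victim = 0;
    int i;

    for (i = 0; i < NBUFFERS; i++)
    {
        if (buffers[i].prev_ref < oldest)
        {
            oldest = buffers[i].prev_ref;
            victim = i;
        }
    }
    return victim;
}

Plain LRU would compare last_ref instead; LRU-2 changes only which timestamp drives the comparison, which is why retrofitting it onto the existing buffer manager looked cheap.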
{
"msg_contents": "> (If you subscribe to the ACM digital library, you can get a PDF of this\n> from there.) This article argues that standard LRU buffer management is\n> inherently not great for database caches, and that it's much better to\n> replace pages on the basis of time since the K'th most recent reference,\n> not just time since the most recent one. K=2 is enough to get most of\n> the benefit. The big win is that you are measuring an actual page\n> interreference time (between the last two references) and not just\n> dealing with a lower-bound guess on the interreference time. Frequently\n> used pages are thus much more likely to stay in cache.\n> \n> It looks like it wouldn't take too much work to replace shared buffers\n> on the basis of LRU-2 instead of LRU, so I'm thinking about trying it.\n> \n> Has anyone looked into this area? Is there a better method to try?\n\nSounds like a perfect idea. Good luck. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 11:41:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible performance improvement: buffer replacement policy"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> It looks like it wouldn't take too much work to replace shared buffers\n>> on the basis of LRU-2 instead of LRU, so I'm thinking about trying it.\n>> \n>> Has anyone looked into this area? Is there a better method to try?\n\n> Sounds like a perfect idea. Good luck. :-)\n\nActually, the idea went down in flames :-(, but I neglected to report\nback to pghackers about it. I did do some code to manage buffers as\nLRU-2. I didn't have any good performance test cases to try it with,\nbut Richard Brosnahan was kind enough to re-run the TPC tests previously\npublished by Great Bridge with that code in place. Wasn't any faster,\nin fact possibly a little slower, likely due to the extra CPU time spent\non buffer freelist management. It's possible that other scenarios might\nshow a better result, but right now I feel pretty discouraged about the\nLRU-2 idea and am not pursuing it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 11:49:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possible performance improvement: buffer replacement policy "
},
{
"msg_contents": "\nTom, did we ever test this? I think we did and found that it was the\nsame or worse, right?\n\n> Those of you with long memories may recall a benchmark that Edmund Mergl\n> drew our attention to back in May '99. That test showed extremely slow\n> performance for updating a table with many indexes (about 20). At the\n> time, it seemed the problem was due to bad performance of btree with\n> many equal keys, so I thought I'd go back and retry the benchmark after\n> this latest round of btree hackery.\n> \n> The good news is that btree itself seems to be pretty well fixed; the\n> bad news is that the benchmark is still slow for large numbers of rows.\n> The problem is I/O: the CPU mostly sits idle waiting for the disk.\n> As best I can tell, the difficulty is that the working set of pages\n> needed to update this many indexes is too large compared to the number\n> of disk buffers Postgres is using. (I was running with -B 1000 and\n> looking at behavior for a 100000-row test table. This gave me a table\n> size of 3876 pages, plus 11526 pages in 20 indexes.)\n> \n> Of course, there's only so much we can do when the number of buffers\n> is too small, but I still started to wonder if we are using the buffers\n> as effectively as we can. Some tracing showed that most of the pages\n> of the indexes were being read and written multiple times within a\n> single UPDATE query, while most of the pages of the table proper were\n> fetched and written only once. That says we're not using the buffers\n> as well as we could; the index pages are not being kept in memory when\n> they should be. In a query like this, we should displace main-table\n> pages sooner to allow keeping more index pages in cache --- but with\n> the simple LRU replacement method we use, once a page has been loaded\n> it will stay in cache for at least the next NBuffers (-B) page\n> references, no matter what. With a large NBuffers that's a long time.\n> \n> I've come across an interesting article:\n> \tThe LRU-K Page Replacement Algorithm For Database Disk Buffering\n> \tElizabeth J. O'Neil, Patrick E. O'Neil, Gerhard Weikum\n> \tProceedings of the 1993 ACM SIGMOD international conference\n> \ton Management of Data, May 1993\n> (If you subscribe to the ACM digital library, you can get a PDF of this\n> from there.) This article argues that standard LRU buffer management is\n> inherently not great for database caches, and that it's much better to\n> replace pages on the basis of time since the K'th most recent reference,\n> not just time since the most recent one. K=2 is enough to get most of\n> the benefit. The big win is that you are measuring an actual page\n> interreference time (between the last two references) and not just\n> dealing with a lower-bound guess on the interreference time. Frequently\n> used pages are thus much more likely to stay in cache.\n> \n> It looks like it wouldn't take too much work to replace shared buffers\n> on the basis of LRU-2 instead of LRU, so I'm thinking about trying it.\n> \n> Has anyone looked into this area? Is there a better method to try?\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 19 Jan 2001 12:03:58 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible performance improvement: buffer replacement policy"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, did we ever test this? I think we did and found that it was the\n> same or worse, right?\n\nI tried it and didn't see any noticeable improvement on the particular\ntest case I was using, so I got discouraged and didn't pursue the idea\nfurther. I'd like to come back to it someday, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Jan 2001 12:45:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possible performance improvement: buffer replacement policy "
},
{
"msg_contents": "\nI will throw the email into the optimizer TODO.detail file.\n\n> Bruce Momjian <[email protected]> writes:\n> > Tom, did we ever test this? I think we did and found that it was the\n> > same or worse, right?\n> \n> I tried it and didn't see any noticeable improvement on the particular\n> test case I was using, so I got discouraged and didn't pursue the idea\n> further. I'd like to come back to it someday, though.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 19 Jan 2001 12:48:46 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible performance improvement: buffer replacement policy"
},
{
"msg_contents": "\nThrew it into TODO.detail performance, not optimizer.\n\n> Bruce Momjian <[email protected]> writes:\n> > Tom, did we ever test this? I think we did and found that it was the\n> > same or worse, right?\n> \n> I tried it and didn't see any noticeable improvement on the particular\n> test case I was using, so I got discouraged and didn't pursue the idea\n> further. I'd like to come back to it someday, though.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 19 Jan 2001 12:52:11 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible performance improvement: buffer replacement policy"
},
{
"msg_contents": "On Fri, Jan 19, 2001 at 12:03:58PM -0500, Bruce Momjian wrote:\n> \n> Tom, did we ever test this? I think we did and found that it was the\n> same or worse, right?\n\n(Funnily enough, I just read that message:)\n\nTo: Bruce Momjian <[email protected]>\ncc: [email protected]\nSubject: Re: [HACKERS] Possible performance improvement: buffer replacement policy \nIn-reply-to: <[email protected]> \nReferences: <[email protected]>\nComments: In-reply-to Bruce Momjian <[email protected]>\n\tmessage dated \"Mon, 16 Oct 2000 11:41:41 -0400\"\nDate: Mon, 16 Oct 2000 11:49:52 -0400\nMessage-ID: <[email protected]>\nFrom: Tom Lane <[email protected]>\nX-Mailing-List: [email protected]\nPrecedence: bulk\nSender: [email protected]\nStatus: RO\nContent-Length: 947\nLines: 19\n\nBruce Momjian <[email protected]> writes:\n>> It looks like it wouldn't take too much work to replace shared buffers\n>> on the basis of LRU-2 instead of LRU, so I'm thinking about trying it.\n>> \n>> Has anyone looked into this area? Is there a better method to try?\n\n> Sounds like a perfect idea. Good luck. :-)\n\nActually, the idea went down in flames :-(, but I neglected to report\nback to pghackers about it. I did do some code to manage buffers as\nLRU-2. I didn't have any good performance test cases to try it with,\nbut Richard Brosnahan was kind enough to re-run the TPC tests previously\npublished by Great Bridge with that code in place. Wasn't any faster,\nin fact possibly a little slower, likely due to the extra CPU time spent\non buffer freelist management. It's possible that other scenarios might\nshow a better result, but right now I feel pretty discouraged about the\nLRU-2 idea and am not pursuing it.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 19 Jan 2001 17:53:28 +0000",
"msg_from": "Patrick Welche <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible performance improvement: buffer replacement policy"
},
{
"msg_contents": "I added this too to TODO.detail/performance.\n\n> On Fri, Jan 19, 2001 at 12:03:58PM -0500, Bruce Momjian wrote:\n> > \n> > Tom, did we ever test this? I think we did and found that it was the\n> > same or worse, right?\n> \n> (Funnily enough, I just read that message:)\n> \n> To: Bruce Momjian <[email protected]>\n> cc: [email protected]\n> Subject: Re: [HACKERS] Possible performance improvement: buffer replacement policy \n> In-reply-to: <[email protected]> \n> References: <[email protected]>\n> Comments: In-reply-to Bruce Momjian <[email protected]>\n> \tmessage dated \"Mon, 16 Oct 2000 11:41:41 -0400\"\n> Date: Mon, 16 Oct 2000 11:49:52 -0400\n> Message-ID: <[email protected]>\n> From: Tom Lane <[email protected]>\n> X-Mailing-List: [email protected]\n> Precedence: bulk\n> Sender: [email protected]\n> Status: RO\n> Content-Length: 947\n> Lines: 19\n> \n> Bruce Momjian <[email protected]> writes:\n> >> It looks like it wouldn't take too much work to replace shared buffers\n> >> on the basis of LRU-2 instead of LRU, so I'm thinking about trying it.\n> >> \n> >> Has anyone looked into this area? Is there a better method to try?\n> \n> > Sounds like a perfect idea. Good luck. :-)\n> \n> Actually, the idea went down in flames :-(, but I neglected to report\n> back to pghackers about it. I did do some code to manage buffers as\n> LRU-2. I didn't have any good performance test cases to try it with,\n> but Richard Brosnahan was kind enough to re-run the TPC tests previously\n> published by Great Bridge with that code in place. Wasn't any faster,\n> in fact possibly a little slower, likely due to the extra CPU time spent\n> on buffer freelist management. It's possible that other scenarios might\n> show a better result, but right now I feel pretty discouraged about the\n> LRU-2 idea and am not pursuing it.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 19 Jan 2001 13:00:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible performance improvement: buffer replacement policy"
}
] |
[
{
"msg_contents": "\n> Since I don't see Chevrolet changing their name to Chevy just because\n> everyone calls it that, or Christopher Chris, Marcus Marc, Robert Bob,\n> etc. why do you feel it necessary to change PostgreSQL to Postgres? \n\nI like this interpretation of short name vs long name, but Marc sais we\n(referring to PostgreSQL by short name) are \"evil\".\n\nI think what we want is, that you easily find our webpage/product if you \nonly know the short name.\n\nAndreas\n",
"msg_date": "Mon, 28 Aug 2000 12:18:30 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: How Do You Pronounce \"PostgreSQL\"?"
}
] |
[
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> Peter Eisentraut - PostgreSQL <[email protected]> writes:\n>>>> Remove configure tests for `signed', `volatile', and signal handler args;\n>>>> the harm potential outweighs the possible benefits.\n>> \n>> Ahem. Should this change not have been discussed *before* making it?\n\n> Well, I can't really do more than ask and wait for objections, can I?\n\nIf you did, I didn't see it.\n\n> The problem with omitting \"signed\" is that it is simply not correct to\n> just omit it. Consider \"signed char\" where char is unsigned by default.\n\nOK, so we *may* have problems on machines where (a) the compiler doesn't\ngrok \"signed\" AND (b) char is unsigned by default. Your change does not\nimprove matters at all for these machines; the only way to fix it (if\nanything needs fixed) is to change the code. On the other hand, you\njust broke Postgres for machines where the compiler doesn't do \"signed\"\nand char is signed --- which I believe is a larger population. For\neveryone else, it's a no-op.\n\nRegardless of the number of machines involved, breaking things for one\ngroup without improving matters elsewhere is not a net forward step in\nportability in my mind.\n\n> The problem with volatile is similar: omitting volatile breaks the program\n> semantics.\n\nAgain, same comments. You haven't fixed any platforms, and you may have\nbroken some.\n\n> For signal handlers, three things: firstly this thing has been hard-coded,\n> no way to for users to change it except for hand-editing, and there is no\n> known case where anyone had to do that. If it is a problem then we need to\n> automate that check.\n\nAgreed, there is not currently any known system where the macro would\nneed to be changed. But what's the point of removing the macro? You\nhaven't improved portability, nor improved readability (if anything,\nyou have hurt it --- formerly it was easy to tell that a routine was\nintended as a signal handler, or to search for all signal handlers).\nYou've merely ensured that if anyone ever does have a problem in this\narea, they will need to undo what you did on their way to fixing it.\n\nI think all three of these changes were ill-considered.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Aug 2000 09:45:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: signed, volatile, etc "
}
] |
[
{
"msg_contents": "\nI'm getting:\n\n\"Warning: Unable to connect to PostgresSQL server: Sorry, too many clients \nalready in story_noticias.php on line 7\"\n\nI'm RTFM'ing rigth now, but if anyone can help me, please do !!!!\n\n\nsergio\n\n",
"msg_date": "Mon, 28 Aug 2000 13:08:59 -0300",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "too many clients"
},
{
"msg_contents": "On Mon, 28 Aug 2000 [email protected] wrote:\n\n> \n> I'm getting:\n> \n> \"Warning: Unable to connect to PostgresSQL server: Sorry, too many clients \n> already in story_noticias.php on line 7\"\n> \n> I'm RTFM'ing rigth now, but if anyone can help me, please do !!!!\n\nIncrease the -N parameter to allow more clients to connect ...\n\n\n",
"msg_date": "Mon, 28 Aug 2000 14:26:41 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: too many clients"
}
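For anyone hitting the same warning: -N is the postmaster switch that sets the maximum number of backend connections (default 32), so the fix is to restart with something like "postmaster -N 64 -B 128 -D $PGDATA". The numbers are only an example; note that the postmaster requires -B (shared buffers) to be at least twice -N. It is also worth checking whether the PHP application is opening persistent connections faster than it releases them.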
] |
[
{
"msg_contents": "\n(Please cc: me in any replies. I'm not on the mailing list ATM, but\n I'd be happy to subscribe if that's preferred or if this turns out to\n be worth pursuing.)\n\nWe've been toying with switching to using SQL for the financial engine\nin GnuCash for a while, and eventually we'll almost certainly add\nsupport for an SQL backend as an option, but we haven't gone that\nroute yet because for individuals using the program (i.e. people who\njust want something like Quicken/MSmoney/etc.), we don't feel it's\nreasonable to require them to handle installing and maintaining an SQL\nserver just to do their checkbook.\n\nHowever, if, for the single users, we could find an SQL system that\nwould (like sleepcat) allow us to keep the database in a local file,\nand not run a global server[1], we'd be set. We could use that for\nsingle-users and then support maxsql/postgresql for users who\nwant/need more power.\n\n[1] If a one-process solution would be too hard, we'd probably be fine\n with hacks like just automatically launching the server as the\n user whenever the user launches the app, and then having that\n dedicated server talk to the app exclusively via FS sockets or\n whatever, and manage its database in one of the user's\n directories.\n\nSo what I'd like to ask is this:\n\n (1) Are there any plans to add anything like this?\n\n (2) How hard do you think it would be for an outsider to add this\n feature as an option, and if someone did, would you be likely to\n be interested in incorporating the result upstream?\n\nThanks\n\n-- \nRob Browning <[email protected]> PGP=E80E0D04F521A094 532B97F5D64E3930\n <[email protected]> <[email protected]> <[email protected]>\n",
"msg_date": "28 Aug 2000 15:49:06 -0500",
"msg_from": "Rob Browning <[email protected]>",
"msg_from_op": true,
"msg_subject": "How hard would a \"no global server\" version be?"
},
{
"msg_contents": "> So what I'd like to ask is this:\n> (1) Are there any plans to add anything like this?\n\nNot specifically. Postgres is a full-up database, and afaik there isn't\na contingent of our developer community which is sufficiently interested\nto pursue \"mini\" configurations. But...\n\n> (2) How hard do you think it would be for an outsider to add this\n> feature as an option, and if someone did, would you be likely to\n> be interested in incorporating the result upstream?\n\nin the environments I'm familiar with (e.g. RH/Mandrake with PostgreSQL\nand Gnome), it would be pretty easy to wrap the Postgres libraries and\nbackend to be a \"standalone server\" application. When you start a\n\"postmaster\", you can specify the listener port number, database\nlocation, etc, and on specific systems you could easily have a scripted\nstartup/installation procedure which gets things set up.\n\nOf course we'd prefer that people realize that everything in the world\nwould be better if they just had a Postgres server running 24x7 ;)\n\n - Thomas\n",
"msg_date": "Tue, 29 Aug 2000 04:23:02 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How hard would a \"no global server\" version be?"
},
{
"msg_contents": "On Tue, 29 Aug 2000, Thomas Lockhart wrote:\n\n> > So what I'd like to ask is this:\n> > (1) Are there any plans to add anything like this?\n> \n> Not specifically. Postgres is a full-up database, and afaik there isn't\n> a contingent of our developer community which is sufficiently interested\n> to pursue \"mini\" configurations. But...\n> \n> > (2) How hard do you think it would be for an outsider to add this\n> > feature as an option, and if someone did, would you be likely to\n> > be interested in incorporating the result upstream?\n> \n> in the environments I'm familiar with (e.g. RH/Mandrake with PostgreSQL\n> and Gnome), it would be pretty easy to wrap the Postgres libraries and\n> backend to be a \"standalone server\" application. When you start a\n> \"postmaster\", you can specify the listener port number, database\n> location, etc, and on specific systems you could easily have a scripted\n> startup/installation procedure which gets things set up.\n\ncould they, from within the program itself, just do:\n\npostgres -D <datadir> <database>\n\nat the start, and kill that process when the program finishes?\n\nsimilar to what we do in initdb to initialize the database itself?\n\nbasically, the 'install procedure' for GnuCash would be something like:\n\n/usr/local/pgsql/bin/initdb --pglib=<pglib> --pgdata=<mydir>/.data\necho \"create database gnucash\" | postgres -D <mydir>/.data template1\n\nand then when you run gnucash, you would start up the postgres daemon as:\n\n/usr/local/pgsql/bin/postmaster -p <randomport> -D <mydir>/.data \n\nwhere randomport is set as part of the install process?\n\n\n\n",
"msg_date": "Tue, 29 Aug 2000 01:58:09 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How hard would a \"no global server\" version be?"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n\n> I think all of this is completely do-able with a few small hacks on\n> the application's side. First off you can invoke postgresql with\n> a -D option (where the data is) that points to some subdir in the\n> user's homedirectory. The only real problem that I see that there\n> doesn't seem to be a way to specify the path to unix domain socket\n> that postgresql uses, but that shouldn't be too difficult to fix\n> probably no more than an hour of coding or so.\n\nHmm. Actually, if this really is feasable, and if we decide this is\nthe way we'd like to go, I'd be happy to spend quite a few hours\nmaking this work right, documenting it, etc.\n\nLooks like I'll have to do some poking around and talk to some of the\nother gnucash developers. Spending time on postgresql might be a much\nbetter investment than spending time on the libxml output/input format\nI was about to begin.\n\nThanks so much.\n\n-- \nRob Browning <[email protected]> PGP=E80E0D04F521A094 532B97F5D64E3930\n",
"msg_date": "29 Aug 2000 00:22:02 -0500",
"msg_from": "Rob Browning <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How hard would a \"no global server\" version be?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n\n> Not specifically. Postgres is a full-up database, and afaik there isn't\n> a contingent of our developer community which is sufficiently interested\n> to pursue \"mini\" configurations. But...\n\nWell perhaps I'll become that contingent :>\n\n> Of course we'd prefer that people realize that everything in the\n> world would be better if they just had a Postgres server running\n> 24x7 ;)\n\nNo doubt, but perhaps the \"mini\" configuration might be an insidious\nmethod of initiating the corruption leading to the \"one true way\".\n\n-- \nRob Browning <[email protected]> PGP=E80E0D04F521A094 532B97F5D64E3930\n",
"msg_date": "29 Aug 2000 00:25:08 -0500",
"msg_from": "Rob Browning <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How hard would a \"no global server\" version be?"
},
{
"msg_contents": "> could they, from within the program itself, just do:\n...\n\nRight, something like that should work just fine.\n\n - Thomas\n",
"msg_date": "Tue, 29 Aug 2000 05:38:50 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How hard would a \"no global server\" version be?"
},
{
"msg_contents": "On Tue, Aug 29, 2000 at 12:25:08AM -0500, Rob Browning wrote:\n> Thomas Lockhart <[email protected]> writes:\n> \n> > Not specifically. Postgres is a full-up database, and afaik there isn't\n> > a contingent of our developer community which is sufficiently interested\n> > to pursue \"mini\" configurations. But...\n> \n> Well perhaps I'll become that contingent :>\n> \n\nAnother use for such a mini config would be the PDA market. IBM's got\nDB2 for the Palm, if I remember correctly. That's a little _too_ small a\ntarget, I think, but the new crop of PocketPC devices have enough memory\nand horsepower to be useful with a real database.\n\n> > Of course we'd prefer that people realize that everything in the\n> > world would be better if they just had a Postgres server running\n> > 24x7 ;)\n\nNaw, that'd suck all the joules out of my battery!\n\n> No doubt, but perhaps the \"mini\" configuration might be an insidious\n> method of initiating the corruption leading to the \"one true way\".\n\nWith the PDA, we'd need a conduit to go back and forth to the desktop,\nwhich runs the 24x7 full server. Corruption by another path...\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Tue, 29 Aug 2000 11:30:20 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How hard would a \"no global server\" version be?"
}
] |
[
{
"msg_contents": "> I see we have a command\n> SET SESSION CHARACTERISTICS AS TRANSACTION COMMIT {TRUE | FALSE}\n> Where does that come from/fit in? I can't see it in SQL 99.\n\nI found SET SESSION CHARACTERISTICS in my copy of the SQL99 draft docs\nwhich have been discussed on this list. afaik there is no concept of\n\"autocommit on/off\" in SQL9x, but clearly if we support \"session\ncharacteristics\" this would be one to include. I intend to add it in (as\nI did for the other SESSION CHARACTERISTICS feature) but if someone\nbeats me to it, I'll not be upset.\n\nI imagine that your question is not really about SET SESSION... but\nabout the latter half of the command above; as I mentioned SQL9x does\nnot have the concept of \"autocommit\", but Postgres does not (yet) have\nthe concept of SQL9x-compatible \"never autocommit\". The SET SESSION...\nwould allow us to do both.\n\nThe syntax itself is a bit verbose, and if SQL99 doesn't really have it\nI'd be happy to consider shorter alternatives. btw, we usually *do*\nsupport a shorter alternative via the SET key=val feature.\n\n - Thomas\n",
"msg_date": "Tue, 29 Aug 2000 04:32:21 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Session characteristics"
}
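Spelled out, the long form under discussion would look like SET SESSION CHARACTERISTICS AS TRANSACTION COMMIT FALSE, and the shorthand would presumably be something like SET autocommit = 'off'; the variable name in the short form is only a guess, since no such option existed at the time of this exchange.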
] |
[
{
"msg_contents": "I've read article by Theodore Jonhnson\n\"Performance Measurements of Compressed Bitmap Indices\"\nhttp://www.informatik.uni-trier.de/~ley/db/conf/vldb/Johnson99.html\nand is wondering if there are plans to implement bitmap indices\nin postgres. As it stated in the article these indices could speedup\njoin operations and decision support queries.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 29 Aug 2000 15:40:15 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "compressed bitmap indices"
},
{
"msg_contents": "On Tue, 29 Aug 2000, Oleg Bartunov wrote:\n\n> I've read article by Theodore Jonhnson\n> \"Performance Measurements of Compressed Bitmap Indices\"\n> http://www.informatik.uni-trier.de/~ley/db/conf/vldb/Johnson99.html\n> and is wondering if there are plans to implement bitmap indices\n> in postgres. As it stated in the article these indices could speedup\n> join operations and decision support queries.\n> \n> \tRegards,\n> \t\tOleg\n\nLook forward to seeing patches? :)\n\n\n",
"msg_date": "Tue, 29 Aug 2000 11:30:29 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: compressed bitmap indices"
}
] |
[
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Points taken. I'm reverting it.\n\nGood, thanks for listening...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Aug 2000 10:58:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: signed, volatile, etc "
}
] |
[
{
"msg_contents": "This patch is for the TODO item\n\n* Disallow LOCK on view \n\nsrc/backend/commands/command.c is the only affected file\n\n-- \nMark Hollomon\[email protected]",
"msg_date": "Tue, 29 Aug 2000 11:21:36 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "disallow LOCK on a view"
},
{
"msg_contents": "Mark Hollomon <[email protected]> writes:\n> sprintf(rulequery, \"select * from pg_views where viewname='%s'\", relname);\n> [ evaluate query via SPI ]\n\nI really dislike seeing backend utility operations built atop SPI.\nQuite aside from the (lack of) speed, there are all sorts of nasty\ntraps that can come from runtime evaluation of query strings. The\nmost obvious example in this case is what if relname contains a quote\nmark? Or backslash?\n\nThe permanent memory leak induced by SPI_saveplan() is another good\nreason not to do it this way.\n\nFinally, once one has written a nice neat little is_view() query\nfunction, there's a strong temptation to just use it from anywhere,\nwithout thought for the side-effects it might have like grabbing/\nreleasing locks, CommandCounterIncrement(), etc. There are many\nplaces in the backend where the side-effects of doing a full query\nevaluation would be harmful.\n\nMark's patch is OK as is, since it's merely relocating some poorly\nwritten code and not trying to fix it, but someone ought to think\nabout fixing the code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Aug 2000 11:35:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Backend-internal SPI operations"
},
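To make the quoting hazard concrete: under the string-literal rules of this era, both single quotes and backslashes have to be doubled before a name can be spliced into a query string. A sketch of what that takes, with an invented function name and a caller-supplied buffer that is simply assumed large enough:

#include <string.h>

/* append "name" to buf as a safely quoted SQL string literal */
static void
append_quoted_literal(char *buf, const char *name)
{
    char       *p = buf + strlen(buf);

    *p++ = '\'';
    for (; *name; name++)
    {
        if (*name == '\'' || *name == '\\')
            *p++ = *name;       /* double the dangerous character */
        *p++ = *name;
    }
    *p++ = '\'';
    *p = '\0';
}

Even with the quoting fixed, the saved-plan leak and the side-effect problems remain, which is why the rest of the thread moves toward answering the question from the catalogs directly.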
{
"msg_contents": "Tom Lane wrote:\n> \n> Mark's patch is OK as is, since it's merely relocating some poorly\n> written code and not trying to fix it, but someone ought to think\n> about fixing the code.\n> \n\nI'll take a crack at it.\n\nJust out of curiousity, is there technical reason there isn't\na (say) relisview attribute to pg_class?\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Tue, 29 Aug 2000 12:37:19 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations"
},
{
"msg_contents": "\"Mark Hollomon\" <[email protected]> writes:\n> Just out of curiousity, is there technical reason there isn't\n> a (say) relisview attribute to pg_class?\n\nThat might indeed be the most reasonable way to attack it, rather\nthan having to go messing about looking for a matching rule.\n(Jan, any thoughts here?)\n\nAdding a column to a core system table like pg_class is a good\nexercise for the student ;-) ... it's not exactly automated,\nand you have to find all the places that need to be updated.\nYou might want to keep notes and prepare a writeup for the\ndeveloper's FAQ. I thought of that the last time I did something\nsimilar, but it was only at the end that I realized I should've\nbeen keeping notes to start with.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Aug 2000 12:49:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Mark Hollomon\" <[email protected]> writes:\n> > Just out of curiousity, is there technical reason there isn't\n> > a (say) relisview attribute to pg_class?\n>\n> That might indeed be the most reasonable way to attack it, rather\n> than having to go messing about looking for a matching rule.\n> (Jan, any thoughts here?)\n\n The right way IMHO would be to give views another relkind.\n Then we could easily\n\n 1. detect if the final query after rewriting still tries to\n INSERT/UPDATE/DELETE a view - i.e. \"missing rewrite\n rule(s)\".\n\n 2. disable things like LOCK etc.\n\n The problem here is, that the relkind must change at rule\n creation/drop time. Fortunately rules on SELECT are totally\n restricted to VIEW's since 6.4, and I don't see any reason to\n change this.\n\n And it's time to make more use of the relkind attribute. For\n 7.2, when we want to have tuple-set returns for functions, we\n might want to have structures as well (we talked about that\n already, Tom). A structure is just a row/type description. A\n function, returning a tuple or set of tuples, can return this\n type or set of type as well as any other existing table/view\n structure. So to create a function returning a set of tuples,\n which have a structure different from any existing table,\n someone creates a named structure, then the function\n returning tuples of that type. These structures are just\n entries in pg_class, pg_attribute and pg_type. There is no\n file or any rules, triggers etc. attached to them. They just\n describe a typle that can be built in memory.\n\n> Adding a column to a core system table like pg_class is a good\n> exercise for the student ;-) ... it's not exactly automated,\n> and you have to find all the places that need to be updated.\n> You might want to keep notes and prepare a writeup for the\n> developer's FAQ. I thought of that the last time I did something\n> similar, but it was only at the end that I realized I should've\n> been keeping notes to start with.\n\n Meetoo :-}\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Wed, 30 Aug 2000 06:52:37 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations"
},
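With a view-specific relkind as proposed above, the is_view() test that started this thread collapses to a field check on the relation's cached pg_class row: no SPI, no query string, no quoting hazards. A sketch, where RELKIND_VIEW stands for the proposed pg_class value that does not exist yet:

#include "postgres.h"
#include "utils/rel.h"

/* true if the already-open relation is a view (RELKIND_VIEW is assumed) */
static bool
is_view(Relation rel)
{
    /* rd_rel is the relation's cached pg_class tuple */
    return rel->rd_rel->relkind == RELKIND_VIEW;
}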
{
"msg_contents": "Jan Wieck wrote:\n> \n> Tom Lane wrote:\n> > \"Mark Hollomon\" <[email protected]> writes:\n> > > Just out of curiousity, is there technical reason there isn't\n> > > a (say) relisview attribute to pg_class?\n> >\n> > That might indeed be the most reasonable way to attack it, rather\n> > than having to go messing about looking for a matching rule.\n> > (Jan, any thoughts here?)\n> \n> The right way IMHO would be to give views another relkind.\n> Then we could easily\n> \n> 1. detect if the final query after rewriting still tries to\n> INSERT/UPDATE/DELETE a view - i.e. \"missing rewrite\n> rule(s)\".\n\nThis appeals to me. The current silent no-op behavior of INSERT/DELETE on a view\nis annoying.\n\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Wed, 30 Aug 2000 08:31:10 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations"
},
{
"msg_contents": ">> The right way IMHO would be to give views another relkind.\n\n> This appeals to me.\n\nI like it too. Aside from the advantages Jan mentioned, we could also\nrefrain from creating an underlying file for a view, which would be\nnice to avoid cluttering the database directory.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Aug 2000 10:20:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> >> The right way IMHO would be to give views another relkind.\n> \n> > This appeals to me.\n> \n> I like it too. Aside from the advantages Jan mentioned, we could also\n> refrain from creating an underlying file for a view, which would be\n> nice to avoid cluttering the database directory.\n\nExcellent. I think we have a consensus. I'll start coding in that direction.\n\nAnybody have any thoughts on the upgrade ramification of this change?\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Wed, 30 Aug 2000 10:45:29 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations"
},
{
"msg_contents": "Jan Wieck <[email protected]> writes:\n> From memory I think views are created as CREATE TABLE, with\n> an internal DefineRuleStmt, and dumped as CREATE TABLE,\n> CREATE RULE for sure. So the CREATE/DROP RULE would need to\n> remove/recreate the tables file (plus toast file and index)\n> if you want it to be consistent. Don't think you want that -\n> do you?\n\nBut that's only true because it's such a pain in the neck for pg_dump\nto discover that a table is a view. If this could easily be told from\ninspection of pg_class, then it'd be no problem to dump views as\nCREATE VIEW statements in the first place --- obviously better, no?\n\nHowever the initial version upgrade would be a problem, since dump\nfiles out of existing releases would contain CREATE TABLE & RULE\ncommands instead of CREATE VIEW. I guess what would happen is that\nviews reloaded that way wouldn't really be views, they'd be tables\nwith rules attached. Grumble.\n\nHow about this:\n\tCREATE RULE of an on-select-instead rule changes table's\n\trelkind to 'view'. We don't need to drop the underlying\n\ttable file, though (just leave it be, it'll go away at\n\tnext initdb).\n\n\tDROP RULE of a view's on-select-instead is not allowed.\n\tYou have to drop the whole view instead.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Aug 2000 11:05:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations "
},
{
"msg_contents": "Tom Lane wrote:\n> >> The right way IMHO would be to give views another relkind.\n>\n> > This appeals to me.\n>\n> I like it too. Aside from the advantages Jan mentioned, we could also\n> refrain from creating an underlying file for a view, which would be\n> nice to avoid cluttering the database directory.\n\n From memory I think views are created as CREATE TABLE, with\n an internal DefineRuleStmt, and dumped as CREATE TABLE,\n CREATE RULE for sure. So the CREATE/DROP RULE would need to\n remove/recreate the tables file (plus toast file and index)\n if you want it to be consistent. Don't think you want that -\n do you?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Wed, 30 Aug 2000 10:42:47 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Jan Wieck <[email protected]> writes:\n> > From memory I think views are created as CREATE TABLE, with\n> > an internal DefineRuleStmt, and dumped as CREATE TABLE,\n> > CREATE RULE for sure. So the CREATE/DROP RULE would need to\n> > remove/recreate the tables file (plus toast file and index)\n> > if you want it to be consistent. Don't think you want that -\n> > do you?\n> \n> But that's only true because it's such a pain in the neck for pg_dump\n> to discover that a table is a view. If this could easily be told from\n> inspection of pg_class, then it'd be no problem to dump views as\n> CREATE VIEW statements in the first place --- obviously better, no?\n\nThe fact that views can be created by a separate table/rule\nsequence allows pg_dump to properly dump views which are based\nupon functions, or views which may have dependencies on other\ntables/views. The new pg_dump dumps in oid order in an attempt to\nresolve 95% of the dependency problems, but it could never solve\na circular dependency. I was thinking that with:\n\n(a) The creation of an ALTER FUNCTION name(args) SET ...\n\nand\n\n(b) Allow for functions to be created like:\n\nCREATE FUNCTION foo(int) RETURNS int AS NULL;\n\nwhich would return NULL as a result.\n\nA complex schema with views based upon functions, tables, and\nother views, and functions based upon views could be properly\ndumped by dumping:\n\n 1. Function Prototypes (CREATE FUNCTION ... AS NULL)\n 2. Types\n 3. Aggregates\n 4. Operators\n 5. Sequences\n 6. Tables\n\n...DATA...\n\n 7. Triggers\n 8. Function Implementations (ALTER FUNCTION ... SET)\n 9. Rules (including Views)\n10. Indexes\n11. Comments :-)\n\nWouldn't this be a \"correct\" dump?\n\nMike Mascari\n",
"msg_date": "Wed, 30 Aug 2000 12:03:15 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations"
},
{
"msg_contents": "Some idiot wrote:\n> \n> The fact that views can be created by a separate table/rule\n> sequence allows pg_dump to properly dump views which are based\n> upon functions, or views which may have dependencies on other\n> tables/views. The new pg_dump dumps in oid order in an attempt to\n> resolve 95% of the dependency problems, but it could never solve\n> a circular dependency. I was thinking that with:\n> \n> (a) The creation of an ALTER FUNCTION name(args) SET ...\n> \n> and\n> \n> (b) Allow for functions to be created like:\n> \n> CREATE FUNCTION foo(int) RETURNS int AS NULL;\n> \n> which would return NULL as a result.\n> \n> A complex schema with views based upon functions, tables, and\n> other views, and functions based upon views could be properly\n> dumped by dumping:\n> \n> 1. Function Prototypes (CREATE FUNCTION ... AS NULL)\n> 2. Types\n> 3. Aggregates\n> 4. Operators\n> 5. Sequences\n> 6. Tables\n>\n> ...more idiocy follows...\n\nSorry. I forgot about function prototypes with arguments of\nuser-defined types. Seems there's no magic bullet. :-(\n\nMike Mascari\n",
"msg_date": "Wed, 30 Aug 2000 12:32:45 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations"
},
{
"msg_contents": "Mike Mascari wrote:\n> \n> Tom Lane wrote:\n> >\n> > Jan Wieck <[email protected]> writes:\n> > > From memory I think views are created as CREATE TABLE, with\n> > > an internal DefineRuleStmt, and dumped as CREATE TABLE,\n> > > CREATE RULE for sure. So the CREATE/DROP RULE would need to\n> > > remove/recreate the tables file (plus toast file and index)\n> > > if you want it to be consistent. Don't think you want that -\n> > > do you?\n> >\n> > But that's only true because it's such a pain in the neck for pg_dump\n> > to discover that a table is a view. If this could easily be told from\n> > inspection of pg_class, then it'd be no problem to dump views as\n> > CREATE VIEW statements in the first place --- obviously better, no?\n> \n> The fact that views can be created by a separate table/rule\n> sequence allows pg_dump to properly dump views which are based\n> upon functions, or views which may have dependencies on other\n> tables/views.\n\nI don't see this. a 'CREATE VIEW' cannot reference a function that\ndid not exist at the time it was executed. The only way to get in\ntrouble, that I see, is a DROP/CREATE RULE. But I think\nthe proposal is not to allow this to happen if the rule is the\nselect rule for a view.\n\nThe reason that pg_dump used the table/rule sequence was that historically\nit was hard to figure out that a tuple in pg_class really represented a\nview.\n\nBut I could be mistaken.\n\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Wed, 30 Aug 2000 12:38:27 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations"
},
{
"msg_contents": "\"Hollomon, Mark\" wrote:\n> \n> Do we still want to be able to inherit from views?\n\nAlso:\n\nCurrently a view may be dropped with either 'DROP VIEW'\nor 'DROP TABLE'. Should this be changed?\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Wed, 30 Aug 2000 14:29:56 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New relkind for views"
},
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n>> 1. Function Prototypes (CREATE FUNCTION ... AS NULL)\n>> 2. Types\n\n> Sorry. I forgot about function prototypes with arguments of\n> user-defined types. Seems there's no magic bullet. :-(\n\nNot necessarily --- there's a shell-type (or type forward reference,\nif you prefer) feature that exists to handle exactly that apparent\ncircularity. Otherwise you could never define a user-defined type at\nall, since you have to define its I/O procedures before you can do\nCREATE TYPE.\n\nI didn't study your proposal in detail, but it might work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Aug 2000 14:41:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations "
},
{
"msg_contents": "Mark Hollomon wrote:\n> Mike Mascari wrote:\n> >\n> > Tom Lane wrote:\n> > >\n> > > Jan Wieck <[email protected]> writes:\n> > > > From memory I think views are created as CREATE TABLE, with\n> > > > an internal DefineRuleStmt, and dumped as CREATE TABLE,\n> > > > CREATE RULE for sure. So the CREATE/DROP RULE would need to\n> > > > remove/recreate the tables file (plus toast file and index)\n> > > > if you want it to be consistent. Don't think you want that -\n> > > > do you?\n> > >\n> > > But that's only true because it's such a pain in the neck for pg_dump\n> > > to discover that a table is a view. If this could easily be told from\n> > > inspection of pg_class, then it'd be no problem to dump views as\n> > > CREATE VIEW statements in the first place --- obviously better, no?\n> >\n> > The fact that views can be created by a separate table/rule\n> > sequence allows pg_dump to properly dump views which are based\n> > upon functions, or views which may have dependencies on other\n> > tables/views.\n>\n> I don't see this. a 'CREATE VIEW' cannot reference a function that\n> did not exist at the time it was executed. The only way to get in\n> trouble, that I see, is a DROP/CREATE RULE. But I think\n> the proposal is not to allow this to happen if the rule is the\n> select rule for a view.\n>\n> The reason that pg_dump used the table/rule sequence was that historically\n> it was hard to figure out that a tuple in pg_class really represented a\n> view.\n>\n> But I could be mistaken.\n\n Yep, you are.\n\n The reason why we dump views as table+rule is that\n historically we wheren't able to dump views and rules at all.\n We only store the parsetree representation of rules, since\n epoch. Then, someone wrote a little backend function that's\n able to backparse these rule actions. It got enhanced by a\n couple of other smart guys and got used by pg_dump. At that\n time, it was right to dump views as table+rule, because\n pg_dump didn't do anything in OID order. So views using sql\n functions using views in turn wouldn't be dumpable otherwise.\n And it was easier too because it was already done after\n dumping rules at the end. No need to do anything else for\n views :-)\n\n So far about history, now the future.\n\n Dumping views as CREATE VIEW is cleaner. It is possible now,\n since we dump the objects in OID order. So I like it. I see\n no problem with Tom's solution, changing the relkind and\n removing the files at CREATE RULE time for a couple of\n releases. And yes, dropping the SELECT rule from a view must\n be forbidden. As defining triggers, constraints and the like\n for them should be.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Wed, 30 Aug 2000 14:01:26 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> Mark Hollomon wrote:\n> > But I could be mistaken.\n> \n> Yep, you are.\n\nD'oh.\n\n> So far about history, now the future.\n> \n> Dumping views as CREATE VIEW is cleaner. It is possible now,\n> since we dump the objects in OID order. So I like it. I see\n> no problem with Tom's solution, changing the relkind and\n> removing the files at CREATE RULE time for a couple of\n> releases. And yes, dropping the SELECT rule from a view must\n> be forbidden. As defining triggers, constraints and the like\n> for them should be.\n\nAlright. To recap.\n\n1. CREATE VIEW sets relkind to RELKIND_VIEW\n2. CREATE RULE ... AS ON SELECT DO INSTEAD ... sets relkind to RELKIND_VIEW\n\tand deletes any relation files.\n\n q: If we find an index, should we drop it, or complain, or ignore it?\n q: Should the code check to see if the relation is empty (no valid tuples)?\n\n3. DROP RULE complains if dropping the select rule for a view.\n4. ALTER TABLE complains if run against a view.\n\nAnything else?\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Wed, 30 Aug 2000 15:24:57 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations"
},
{
"msg_contents": "\"Mark Hollomon\" <[email protected]> writes:\n> 2. CREATE RULE ... AS ON SELECT DO INSTEAD ... sets relkind to RELKIND_VIEW\n> \tand deletes any relation files.\n\n> q: If we find an index, should we drop it, or complain, or ignore it?\n> q: Should the code check to see if the relation is empty (no valid tuples)?\n\nI think we can ignore indexes. However, it seems like a wise move to\nrefuse to convert a nonempty table to view status, *especially* if we\nare going to blow away the physical file. Otherwise mistyping the\nrelation name in a CREATE RULE could be disastrous (what? you wanted\nthat data?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Aug 2000 16:28:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations "
},
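In the rule-definition code, that guard might look roughly like the following; the function name is invented and the heap-scan calls are shown with approximate signatures (the scan API differs between releases), so read this as a sketch of the logic rather than the patch:

#include "postgres.h"
#include "access/heapam.h"
#include "utils/rel.h"

/* refuse to turn a table that still holds data into a view */
static void
RejectNonEmptyTable(Relation rel)
{
    HeapScanDesc scan;

    scan = heap_beginscan(rel, SnapshotNow, 0, (ScanKey) NULL);
    if (heap_getnext(scan, ForwardScanDirection) != NULL)
        elog(ERROR, "could not convert table \"%s\" to a view because it is not empty",
             RelationGetRelationName(rel));
    heap_endscan(scan);
}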
{
"msg_contents": "Applied. Thanks.\n\n> This patch is for the TODO item\n> \n> * Disallow LOCK on view \n> \n> src/backend/commands/command.c is the only affected file\n> \n> -- \n> Mark Hollomon\n> [email protected]\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Sep 2000 00:30:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disallow LOCK on a view"
},
{
"msg_contents": "> \"Hollomon, Mark\" wrote:\n> > \n> > Do we still want to be able to inherit from views?\n> \n> Also:\n> \n> Currently a view may be dropped with either 'DROP VIEW'\n> or 'DROP TABLE'. Should this be changed?\n\nI say let them drop it with either one. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 12:29:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: New relkind for views"
},
{
"msg_contents": "On Mon, 16 Oct 2000, Bruce Momjian wrote:\n\n> > \"Hollomon, Mark\" wrote:\n> > > \n> > > Do we still want to be able to inherit from views?\n> > \n> > Also:\n> > \n> > Currently a view may be dropped with either 'DROP VIEW'\n> > or 'DROP TABLE'. Should this be changed?\n> \n> I say let them drop it with either one. \n\nI kinda like the 'drop index with drop index', 'drop table with drop\ntable' and 'drop view with drop view' groupings ... at least you are\npretty sure you haven't 'oopsed' in the process :)\n\n\n",
"msg_date": "Mon, 16 Oct 2000 20:41:43 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: New relkind for views"
},
{
"msg_contents": "> On Mon, 16 Oct 2000, Bruce Momjian wrote:\n> \n> > > \"Hollomon, Mark\" wrote:\n> > > > \n> > > > Do we still want to be able to inherit from views?\n> > > \n> > > Also:\n> > > \n> > > Currently a view may be dropped with either 'DROP VIEW'\n> > > or 'DROP TABLE'. Should this be changed?\n> > \n> > I say let them drop it with either one. \n> \n> I kinda like the 'drop index with drop index', 'drop table with drop\n> table' and 'drop view with drop view' groupings ... at least you are\n> pretty sure you haven't 'oopsed' in the process :)\n\nGood point. Oops is bad.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 19:52:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: New relkind for views]"
},
{
"msg_contents": "On Mon, Oct 16, 2000 at 08:41:43PM -0300, The Hermit Hacker wrote:\n> On Mon, 16 Oct 2000, Bruce Momjian wrote:\n> \n> > > \"Hollomon, Mark\" wrote:\n> > > > \n> > > > Do we still want to be able to inherit from views?\n> > > \n> > > Also:\n> > > \n> > > Currently a view may be dropped with either 'DROP VIEW'\n> > > or 'DROP TABLE'. Should this be changed?\n> > \n> > I say let them drop it with either one. \n> \n> I kinda like the 'drop index with drop index', 'drop table with drop\n> table' and 'drop view with drop view' groupings ... at least you are\n> pretty sure you haven't 'oopsed' in the process :)\n> \n> \n\nSo the vote is now tied. Any other opinions\n\n-- \nMark Hollomon\[email protected]\n",
"msg_date": "Mon, 16 Oct 2000 20:53:01 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: New relkind for views"
},
{
"msg_contents": "Mark Hollomon <[email protected]> writes:\n>>>> I say let them drop it with either one. \n>> \n>> I kinda like the 'drop index with drop index', 'drop table with drop\n>> table' and 'drop view with drop view' groupings ... at least you are\n>> pretty sure you haven't 'oopsed' in the process :)\n\n> So the vote is now tied. Any other opinions\n\nI vote for the fascist approach (command must agree with actual type\nof object). Seems safest. Please make sure the error message is\nhelpful though, like \"Use DROP SEQUENCE to drop a sequence\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 20:56:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: New relkind for views "
},
{
"msg_contents": "On Monday 16 October 2000 20:56, Tom Lane wrote:\n> Mark Hollomon <[email protected]> writes:\n> >>>> I say let them drop it with either one.\n> >>\n> >> I kinda like the 'drop index with drop index', 'drop table with drop\n> >> table' and 'drop view with drop view' groupings ... at least you are\n> >> pretty sure you haven't 'oopsed' in the process :)\n> >\n> > So the vote is now tied. Any other opinions\n>\n> I vote for the fascist approach (command must agree with actual type\n> of object). Seems safest. Please make sure the error message is\n> helpful though, like \"Use DROP SEQUENCE to drop a sequence\".\n>\n\nSince Bruce changed his vote, it is now 3 to 0 for fascism.\n\nI'll see what I can do.\n\n-- \nMark Hollomon\n",
"msg_date": "Tue, 17 Oct 2000 10:26:57 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: New relkind for views"
}
] |
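The "fascist approach" then amounts to a relkind dispatch at the top of each drop path. A sketch with an invented helper name, following the error-message style Tom suggests (RELKIND_VIEW again being the proposed value, RELKIND_SEQUENCE the existing one):

#include "postgres.h"
#include "utils/rel.h"

/* hypothetical guard, called from the DROP TABLE code path */
static void
reject_wrong_drop(Relation rel)
{
    switch (rel->rd_rel->relkind)
    {
        case RELKIND_VIEW:
            elog(ERROR, "%s is a view; use DROP VIEW to drop a view",
                 RelationGetRelationName(rel));
            break;
        case RELKIND_SEQUENCE:
            elog(ERROR, "%s is a sequence; use DROP SEQUENCE to drop a sequence",
                 RelationGetRelationName(rel));
            break;
        default:
            break;              /* a plain table: DROP TABLE is fine */
    }
}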
[
{
"msg_contents": "* Mark Hollomon <[email protected]> [000829 11:26] wrote:\n> Here is a patch against CVS (without my earlier patch)\n> to disallow\n> \n> LOCK x\n> \n> if x is a view.\n> \n> It does not use the SPI interface.\n\nWaitasec, why?? This can be very useful if you want to atomically lock\nsomething that sits \"in front\" of several other tables that you need to\ndo something atomically with.\n\nDoes it cause corruption if allowed?\n\nthanks,\n-Alfred\n",
"msg_date": "Tue, 29 Aug 2000 11:46:59 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] disallow LOCK on a view - the Tom Lane remix"
},
{
"msg_contents": "Here is a patch against CVS (without my earlier patch)\nto disallow\n\nLOCK x\n\nif x is a view.\n\nIt does not use the SPI interface.\n\n-- \nMark Hollomon\[email protected]",
"msg_date": "Tue, 29 Aug 2000 15:16:12 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": false,
"msg_subject": "disallow LOCK on a view - the Tom Lane remix"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> * Mark Hollomon <[email protected]> [000829 11:26] wrote:\n>> Here is a patch against CVS (without my earlier patch)\n>> to disallow\n>> LOCK x\n>> if x is a view.\n\n> Waitasec, why?? This can be very useful if you want to atomically lock\n> something that sits \"in front\" of several other tables that you need to\n> do something atomically with.\n\n> Does it cause corruption if allowed?\n\nNo, but I doubt that it does anything useful either ... the system\nis going to be acquiring locks on the referenced tables, not the\nview itself.\n\nA full (exclusive) LOCK on the view itself might work (by preventing\nother backends from reading the view definition), but lesser types of\nlocks would certainly not operate as desired. Even an exclusive lock\nwouldn't prevent re-execution of previously planned queries against\nthe view, as could happen in plpgsql functions for example.\n\nMoreover, a lock on the view would not prevent people from\naccessing/manipulating the referenced tables; they'd just have to\nnot go through the view.\n\nAll in all, the behavior seems squirrelly enough that I agree with\nMark: better to consider it a disallowed operation than to have to\ndeal with complaints that it didn't do whatever the user thought\nit would do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Aug 2000 18:58:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] disallow LOCK on a view - the Tom Lane remix "
},
{
"msg_contents": "* Tom Lane <[email protected]> [000829 15:58] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > * Mark Hollomon <[email protected]> [000829 11:26] wrote:\n> >> Here is a patch against CVS (without my earlier patch)\n> >> to disallow\n> >> LOCK x\n> >> if x is a view.\n> \n> > Waitasec, why?? This can be very useful if you want to atomically lock\n> > something that sits \"in front\" of several other tables that you need to\n> > do something atomically with.\n> \n> > Does it cause corruption if allowed?\n> \n> No, but I doubt that it does anything useful either ... the system\n> is going to be acquiring locks on the referenced tables, not the\n> view itself.\n> \n> A full (exclusive) LOCK on the view itself might work (by preventing\n> other backends from reading the view definition), but lesser types of\n> locks would certainly not operate as desired. Even an exclusive lock\n> wouldn't prevent re-execution of previously planned queries against\n> the view, as could happen in plpgsql functions for example.\n\nThis is a bug that could be solved with a sequence of callbacks\nhooked to a relation that are called when that relation changes.\n\n> Moreover, a lock on the view would not prevent people from\n> accessing/manipulating the referenced tables; they'd just have to\n> not go through the view.\n> \n> All in all, the behavior seems squirrelly enough that I agree with\n> Mark: better to consider it a disallowed operation than to have to\n> deal with complaints that it didn't do whatever the user thought\n> it would do.\n\nOk, I'm wondering if this patch will cause problems locking a table\nthat has had:\n\nCREATE RULE \"_RETfoo\" AS ON SELECT TO foo DO INSTEAD SELECT * FROM foo1;\n\nI need to be able to lock the table 'foo' exclusively while I swap\nout the underlying rule to forward to another table.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Tue, 29 Aug 2000 16:14:00 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: disallow LOCK on a view - the Tom Lane remix"
},
{
"msg_contents": "> -----Original Message-----\n> From: Alfred Perlstein\n> \n> * Mark Hollomon <[email protected]> [000829 11:26] wrote:\n> > Here is a patch against CVS (without my earlier patch)\n> > to disallow\n> > \n> > LOCK x\n> > \n> > if x is a view.\n> > \n> > It does not use the SPI interface.\n> \n> Waitasec, why?? This can be very useful if you want to atomically lock\n> something that sits \"in front\" of several other tables that you need to\n> do something atomically with.\n> \n> Does it cause corruption if allowed?\n>\n\nIf I remember correctly,the problem is \"LOCK VIEW\" acquires a\nlock for the target view itself but doesn't acquire the lock for the\nbase tables of the view.\n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Wed, 30 Aug 2000 08:34:48 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: disallow LOCK on a view - the Tom Lane remix"
},
{
"msg_contents": "* Mark Hollomon <[email protected]> [000829 17:13] wrote:\n> On Tue, Aug 29, 2000 at 04:14:00PM -0700, Alfred Perlstein wrote:\n> > \n> > Ok, I'm wondering if this patch will cause problems locking a table\n> > that has had:\n> > \n> > CREATE RULE \"_RETfoo\" AS ON SELECT TO foo DO INSTEAD SELECT * FROM foo1;\n> > \n> > I need to be able to lock the table 'foo' exclusively while I swap\n> > out the underlying rule to forward to another table.\n> > \n> \n> Yes, it would. 'foo' would be seen as view.\n> \n> Okay, this gives me a reason to to do it the hard way.\n> \n> I will try to add a relisview attribute to pg_class.\n> That way, we can differentiate between tables with rules\n> and things created with 'CREATE VIEW'.\n> \n> Hmmm... guess I'll need to change the definition of the pg_views\n> view as well.\n\nOk, thanks I appreciate you taking my situation into consideration.\n\nthanks,\n--\n-Alfred Perlstein - [[email protected]|[email protected]]\n",
"msg_date": "Tue, 29 Aug 2000 17:23:02 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: disallow LOCK on a view - the Tom Lane remix"
},
{
"msg_contents": "On Tue, Aug 29, 2000 at 04:14:00PM -0700, Alfred Perlstein wrote:\n> \n> Ok, I'm wondering if this patch will cause problems locking a table\n> that has had:\n> \n> CREATE RULE \"_RETfoo\" AS ON SELECT TO foo DO INSTEAD SELECT * FROM foo1;\n> \n> I need to be able to lock the table 'foo' exclusively while I swap\n> out the underlying rule to forward to another table.\n> \n\nYes, it would. 'foo' would be seen as view.\n\nOkay, this gives me a reason to to do it the hard way.\n\nI will try to add a relisview attribute to pg_class.\nThat way, we can differentiate between tables with rules\nand things created with 'CREATE VIEW'.\n\nHmmm... guess I'll need to change the definition of the pg_views\nview as well.\n\n-- \nMark Hollomon\[email protected]\n",
"msg_date": "Tue, 29 Aug 2000 21:03:00 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disallow LOCK on a view - the Tom Lane remix"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> Ok, I'm wondering if this patch will cause problems locking a table\n> that has had:\n> CREATE RULE \"_RETfoo\" AS ON SELECT TO foo DO INSTEAD SELECT * FROM foo1;\n> I need to be able to lock the table 'foo' exclusively while I swap\n> out the underlying rule to forward to another table.\n\nUh, do you actually need any sort of lock for that?\n\nSeems to me that if you do\n\tBEGIN;\n\tDELETE RULE \"_RETfoo\";\n\tCREATE RULE \"_RETfoo\" AS ...;\n\tCOMMIT;\nthen any other transaction will see either the old rule definition\nor the new one. No intermediate state, no need for a lock as such.\n\nBTW, this seems to be a counterexample for my prior suggestion that\npg_class should have a \"relviewrule\" OID column. If it did, you'd\nhave to update that field when doing something like the above.\nPain-in-the-neck factor looms large...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Aug 2000 23:52:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] disallow LOCK on a view - the Tom Lane remix "
},
{
"msg_contents": "* Tom Lane <[email protected]> [000829 20:52] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > Ok, I'm wondering if this patch will cause problems locking a table\n> > that has had:\n> > CREATE RULE \"_RETfoo\" AS ON SELECT TO foo DO INSTEAD SELECT * FROM foo1;\n> > I need to be able to lock the table 'foo' exclusively while I swap\n> > out the underlying rule to forward to another table.\n> \n> Uh, do you actually need any sort of lock for that?\n> \n> Seems to me that if you do\n> \tBEGIN;\n> \tDELETE RULE \"_RETfoo\";\n> \tCREATE RULE \"_RETfoo\" AS ...;\n> \tCOMMIT;\n> then any other transaction will see either the old rule definition\n> or the new one. No intermediate state, no need for a lock as such.\n> \n\nUgh! I keep on forgetting that transactions are atomic. Thanks.\n\n> BTW, this seems to be a counterexample for my prior suggestion that\n> pg_class should have a \"relviewrule\" OID column. If it did, you'd\n> have to update that field when doing something like the above.\n> Pain-in-the-neck factor looms large...\n\nI'd prefer this stuff be as simple as possible, it's already\ngetting quite complex.\n\nthanks,\n-Alfred\n",
"msg_date": "Tue, 29 Aug 2000 22:05:11 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] disallow LOCK on a view - the Tom Lane remix"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> BTW, this seems to be a counterexample for my prior suggestion that\n> pg_class should have a \"relviewrule\" OID column. If it did, you'd\n> have to update that field when doing something like the above.\n> Pain-in-the-neck factor looms large...\n> \n\nI was already considering the possiblity of a 'ALTER VIEW' command that\nwould effectively allow you do that.\n\nCREATE VIEW bar as select * from foo1;\nALTER VIEW bar as select * from foo2;\n\nIt would update the \"relviewrule\" field.\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Wed, 30 Aug 2000 08:18:02 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disallow LOCK on a view - the Tom Lane remix"
},
{
"msg_contents": "OK, previous patch unapplied, and this patch was applied.\n\n\n> Here is a patch against CVS (without my earlier patch)\n> to disallow\n> \n> LOCK x\n> \n> if x is a view.\n> \n> It does not use the SPI interface.\n> \n> -- \n> Mark Hollomon\n> [email protected]\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Sep 2000 00:34:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] disallow LOCK on a view - the Tom Lane remix"
},
{
"msg_contents": "> Pain-in-the-neck factor looms large...\n\nCan we copyright that term? :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Sep 2000 00:35:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] disallow LOCK on a view - the Tom Lane remix"
}
] |
[
{
"msg_contents": "Hi, I'm new on this list and would like to develop and play around a bit with\nthe postgres source.\nI would like to know where to go (besides the source) to get started. What\nare the features that are been added to future realeses?\nThanks!\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Tue, 29 Aug 2000 18:14:22 -0300",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "new in list"
},
{
"msg_contents": "> Hi, I'm new on this list and would like to develop and play around a bit with\n> the postgres source.\n\nWelcome!\n\n> I would like to know where to go (besides the source) to get started.\n\nThere are html and hardcopy docs. There are README files scattered\nthroughout the source tree, which should help introduce you to each of\nthe pieces. There are the mailing list archives (look at the last month\nor so). And of course there is the hackers mailing list, which you have\nalready found. Having a good SQL book handy can't hurt.\n\n> What are the features that are been added to future realeses?\n\nEach contributor has his own interests and emphasis. And it isn't\npossible to predict with certainty what features may appear (you may\ndecide to contribute something we haven't even thought of!). But\ncertainly the following themes are likely to see some attention:\n\no outer joins, perhaps requiring a redesigned query tree\no SQL99 \"schemas\"\no a redesigned query tree, to make some operations easier to represent\no WAL, which will speed up queries and recoveries\no memory management (already improved, but always a candidate for more)\no distributed databases\no CORBA, XML, SOAP, ??\no applications\n\n - Thomas\n",
"msg_date": "Wed, 30 Aug 2000 04:56:21 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: new in list"
}
] |
[
{
"msg_contents": "Hi,\n\n I am trying to use the CREATE FUNCTION in order to process multiple\ncalculation, and forward at the end multiple instances.\n\nThis is the SQL statement I am using:\n\nCREATE FUNCTION foo(varchar) RETURNS setof myTable\nAS 'UPDATE .......;\nINSERT.......;\nSELECT myTable.field2 from myTable'\nLANGUAGE 'sql';\n\nI always get an error saying that there is a type mismatch between what is\nbehing the \"setof\" and what is return by this function (myTable.field2)\n\n\nAny idea?\n\n(Note: I am using postgresql 7.02)\n\nFabien\n\n\n",
"msg_date": "Wed, 30 Aug 2000 17:05:56 +0100",
"msg_from": "Fabien Thiriet <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to use the \"setof\" of CREATE FUNCTION"
},
{
"msg_contents": "Fabien Thiriet <[email protected]> writes:\n> CREATE FUNCTION foo(varchar) RETURNS setof myTable\n> AS 'UPDATE .......;\n> INSERT.......;\n> SELECT myTable.field2 from myTable'\n> LANGUAGE 'sql';\n\n> I always get an error saying that there is a type mismatch between what is\n> behing the \"setof\" and what is return by this function (myTable.field2)\n\nWell, yeah: you declared the function to return a set of the tuple\ndatatype myTable, not a set of whatever field2's datatype is.\nPerhaps you wanted\n\nCREATE FUNCTION foo(varchar) RETURNS setof myTable\nAS 'UPDATE .......;\nINSERT.......;\nSELECT * from myTable'\nLANGUAGE 'sql';\n\nwhich hands back the entire table. Alternatively, if you do want to\nreturn just the one column, you should declare the function to return\nsetof whatever-type-field2-is.\n\nNote that functions returning sets are not as useful as they should be,\nbecause you can only call them in limited places (at the top level of\na SELECT-list item, IIRC). Functions returning tuples are not as\nuseful as they should be either, because you can't do anything with\nthe result except select out an individual column; worse, there's this\nbizarre syntax for it --- you can't write the obvious foo(x).bar,\nfor some reason, but have to do x.foo.bar, which only works for simple\nfield-of-a-relation arguments. Ugh. This whole area needs work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Aug 2000 14:21:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to use the \"setof\" of CREATE FUNCTION "
}
] |
[
{
"msg_contents": "\nHere's some fun new problems in Pgsql 7.0.2.\n\nMy nightly job failed last night because supposedly the tables already\nexisted. If you do a \\d, you can see the tables. If you issue drop\ntable, look at the output below.\n\nThis happened with two important tables, which I create with the\nfollowing each night:\n\n\n\nCREATE TABLE tbl_mail_arch_dates2 AS SELECT fld_mail_list,\nfld_mail_year, fld_mail_month, count(*) FROM tbl_mail_archive GROUP BY\nfld_mail_list, fld_mail_year, fld_mail_month;\n\nBEGIN; \nDELETE FROM tbl_mail_arch_dates2 WHERE fld_mail_year<1985; \nDROP TABLE tbl_mail_arch_dates; \nALTER TABLE tbl_mail_arch_dates2 RENAME TO tbl_mail_arch_dates; \nCOMMIT;\n\ncreate index idx_arch_date_list_year_mo on\ntbl_mail_arch_dates(fld_mail_list,fld_mail_year,fld_mail_month);\n\ncreate index idx_arch_date_list on tbl_mail_arch_dates(fld_mail_list);\n\n\n\n\nSo I wonder if the problem is because I am doing drop tables/rename\ntables inside of a transaction.\n\nTim\n\n\n\ndb_geocrawler=# \\d\n List of relations\n Name | Type | Owner \n------------------------+----------+----------\n monitored_lists | table | postgres\n seq_geocrawler_users | sequence | postgres\n seq_mail_chunk_no | sequence | postgres\n seq_mail_lists | sequence | postgres\n seq_mailid | sequence | postgres\n seq_posted_messages | sequence | postgres\n tbl_activity_log | table | postgres\n tbl_geocrawler_users | table | postgres\n tbl_mail_arch_dates | table | postgres\n tbl_mail_arch_dates2 | table | postgres\n tbl_mail_archive | table | postgres\n tbl_mail_categories | table | postgres\n tbl_mail_chunks | table | postgres\n tbl_mail_lists | table | postgres\n tbl_mail_subcategories | table | postgres\n tbl_posted_messages | table | postgres\n(16 rows)\n\ndb_geocrawler=# drop table tbl_mail_arch_dates2;\nNOTICE: mdopen: couldn't open tbl_mail_arch_dates2: No such file or\ndirectory\nNOTICE: RelationIdBuildRelation: smgropen(tbl_mail_arch_dates2): No\nsuch file or directory\nNOTICE: mdopen: couldn't open tbl_mail_arch_dates2: No such file or\ndirectory\nDROP\ndb_geocrawler=# ERROR: Query was cancelled.\n\ndb_geocrawler=# \\d\n List of relations\n Name | Type | Owner \n------------------------+----------+----------\n monitored_lists | table | postgres\n seq_geocrawler_users | sequence | postgres\n seq_mail_chunk_no | sequence | postgres\n seq_mail_lists | sequence | postgres\n seq_mailid | sequence | postgres\n seq_posted_messages | sequence | postgres\n tbl_activity_log | table | postgres\n tbl_geocrawler_users | table | postgres\n tbl_mail_arch_dates | table | postgres\n tbl_mail_archive | table | postgres\n tbl_mail_categories | table | postgres\n tbl_mail_chunks | table | postgres\n tbl_mail_lists | table | postgres\n tbl_mail_subcategories | table | postgres\n tbl_posted_messages | table | postgres\n(15 rows)\n\ndb_geocrawler=# drop table tbl_mail_arch_dates;\nNOTICE: mdopen: couldn't open idx_arch_date_list_year_mo: No such file\nor directory\nNOTICE: RelationIdBuildRelation: smgropen(idx_arch_date_list_year_mo):\nNo such file or directory\nNOTICE: mdopen: couldn't open idx_arch_date_list: No such file or\ndirectory\nNOTICE: RelationIdBuildRelation: smgropen(idx_arch_date_list): No such\nfile or directory\nDROP\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Wed, 30 Aug 2000 12:37:23 -0500",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fragged State in 7.0.2"
}
] |
[
{
"msg_contents": "> I'm inclined to think that that is the correct solution and the new\n> approach is simply broken. But, not knowing what Vadim had in mind\n> while making this change, I'm going to leave it to him to fix this.\n\nThanks, Tom! I'll take care about this...\n\n> Although this specific lockup mode didn't exist in 7.0.*, it does\n> suggest a possible cause of the deadlocks-with-no-deadlock-report\n> behavior that a couple of people have reported with 7.0: maybe there\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> is another logic path that allows a deadlock involving two \n> buffer locks, or a buffer lock and a normal lock. I'm on the\n> warpath now ...\n\nBuffer locks were implemented in 6.5.\n\nVadim\n",
"msg_date": "Wed, 30 Aug 2000 10:55:36 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Silent deadlock possible in current sources"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>> Although this specific lockup mode didn't exist in 7.0.*, it does\n>> suggest a possible cause of the deadlocks-with-no-deadlock-report\n>> behavior that a couple of people have reported with 7.0: maybe there\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n>> is another logic path that allows a deadlock involving two \n>> buffer locks, or a buffer lock and a normal lock. I'm on the\n>> warpath now ...\n\n> Buffer locks were implemented in 6.5.\n\nYeah, but we've only heard about silent deadlocks from people running\n7.0. I'm speculating that some \"unrelated\" 7.0 change is interacting\nbadly with the buffer lock management. Haven't gone digging yet, but\nI will.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Aug 2000 23:23:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Silent deadlock possible in current sources "
}
] |
[
{
"msg_contents": "Observe:\n\nheap_update()\n{\n /* lock page containing old copy of tuple */\n LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);\n\n ...\n\n /* Find buffer for new tuple */\n if ((unsigned) MAXALIGN(newtup->t_len) <= PageGetFreeSpace((Page) dp))\n newbuf = buffer;\n else\n newbuf = RelationGetBufferForTuple(relation, newtup->t_len, buffer);\n\n ...\n\n if (newbuf != buffer)\n {\n LockBuffer(newbuf, BUFFER_LOCK_UNLOCK);\n WriteBuffer(newbuf);\n }\n LockBuffer(buffer, BUFFER_LOCK_UNLOCK);\n WriteBuffer(buffer);\n}\n\nRelationGetBufferForTuple(Relation relation, Size len, Buffer Ubuf)\n{\n if (!relation->rd_myxactonly)\n LockPage(relation, 0, ExclusiveLock);\n\n ...\n\n buffer = ReadBuffer(relation, lastblock - 1);\n\n if (buffer != Ubuf)\n LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);\n\n ...\n\n if (!relation->rd_myxactonly)\n UnlockPage(relation, 0, ExclusiveLock);\n\n ...\n}\n\nIn other words, if heap_update can't fit the new copy of the tuple on\nthe same page it's already on, then *while still holding the exclusive\nlock on the old tuple's buffer*, it calls RelationGetBufferForTuple\nwhich tries to grab the relation's extension lock and then the exclusive\nlock on the last page of the relation. The code is smart enough to deal\nwith the case that the old tuple is in the last page of the relation\n(in which case we already have the exclusive buffer lock on that page,\nand mustn't ask for it twice).\n\nBUT: suppose two different processes are trying to do this at about\nthe same time. Process A is updating a tuple in the last page of the\nrelation and Process B is updating a tuple in some earlier page. Both\nare able to get their exclusive buffer locks on their old tuples' pages.\nNow, suppose that Process B is a little bit ahead and so it is first\nto reach the LockPage operation. It gets the relation extension lock.\nNow it wants to get an exclusive buffer lock on the last page of the\nrelation. It can't, because Process A already has that lock --- but\nnow Process A will be waiting to get the relation extension lock that\nProcess B has.\n\nThis deadlock is not detected or reported because the buffer lock\nmechanism doesn't have any deadlock detection capability (buffer locks\naren't done via the lock manager, which might be considered a bug in\nitself). Instead, the two processes just silently lock up, and\nthereafter so will all other processes that try to update or insert\nin that relation.\n\n\nThis bug did not exist in 7.0.* because heap_update used to release\nits exclusive lock on the source page while extending the relation:\n\n /*\n * New item won't fit on same page as old item, have to look for a\n * new place to put it. Note that we have to unlock current buffer\n * context - not good but RelationPutHeapTupleAtEnd uses extend\n * lock.\n */\n LockBuffer(buffer, BUFFER_LOCK_UNLOCK);\n RelationPutHeapTupleAtEnd(relation, newtup);\n LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);\n\nI'm inclined to think that that is the correct solution and the new\napproach is simply broken. But, not knowing what Vadim had in mind\nwhile making this change, I'm going to leave it to him to fix this.\n\n\nAlthough this specific lockup mode didn't exist in 7.0.*, it does\nsuggest a possible cause of the deadlocks-with-no-deadlock-report\nbehavior that a couple of people have reported with 7.0: maybe there\nis another logic path that allows a deadlock involving two buffer locks,\nor a buffer lock and a normal lock. 
I'm on the warpath now ...\n\n\t\t\tregards, tom lane\n",
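The cycle described above, written out as a timeline (an illustration only; "last page" means the final block of the relation, which both processes end up contending for):

    Process A (old tuple on last page)    Process B (old tuple on earlier page)
    LockBuffer(last page)    granted
                                          LockBuffer(earlier page)  granted
                                          LockPage(rel, 0)          granted
    LockPage(rel, 0)         blocks (B holds it)
                                          LockBuffer(last page)     blocks (A holds it)

Because buffer locks bypass the lock manager, neither wait is visible to the deadlock detector, and both processes hang.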
"msg_date": "Wed, 30 Aug 2000 13:58:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Silent deadlock possible in current sources"
}
] |
[
{
"msg_contents": "\"Barnes, Sandy (Sandra)\" <[email protected]> writes:\n> Platform: PostgreSQL 7.0.2 on RedHat6.2 Linux\n> Test: Testing the creation of large objects. I was putting the large\n> objects into a database table but this \n> test program recreates the error for me without having to do that. \n> Program Error: Can't create large object\n> Database Log Error: FATAL 1: my bits moved right off the end of the world!\n>\tRecreate index pg_attribute_relid_attnum_index\n\nCan anyone else duplicate this? I don't see it on my available boxes\n(7.0.2 and current on HPUX and an old Linux box). The test program\nis pretty trivial, see attached.\n\n\t\t\tregards, tom lane\n\n/*-------------------------------------------------------------------------\n *\n * loOid2.c\n *\t test creation of many large objects \n *-------------------------------------------------------------------------\n */\n#include <stdlib.h>\n#include <stdio.h>\n#include <fcntl.h>\n#include <unistd.h>\n#include \"libpq-fe.h\"\n#include \"libpq/libpq-fs.h\"\n\nstatic Oid\ncreateOid(PGconn *conn, int i)\n{\n\tOid\t\t\tlobjId;\n\tint\t\t\tlobj_fd;\n\n\t/*\n\t * create a large object\n\t */\n\tlobjId = lo_creat(conn, INV_READ | INV_WRITE);\n\tif (lobjId == 0)\n\t{\n\t\tfprintf(stderr, \"can't create large object\\n\");\n\t\texit(1);\n\t}\n\n\tlobj_fd = lo_open(conn, lobjId, INV_WRITE);\n\n\tprintf(\"oid [%d] %d\\n\", lobjId, i);\n/*\tprintf(\"\\tfd [%d]\", lobj_fd); */\n\n\tlo_close(conn, lobj_fd);\n\n\treturn lobjId;\n}\n\nstatic void\nexit_nicely(PGconn *conn)\n{\n\tPQfinish(conn);\n\texit(1);\n}\n\nint\nmain(int argc, char **argv)\n{\n\tchar\t *database;\n\tOid\t\t\tlobjOid;\n\tPGconn\t *conn;\n\tPGresult *res;\n\tint\t\t\ti;\n\n\tdatabase = argv[1];\n\n\t/*\n\t * set up the connection\n\t */\n\tconn = PQsetdb(NULL, NULL, NULL, NULL, database);\n\n\t/* check to see that the backend connection was successfully made */\n\tif (PQstatus(conn) == CONNECTION_BAD)\n\t{\n\t\tfprintf(stderr, \"Connection to database '%s' failed.\\n\", database);\n\t\tfprintf(stderr, \"%s\", PQerrorMessage(conn));\n\t\texit_nicely(conn);\n\t}\n\n\tres = PQexec(conn, \"begin\");\n\tPQclear(res);\n\t\tfor (i=0; i<100; i++)\n\t\t{\n\t \tlobjOid = createOid(conn, i); \n\t\t}\n\tres = PQexec(conn, \"end\");\n\tPQclear(res);\n\tPQfinish(conn);\n\treturn 0;\n}\n",
"msg_date": "Wed, 30 Aug 2000 15:31:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] calling lo_creat() "
}
] |
[
{
"msg_contents": "Hiroshi Inoue pointed out that Postgres neglects to do an explicit\ntransaction abort during backend shutdown. For example, in psql\n\tbegin;\n\tdeclare myc cursor for select * from ..;\n\tfetch in myc;\n\t\\q\nwould cause the backend to exit without having released the resources\nacquired for the open transaction. This is OK from the point of view\nof data integrity (other transactions will believe that the transaction\nwas aborted) but not OK if shared resources are left locked up. In\nparticular, this oversight probably accounts for the sporadic reports\nwe've seen of errors like\n\nNOTICE: FlushRelationBuffers(all_flows, 500237): block 171439 is\nreferenced (private 0, global 1)\nFATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2\n\nsince shared buffer reference counts would not be released by an\nexiting backend, leading to a complaint (perhaps much later) when\nVACUUM checks that there are no references to the relation it's\ntrying to vacuum.\n\nI have fixed this problem in current sources and back-patched the\nfix into the 7.0.* branch. But I do not know when or if we'll have\na 7.0.3 release, so for anyone who's been annoyed by this problem\nand doesn't want to wait, the patch for 7.0.* is attached.\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/tcop/postgres.c.orig\tSat May 20 22:23:30 2000\n--- src/backend/tcop/postgres.c\tWed Aug 30 16:47:51 2000\n***************\n*** 1459,1465 ****\n \t * Initialize the deferred trigger manager\n \t */\n \tif (DeferredTriggerInit() != 0)\n! \t\tproc_exit(0);\n \n \tSetProcessingMode(NormalProcessing);\n \n--- 1459,1465 ----\n \t * Initialize the deferred trigger manager\n \t */\n \tif (DeferredTriggerInit() != 0)\n! \t\tgoto normalexit;\n \n \tSetProcessingMode(NormalProcessing);\n \n***************\n*** 1479,1490 ****\n \t\t\tTPRINTF(TRACE_VERBOSE, \"AbortCurrentTransaction\");\n \n \t\tAbortCurrentTransaction();\n! \t\tInError = false;\n \t\tif (ExitAfterAbort)\n! \t\t{\n! \t\t\tProcReleaseLocks(); /* Just to be sure... */\n! \t\t\tproc_exit(0);\n! \t\t}\n \t}\n \n \tWarn_restart_ready = true;\t/* we can now handle elog(ERROR) */\n--- 1479,1489 ----\n \t\t\tTPRINTF(TRACE_VERBOSE, \"AbortCurrentTransaction\");\n \n \t\tAbortCurrentTransaction();\n! \n \t\tif (ExitAfterAbort)\n! \t\t\tgoto errorexit;\n! \n! \t\tInError = false;\n \t}\n \n \tWarn_restart_ready = true;\t/* we can now handle elog(ERROR) */\n***************\n*** 1553,1560 ****\n \t\t\t\tif (HandleFunctionRequest() == EOF)\n \t\t\t\t{\n \t\t\t\t\t/* lost frontend connection during F message input */\n! \t\t\t\t\tpq_close();\n! \t\t\t\t\tproc_exit(0);\n \t\t\t\t}\n \t\t\t\tbreak;\n \n--- 1552,1558 ----\n \t\t\t\tif (HandleFunctionRequest() == EOF)\n \t\t\t\t{\n \t\t\t\t\t/* lost frontend connection during F message input */\n! \t\t\t\t\tgoto normalexit;\n \t\t\t\t}\n \t\t\t\tbreak;\n \n***************\n*** 1608,1618 ****\n \t\t\t\t */\n \t\t\tcase 'X':\n \t\t\tcase EOF:\n! \t\t\t\tif (!IsUnderPostmaster)\n! \t\t\t\t\tShutdownXLOG();\n! \t\t\t\tpq_close();\n! \t\t\t\tproc_exit(0);\n! \t\t\t\tbreak;\n \n \t\t\tdefault:\n \t\t\t\telog(ERROR, \"unknown frontend message was received\");\n--- 1606,1612 ----\n \t\t\t\t */\n \t\t\tcase 'X':\n \t\t\tcase EOF:\n! \t\t\t\tgoto normalexit;\n \n \t\t\tdefault:\n \t\t\t\telog(ERROR, \"unknown frontend message was received\");\n***************\n*** 1642,1651 ****\n \t\t\tif (IsUnderPostmaster)\n \t\t\t\tNullCommand(Remote);\n \t\t}\n! \t}\t\t\t\t\t\t\t/* infinite for-loop */\n \n! 
\tproc_exit(0);\t\t\t\t/* shouldn't get here... */\n! \treturn 1;\n }\n \n #ifndef HAVE_GETRUSAGE\n--- 1636,1655 ----\n \t\t\tif (IsUnderPostmaster)\n \t\t\t\tNullCommand(Remote);\n \t\t}\n! \t}\t\t\t\t\t\t\t/* end of main loop */\n! \n! normalexit:\n! \tExitAfterAbort = true;\t\t/* ensure we will exit if elog during abort */\n! \tAbortOutOfAnyTransaction();\n! \tif (!IsUnderPostmaster)\n! \t\tShutdownXLOG();\n! \n! errorexit:\n! \tpq_close();\n! \tProcReleaseLocks();\t\t\t/* Just to be sure... */\n! \tproc_exit(0);\n \n! \treturn 1;\t\t\t\t\t/* keep compiler quiet */\n }\n \n #ifndef HAVE_GETRUSAGE\n",
"msg_date": "Wed, 30 Aug 2000 17:29:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Important 7.0.* fix to ensure buffers are released"
},
{
"msg_contents": "> Hiroshi Inoue pointed out that Postgres neglects to do an explicit\n> transaction abort during backend shutdown. For example, in psql\n> \tbegin;\n> \tdeclare myc cursor for select * from ..;\n> \tfetch in myc;\n> \t\\q\n> would cause the backend to exit without having released the resources\n> acquired for the open transaction. This is OK from the point of view\n> of data integrity (other transactions will believe that the transaction\n> was aborted) but not OK if shared resources are left locked up. In\n> particular, this oversight probably accounts for the sporadic reports\n> we've seen of errors like\n> \n> NOTICE: FlushRelationBuffers(all_flows, 500237): block 171439 is\n> referenced (private 0, global 1)\n> FATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2\n> \n> since shared buffer reference counts would not be released by an\n> exiting backend, leading to a complaint (perhaps much later) when\n> VACUUM checks that there are no references to the relation it's\n> trying to vacuum.\n\nInteresting thing is that 6.5.x does not have the problem. Is it new\none for 7.0.x?\n\nI remember that you have fixed some refcount leaks in 6.5.x. Could you\ntell me any examples to demonstrate the cases in 6.5.x, those are\nsupposed to be fixed in 7.0.x? I just want to know what kind of\nrefcount leak problems existing in 6.5.x and 7.0.x.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 02 Sep 2000 17:14:33 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Important 7.0.* fix to ensure buffers are released"
},
{
"msg_contents": "[email protected] writes:\n> Interesting thing is that 6.5.x does not have the problem. Is it new\n> one for 7.0.x?\n\nI think the bug has been there for a long time. It is easier to see\nin 7.0.2 because VACUUM will now check for nonzero refcount on *all*\npages of the relation. Formerly, it only checked pages that it was\nabout to actually truncate from the relation. So it's possible for\nan unreleased pin on a page to go unnoticed in 6.5 but generate a\ncomplaint in 7.0.\n\nNow that I look closely, I see that VACUUM still has a problem with\nthis in current sources: it only calls FlushRelationBuffers() if it\nneeds to shorten the relation. So pinned pages will not be reported\nunless the file gets shortened by at least one page. This is a bug\nbecause it means that pg_upgrade still can't trust VACUUM to ensure\nthat all on-row status bits are correct (see comments for\nFlushRelationBuffers). I will change it to call FlushRelationBuffers\nalways.\n\n> I remember that you have fixed some refcount leaks in 6.5.x. Could you\n> tell me any examples to demonstrate the cases in 6.5.x, those are\n> supposed to be fixed in 7.0.x?\n\nI think the primary problems had to do with recursive calls to\nExecutorRun, which'd invoke the badly broken buffer refcount save/\nrestore mechanism that was present in 6.5 and earlier. This would\nmainly be done by SQL and PL functions that do SELECTs. A couple\nof examples:\n * elog(ERROR) from inside an SQL function would mean that buffer\n refcounts held by the outer scan wouldn't be released. So, eg,\n\tSELECT sqlfunction(column1) FROM foo;\n was a buffer leak risk.\n * SQL functions returning sets could leak even without any elog(),\n if the entire set result was not read for some reason.\nThere were probably some non-SQL-function cases that got fixed along the\nway, but I don't have any concrete examples. See the pghacker threads\n\tAnyone understand shared buffer refcount mechanism?\n\tProgress report: buffer refcount bugs and SQL functions\nfrom September 1999 for more info.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Sep 2000 12:37:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Important 7.0.* fix to ensure buffers are released "
},
{
"msg_contents": "[ Cc: to hackers list]\n\n> I think the primary problems had to do with recursive calls to\n> ExecutorRun, which'd invoke the badly broken buffer refcount save/\n> restore mechanism that was present in 6.5 and earlier. This would\n> mainly be done by SQL and PL functions that do SELECTs. A couple\n> of examples:\n> * elog(ERROR) from inside an SQL function would mean that buffer\n> refcounts held by the outer scan wouldn't be released. So, eg,\n> \tSELECT sqlfunction(column1) FROM foo;\n> was a buffer leak risk.\n\nFollowing case doesn't produce notices from BlowawayRelationBuffers.\n\ndrop table t1;\ncreate table t1(i int);\ndrop table t2;\ncreate table t2(i int);\ninsert into t1 values(1);\ndrop function f1(int);\ncreate function f1(int) returns int as '\n\t select $1 +1;\n\t select i from t2;\n' language 'sql';\ndrop table t2;\nselect f1(i) from t1;\ndelete from t1;\nvacuum t1;\n\nAm I missing something?\n\n> * SQL functions returning sets could leak even without any elog(),\n> if the entire set result was not read for some reason.\n\nHowever, following case produces:\n\nNOTICE: BlowawayRelationBuffers(t1, 0): block 0 is referenced...\n\nas expected.\n\ndrop table t1;\ncreate table t1(i int);\ninsert into t1 values(1);\ninsert into t1 select i from t1;\ninsert into t1 select i from t1;\ndrop function f1(int);\ncreate function f1(int) returns setof int as '\n\t select i from t1;\n' language 'sql';\nselect f1(i) from t1 limit 1 offset 0;\ndelete from t1;\nvacuum analyze t1;\n\nInteresting thing is that the select in above case produces a\nnotice in 7.0.2 (with or without your patches):\n\nNOTICE: Buffer Leak: [059] (freeNext=-3, freePrev=-3, relname=t1, blockNum=0, flags=0x4, refcount=1 2)\n\nwhile 6.5.3 does not. Maybe 6.5.3 failes to detect buffer leaks at\ntransaction commit time?\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 04 Sep 2000 10:04:16 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Important 7.0.* fix to ensure buffers are released "
},
{
"msg_contents": "[email protected] writes:\n> Am I missing something?\n\nI don't have 6.5.* running anymore to check, but it looked to me like\nelog out of an SQL function would result in refcount leaks. But the\nelog would have to occur while inside the function's recursive call\nto ExecutorRun, so your example (which will detect its error during\nquery plan setup) doesn't exhibit the problem. Try something like\n\tselect 1/0;\ninside the function.\n\n> Interesting thing is that the select in above case produces a\n> notice in 7.0.2 (with or without your patches):\n\nYes, still there in current sources. The leak comes from the fact\nthat the function's internal SELECT is never shut down because the\nfunction isn't run to completion. This is one of the things I think we\nneed to fix during querytree redesign. However, 7.0 at least detects\nand recovers from the leak, which is more than can be said for 6.5.\n\n> while 6.5.3 does not. Maybe 6.5.3 failes to detect buffer leaks at\n> transaction commit time?\n\nIn fact it does fail to detect them, in cases like this where the leak\nis attributable to an uncompleted query inside a function call. That's\none of the things that was broken about the refcount save/restore\nmechanism...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Sep 2000 22:42:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Important 7.0.* fix to ensure buffers are released "
},
{
"msg_contents": "> I don't have 6.5.* running anymore to check, but it looked to me like\n> elog out of an SQL function would result in refcount leaks. But the\n> elog would have to occur while inside the function's recursive call\n> to ExecutorRun, so your example (which will detect its error during\n> query plan setup) doesn't exhibit the problem. Try something like\n> \tselect 1/0;\n> inside the function.\n\nOh, I see.\n\n> > Interesting thing is that the select in above case produces a\n> > notice in 7.0.2 (with or without your patches):\n> \n> Yes, still there in current sources. The leak comes from the fact\n> that the function's internal SELECT is never shut down because the\n> function isn't run to completion. This is one of the things I think we\n> need to fix during querytree redesign. However, 7.0 at least detects\n> and recovers from the leak, which is more than can be said for 6.5.\n\nAgreed.\n\n> > while 6.5.3 does not. Maybe 6.5.3 failes to detect buffer leaks at\n> > transaction commit time?\n> \n> In fact it does fail to detect them, in cases like this where the leak\n> is attributable to an uncompleted query inside a function call. That's\n> one of the things that was broken about the refcount save/restore\n> mechanism...\n\nUnderstood.\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 04 Sep 2000 20:29:36 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Important 7.0.* fix to ensure buffers are released "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> \n> [email protected] writes:\n> > Interesting thing is that 6.5.x does not have the problem. Is it new\n> > one for 7.0.x?\n> \n> I think the bug has been there for a long time. It is easier to see\n\nOne of the reason why we see the bug often in 7.0 seems to be\nthe following change which was applied to temprel.c before 7.0.\nremove_all_temp_relations() always called AbortOutAnyTransaction()\nbefore the change. remove_all_temp_relations() has been called from\nshmem_exit() and accidentally(I don't think it had been intensional)\nproc_exit() always called AbortOutAnyTransaction().\n\n@@ -79,6 +79,9 @@\n \tList\t *l,\n \t\t\t *next;\n \n+\tif (temp_rels == NIL)\n+\t\treturn;\n+\n \tAbortOutOfAnyTransaction();\n \tStartTransactionCommand();\n \n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Tue, 5 Sep 2000 17:51:15 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Important 7.0.* fix to ensure buffers are released "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> One of the reason why we see the bug often in 7.0 seems to be\n> the following change which was applied to temprel.c before 7.0.\n> remove_all_temp_relations() always called AbortOutAnyTransaction()\n> before the change.\n\nBingo! So actually there was an abort-transaction call buried in the\nshutdown process. I wondered why we didn't see more problems...\n\nAnyway, I've added an AbortOutOfAnyTransaction() call to postgres.c,\nso the behavior should be more straightforward now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Sep 2000 09:57:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Important 7.0.* fix to ensure buffers are released "
},
{
"msg_contents": "[Cc:ed to hackers list]\n\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > One of the reason why we see the bug often in 7.0 seems to be\n> > the following change which was applied to temprel.c before 7.0.\n> > remove_all_temp_relations() always called AbortOutAnyTransaction()\n> > before the change.\n> \n> Bingo! So actually there was an abort-transaction call buried in the\n> shutdown process. I wondered why we didn't see more problems...\n> \n> Anyway, I've added an AbortOutOfAnyTransaction() call to postgres.c,\n> so the behavior should be more straightforward now.\n\nAre you going to make a back patch for the 7.0 tree?\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 06 Sep 2000 17:17:06 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Important 7.0.* fix to ensure buffers are released "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> Anyway, I've added an AbortOutOfAnyTransaction() call to postgres.c,\n>> so the behavior should be more straightforward now.\n\n> Are you going to make a back patch for the 7.0 tree?\n\nI did. See starting message of this thread ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Sep 2000 10:39:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: [PATCHES] Important 7.0.* fix to ensure buffers are released "
}
] |
[
{
"msg_contents": "Mario Weilguni wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n>\n> Last week I created a patch for the Postgres client side libraries to allow\n> something like a (not so mighty) form of Oracle TNS, but nobody showed any\n> interest. Currently, the patch is not perfect yet, but works fine for us. I\n> want to avoid improving the patch if there is no interest in it, so if you\n> think it might be a worthy improvement please drop me a line.\n>\n> It works like this:\n> The patch allows to supply another parameter to the Postgres connect string,\n> called \"service\". So, instead of having a connect string (e.g. in PHP) like\n> \"dbname=foo host=bar port=5433 user=foouser password=barpass\"\n> the string would be\n> \"service=stupid_name_here\"\n> or more often\n> \"service=stupid_name_here user=foouser password=barpass\"\n>\n> There's a config [...]\n\n IMHO a good idea.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Wed, 30 Aug 2000 17:26:52 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Patch for TNS services"
},
{
"msg_contents": "Mario Weilguni <[email protected]> writes:\n> Last week I created a patch for the Postgres client side libraries to allow\n> something like a (not so mighty) form of Oracle TNS, but nobody showed any\n> interest. Currently, the patch is not perfect yet, but works fine for us. I\n> want to avoid improving the patch if there is no interest in it, so if you\n> think it might be a worthy improvement please drop me a line.\n\nFWIW, it seemed like a moderately reasonable idea to me. But it\ntroubles me that you didn't get positive responses; as you say, no\npoint in maintaining the feature unless people will use it.\n\nPerhaps you should bring it up on pgsql-interfaces, which seems like\nthe most likely place to discuss such a feature.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 Aug 2000 01:14:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patch for TNS services "
},
{
"msg_contents": "> > Last week I created a patch for the Postgres client side libraries to allow\n> > something like a (not so mighty) form of Oracle TNS, but nobody showed any\n> > interest.\n\nPerhaps it would be helpful to describe TNS: what it does, and what it\nis good for. You posted your notice out of the blue, and for those of us\nwho didn't take the time to research what the heck you were talking\nabout, there wasn't much point in responding.\n\nThere are lots of good ideas floating around out there, and it is a\nshame, but true anyway, that it takes most of us a while to warm up to a\nnew idea or proposal. Thanks for posting a followup message, and it\nwould be good to hear more about your proposal. I'm sure if it is a\nuseful feature that it will be of interest in the long run...\n\nRegards.\n\n - Thomas\n",
"msg_date": "Thu, 31 Aug 2000 05:32:51 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patch for TNS services"
},
{
"msg_contents": "On Wed, Aug 30, 2000 at 08:04:33PM +0200, Mario Weilguni wrote:\n> \n> Last week I created a patch for the Postgres client side libraries to allow \n> something like a (not so mighty) form of Oracle TNS, but nobody showed any \n> interest. Currently, the patch is not perfect yet, but works fine for us. I \n> want to avoid improving the patch if there is no interest in it, so if you \n> think it might be a worthy improvement please drop me a line.\n> \n> It works like this:\n> The patch allows to supply another parameter to the Postgres connect string, \n> called \"service\". So, instead of having a connect string (e.g. in PHP) like \n> \"dbname=foo host=bar port=5433 user=foouser password=barpass\"\n> the string would be\n> \"service=stupid_name_here\"\n> or more often\n> \"service=stupid_name_here user=foouser password=barpass\"\n> \n> There's a config file /etc/pg_service.conf, having an entry like:\n> [stupid_name_here]\n> dbname=foo\n> host=bar\n> port=5433\n> ....\n> \n\nLooks kind of like a server side ODBC datasource. Looks useful\nto me. I've mainly been using ColdFusion to connect my webpages to\npostgresql, via ODBC, and I can attest to the utility of just changing\nthe host or dbname in the one source being referenced by all that code.\nHaving it server side, so other interfaces can use the same setup,\nwould be nice.\n\nAre any of the parameters from your service overrideable? i.e. can it\nbe used to store defaults, say for username?\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Thu, 31 Aug 2000 09:43:48 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patch for TNS services"
},
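To make the mechanism concrete, here is a sketch of a service file entry and the matching connect string; the service name, host, and credentials are invented for illustration:

    # /etc/pg_service.conf
    [accounts]
    dbname=foo
    host=db1.example.com
    port=5432

A client would then connect with the string "service=accounts user=foouser password=barpass"; moving the database later means editing only the [accounts] entry, not the hundred scripts that reference it.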
{
"msg_contents": "Sounds like people want it. Can you polish it off, add SGML docs and\nsend it over?\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> \n> Last week I created a patch for the Postgres client side libraries to allow \n> something like a (not so mighty) form of Oracle TNS, but nobody showed any \n> interest. Currently, the patch is not perfect yet, but works fine for us. I \n> want to avoid improving the patch if there is no interest in it, so if you \n> think it might be a worthy improvement please drop me a line.\n> \n> It works like this:\n> The patch allows to supply another parameter to the Postgres connect string, \n> called \"service\". So, instead of having a connect string (e.g. in PHP) like \n> \"dbname=foo host=bar port=5433 user=foouser password=barpass\"\n> the string would be\n> \"service=stupid_name_here\"\n> or more often\n> \"service=stupid_name_here user=foouser password=barpass\"\n> \n> There's a config file /etc/pg_service.conf, having an entry like:\n> [stupid_name_here]\n> dbname=foo\n> host=bar\n> port=5433\n> ....\n> \n> The advantage is you can go from one database host, database, port or \n> whatever without having to touch the scripts or applications. We're currently \n> in the process of migrating all of our PHP and Python scripts to another from \n> localhost, port 5433 to another machine, port 5432 and it's not something I \n> ever want to do again, I'd to change around 100 files and I'm still not sure \n> if I've missed one.\n> \n> The patch is client-side only, around 100 lines, needs no changes to the \n> backend and is compatible with all applications supplying a connection string \n> (not using PQsetdblogin)\n> \n> - -- \n> Why is it always Segmentation's fault?\n> -----BEGIN PGP SIGNATURE-----\n> Version: 2.6.3i\n> Charset: noconv\n> \n> iQCVAwUBOa1MsQotfkegMgnVAQEIsAP+Na72pNdT+RoQcjuX5cn1TKkPlNAh9BV5\n> kCNP+Zui6WfZSiA8RYPuruXF0QyEMPZZD6AI9Wqr5sQ75kVSb65uOt9rLrdS0bxA\n> WTClNjlLKG3Rk1IGSFBm+C0p8lcA3AYTohHLhHB3q+WeLTneI5lJfwpo2AWyinQt\n> 0k/1r6EwpUk=\n> =+skX\n> -----END PGP SIGNATURE-----\n> \n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Sep 2000 01:13:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patch for TNS services"
},
{
"msg_contents": "Yes, I will try to make it this weekend when I've time.\n\nAm Die, 12 Sep 2000 schrieben Sie:\n> Sounds like people want it. Can you polish it off, add SGML docs and\n> send it over?\n>\n> > -----BEGIN PGP SIGNED MESSAGE-----\n> >\n> > Last week I created a patch for the Postgres client side libraries to\n> > allow something like a (not so mighty) form of Oracle TNS, but nobody\n> > showed any interest. Currently, the patch is not perfect yet, but works\n> > fine for us. I want to avoid improving the patch if there is no interest\n> > in it, so if you think it might be a worthy improvement please drop me a\n> > line.\n> >\n> > It works like this:\n> > The patch allows to supply another parameter to the Postgres connect\n> > string, called \"service\". So, instead of having a connect string (e.g. in\n> > PHP) like \"dbname=foo host=bar port=5433 user=foouser password=barpass\"\n> > the string would be\n> > \"service=stupid_name_here\"\n> > or more often\n> > \"service=stupid_name_here user=foouser password=barpass\"\n> >\n> > There's a config file /etc/pg_service.conf, having an entry like:\n> > [stupid_name_here]\n> > dbname=foo\n> > host=bar\n> > port=5433\n> > ....\n> >\n> > The advantage is you can go from one database host, database, port or\n> > whatever without having to touch the scripts or applications. We're\n> > currently in the process of migrating all of our PHP and Python scripts\n> > to another from localhost, port 5433 to another machine, port 5432 and\n> > it's not something I ever want to do again, I'd to change around 100\n> > files and I'm still not sure if I've missed one.\n> >\n> > The patch is client-side only, around 100 lines, needs no changes to the\n> > backend and is compatible with all applications supplying a connection\n> > string (not using PQsetdblogin)\n> >\n> > - --\n> > Why is it always Segmentation's fault?\n> > -----BEGIN PGP SIGNATURE-----\n> > Version: 2.6.3i\n> > Charset: noconv\n> >\n> > iQCVAwUBOa1MsQotfkegMgnVAQEIsAP+Na72pNdT+RoQcjuX5cn1TKkPlNAh9BV5\n> > kCNP+Zui6WfZSiA8RYPuruXF0QyEMPZZD6AI9Wqr5sQ75kVSb65uOt9rLrdS0bxA\n> > WTClNjlLKG3Rk1IGSFBm+C0p8lcA3AYTohHLhHB3q+WeLTneI5lJfwpo2AWyinQt\n> > 0k/1r6EwpUk=\n> > =+skX\n> > -----END PGP SIGNATURE-----\n>\n> [ Attachment, skipping... ]\n",
"msg_date": "Tue, 12 Sep 2000 18:39:54 +0200",
"msg_from": "Mario Weilguni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patch for TNS services"
},
{
"msg_contents": "Am Die, 12 Sep 2000 schrieb Bruce Momjian:\n> Sounds like people want it. Can you polish it off, add SGML docs and\n> send it over?\n\nI prepared and tested a patch vs. 7.0.2, and it works fine. I've added \nanother option which allows users to have their own service file in \n~/.pg_service.conf, which might come handy sometimes.\n\nI tried to add SGML docs, but I'm not sure where to add a section. At first \nI'd say in the user.sgml, but I checked it and I'm not sure if this is the \nright place. Where shall I add the docs?\n\nAnd I've a second problem here, I'm no native english speaker and my docs \nshould be proofread and corrected before adding it.\n\n\n>\n> > -----BEGIN PGP SIGNED MESSAGE-----\n> >\n> > Last week I created a patch for the Postgres client side libraries to\n> > allow something like a (not so mighty) form of Oracle TNS, but nobody\n> > showed any interest. Currently, the patch is not perfect yet, but works\n> > fine for us. I want to avoid improving the patch if there is no interest\n> > in it, so if you think it might be a worthy improvement please drop me a\n> > line.\n> >\n> > It works like this:\n> > The patch allows to supply another parameter to the Postgres connect\n> > string, called \"service\". So, instead of having a connect string (e.g. in\n> > PHP) like \"dbname=foo host=bar port=5433 user=foouser password=barpass\"\n> > the string would be\n> > \"service=stupid_name_here\"\n> > or more often\n> > \"service=stupid_name_here user=foouser password=barpass\"\n> >\n> > There's a config file /etc/pg_service.conf, having an entry like:\n> > [stupid_name_here]\n> > dbname=foo\n> > host=bar\n> > port=5433\n> > ....\n> >\n> > The advantage is you can go from one database host, database, port or\n> > whatever without having to touch the scripts or applications. We're\n> > currently in the process of migrating all of our PHP and Python scripts\n> > to another from localhost, port 5433 to another machine, port 5432 and\n> > it's not something I ever want to do again, I'd to change around 100\n> > files and I'm still not sure if I've missed one.\n> >\n> > The patch is client-side only, around 100 lines, needs no changes to the\n> > backend and is compatible with all applications supplying a connection\n> > string (not using PQsetdblogin)\n> >\n> > - --\n> > Why is it always Segmentation's fault?\n> > -----BEGIN PGP SIGNATURE-----\n> > Version: 2.6.3i\n> > Charset: noconv\n> >\n> > iQCVAwUBOa1MsQotfkegMgnVAQEIsAP+Na72pNdT+RoQcjuX5cn1TKkPlNAh9BV5\n> > kCNP+Zui6WfZSiA8RYPuruXF0QyEMPZZD6AI9Wqr5sQ75kVSb65uOt9rLrdS0bxA\n> > WTClNjlLKG3Rk1IGSFBm+C0p8lcA3AYTohHLhHB3q+WeLTneI5lJfwpo2AWyinQt\n> > 0k/1r6EwpUk=\n> > =+skX\n> > -----END PGP SIGNATURE-----\n>\n> [ Attachment, skipping... ]\n",
"msg_date": "Sat, 16 Sep 2000 10:51:05 +0200",
"msg_from": "Mario Weilguni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patch for TNS services"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Sounds like people want it. Can you polish it off, add SGML docs and\n> send it over?\n\n> > There's a config file /etc/pg_service.conf, having an entry like:\n\nPlease check that the final patch uses a file in ${sysconfdir} that was\nspecified at configure time, and not \"/etc\". Something like\nsrc/backend/libpq does for the Kerberos files.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 16 Sep 2000 21:04:36 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patch for TNS services"
},
{
"msg_contents": "I've now prepared a polished and clean patch vs. 7.0.2. Who's gonna integrate \nthis patch in the CVS? I've no CVS access.\n\nThe docs are another problem. I've installed jade and most other SGML stuff \nhere, but \"make user.html\" fails with errors like :\n\njade:user.sgml:5:55:W: cannot generate system identifier for public text \n\"-//OASIS//DTD Dojade:user.sgml:41:0:E: reference to entity \"BOOK\" for which \nno system identifier could be\njade:user.sgml:5:0: entity was defined here\njade:user.sgml:41:0:E: DTD did not contain element declaration for document \ntype name \n\nThe patch is included as attachement (159 lines).\n\n\nThe patch is included\n\nAm Tue, 12 Sep 2000 schrieben Sie:\n> Sounds like people want it. Can you polish it off, add SGML docs and\n> send it over?\n>\n> > -----BEGIN PGP SIGNED MESSAGE-----\n> >\n> > Last week I created a patch for the Postgres client side libraries to\n> > allow something like a (not so mighty) form of Oracle TNS, but nobody\n> > showed any interest. Currently, the patch is not perfect yet, but works\n> > fine for us. I want to avoid improving the patch if there is no interest\n> > in it, so if you think it might be a worthy improvement please drop me a\n> > line.\n> >\n> > It works like this:\n> > The patch allows to supply another parameter to the Postgres connect\n> > string, called \"service\". So, instead of having a connect string (e.g. in\n> > PHP) like \"dbname=foo host=bar port=5433 user=foouser password=barpass\"\n> > the string would be\n> > \"service=stupid_name_here\"\n> > or more often\n> > \"service=stupid_name_here user=foouser password=barpass\"\n> >\n> > There's a config file /etc/pg_service.conf, having an entry like:\n> > [stupid_name_here]\n> > dbname=foo\n> > host=bar\n> > port=5433\n> > ....\n> >\n> > The advantage is you can go from one database host, database, port or\n> > whatever without having to touch the scripts or applications. We're\n> > currently in the process of migrating all of our PHP and Python scripts\n> > to another from localhost, port 5433 to another machine, port 5432 and\n> > it's not something I ever want to do again, I'd to change around 100\n> > files and I'm still not sure if I've missed one.\n> >\n> > The patch is client-side only, around 100 lines, needs no changes to the\n> > backend and is compatible with all applications supplying a connection\n> > string (not using PQsetdblogin)\n> >\n> > - --\n> > Why is it always Segmentation's fault?\n> > -----BEGIN PGP SIGNATURE-----\n> > Version: 2.6.3i\n> > Charset: noconv\n> >\n> > iQCVAwUBOa1MsQotfkegMgnVAQEIsAP+Na72pNdT+RoQcjuX5cn1TKkPlNAh9BV5\n> > kCNP+Zui6WfZSiA8RYPuruXF0QyEMPZZD6AI9Wqr5sQ75kVSb65uOt9rLrdS0bxA\n> > WTClNjlLKG3Rk1IGSFBm+C0p8lcA3AYTohHLhHB3q+WeLTneI5lJfwpo2AWyinQt\n> > 0k/1r6EwpUk=\n> > =+skX\n> > -----END PGP SIGNATURE-----\n>\n> [ Attachment, skipping... ]",
"msg_date": "Fri, 22 Sep 2000 18:05:00 +0200",
"msg_from": "Mario Weilguni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patch for TNS services"
},
{
"msg_contents": "Sorry for replying to the fairly old messages, but I just had time to open\nmy pgsql mailbox:\n\nI would _love_ if your implementation can also implement TNS-style \nfailover, i.e. having multiple /etc/pg_service.conf entries with same\nname. When replication is available, this will provide a efficient and\ntransparent failover for clients...\n\nOne of my projects involves putting all email account information into\npostgres, which means postgres needs to be highly available. Now, I can\ntake care of replication with triggers and hacks, but client-side solution\nwould really benefit from failover...\n\n-alex\n\nOn Tue, 12 Sep 2000, Bruce Momjian wrote:\n\n> Sounds like people want it. Can you polish it off, add SGML docs and\n> send it over?\n> \n> > -----BEGIN PGP SIGNED MESSAGE-----\n> > \n> > Last week I created a patch for the Postgres client side libraries to allow \n> > something like a (not so mighty) form of Oracle TNS, but nobody showed any \n> > interest. Currently, the patch is not perfect yet, but works fine for us. I \n> > want to avoid improving the patch if there is no interest in it, so if you \n> > think it might be a worthy improvement please drop me a line.\n> > \n> > It works like this:\n> > The patch allows to supply another parameter to the Postgres connect string, \n> > called \"service\". So, instead of having a connect string (e.g. in PHP) like \n> > \"dbname=foo host=bar port=5433 user=foouser password=barpass\"\n> > the string would be\n> > \"service=stupid_name_here\"\n> > or more often\n> > \"service=stupid_name_here user=foouser password=barpass\"\n> > \n> > There's a config file /etc/pg_service.conf, having an entry like:\n> > [stupid_name_here]\n> > dbname=foo\n> > host=bar\n> > port=5433\n> > ....\n> > \n> > The advantage is you can go from one database host, database, port or \n> > whatever without having to touch the scripts or applications. We're currently \n> > in the process of migrating all of our PHP and Python scripts to another from \n> > localhost, port 5433 to another machine, port 5432 and it's not something I \n> > ever want to do again, I'd to change around 100 files and I'm still not sure \n> > if I've missed one.\n> > \n> > The patch is client-side only, around 100 lines, needs no changes to the \n> > backend and is compatible with all applications supplying a connection string \n> > (not using PQsetdblogin)\n> > \n> > - -- \n> > Why is it always Segmentation's fault?\n> > -----BEGIN PGP SIGNATURE-----\n> > Version: 2.6.3i\n> > Charset: noconv\n> > \n> > iQCVAwUBOa1MsQotfkegMgnVAQEIsAP+Na72pNdT+RoQcjuX5cn1TKkPlNAh9BV5\n> > kCNP+Zui6WfZSiA8RYPuruXF0QyEMPZZD6AI9Wqr5sQ75kVSb65uOt9rLrdS0bxA\n> > WTClNjlLKG3Rk1IGSFBm+C0p8lcA3AYTohHLhHB3q+WeLTneI5lJfwpo2AWyinQt\n> > 0k/1r6EwpUk=\n> > =+skX\n> > -----END PGP SIGNATURE-----\n> > \n> [ Attachment, skipping... ]\n> \n> \n> \n\n",
"msg_date": "Sat, 7 Oct 2000 19:15:48 -0400 (EDT)",
"msg_from": "Alex Pilosov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patch for TNS services"
},
{
"msg_contents": "Patch applied. Can you send me the SGML diff? I will merge them in.\n\n\n> I've now prepared a polished and clean patch vs. 7.0.2. Who's gonna integrate \n> this patch in the CVS? I've no CVS access.\n> \n> The docs are another problem. I've installed jade and most other SGML stuff \n> here, but \"make user.html\" fails with errors like :\n> \n> jade:user.sgml:5:55:W: cannot generate system identifier for public text \n> \"-//OASIS//DTD Dojade:user.sgml:41:0:E: reference to entity \"BOOK\" for which \n> no system identifier could be\n> jade:user.sgml:5:0: entity was defined here\n> jade:user.sgml:41:0:E: DTD did not contain element declaration for document \n> type name \n> \n> The patch is included as attachement (159 lines).\n> \n> \n> The patch is included\n> \n> Am Tue, 12 Sep 2000 schrieben Sie:\n> > Sounds like people want it. Can you polish it off, add SGML docs and\n> > send it over?\n> >\n> > > -----BEGIN PGP SIGNED MESSAGE-----\n> > >\n> > > Last week I created a patch for the Postgres client side libraries to\n> > > allow something like a (not so mighty) form of Oracle TNS, but nobody\n> > > showed any interest. Currently, the patch is not perfect yet, but works\n> > > fine for us. I want to avoid improving the patch if there is no interest\n> > > in it, so if you think it might be a worthy improvement please drop me a\n> > > line.\n> > >\n> > > It works like this:\n> > > The patch allows to supply another parameter to the Postgres connect\n> > > string, called \"service\". So, instead of having a connect string (e.g. in\n> > > PHP) like \"dbname=foo host=bar port=5433 user=foouser password=barpass\"\n> > > the string would be\n> > > \"service=stupid_name_here\"\n> > > or more often\n> > > \"service=stupid_name_here user=foouser password=barpass\"\n> > >\n> > > There's a config file /etc/pg_service.conf, having an entry like:\n> > > [stupid_name_here]\n> > > dbname=foo\n> > > host=bar\n> > > port=5433\n> > > ....\n> > >\n> > > The advantage is you can go from one database host, database, port or\n> > > whatever without having to touch the scripts or applications. We're\n> > > currently in the process of migrating all of our PHP and Python scripts\n> > > to another from localhost, port 5433 to another machine, port 5432 and\n> > > it's not something I ever want to do again, I'd to change around 100\n> > > files and I'm still not sure if I've missed one.\n> > >\n> > > The patch is client-side only, around 100 lines, needs no changes to the\n> > > backend and is compatible with all applications supplying a connection\n> > > string (not using PQsetdblogin)\n> > >\n> > > - --\n> > > Why is it always Segmentation's fault?\n> > > -----BEGIN PGP SIGNATURE-----\n> > > Version: 2.6.3i\n> > > Charset: noconv\n> > >\n> > > iQCVAwUBOa1MsQotfkegMgnVAQEIsAP+Na72pNdT+RoQcjuX5cn1TKkPlNAh9BV5\n> > > kCNP+Zui6WfZSiA8RYPuruXF0QyEMPZZD6AI9Wqr5sQ75kVSb65uOt9rLrdS0bxA\n> > > WTClNjlLKG3Rk1IGSFBm+C0p8lcA3AYTohHLhHB3q+WeLTneI5lJfwpo2AWyinQt\n> > > 0k/1r6EwpUk=\n> > > =+skX\n> > > -----END PGP SIGNATURE-----\n> >\n> > [ Attachment, skipping... ]\n> \n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 21:01:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Patch for TNS services"
}
] |
[
{
"msg_contents": "I've just been asked how to store unicode text in a postgresql database. The\nproblem as I understand it is that unicode strings may contain binary 0s\nwhich might break string handling. Since I never tried, I think it's better\nto ask here before answering that question. \n\nThe application uses ecpg so it's not only a backend question.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 30 Aug 2000 16:00:10 -0700",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to store unicode?"
},
{
"msg_contents": "> I've just been asked how to store unicode text in a postgresql database. The\n> problem as I understand it is that unicode strings may contain binary 0s\n> which might break string handling. Since I never tried, I think it's better\n> to ask here before answering that question. \n\nI guess you are talking about UCS encoding. There is another encoding\nfor Unicode, called UTF-8. It does not 0s, so you could use it with\nPostgreSQL. Actually we have some unicode(utf-8) regression tests in\nsrc/test/mb and they seem working.\n\n> The application uses ecpg so it's not only a backend question.\n\nI guess ecpg is ok as long as using UTF-8.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 31 Aug 2000 08:40:12 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: How to store unicode?"
}
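A minimal sketch of the UTF-8 route, assuming the server was built with multibyte support (--enable-multibyte); the database and table names are invented:

    CREATE DATABASE utf8test WITH ENCODING = 'UNICODE';
    -- UTF-8 never uses a zero byte for any character, so ordinary
    -- text columns and null-terminated C strings stay safe:
    CREATE TABLE t (s text);
    INSERT INTO t VALUES ('Grüße');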
] |
[
{
"msg_contents": "\nAs per my previous post on this matter:\nhttp://www.postgresql.org/mhonarc/pgsql-general/2000-08/msg00326.html\n\n... somebody responded to me in private (IIRC - don't remember the name,\nand can't find it in the archives), saying that this had been fixed in 7.1\n- I assumed this meant it already was in CVS, and as it turns out, that's\nnot the case. Having gotten latest (CVS as of a few hours ago) working and\nloaded the database(s), I still get the exact same errors as before... Any\nclues as to if/when this will be fixed? (If it won't, or is of so little\npriority that it's far in the future - a simple \"Won't happen\" will work.)\n\n\nThanks in advance,\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Wed, 30 Aug 2000 23:18:20 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "More about \"CREATE TABLE\" from inside a function/trigger..."
},
{
"msg_contents": "I believe you could do CREATE TABLE from inside a pltcl or plperl\nfunction today. plpgsql won't work because it tries to cache query\nplans for repeated execution --- which essentially means that you\ncan only substitute parameters for data values, not for table names\nor field names or other structural aspects of a query. But the other\ntwo just treat queries as dynamically-generated strings, so you can\ndo anything you want in those languages. (At a performance price,\nof course: no caching. There ain't no such thing as a free lunch.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 Aug 2000 01:23:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More about \"CREATE TABLE\" from inside a function/trigger... "
},
{
"msg_contents": "Tom Lane wrote:\n> I believe you could do CREATE TABLE from inside a pltcl or plperl\n> function today. plpgsql won't work because it tries to cache query\n> plans for repeated execution --- which essentially means that you\n> can only substitute parameters for data values, not for table names\n> or field names or other structural aspects of a query. But the other\n> two just treat queries as dynamically-generated strings, so you can\n> do anything you want in those languages. (At a performance price,\n> of course: no caching. There ain't no such thing as a free lunch.)\n\n You're right - any longer not :-)\n\n I just committed a little patch adding an EXECUTE statement\n to PL/pgSQL. It takes an expression (preferrably resulting\n in a string which is a valid SQL command) and executes it via\n SPI_exec() (no prepare/cache).\n\n It can occur as is, where the querystrings execution via\n SPI_exec() must NOT return SPI_OK_SELECT. Or it can occur\n instead of the SELECT part of a FOR loop, where it's\n execution via SPI_exec() MUST return SPI_OK_SELECT.\n\n Here's the output from a little test:\n\n CREATE TABLE t1 (a integer, b integer, c integer);\n CREATE\n INSERT INTO t1 VALUES (1, 11, 111);\n INSERT 19276 1\n INSERT INTO t1 VALUES (2, 22, 222);\n INSERT 19277 1\n INSERT INTO t1 VALUES (3, 33, 333);\n INSERT 19278 1\n CREATE FUNCTION f1 (name, name) RETURNS integer AS '\n DECLARE\n sumrec record;\n result integer;\n BEGIN\n EXECUTE ''CREATE TEMP TABLE f1_temp (val integer)'';\n EXECUTE ''INSERT INTO f1_temp SELECT '' || $2 ||\n '' FROM '' || $1;\n FOR sumrec IN EXECUTE ''SELECT sum(val) AS sum FROM f1_temp''\n LOOP\n result = sumrec.sum;\n END LOOP;\n EXECUTE ''DROP TABLE f1_temp'';\n RETURN result;\n END;\n ' LANGUAGE 'plpgsql';\n CREATE\n SELECT f1('t1', 'a') AS \"sum t1.a\";\n sum t1.a\n ----------\n 6\n (1 row)\n\n SELECT f1('t1', 'b') AS \"sum t1.b\";\n sum t1.b\n ----------\n 66\n (1 row)\n\n SELECT f1('t1', 'c') AS \"sum t1.c\";\n sum t1.c\n ----------\n 666\n (1 row)\n\n So PL/pgSQL can now execute dynamic SQL including utility\n statements.\n\n Who adds this new feature to the docs? I don't have the jade\n tools installed and don't like to fiddle around in source\n files where I cannot check the results.\n\n I think two little functions for quoting of literals and\n identifiers might be handy. Like\n\n quote_ident('silly \"TEST\" table')\n\n returns '\"silly \"\"TEST\"\" table\"'\n\n so that the querystring build in the above sample can be done\n in a bullet proof way.\n\n Comments?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Thu, 31 Aug 2000 10:07:57 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More about \"CREATE TABLE\" from inside a function/trigger..."
}
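If the quoting helpers Jan proposes existed, the dynamic statement in the f1 example could be made robust against odd relation and column names; a hypothetical rewrite of one line (quote_ident is the suggested, not yet implemented, function):

    EXECUTE ''INSERT INTO f1_temp SELECT '' || quote_ident($2) ||
            '' FROM '' || quote_ident($1);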
] |
[
{
"msg_contents": "Hi,\n\nI wonder what is the status of WAL implementation?\n\n\n",
"msg_date": "Thu, 31 Aug 2000 13:07:08 +0800",
"msg_from": "\"Alexey Raschepkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "WAL"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have seen discussions about iscachable attribute of\nfunctions. Now I'm confused to see a solution in 6.5\n(by Shigeru Matsumoto).\n\n =# explain select * from pg_class where oid=1259;\n\nIndex Scan using pg_class_oid_index on pg_class (cost=0.00..2.01\n\trows=1 width=92) \n\n1) Using non-cachable function f()\n =# create function f(oid) returns oid as\n '\n select $1;\n ' language 'sql';\n =# explain select * from pg_class where oid=f(1259);\n\n Seq Scan on pg_class (cost=0.00..3.17 rows=1 width=92) \n\nSeems reasonable.\n\n2) Using select f() \n =# explain select * from pg_class where oid=(select f(1259)); \n\n Index Scan using pg_class_oid_index on pg_class (cost=0.00..2.01\n\trows=1 width=92)\n InitPlan\n -> Result (cost=0.00..0.00 rows=0 width=0) \n\nThis is the result in my current environment.\nHmm,what's the difference between 1) and 2) ?\n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Thu, 31 Aug 2000 17:10:18 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "func() & select func()"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> 1) Using non-cachable function f()\n> =# explain select * from pg_class where oid=f(1259);\n> Seq Scan on pg_class (cost=0.00..3.17 rows=1 width=92) \n\n> 2) Using select f() \n> =# explain select * from pg_class where oid=(select f(1259)); \n> Index Scan using pg_class_oid_index on pg_class (cost=0.00..2.01\n> \trows=1 width=92)\n> InitPlan\n-> Result (cost=0.00..0.00 rows=0 width=0) \n\nThe sub-select is reduced to an initplan --- ie, executed only once,\nnot once per row --- because it has no dependency on the outer select.\n\nCurrently we do not consider the presence of noncachable functions as\na reason that prevents reducing a subplan to an initplan. I thought\nabout it but didn't like the performance penalty. It seems to me that\nit's debatable which is the correct semantics, anyway. Arguably an\nouter select *should* assume that a parameterless inner select yields\nconstant results --- if you don't assume that then it makes no sense\nto do joins over the results of sub-SELECTs in FROM, which is a feature\nrequired by full SQL...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 Aug 2000 10:37:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: func() & select func() "
},
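The practical consequence is easiest to see with a noncachable function such as random(); a sketch, assuming a table foo with a float8 column r:

    SELECT * FROM foo WHERE r < random();           -- evaluated per row
    SELECT * FROM foo WHERE r < (SELECT random());  -- sub-select becomes an
                                                    -- initplan: one value for
                                                    -- the whole scan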
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > 1) Using non-cachable function f()\n> > =# explain select * from pg_class where oid=f(1259);\n> > Seq Scan on pg_class (cost=0.00..3.17 rows=1 width=92) \n> \n> > 2) Using select f() \n> > =# explain select * from pg_class where oid=(select f(1259)); \n> > Index Scan using pg_class_oid_index on pg_class (cost=0.00..2.01\n> > \trows=1 width=92)\n> > InitPlan\n> -> Result (cost=0.00..0.00 rows=0 width=0) \n> \n> The sub-select is reduced to an initplan --- ie, executed only once,\n> not once per row --- because it has no dependency on the outer select.\n> \n> Currently we do not consider the presence of noncachable functions as\n> a reason that prevents reducing a subplan to an initplan. I thought\n> about it but didn't like the performance penalty. It seems to me that\n> it's debatable which is the correct semantics, anyway. Arguably an\n\nAfter a little thought,it seems to me that the behavior of the subquery\nis more reasonable than current evaluation of functions.\n\nUnder MVCC,SELECT returns the content of a database at the time\nwhen the query started no matter how long time it takes to return\nthe resultset.\nShouldn't functions be evaluated once(per the combination of parameters)\nat the time when a query started ?\n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Fri, 1 Sep 2000 11:36:53 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: func() & select func() "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> Currently we do not consider the presence of noncachable functions as\n>> a reason that prevents reducing a subplan to an initplan. I thought\n>> about it but didn't like the performance penalty. It seems to me that\n>> it's debatable which is the correct semantics, anyway. Arguably an\n\n> Shouldn't functions be evaluated once(per the combination of parameters)\n> at the time when a query started ?\n\nI don't think I want to buy into guaranteeing that, either. In the\nfirst place, that makes it impossible to get a random sampling of your\ndata by methods like\n\tSELECT * FROM foo WHERE random() < 0.01;\nNow admittedly this is a little bit flaky (not least because you'd get\ninconsistent results if you tried to use a cursor to scan the output\nmultiple times) but I think it's useful enough to not want to break it.\nEspecially not when I just recently put a few hours into making the\noptimizer treat this case correctly ;-)\n\nIn the second place, to guarantee evaluate-once behavior with a\nnoncooperative function, you'd have to actually maintain a cache of\nfunction parameter sets and result values and consult it before making\na function call. That we certainly don't want to do. So I think the\nexisting distinction between cachable and noncachable functions is\nappropriate. The question is whether the presence of noncachable\nfunctions should propagate out to cause the whole containing SELECT\nto be treated as noncachable.\n\nI think you could probably argue that either way, and could invent\nexamples favoring either choice. Without a compelling reason to change\nthe behavior, I'm inclined to leave it as is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Sep 2000 09:17:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: func() & select func() "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n>\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> Currently we do not consider the presence of noncachable functions as\n> >> a reason that prevents reducing a subplan to an initplan. I thought\n> >> about it but didn't like the performance penalty. It seems to me that\n> >> it's debatable which is the correct semantics, anyway. Arguably an\n>\n> > Shouldn't functions be evaluated once(per the combination of parameters)\n> > at the time when a query started ?\n>\n> I don't think I want to buy into guaranteeing that, either.\n\nI'm still confused and now suspicious if we could expect\nunambiguous results for the queries which constain function\ncalls which cause strong side effect.\nI think there are 2 ways.\n\n1) Function calls with strong side effect should be inhibited\n except the simple procedure call query \"select func()\".\n Seems Oracle has a similar restriction(I don't know details\n sorry).\n\n2) Users are responsible for calling functions without strong side\n effect. Optimizer could freely change the order of evaluation\n and cache the funtion result.\n\n> In the\n> first place, that makes it impossible to get a random sampling of your\n> data by methods like\n> \tSELECT * FROM foo WHERE random() < 0.01;\n\nI don't understand what we should expect for the query.\nRandom sampling may be useful but it doesn't necessarily mean\nproper. Shouldn't we make random() an exception by adding\nanother attribute for it if we expect random sampling ?\n\nBTW for the query\n SELECT * FROM foo where random() < 0.01 and id < 100;\n\nIs random() called for each row or for rows which satisfy id < 100 ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Mon, 4 Sep 2000 09:51:16 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: func() & select func() "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I'm still confused and now suspicious if we could expect\n> unambiguous results for the queries which constain function\n> calls which cause strong side effect.\n\nSo far we have not talked about functions that actually have side\neffects, just about how predictable the result of a side-effect-free\nfunction is. It would be a serious mistake to constrain our handling\nof side-effect-free functions on the basis of what's needed to make\nside-effects predictable.\n\nAt the moment I do not care at all about how predictable side-effects\nare --- I think that that's up to the user to deal with. We have seen\nfew if any complaints about misoptimization of nextval(), even though\nit's theoretically been possible to have a problem with it for a long\ntime. For example, in\n\tSELECT (column > 1) OR (nextval('seq') > 100) FROM ...\nI believe it's been true forever that nextval won't be evaluated at\nevery column, but how many people complain? Saying that the behavior\nis implementation-defined seems fine to me.\n\n> Random sampling may be useful but it doesn't necessarily mean\n> proper. Shouldn't we make random() an exception by adding\n> another attribute for it if we expect random sampling ?\n\nMaybe. Right now we don't distinguish random() from other functions\nthat are considerably more predictable, like now(). Perhaps it'd be\nworthwhile to recognize more levels of function predictability.\nnow() could be classified as \"fixed result during one transaction\",\nsince I believe it gives back the time of the start of the current\nxact. But I'm not sure it's worth worrying about just for now(). The\nhard part would be figuring out a reasonable way to describe functions\nthat consult database tables --- those are fixed within a transaction\nonly if the tables they read don't change, but is it worth trying to\ndescribe that? If so, how?\n\n> BTW for the query\n> SELECT * FROM foo where random() < 0.01 and id < 100;\n\n> Is random() called for each row or for rows which satisfy id < 100 ?\n\nGood question. I think it'd be a mistake to specify a single answer for\nthat. For this particular application, the user wouldn't care anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Sep 2000 22:16:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: func() & select func() "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I'm still confused and now suspicious if we could expect\n> > unambiguous results for the queries which constain function\n> > calls which cause strong side effect.\n> \n> So far we have not talked about functions that actually have side\n> effects, just about how predictable the result of a side-effect-free\n> function is. It would be a serious mistake to constrain our handling\n> of side-effect-free functions on the basis of what's needed to make\n> side-effects predictable.\n> \n> At the moment I do not care at all about how predictable side-effects\n> are --- I think that that's up to the user to deal with. We have seen\n> few if any complaints about misoptimization of nextval(), even though\n> it's theoretically been possible to have a problem with it for a long\n> time. For example, in\n> \tSELECT (column > 1) OR (nextval('seq') > 100) FROM ...\n> I believe it's been true forever that nextval won't be evaluated at\n> every column, but how many people complain? Saying that the behavior\n> is implementation-defined seems fine to me.\n>\n\nAgreed.\nIt seems too painful for optimizer to care about side-effects.\n \n[snip]\n\n> xact. But I'm not sure it's worth worrying about just for now(). The\n> hard part would be figuring out a reasonable way to describe functions\n> that consult database tables --- those are fixed within a transaction\n> only if the tables they read don't change, but is it worth trying to\n> describe that? If so, how?\n>\n\nAs to database lookup functions,we could expect fixed results for\none query. MVCC mechanism guarantees it and it's never a trivial\nfact. However strictly speaking functions in a query may see the\nchange by the query itself. The change could be caused by functions\nwhich insert/update/delete.\n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Mon, 4 Sep 2000 18:11:19 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: func() & select func() "
}
] |
[
{
"msg_contents": "\n> Tom Lane wrote:\n> > \n> > Mark's patch is OK as is, since it's merely relocating some poorly\n> > written code and not trying to fix it, but someone ought to think\n> > about fixing the code.\n> > \n> \n> I'll take a crack at it.\n> \n> Just out of curiousity, is there technical reason there isn't\n> a (say) relisview attribute to pg_class?\n\nI have been arguing ages for a different relkind for real views\nthat are created with \"create view ...\". This would obliviate the\nactual file that is currently needed for each view too.\n\nImho that would be a better solution than an additional flag.\n\nAndreas \n",
"msg_date": "Thu, 31 Aug 2000 15:51:22 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Backend-internal SPI operations"
}
] |
[
{
"msg_contents": "\n> The problem here is, that the relkind must change at rule\n> creation/drop time. Fortunately rules on SELECT are totally\n> restricted to VIEW's since 6.4, and I don't see any reason to\n> change this.\n\nI don't see why a real view should still be createable by the old\ncreate table then create rule way. Then the relkind would never \nneed to change. The only place that would need correction is \npg_dump to dump create view statements.\n\nImho we should not limit select rules to views by design.\nWith a different relkind for views this is not necessary anyway.\n\nAndreas\n",
"msg_date": "Thu, 31 Aug 2000 16:07:42 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Backend-internal SPI operations"
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> I don't see why a real view should still be createable by the old\n> create table then create rule way.\n\nBecause we'll need to be able to read dump files created by existing\nversions of pg_dump. If we don't cope with CREATE TABLE + CREATE RULE\nthen restored-from-dump views won't really be views and will act\ndifferently from freshly created views. Avoiding the resulting\nsupport headaches is worth a little bit of ugliness in the code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 Aug 2000 10:19:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Backend-internal SPI operations "
},
{
"msg_contents": "On Thu, Aug 31, 2000 at 10:19:36AM -0400, Tom Lane wrote:\n> Zeugswetter Andreas SB <[email protected]> writes:\n> > I don't see why a real view should still be createable by the old\n> > create table then create rule way.\n> \n> Because we'll need to be able to read dump files created by existing\n> versions of pg_dump. If we don't cope with CREATE TABLE + CREATE RULE\n> then restored-from-dump views won't really be views and will act\n> differently from freshly created views. Avoiding the resulting\n> support headaches is worth a little bit of ugliness in the code.\n> \n\nSo, this'd be a one release only sort of hack, for compatability? It'd\nbe born deprecated? Hmm, is someone keeping track of all the features\nthat have been deprecated, so they can be stripped out later?\n\nAlso, sounds like something for Lamar's future pg_upgrade rewrite.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Thu, 31 Aug 2000 09:53:04 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Backend-internal SPI operations"
}
] |
[
{
"msg_contents": "\n> Zeugswetter Andreas SB <[email protected]> writes:\n> > I don't see why a real view should still be createable by the old\n> > create table then create rule way.\n> \n> Because we'll need to be able to read dump files created by existing\n> versions of pg_dump. If we don't cope with CREATE TABLE + CREATE RULE\n> then restored-from-dump views won't really be views and will act\n> differently from freshly created views. Avoiding the resulting\n> support headaches is worth a little bit of ugliness in the code.\n\nWell, since an on select do instead rule on a whole table is offhand \nonly good for a view I do agree. (At least I can't think of another use)\n \nThe only worry I have is that it should not destroy the possibility for real\n\non select do instead rules on single attributes.\n\nAndreas\n",
"msg_date": "Thu, 31 Aug 2000 17:00:59 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: Backend-internal SPI operations "
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n>\n> > Zeugswetter Andreas SB <[email protected]> writes:\n> > > I don't see why a real view should still be createable by the old\n> > > create table then create rule way.\n> >\n> > Because we'll need to be able to read dump files created by existing\n> > versions of pg_dump. If we don't cope with CREATE TABLE + CREATE RULE\n> > then restored-from-dump views won't really be views and will act\n> > differently from freshly created views. Avoiding the resulting\n> > support headaches is worth a little bit of ugliness in the code.\n>\n> Well, since an on select do instead rule on a whole table is offhand\n> only good for a view I do agree. (At least I can't think of another use)\n>\n> The only worry I have is that it should not destroy the possibility for real\n>\n> on select do instead rules on single attributes.\n\nAndreas,\n\n The rule system we got from Berkeley was \"brittle and broken\"\n (as the original docs say).\n\n 6.4 was the first release with a rule system that was useful\n for things other than views.\n\n With 6.4 I restricted rules ON SELECT to:\n\n - must be INSTEAD\n - must NOT be conditional\n - must have exactly one SELECT query as action\n - the action must return a targetlist identical to the\n table layout they are fired for\n - must be named \"_RET<tablename>\".\n\n Yes, that is a VIEW, a whole VIEW and nothing but a VIEW, so\n help me God.\n\n We don't have any other ON SELECT rules than views - for\n years now. And I'm not sure if the concept of single\n attribute rules ON SELECT could ever be implemented in the\n rewriter. It's already complicated enough with view rules,\n and I remember the sleepless nights well enough not to try.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Thu, 31 Aug 2000 11:26:14 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Backend-internal SPI operations"
}
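Spelled out, those restrictions mean the only legal ON SELECT rule is exactly what CREATE VIEW would have generated; a hand-written equivalent, with invented names:

    CREATE TABLE myview (a int4, b int4);
    CREATE RULE "_RETmyview" AS ON SELECT TO myview DO INSTEAD
        SELECT a, b FROM mytable;
    -- unconditional, INSTEAD, a single SELECT whose targetlist matches
    -- the table layout, named "_RET<tablename>" -- in short, a view.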
] |
[
{
"msg_contents": "> I wonder what is the status of WAL implementation?\n\nTesting is still scheduled for Oct 1 or so.\n\nVadim\n",
"msg_date": "Thu, 31 Aug 2000 09:07:02 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: WAL"
},
{
"msg_contents": "Is there any way to take a look at the code?\nIs it in the CVS ?\n\n\"Mikheev, Vadim\" wrote:\n\n> > I wonder what is the status of WAL implementation?\n>\n> Testing is still scheduled for Oct 1 or so.\n>\n> Vadim\n\n",
"msg_date": "Wed, 06 Sep 2000 21:59:10 +0800",
"msg_from": "\"Alexey Raschepkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL"
}
] |
[
{
"msg_contents": "\nDo we have any kind of bitwise AND? \n\n select foo from bar where (foo AND 2); \n\nDoesn't work with ints.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 31 Aug 2000 12:44:33 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "bitwise AND?"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> Do we have any kind of bitwise AND?\n\nNon directly.\n \n> select foo from bar where (foo AND 2);\n\nOk, you _can_ do this; it's just a big pain to do so.\n\nFirst, decompose any bitwise AND into TESTBIT and logical AND\noperators. TESTBIT is the same as AND, just having a single bit set.\n\nIn your example, AND 2 is already a single bit. But, I'm going to\nillustrate the general solution: suppose we have AND 7 -- decompose into\n(foo TESTBIT 4) AND (foo TESTBIT 2) AND (foo TESTBIT 1).\n\nNow, rewrite that to:\n((foo % 8)/4)*((foo % 4)/2)*((foo % 2)/1) -- if the result is greater\nthan zero, the logical AND is true.\n\nFor bitwise OR, substitute integer + for integer * above.\n\nFor your case, write the query:\n\nselect foo from bar where ((foo % 4)/2)>0\n\nAFAIK and have tested, that should work the way you think it should. (I\nknew those exercise in Z80 machine language would come in handy! :-))\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 31 Aug 2000 14:47:50 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bitwise AND?"
}
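A quick sanity check of the modulo trick against the original query, with invented sample data (valid for non-negative integers):

    CREATE TABLE bar (foo int4);
    INSERT INTO bar VALUES (1);
    INSERT INTO bar VALUES (2);
    INSERT INTO bar VALUES (3);
    INSERT INTO bar VALUES (6);
    -- "foo AND 2", i.e. is bit 1 set?
    SELECT foo FROM bar WHERE ((foo % 4) / 2) > 0;  -- returns 2, 3 and 6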
] |
[
{
"msg_contents": "NB: I will be on vacation from 1-Sep to 5-Sep\n\nOn the patches list I sent the following:\n-----------------------------------------\n\nThis patch implements a different \"relkind\"\nfor views. Views are now have a \"relkind\" of\nRELKIND_VIEW instead of RELKIND_RELATION.\n\nAlso, views no longer have actual heap storage\nfiles.\n\nThe follow changes were made\n\n1. CREATE VIEW sets the new relkind\n\n2. The executor complains if a DELETE or\n\tINSERT references a view.\n\n3. DROP RULE complains if an attempt is made\n\tto delete a view SELECT rule.\n\n4. CREATE RULE \"_RETmytable\" AS ON SELECT TO mytable DO INSTEAD ...\n\t1. checks to make sure mytable is empty.\n\t2. sets the relkind to RELKIND_VIEW.\n\t3. deletes the heap storage files.\n\n5. LOCK myview is not allowed. :)\n\n\n6. the regression test type_sanity was changed to\n\taccount for the new relkind value.\n\n7. CREATE INDEX ON myview ... is not allowed.\n\n8. VACUUM myview is not allowed.\n\tVACUUM automatically skips views when do the entire\n\tdatabase.\n\n9. TRUNCATE myview is not allowed.\n\n\nTHINGS LEFT TO THINK ABOUT\n\no pg_views\n\no pg_dump\n\no pgsql (\\d \\dv)\n\no Do we really want to be able to inherit from views?\n\no Is 'DROP TABLE myview' OK?\n\n\n-- \nMark Hollomon\[email protected]\n\n",
"msg_date": "Thu, 31 Aug 2000 12:56:03 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "new relkind for view"
},
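The visible effect of the patch, sketched as expected behaviour rather than actual session output (names invented):

    CREATE TABLE mytable (a int4);
    CREATE VIEW myview AS SELECT a FROM mytable;  -- relkind becomes RELKIND_VIEW,
                                                  -- no heap file is created
    INSERT INTO myview VALUES (1);                -- now rejected by the executor
    VACUUM myview;                                -- rejected
    LOCK myview;                                  -- rejected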
{
"msg_contents": "Applied.\n\n> NB: I will be on vacation from 1-Sep to 5-Sep\n> \n> On the patches list I sent the following:\n> -----------------------------------------\n> \n> This patch implements a different \"relkind\"\n> for views. Views are now have a \"relkind\" of\n> RELKIND_VIEW instead of RELKIND_RELATION.\n> \n> Also, views no longer have actual heap storage\n> files.\n> \n> The follow changes were made\n> \n> 1. CREATE VIEW sets the new relkind\n> \n> 2. The executor complains if a DELETE or\n> \tINSERT references a view.\n> \n> 3. DROP RULE complains if an attempt is made\n> \tto delete a view SELECT rule.\n> \n> 4. CREATE RULE \"_RETmytable\" AS ON SELECT TO mytable DO INSTEAD ...\n> \t1. checks to make sure mytable is empty.\n> \t2. sets the relkind to RELKIND_VIEW.\n> \t3. deletes the heap storage files.\n> \n> 5. LOCK myview is not allowed. :)\n> \n> \n> 6. the regression test type_sanity was changed to\n> \taccount for the new relkind value.\n> \n> 7. CREATE INDEX ON myview ... is not allowed.\n> \n> 8. VACUUM myview is not allowed.\n> \tVACUUM automatically skips views when do the entire\n> \tdatabase.\n> \n> 9. TRUNCATE myview is not allowed.\n> \n> \n> THINGS LEFT TO THINK ABOUT\n> \n> o pg_views\n> \n> o pg_dump\n> \n> o pgsql (\\d \\dv)\n> \n> o Do we really want to be able to inherit from views?\n> \n> o Is 'DROP TABLE myview' OK?\n> \n> \n> -- \n> Mark Hollomon\n> [email protected]\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Sep 2000 00:49:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: new relkind for view"
},
{
"msg_contents": "At 00:49 12/09/00 -0400, Bruce Momjian wrote:\n>> \n>> o pg_dump\n>> \n\nI've got to fix up a few things in pg_dump, so I'll try to do this as well...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 12 Sep 2000 16:02:56 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: new relkind for view"
},
{
"msg_contents": "\nIn a continuing effort to make pg_dump produce valid SQL where possible, I\nwould like to move away from the 'Create Table'/'Create Rule' method of\ndefining views, and actually dump the 'Create View' command.\n\nThis seems quite do-able, but before I start I thought I would ask if there\nwere any reasons people could think of for not doing this?\n\nThe approach will be:\n\n- when getting table info, also call pg_get_viewdef (which returns 'not a\nview' for non-view relations).\n\n- When dumping rules, ignore all 'view rules'. \n\n- Dump the 'Create View' statement in oid order as per normal.\n\nIt would be really nice if there was a simple way of detecting view rules\nthat was analagous to relkind. Is there a reason why this has not been\ndone? Has it been done?\n\nMaybe the code that checks the rule name to see if it is a view could be\nput in the backend?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 14 Sep 2000 22:16:55 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Dumping views as views?"
},
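The backend already knows how to reconstruct a view's defining query, so the dump logic reduces to something along these lines (an illustrative sketch, not the final pg_dump code):

    SELECT relname, pg_get_viewdef(relname)
      FROM pg_class
     WHERE relkind = 'v';
    -- for each row, emit:  CREATE VIEW <relname> AS <viewdef>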
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> - when getting table info, also call pg_get_viewdef (which returns 'not a\n> view' for non-view relations).\n\nHuh? Just use the relkind to detect views.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Sep 2000 11:45:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dumping views as views? "
},
{
"msg_contents": "At 11:45 14/09/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> - when getting table info, also call pg_get_viewdef (which returns 'not a\n>> view' for non-view relations).\n>\n>Huh? Just use the relkind to detect views.\n\nSorry; what the above means is that I will get the views when I get the\ntables, and call 'pg_get_viewdef' in all cases. I will check relkind when I\ncome to dump the 'table'.\n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 15 Sep 2000 11:01:24 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dumping views as views? "
}
] |
[
{
"msg_contents": "\nHi, I'm using 7.0.2 and I have a problem with UNION in some subselects,\nnamely the following works:\n\tselect * from a where a.f in (select 1)\nbut this doesn't:\n\tselect * from a where a.f in (select 1 union select 2)\n\nIn the grammar we have :\n in_expr: SubSelect | ...\nbut SubSelect doesn't allow UNIONs, only select_clause does.\n\nCould in_expr be changed to use select_clause instead without adverse\nill effects ?\n\nIn fact SubSelect is used all over the place, so maybe it's better to\nswitch select_clause and SubSelect in the definitions for SelectStmt,\nselect_clause and SubSelect ?\n\n\nThanks,\nFlorent\n\n(Please Cc: me in your answers)\n",
"msg_date": "Thu, 31 Aug 2000 19:50:45 +0200",
"msg_from": "Florent Guillaume <[email protected]>",
"msg_from_op": true,
"msg_subject": "UNION/INTERSECT in subselects"
},
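Until the grammar and planner handle this, a workaround sketch is to split the UNION across separate predicates:

    -- instead of:  select * from a where a.f in (select 1 union select 2)
    select * from a where a.f in (select 1) or a.f in (select 2);
    -- or, when the branches are plain literals:
    select * from a where a.f in (1, 2);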
{
"msg_contents": "Florent Guillaume <[email protected]> writes:\n> In the grammar we have :\n> in_expr: SubSelect | ...\n> but SubSelect doesn't allow UNIONs, only select_clause does.\n\n> Could in_expr be changed to use select_clause instead without adverse\n> ill effects ?\n\nUnfortunately the problems with union/intersect/except go a lot deeper\nthan the grammar. Take a look at the rewriter and the planner, if\nyou have a strong stomach. They're just not built to deal with these\nconstructs except at the top level of a query. (The executor would\nlikely work just fine, if only the upstream modules would give it a\nvalid plan ...)\n\nI'm hoping to see this stuff cleaned up during the much-talked-of\nquerytree redesign that we plan for the 7.2 cycle. AFAICS there is\nno way to fix it without some pretty serious hacking on the querytree\nrepresentation of union etc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Sep 2000 01:27:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UNION/INTERSECT in subselects "
}
] |
[
{
"msg_contents": "Thanks for your suggestions, though I've already\nconsidered most of them. (I have a detailed reply\nbelow, interleaved with your mail). \n\nI am considering an option but would need help from\nsomebody who knows how the backend works to be able to\nfigure out if any of the following options would help.\nConsider the scenario of a database with say 3 tables,\nand atleast 3 concurrent writers to all the tables\ninserting different records. Which of the three\noptions would be expected to perform better ? (I am\nusing JDBC, I dont know if that is relevant)\n\n1. Having a different Connection per writer\n2. Having a different Connection per table\n3. Having a single Connection which performs the 3\ntransactions sequentially.\n\nI was trying out some tests to decide between option 1\n& option 2 , but did not get any conclusive results.\n\nWould be helpful to get some suggestions on the same.\n\nThanks,\nRini\n\n--- Mitch Vincent <[email protected]> wrote:\n> Removing indexes will speed up the INSERT portion\n> but slow down the SELECT\n> portion.\nI cannot remove indexes since there may be other\nqueries to these tables at the same time when I am\ndoing the inserts.\n\n> Just an FYI, you can INSERT into table (select\n> whatever from another\n> table) -- you could probably do what you need in a\n> single query (but would\n> also probably still have the speed problem).\nI have not spent time on it but I could not figure out\nhow to have an insert statement such that one of the\nattributes (only) is a result of a select from another\ntable. I would be interested in knowing if there is a\nway to do that.\n\n> Have you EXPLAINed the SELECT query to see if index\n> scans are being used\n> where possible?\nYes, the index scans are being used\n\n> -Mitch\n> \n> ----- Original Message -----\n> From: \"Rini Dutta\" <[email protected]>\n> To: <[email protected]>\n> Cc: <[email protected]>\n> Sent: Friday, August 25, 2000 12:20 PM\n> Subject: [SQL] queries and inserts\n> \n> \n> > Hi,\n> >\n> > I am interested in how to speed up storage. About\n> 1000\n> > or more inserts may need to be performed at a time\n> ,\n> > and before each insert I need to look up its key\n> from\n> > the reference table. So each insert is actually a\n> > query followed by an insert.\n> >\n> > The tables concerned are :\n> > CREATE TABLE referencetable(idx serial, rcol1 int4\n> NOT\n> > NULL, rcol2 int4 NOT NULL, rcol3 varchar(20) NOT\n> > NULL, rcol4 varchar(20), PRIMARY KEY(idx) ...\n> > CREATE INDEX index_referencetable on\n> > referencetable(rcol1, rcol2, rcol3, rcol4);\n> >\n> > CREATE TABLE datatable ( ref_idx int4,\n> > start_date_offset int4 NOT NULL, stop_date_offset\n> int4\n> > NOT NULL, dcol4 float NOT NULL, dcol5 float NOT\n> NULL,\n> > PRIMARY KEY(ref_idx, start_date_offset),\n> CONSTRAINT c1\n> > FOREIGN KEY(ref_idx) REFERENCES\n> referencetable(idx) );\n> >\n> > I need to do the following sequence n number of\n> times\n> > -\n> > 1. select idx (as key) from referencetable where\n> > col1=c1 and col2=c2 and col3=c3 and col4=c4;\n> (Would an\n> > initial 'select into temptable' help here since\n> for a\n> > large number of these queries 'c1' and 'c2'\n> > comnbinations would remain constant ?)\n> > 2. insert into datatable values(key, ....);\n> >\n> > I am using JDBC interface of postgresql-7.0.2 on\n> > Linux. 'referencetable' has about 1000 records, it\n> can\n> > keep growing. 'datatable' has about 3 million\n> records,\n> > it would grow at a very fast rate. 
Storing 2000\n> > records takes around 75 seconds after I vacuum\n> > analyze. (before that it took around 40 seconds -\n> ???)\n> > . I am performing all the inserts ( including the\n> > lookup) as one transaction.\n> >\n> > Thanks,\n> > Rini\n> >\n> >\n> > __________________________________________________\n> > Do You Yahoo!?\n> > Yahoo! Mail - Free email you can access from\n> anywhere!\n> > http://mail.yahoo.com/\n> >\n> \n\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! Mail - Free email you can access from anywhere!\nhttp://mail.yahoo.com/\n",
"msg_date": "Thu, 31 Aug 2000 12:22:58 -0700 (PDT)",
"msg_from": "Rini Dutta <[email protected]>",
"msg_from_op": true,
"msg_subject": "optimal performance for inserts "
}
] |
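The single-query form Mitch alludes to is INSERT ... SELECT: the key lookup and the insert become one statement. A sketch against the schema quoted in Rini's message; the literal values standing in for c1..c4 and for the data columns are placeholders, not real parameters:

    INSERT INTO datatable
           (ref_idx, start_date_offset, stop_date_offset, dcol4, dcol5)
    SELECT idx, 0, 3600, 1.0, 2.0
      FROM referencetable
     WHERE rcol1 = 1 AND rcol2 = 2
       AND rcol3 = 'c3' AND rcol4 = 'c4';

If the WHERE clause matches exactly one reference row, exactly one data row is inserted, and the per-record round trip through JDBC disappears.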
[
{
"msg_contents": "Peter Eisentraut wrote:\n> Jan Wieck writes:\n>\n> > And it's time to make more use of the relkind attribute. For\n> > 7.2, when we want to have tuple-set returns for functions, we\n> > might want to have structures as well\n>\n> After the fabled query-tree redesign, couldn't you implement views simply\n> like this:\n>\n> SELECT * FROM my_view;\n>\n> becomes\n>\n> SELECT * FROM (view_definition);\n>\n\n Hmmm, don't know what you mean with that.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Thu, 31 Aug 2000 18:34:24 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Backend-internal SPI operations"
}
] |
[
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> then it becomes\n> SELECT * FROM (SELECT a, b, c FROM my_table);\n> which would presumably be possible with the new query-tree.\n\nRight, that's exactly how we've been planning to fix the problems\nwith grouped views and so forth.\n\nI am beginning to think that this may have to happen for 7.1, else\nview inside ISO-style JOIN constructs aren't going to work right.\nFurther news as it happens...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Sep 2000 09:28:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Backend-internal SPI operations "
},
{
"msg_contents": "Jan Wieck <[email protected]> writes:\n> I suggest let's get what we have out of the door now and\n> don't work under pressure on these complex things.\n\nPressure? I've got a month, I thought. Should be plenty of time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Sep 2000 18:02:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Backend-internal SPI operations "
},
{
"msg_contents": "Jan Wieck <[email protected]> writes:\n> Hmm - too simple - real life is harder. So to what do you\n> expand the query\n\n> SELECT a, c, d FROM my_view, other_table\n> WHERE my_view.a = other_table.a\n> AND other_table.x = 'foo';\n\n SELECT a, c, d\n\t FROM (SELECT a, b, c FROM my_table) AS my_view, other_table\n WHERE my_view.a = other_table.a\n AND other_table.x = 'foo';\n\nI'm still not detecting a problem here ... if selecting from a view\n*doesn't* act exactly like a sub-SELECT, it'd be broken IMHO.\n\nWe're not that far away from being able to do this, and it looks more\nattractive to work on that than to hack the rewriter into an even\ngreater state of unintelligibility ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Sep 2000 18:08:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Backend-internal SPI operations "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Jan Wieck writes:\n>\n> > Hmmm, don't know what you mean with that.\n>\n> If I define a view\n>\n> CREATE my_view AS SELECT a, b, c FROM my_table;\n>\n> and then do\n>\n> SELECT * FROM my_view;\n>\n> then it becomes\n>\n> SELECT * FROM (SELECT a, b, c FROM my_table);\n>\n> which would presumably be possible with the new query-tree.\n\n Hmm - too simple - real life is harder. So to what do you\n expand the query\n\n SELECT a, c, d FROM my_view, other_table\n WHERE my_view.a = other_table.a\n AND other_table.x = 'foo';\n\n And then have a little more complex \"my_view\", maybe a join\n with it's own WHERE clause.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Fri, 1 Sep 2000 17:19:01 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations"
},
{
"msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > then it becomes\n> > SELECT * FROM (SELECT a, b, c FROM my_table);\n> > which would presumably be possible with the new query-tree.\n>\n> Right, that's exactly how we've been planning to fix the problems\n> with grouped views and so forth.\n>\n> I am beginning to think that this may have to happen for 7.1, else\n> view inside ISO-style JOIN constructs aren't going to work right.\n> Further news as it happens...\n\n You really want to start on that NOW?\n\n I suggest let's get what we have out of the door now and\n don't work under pressure on these complex things.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Fri, 1 Sep 2000 17:21:24 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations"
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <[email protected]> writes:\n> > Hmm - too simple - real life is harder. So to what do you\n> > expand the query\n>\n> > SELECT a, c, d FROM my_view, other_table\n> > WHERE my_view.a = other_table.a\n> > AND other_table.x = 'foo';\n>\n> SELECT a, c, d\n> FROM (SELECT a, b, c FROM my_table) AS my_view, other_table\n> WHERE my_view.a = other_table.a\n> AND other_table.x = 'foo';\n>\n> I'm still not detecting a problem here ... if selecting from a view\n> *doesn't* act exactly like a sub-SELECT, it'd be broken IMHO.\n\n I do. The qualification does not restrict the subselect in\n any way. So it'll be a sequential scan - no?\n\n Imagine my_table has 10,000,000 rows and other_table is\n small. With an index on my_table.a and the rewriting we do\n today there's a good chance to end up with index lookups in\n my_table for all the other_table matches of x = 'foo'.\n\n Of course, after all the view must behave like a subselect.\n But please only logical - not physical!\n\n So the hard part of the NEW rewriter will be to detect which\n qualifications can be moved/duplicated down into which\n subselects (tuple sources) to restrict scans.\n\n> We're not that far away from being able to do this, and it looks more\n> attractive to work on that than to hack the rewriter into an even\n> greater state of unintelligibility ...\n\n Then again, let's get 7.1 out as is and do the full querytree\n redesign for 7.2. It looks easy, but I fear it's more or less\n like an iceberg.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Fri, 1 Sep 2000 18:36:59 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backend-internal SPI operations"
},
{
"msg_contents": "Jan Wieck <[email protected]> writes:\n> So the hard part of the NEW rewriter will be to detect which\n> qualifications can be moved/duplicated down into which\n> subselects (tuple sources) to restrict scans.\n\nActually, what I was envisioning was pulling the subselect's guts *up*\ninto the main query (collapsing out the sub-Query node) if the sub-Query\nis simple enough --- that is, no grouping/sorting/aggregates/etc. The\nnice thing about that is we can start with a very simple method that\nonly deals with easy cases. The hard cases will still *work*.\nI consider that an improvement over the current situation, where even\nsimple cases are nightmarishly difficult to implement (as you well know)\nand the hard cases don't work. Worst case is that some\nintermediate-complexity examples might lose performance for a while\nuntil we build up a smart subquery-merging algorithm, but that seems\na price worth paying.\n\n> Then again, let's get 7.1 out as is\n\nHas the release schedule moved up without my knowing about it?\nI don't feel any urgent need to freeze development now...\n\n> and do the full querytree\n> redesign for 7.2. It looks easy, but I fear it's more or less\n> like an iceberg.\n\nThe original reason for this effort was to do a trial balloon that would\ngive us more knowledge about how to do it right for 7.2. The more I get\ninto it, the more I realize what a good idea that was. I'm not sure how\nmuch of what I'm doing now will be completely discarded in the 7.2\ncycle, but I do know that I understand the issues a lot better than\nI did a week ago...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Sep 2000 00:10:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Backend-internal SPI operations "
}
] |
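A concrete before/after for the pull-up Tom describes; column placement follows the thread's examples (my_view carries a, b, c from my_table, while d and x are assumed to live in other_table). Once the simple sub-Query collapses, my_view.a = other_table.a becomes an ordinary join qual on my_table.a, so the index Jan worries about is usable again:

    -- View reference, expanded as a subselect in FROM:
    SELECT a, c, d
      FROM (SELECT a, b, c FROM my_table) AS my_view, other_table
     WHERE my_view.a = other_table.a
       AND other_table.x = 'foo';

    -- After pulling the subselect's guts up into the main query:
    SELECT my_table.a, my_table.c, other_table.d
      FROM my_table, other_table
     WHERE my_table.a = other_table.a
       AND other_table.x = 'foo';

The pull-up is only legal because the inner query has no grouping, sorting, or aggregates; harder sub-Queries stay as subselects, which is exactly the "works, but perhaps slowly" trade-off Tom outlines.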
[
{
"msg_contents": "\nI always thought that an OID was unsigned ...\n\npgsql=# SELECT oid FROM projects WHERE oid < 0; \n oid \n-------------\n -1727061152\n -548634912\n -548593248\n -886806784\n -1001235776\n -1196613696\n -1198068800\n -1228311424\n -1344696224\n -548591776\n -1553984768\n -1554041312\n -1554147456\n -1661653408\n -1662100832\n -548591104\n -1662315872\n -1694490400\n -1694761376\n -1694791904\n -1725658848\n -548590496\n -1725958496\n -1726398208\n -1727061856\n -548589792\n -1992983392\n -2055459232\n -548589376\n -2055475456\n(30 rows)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 1 Sep 2000 10:58:12 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "[7.0.2] Negative OIDs?"
},
{
"msg_contents": "> I always thought that an OID was unsigned ...\n\nIt is. But we cheat and use the int4 i/o routines. There are notes in\nthe sources pointing this out.\n\n - Thomas\n",
"msg_date": "Fri, 01 Sep 2000 16:21:07 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] Negative OIDs?"
},
{
"msg_contents": "Hi all,\nI have a problem starting the postgres database.\nI am using postgres 7.0.2 on a Redhat Linux 6.2. I had to shutdown my computer\nsince it was hanging. I have the postgres start command added to the bootup\nprocess. I have rebooted the Linux machine several times and it was starting\npostgres correctly. But this time it was saying \"Postmaster could not connect to\nunix socket 5432\" . I checked if any other instance of Postmaster was running by\nchecking the process ids. There was no second instance running. Also, I tried to\nstop and restart postgres using\n\"/etc/rc.d/init.d/postgres stop\" and \"/etc/rc.d/init.d/postgres start\" commands.\nIt was saying\n\"Starting Postres [ ]\". If Postgres was really started it will show the process\nid within the square brackets. But this time it did not show the process id. If I\ntry to connect to the database using psql it gives the error message \"Postgres\ncould not be connected to socket 5432\".\n\nPlease let me know if there is a different way of starting postgres.\nThanks,\nNataraj\n\n",
"msg_date": "Fri, 01 Sep 2000 14:05:25 -0500",
"msg_from": "Nataraj <[email protected]>",
"msg_from_op": false,
"msg_subject": "Postgres startup problem"
},
{
"msg_contents": "The problem is that the \"socket\" is actually a file. On my system, this\nfile is \"/tmp/.s.PGSQL.5432\". Logon as root. Take down PostgreSQL. Then\n\"rm\" that file. PostgreSQL should then start up OK.\n\nOn Fri, 1 Sep 2000, Nataraj wrote:\n\n> Hi all,\n> I have a problem starting the postgres database.\n> I am using postgres 7.0.2 on a Redhat Linux 6.2. I had to shutdown my computer\n> since it was hanging. I have the postgres start command added to the bootup\n> process. I have rebooted the Linux machine several times and it was starting\n> postgres correctly. But this time it was saying \"Postmaster could not connect to\n> unix socket 5432\" . I checked if any other instance of Postmaster was running by\n> checking the process ids. There was no second instance running. Also, I tried to\n> stop and restart postgres using\n> \"/etc/rc.d/init.d/postgres stop\" and \"/etc/rc.d/init.d/postgres start\" commands.\n> It was saying\n> \"Starting Postres [ ]\". If Postgres was really started it will show the process\n> id within the square brackets. But this time it did not show the process id. If I\n> try to connect to the database using psql it gives the error message \"Postgres\n> could not be connected to socket 5432\".\n> \n> Please let me know if there is a different way of starting postgres.\n> Thanks,\n> Nataraj\n> \n\n",
"msg_date": "Fri, 1 Sep 2000 14:41:53 -0500 (CDT)",
"msg_from": "John McKown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres startup problem"
},
{
"msg_contents": "Thanks John.\nI will try this when I reconstruct the server from the backup. Just to keep the system\ngoing, I have temporarily reconstructed the entire server from an old Ghost backup. I\nwill bringback the image of the server which had the postgres startup problem because\nI need to recover all the data from the database.\n\nThanks,\nNataraj\n\nJohn McKown wrote:\n\n> The problem is that the \"socket\" is actually a file. On my system, this\n> file is \"/tmp/.s.PGSQL.5432\". Logon as root. Take down PostgreSQL. Then\n> \"rm\" that file. PostgreSQL should then start up OK.\n>\n> On Fri, 1 Sep 2000, Nataraj wrote:\n>\n> > Hi all,\n> > I have a problem starting the postgres database.\n> > I am using postgres 7.0.2 on a Redhat Linux 6.2. I had to shutdown my computer\n> > since it was hanging. I have the postgres start command added to the bootup\n> > process. I have rebooted the Linux machine several times and it was starting\n> > postgres correctly. But this time it was saying \"Postmaster could not connect to\n> > unix socket 5432\" . I checked if any other instance of Postmaster was running by\n> > checking the process ids. There was no second instance running. Also, I tried to\n> > stop and restart postgres using\n> > \"/etc/rc.d/init.d/postgres stop\" and \"/etc/rc.d/init.d/postgres start\" commands.\n> > It was saying\n> > \"Starting Postres [ ]\". If Postgres was really started it will show the process\n> > id within the square brackets. But this time it did not show the process id. If I\n> > try to connect to the database using psql it gives the error message \"Postgres\n> > could not be connected to socket 5432\".\n> >\n> > Please let me know if there is a different way of starting postgres.\n> > Thanks,\n> > Nataraj\n> >\n\n",
"msg_date": "Fri, 01 Sep 2000 15:11:13 -0500",
"msg_from": "Nataraj <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres startup problem"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> I always thought that an OID was unsigned ...\n>\n> It is. But we cheat and use the int4 i/o routines. There are notes in\n> the sources pointing this out.\n\nWe also cheat by using the int4 comparison routines, so sort order is\nnot what it should be ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Sep 2000 17:40:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] Negative OIDs? "
}
] |
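For anyone decoding such listings: the values are ordinary unsigned 32-bit OIDs that crossed 2^31 and were rendered through the signed int4 output routine, so adding 2^32 recovers the real OID. A sketch (the string-to-int8 casts are there because bare literals this large may not parse on older servers):

    SELECT '-1727061152'::int8 + '4294967296'::int8 AS actual_oid;
    -- actual_oid = 2567906144

The same arithmetic also explains Tom's sort-order remark: once printed values wrap negative, the int4 comparison routines order them before the smaller, still-positive OIDs.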
[
{
"msg_contents": "(sorry if you receive this twice)\n\nHi, I'm using 7.0.2 and I have a problem with UNION in some subselects,\nnamely the following works:\n select * from a where a.f in (select 1)\nbut this doesn't:\n select * from a where a.f in (select 1 union select 2)\n\nIn the grammar we have :\n in_expr: SubSelect | ...\nbut SubSelect doesn't allow UNIONs, only select_clause does.\n\nCould in_expr be changed to use select_clause instead without adverse\nill effects ?\n\nIn fact SubSelect is used all over the place, so maybe it's better to\nswitch select_clause and SubSelect in the definitions for SelectStmt,\nselect_clause and SubSelect ?\n\n\nThanks,\nFlorent\n\n(Please Cc: me in your answers)\n\n",
"msg_date": "Fri, 1 Sep 2000 21:19:01 +0200",
"msg_from": "Florent Guillaume <[email protected]>",
"msg_from_op": true,
"msg_subject": "UNION/INTERSECT in subselects"
}
] |
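Until the grammar accepts set operations inside in_expr, the condition can be split so that each branch is a plain SubSelect. A sketch against Florent's table a (column f); for a UNION, membership distributes over OR, while an INTERSECT would distribute over AND:

    -- Accepted by the 7.0.2 grammar, since each IN takes a bare SubSelect:
    SELECT * FROM a
     WHERE a.f IN (SELECT 1)
        OR a.f IN (SELECT 2);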
[
{
"msg_contents": "Am I right in thinking that the WHERE clause of a query must logically\nbe applied *after* any joins specified in the FROM clause?\n\nFor example, suppose that we have table t1 (x int) containing the\nvalues 1, 2, 3, 4, and table t2 (y int) containing the values 1, 2, 4.\nIt's clear that the result of\n\tSELECT * FROM t1 LEFT JOIN t2 ON (x = y);\nshould be\n\tx\ty\n\n\t1\t1\n\t2\t2\n\t3\tNULL\n\t4\t4\n\nBut suppose we make the query\n\tSELECT * FROM t1 LEFT JOIN t2 ON (x = y) WHERE y <> 2;\nIt seems to me this should yield\n\tx\ty\n\n\t1\t1\n\t3\tNULL\n\t4\t4\n\nand not\n\tx\ty\n\n\t1\t1\n\t2\tNULL\n\t3\tNULL\n\t4\t4\n\nwhich is what you'd get if the y=2 tuple were filtered out before\nreaching the left-join stage. Does anyone read the spec differently,\nor get the latter result from another implementation?\n\nThe reason this is interesting is that this example breaks a rather\nfundamental assumption in our planner/optimizer, namely that WHERE\nconditions can be pushed down to the lowest level at which all the\nvariables they mention are available. Thus the planner would normally\napply \"y <> 2\" during its bottom-level scan of t2, which would cause the\nLEFT JOIN to decide that x = 2 is an unmatched value, and thus produce\na \"2 NULL\" output row.\n\nAn even more interesting example is\n\tSELECT * FROM t1 FULL JOIN t2 ON (x = y AND y <> 2);\nMy interpretation is that this should produce\n\tx\ty\n\n\t1\t1\n\t2\tNULL\n\tNULL\t2\n\t3\tNULL\n\t4\t4\nsince both t1's x=2 and t2's y=2 tuple will appear \"unmatched\".\nThis is *not* the same output you'd get from\n\tSELECT * FROM t1 FULL JOIN t2 ON (x = y) WHERE y <> 2;\nwhich I think should yield\n\tx\ty\n\n\t1\t1\n\t3\tNULL\n\t4\t4\nThis shows that JOIN/ON conditions for outer joins are not semantically\ninterchangeable with WHERE conditions.\n\nThis is going to be a bit of work to fix, so I thought I'd better\nconfirm that I'm reading the spec correctly before I dive into it.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Sep 2000 16:47:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "A fine point about OUTER JOIN semantics"
},
{
"msg_contents": "Tom I'd be happy to go back through any specs for second thoughts... do you \nhave a URL to go to?\n\nAt 9/1/2000 04:47 PM -0400, Tom Lane wrote:\n>Am I right in thinking that the WHERE clause of a query must logically\n>be applied *after* any joins specified in the FROM clause?\n>\n>For example, suppose that we have table t1 (x int) containing the\n>values 1, 2, 3, 4, and table t2 (y int) containing the values 1, 2, 4.\n>It's clear that the result of\n> SELECT * FROM t1 LEFT JOIN t2 ON (x = y);\n>should be\n> x y\n>\n> 1 1\n> 2 2\n> 3 NULL\n> 4 4\n>\n>But suppose we make the query\n> SELECT * FROM t1 LEFT JOIN t2 ON (x = y) WHERE y <> 2;\n>It seems to me this should yield\n> x y\n>\n> 1 1\n> 3 NULL\n> 4 4\n>\n>and not\n> x y\n>\n> 1 1\n> 2 NULL\n> 3 NULL\n> 4 4\n>\n>which is what you'd get if the y=2 tuple were filtered out before\n>reaching the left-join stage. Does anyone read the spec differently,\n>or get the latter result from another implementation?\n>\n>The reason this is interesting is that this example breaks a rather\n>fundamental assumption in our planner/optimizer, namely that WHERE\n>conditions can be pushed down to the lowest level at which all the\n>variables they mention are available. Thus the planner would normally\n>apply \"y <> 2\" during its bottom-level scan of t2, which would cause the\n>LEFT JOIN to decide that x = 2 is an unmatched value, and thus produce\n>a \"2 NULL\" output row.\n>\n>An even more interesting example is\n> SELECT * FROM t1 FULL JOIN t2 ON (x = y AND y <> 2);\n>My interpretation is that this should produce\n> x y\n>\n> 1 1\n> 2 NULL\n> NULL 2\n> 3 NULL\n> 4 4\n>since both t1's x=2 and t2's y=2 tuple will appear \"unmatched\".\n>This is *not* the same output you'd get from\n> SELECT * FROM t1 FULL JOIN t2 ON (x = y) WHERE y <> 2;\n>which I think should yield\n> x y\n>\n> 1 1\n> 3 NULL\n> 4 4\n>This shows that JOIN/ON conditions for outer joins are not semantically\n>interchangeable with WHERE conditions.\n>\n>This is going to be a bit of work to fix, so I thought I'd better\n>confirm that I'm reading the spec correctly before I dive into it.\n>\n>Comments?\n>\n> regards, tom lane\n\n",
"msg_date": "Fri, 01 Sep 2000 16:08:51 -0500",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A fine point about OUTER JOIN semantics"
},
{
"msg_contents": "> Am I right in thinking that the WHERE clause of a query must logically\n> be applied *after* any joins specified in the FROM clause?\n...\n> This shows that JOIN/ON conditions for outer joins are not semantically\n> interchangeable with WHERE conditions.\n\nRight. In some cases, an outer join with WHERE restrictions reduces to\nan inner join (so the qualification clauses can be consolidated). Our\noptimizer should be on the lookout for that, at least eventually.\n\n - Thomas\n",
"msg_date": "Sat, 02 Sep 2000 06:20:26 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A fine point about OUTER JOIN semantics"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> In some cases, an outer join with WHERE restrictions reduces to\n> an inner join (so the qualification clauses can be consolidated).\n\nI recall you having muttered something about that before, but I don't\nsee how it works. Can you give an example of an outer join that\nreduces to an inner join?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Sep 2000 12:40:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A fine point about OUTER JOIN semantics "
},
{
"msg_contents": "> > In some cases, an outer join with WHERE restrictions reduces to\n> > an inner join (so the qualification clauses can be consolidated).\n> I recall you having muttered something about that before, but I don't\n> see how it works. Can you give an example of an outer join that\n> reduces to an inner join?\n\nHmm. This example is pretty silly, but afaik it reduces to an inner\njoin:\n\n select i, j from t1 left join t2 using (i) where j is not null;\n\n(where t1 has column \"i\" and t2 has columns \"i\" and \"j\").\n\n - Thomas\n",
"msg_date": "Mon, 04 Sep 2000 18:25:28 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A fine point about OUTER JOIN semantics"
},
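Worked through with a few rows, the reduction is visible: every row the left join pads with nulls is thrown away again by the qual, so the survivors are exactly the inner join's rows (assuming t2.j itself contains no nulls). A sketch:

    CREATE TABLE t1 (i int4);
    CREATE TABLE t2 (i int4, j int4);
    INSERT INTO t1 VALUES (1);
    INSERT INTO t1 VALUES (2);
    INSERT INTO t2 VALUES (1, 10);

    -- t1.i = 2 gets a NULL j from the left join, then fails the qual,
    -- leaving only (1, 10), the same answer as the plain inner join:
    SELECT i, j FROM t1 LEFT JOIN t2 USING (i) WHERE j IS NOT NULL;
    SELECT i, j FROM t1 JOIN t2 USING (i);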
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>>>> In some cases, an outer join with WHERE restrictions reduces to\n>>>> an inner join (so the qualification clauses can be consolidated).\n>> I recall you having muttered something about that before, but I don't\n>> see how it works. Can you give an example of an outer join that\n>> reduces to an inner join?\n\n> Hmm. This example is pretty silly, but afaik it reduces to an inner\n> join:\n\n> select i, j from t1 left join t2 using (i) where j is not null;\n\n> (where t1 has column \"i\" and t2 has columns \"i\" and \"j\").\n\nWell, I guess so, but I can't get excited about adding machinery to\ndetect cases like this ... are there any less-silly examples that make\na more compelling case for expending planner cycles to see if an outer\njoin can be reduced to an inner join?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Sep 2000 17:35:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A fine point about OUTER JOIN semantics "
},
{
"msg_contents": "> Well, I guess so, but I can't get excited about adding machinery to\n> detect cases like this ... are there any less-silly examples that make\n> a more compelling case for expending planner cycles to see if an outer\n> join can be reduced to an inner join?\n\nWell, in all cases the outer join reduces to an inner join if there is a\nqualification which would eliminate nulls from any intermediate result.\nIt would be neat to see this happen automagically, but perhaps that\nwould be a gratuitously studly feature ;)\n\n - Thomas\n",
"msg_date": "Tue, 05 Sep 2000 14:29:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A fine point about OUTER JOIN semantics"
}
] |
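Once outer joins land, the distinction is easy to check against the sample data from the first message. One wrinkle worth flagging when comparing with the expected tables above: a WHERE qual such as y <> 2 also removes the null-extended rows, because NULL <> 2 evaluates to unknown rather than true, so the WHERE variants return fewer rows than the ON variant. A sketch:

    CREATE TABLE t1 (x int4);  -- load 1, 2, 3, 4
    CREATE TABLE t2 (y int4);  -- load 1, 2, 4

    -- Qual inside ON: checked while matching, so both 2s surface unmatched:
    --   (1,1) (2,NULL) (NULL,2) (3,NULL) (4,4)
    SELECT * FROM t1 FULL JOIN t2 ON (x = y AND y <> 2);

    -- Qual in WHERE: applied after the join; (2,2) fails it outright and
    -- the null-extended (3,NULL) is unknown, so only (1,1) and (4,4) remain:
    SELECT * FROM t1 FULL JOIN t2 ON (x = y) WHERE y <> 2;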