[ { "msg_contents": "I try to use initdb to create a new instance. But it fails. The call\n\npostgres -boot -C -F -D/usr/local/pgsql/data -Q template1\n\nretunrs the error code 139. Don't know what that means.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Wed, 6 May 1998 14:31:44 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "initdb still doesn't work" } ]
[ { "msg_contents": "I discovered the hard way that there is no regression test for\ncopy in/out (COPY table TO stdout, etc). This is not good.\npg_dump depends on copy in/out, and pg_dump is rather a critical\nfacility, wouldn't you say?\n\nI'd suggest, in fact, that there ought to be a regression test\nspecifically exercising pg_dump and the resulting reload script.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 May 1998 15:33:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Another missing regression test" }, { "msg_contents": "> \n> I discovered the hard way that there is no regression test for\n> copy in/out (COPY table TO stdout, etc). This is not good.\n> pg_dump depends on copy in/out, and pg_dump is rather a critical\n> facility, wouldn't you say?\n> \n> I'd suggest, in fact, that there ought to be a regression test\n> specifically exercising pg_dump and the resulting reload script.\n> \n> \t\t\tregards, tom lane\n\nWhat I have tried in the past is to dump the completed regression\ndatabase and reload it and dump it again, and compare the output. Seems\nto find problems, but did not do that for 6.3.*.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 6 May 1998 17:44:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another missing regression test" } ]
[ { "msg_contents": "pg_dump doesn't cope very gracefully at all with databases containing\nmultiple tables not all owned by the same person. It knows enough\nto issue \\connect commands in the reload script that cause the\nnew tables to be owned by the same people as before. But the reload\nscript fails with permission errors later on during the data copying\nphase, if the tables are not made world-writable.\n\nThis is certain to happen if the -z switch is not used to dump the\ntables' grant/revoke status. I suspect that pg_dump ought not try\nto save/restore table ownership unless it is also saving/restoring\naccess rights; that is, if -z is not given the \\connect commands\nshouldn't appear either. Then, without -z the reload script will\ngenerate a new database wholly owned by the script invoker.\n\nWhen using -z, the failure of the copy-in command could be fixed by\nissuing more \\connect commands so that the data transfer is done while\nlogged in as the table owner.\n\nThis is particularly nasty because the reload script fails even if\nrun as the Postgres superuser. I think this is because the script\nreconnects as the various table owners and thereby loses superuser\naccess rights.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 May 1998 15:51:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "An item for the TODO list: pg_dump and multiple table owners" }, { "msg_contents": "> \n> pg_dump doesn't cope very gracefully at all with databases containing\n> multiple tables not all owned by the same person. It knows enough\n> to issue \\connect commands in the reload script that cause the\n> new tables to be owned by the same people as before. But the reload\n> script fails with permission errors later on during the data copying\n> phase, if the tables are not made world-writable.\n> \n> This is certain to happen if the -z switch is not used to dump the\n> tables' grant/revoke status. I suspect that pg_dump ought not try\n> to save/restore table ownership unless it is also saving/restoring\n> access rights; that is, if -z is not given the \\connect commands\n> shouldn't appear either. Then, without -z the reload script will\n> generate a new database wholly owned by the script invoker.\n> \n> When using -z, the failure of the copy-in command could be fixed by\n> issuing more \\connect commands so that the data transfer is done while\n> logged in as the table owner.\n> \n> This is particularly nasty because the reload script fails even if\n> run as the Postgres superuser. I think this is because the script\n> reconnects as the various table owners and thereby loses superuser\n> access rights.\n\nThis is a very good point. I will look into it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 6 May 1998 17:45:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] An item for the TODO list: pg_dump and multiple table\n\towners" } ]
[ { "msg_contents": "OK guys this really concerns me. And I really do consider it a bug.\nNow if Thomas has gotten this working for 6.4 I'll shut my trap. But\nuntil I hear someone tell me that I'm overreacting or that it's fixed\nI'll continue to try to raise support for one or the other. This is my\nlittle experiment:\n\ntest=> create table dtm_test (dtm datetime);\nCREATE\ntest=> insert into dtm_test VALUES (NOW()::DATETIME);\nINSERT 228745 1\ntest=> select * from dtm_test;\ndtm \n----------------------------\nWed May 06 13:37:56 1998 CDT\n(1 row)\n\ntest=> select dtm::DATE from dtm_test;\n date\n----------\n05-06-1998\n(1 row)\n\ntest=> INSERT INTO dtm_test VALUES (NULL);\nINSERT 228746 1\ntest=> select * from dtm_test;\ndtm \n----------------------------\nWed May 06 13:37:56 1998 CDT\n \n(2 rows)\n\ntest=> select dtm::DATE from dtm_test;\nERROR: Unable to convert null datetime to date\n\nI do realize that NULL signifies a undefined answer, but wouldn't it be\na good idea to convert NULL::DATETIME to NULL::DATE, NULL::TIME, and\nNULL::TIMESTAMP.\n\nonly DATETIME, TIMESTAMP, DATE, and TIME are being considered here but\nwe have other conversion holes that need to be plugged.\nWe currently have w/o NULL conversion:\n TIMESTAMP -> DATETIME \n Why not have TIME and DATE as well?\n DATETIME -> DATE, TIME \n Why not have TIMESTAMP if were going to have it? And if your going\nto say 'Because of the range problems', then tell me what\n'infinity'::TIMESTAMP is for.\n DATE -> DATETIME\n Once more I ask why not TIMESTAMP?\n TIME -> nothing that I could find.\n\nWhat this means is that for a lot of conversions you'd have to do at\nleast two CAST steps. Some conversions (even though they would make\nsince) aren't possible.\n\nThere are similar problems with varchar, text, and char.\nI'm not even sure about other types. \n\nIf I'm way off base here I'm sure you guys will let me know. If not,\nwhat can I do to help fix this.\n\nWaiting patiently,\n\t\t-DEJ\n \n", "msg_date": "Wed, 6 May 1998 16:13:57 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Data type conversion again" }, { "msg_contents": "> OK guys this really concerns me. And I really do consider it a bug.\n> Now if Thomas has gotten this working for 6.4 I'll shut my trap.\n\nOoooh. Some motivation :) (sorry, couldn't resist...)\n\n> But until I hear someone tell me that I'm overreacting or that it's \n> fixed I'll continue to try to raise support for one or the other. \n> test=> select dtm::DATE from dtm_test;\n> ERROR: Unable to convert null datetime to date\n\nI'm suprised at this message and problem; will try to address it for\nv6.4. In testing on my alpha code I'm not handling this any better yet\n(and in fact have new problems with the null constant).\n\n> to say 'Because of the range problems', then tell me what\n> 'infinity'::TIMESTAMP is for.\n\nA kludge to work around the limited range. Although you kids may not\nrealize it, \"-infinity\" is actually substantially earlier than 1902,\ndespite Postgres' behavior :)\n\n> What this means is that for a lot of conversions you'd have to do at\n> least two CAST steps. Some conversions (even though they would make\n> since) aren't possible.\n> \n> There are similar problems with varchar, text, and char.\n> I'm not even sure about other types.\n> \n> If I'm way off base here I'm sure you guys will let me know. If not,\n> what can I do to help fix this.\n\nCataloging specific problems as you are doing is helpful. 
If you want to\nkeep a running list then I can ask you for it later, once I've gotten\nthe new type conversion code off the ground.\n\nWe've been chipping away at the type conversion and casting problem for\nthe last several releases, and things are a _lot_ more solid than they\nwere when we started. Consolidating types for v6.4 as we are doing with\ncharacter strings and perhaps date/time types will also help us focus on\nthe useful ones which remain.\n\n - Tom\n", "msg_date": "Thu, 07 May 1998 05:39:43 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Data type conversion again" } ]
[ { "msg_contents": "Hope a mention of -i can be added to the error message.\n> \n> On Wed, 6 May 1998, Klaus Fink wrote:\n> \n> > Hi there,\n> > \n> > I have actually a problem with DBD:Pg when connecting to the Postgres\n> > Server. But probably it is mainly a problem with postgres.\n> > \n> > I set up pg_hba.conf as\n> > host all 0.0.0.0 0.0.0.0 trust\n> > but still it tells me that the postmaster (which is running) is not\n> > listening.\n> > \n> > 'netstat -an | grep 5432' returns something weird:\n> > f5b84b78 stream-ord 13 0 /tmp/.s.PGSQL.5432\n> > \n> > Has anyone an idea what may be wrong?\n> \n> Yep, you haven't started the postmaster with -i (which is needed for\n> tcp/ip networking). the above file is the unix domain socket.\n> \n> -- \n> Peter T Mount [email protected] or [email protected]\n> Main Homepage: http://www.retep.org.uk\n> ************ Someday I may rebuild this signature completely ;-) ************\n> Work Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n> \n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 6 May 1998 17:40:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] cannot remote connect to DB" } ]
[ { "msg_contents": "I must say, I have not felt the pressure of the mailing lists recently. \n\nThanks to all who have taken up the slack by performing many PostgreSQL\nduties. I now feel I can concentrate on the items I specialize in.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 6 May 1998 19:29:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "mailing lists" } ]
[ { "msg_contents": " \n>> Just tried this out, and we have a bug here:\n>> \n>> simply not implemented, not a bug.\n>\n>\tThen should generate a NOTICE to that effect...right now, its\n>misleading unless you go and do a select on pg_group to find that it\n>wasn't actually performed...\n>\n>\tAs it stands now, it is a bug...\n\nYes, I think it should scan pg_group for a valid group name. I would not make it\ncreate a group, since this might be a simple typo. \n\n>> template1=> create user tester in group pg_user;\n>> CREATE USER\n\n< snipped my bad mood comment here :-) > \n\n>> The group \"pg_user\" must already exist. But since the \"in group\" clause\n>> is currently ignored, no error shows up.\n>\n>\tWhy? if group doesn't exist do:\n>\n>insert into pg_group values ('groname',max(grosysid)+1,'{values}');\n\nThis again I would not do to avoid typo's.\n\tcreate {role|group} test; -- would be preferable I think\n\t\n>> template1=> insert into pg_group values ('test',0,'{10}');\n>> INSERT 18497 1\n>> \n>> you created a group \"test\" with one user (\"scrappy\") as it's only member. \n>> This is currently the only way to do it.\n\n>\tUnfortunately, the above test was done at home, but here it is\n>again:\n>\n>template1=> select * from pg_group;\n>groname|grosysid|grolist \n>-------+--------+----------------\n>pgsql | 0|{10,1044,65534} \n>banner | 1|{10,65534} \n>acctng | 2|{0,99,10} \n>survey | 3|{10,65534,0,206}\n>(4 rows)\n\nDoes anybody know what grosysid is supposed to be ? I think it is checked\nagainst a valid unix group id.\ngrosysid certainly sounds like in connex with system gid, this could be useful\nfor \"system identified groups\", read write permissions on files during load/unload \netc. ...\nIn the rest of the system tables the pg_group.oid should be used, like for \npg_user or pg_class, or add a field groid.\nThis certainly needs cleanup.\n\n>template1=> create user someone in group agroup;\n>CREATE USER\n>template1=> select * from pg_group;\n>groname|grosysid|grolist \n>-------+--------+----------------\n>pgsql | 0|{10,1044,65534} \n>banner | 1|{10,65534} \n>acctng | 2|{0,99,10} \n>survey | 3|{10,65534,0,206}\n>(4 rows)\n>\n>template1=> create user some in group agroup;\n>ERROR: defineUser: user \"some\" has already been created\n>template1=> \n>\n>\tThere is no group 'some'...it almost looks like its doing a '~*'\n>match:\n>\n>template1=> select usename from pg_user;\n>usename \n>--------\n>scrappy \n>neil \n>nobody \n>darchell\n>adrenlin\n>julie \n>bigtech \n>news \n>acctng \n>root \n>salesorg\n>someone \n>(12 rows)\n\nThis don't sound good. Did you track it down ? Might also be a conflict with\none of the id's.\n\nAndreas\n\n\n", "msg_date": "Thu, 7 May 1998 09:41:11 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: [HACKERS] Re: [QUESTIONS] groups of users" } ]
[ { "msg_contents": "Olivier Caron wrote:\n >in the user's guide (chapter _ inheritance), i see :\n >-----------------------------------------------------------\n >select * from cities *\n >\n >here the * after cities indicates that the query should be run over cities\n >and all classes below cities in the inheritance hierarchy.\n >Many of the commands that we have already discussed -- select, update\n >and delete -- support this * notation, as do others, like alter\n >-----------------------------------------------------------\n >\n >i don't succedeed in to execute update or delete with this * notation,\n \nThe ramifications of inheritance are not well thought out in PostgreSQL.\n\nConsider:\n\n cities -----+---- capital_cities ----- megacities\n ¦\n +---- regional_centres\n ¦\n +---- small_towns\n\nAt the moment, a subclass inherits columns and _some_ constraints from\nits parents - it inherits check constraints but not primary key constraints.\nI don't know what behaviour is planned for foreign key constraints.\nIt is possible, with multiple inheritance, to inherit mutually-inconsistent\ncheck constraints but there is no way to undefine or redefine inherited\nconstraints.\n\nYou can do `select * from cities *', but you cannot do an insert, delete\nor update. In other words, the inheritance tree as a whole is regarded as\na read-only entity.\n\nThe inability to do insert seems reasonable, because there\nwould not be any safe way of knowing which table in the inheritance\ntree was intended. However, update and delete could be expected to\nwork. I suspect that no-one ever asked for the facility!\n\nI have previously suggested some syntax for specifying the inheritance\nof constraints more precisely. It looks as if there should also be a\nmeans of saying whether a descendant class should be included in\noperations on multiple rows - I don't think it should be a facility that\noperates automatically.\n\nIs any of the developers an expert on the inheritance features? or is\nthis something that no-one has yet taken up?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"If it is possible, as much as it depends on you, live \n peaceably with all men.\" Romans 12:18 \n\n\n", "msg_date": "Thu, 07 May 1998 12:38:38 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] inheritance questions " }, { "msg_contents": "> Is any of the developers an expert on the inheritance features? or is\n> this something that no-one has yet taken up?\n\nI think no one yet. I have thought about taking it up after finishing\n(?) the type conversion issue, but that is in the future if at all...\n\n - Tom\n", "msg_date": "Thu, 07 May 1998 12:28:17 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] inheritance questions" }, { "msg_contents": "> I have previously suggested some syntax for specifying the inheritance\n> of constraints more precisely. It looks as if there should also be a\n> means of saying whether a descendant class should be included in\n> operations on multiple rows - I don't think it should be a facility that\n> operates automatically.\n> \n> Is any of the developers an expert on the inheritance features? 
or is\n> this something that no-one has yet taken up?\n\nThis is the most complete specification of inheritance limitations I\nhave read, and plan to keep it for inclusion in the TODO list.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 7 May 1998 10:35:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] inheritance questions" } ]
[ { "msg_contents": "The say the connect statement has to have the following form:\n\nCONNECT TO <SQL-server> [AS <connection name>] [USER <user name>]\n\nIt is implementation dependant how to get database name, server name resp.\nnumber and port number from SQL-server. How will we do this?\n\nHow about this?\n\n<dbname>@<server>:<port>\n\nEach missing entry will be set to the default value:\ntemplate1@localhost:5432.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 7 May 1998 13:04:04 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Decicision needed for connect statement" }, { "msg_contents": "Thomas G. Lockhart writes:\n> > Each missing entry will be set to the default value:\n> > template1@localhost:5432.\n> \n> Looks good, though perhaps the default db should be the user's name?\n\nYes, of course. This example came just from me using the Debian release of\n6.3 which opens template1 as default. I cannot run 6.4, so I have to test\nwith the old version.\n\nI do not enter a default myself, but rather let PQsetdb handle it.\n\n> Is there any benefit to using a url-style spec?\n> \n> postgres://server:port/dbname\n> \n> Very fashionable :)\n\nThat looks good. But to be really usefule I think we should add both ways\nnot only to ecpg, but to psql as well.\n\nHow about a adding the parser code for both to PQsetdb so we can do:\n\nPQsetdeb(NULL,NULL,NULL,NULL,\"postgres://server:port/dbname\")\n\nand\n\nPQsetdeb(NULL,NULL,NULL,NULL,\"dbname@server:port\")\n\nas synonym for \n\nPQsetdb(server,port,NULL,NULL,dbname)?\n\nMichael\n\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 7 May 1998 14:39:23 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Decicision needed for connect statement" }, { "msg_contents": "> CONNECT TO <SQL-server> [AS <connection name>] [USER <user name>]\n> How about this?\n> <dbname>@<server>:<port>\n> Each missing entry will be set to the default value:\n> template1@localhost:5432.\n\nLooks good, though perhaps the default db should be the user's name?\n\nIs there any benefit to using a url-style spec?\n\n postgres://server:port/dbname\n\nVery fashionable :)\n\n - Tom\n", "msg_date": "Thu, 07 May 1998 12:39:45 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Decicision needed for connect statement" }, { "msg_contents": "Michael Meskes wrote:\n> \n> The say the connect statement has to have the following form:\n> \n> CONNECT TO <SQL-server> [AS <connection name>] [USER <user name>]\n> \n> It is implementation dependant how to get database name, server name resp.\n> number and port number from SQL-server. 
How will we do this?\n\nI use <server>:<port> as <SQL-server> and a separate SET SCHEMA <name>\nto set the database name as (I think) this is more consistent with\nSQL/2.\n\nPhil\n", "msg_date": "Thu, 07 May 1998 17:06:19 +0000", "msg_from": "Phil Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Decicision needed for connect statement" }, { "msg_contents": "Phil Thompson writes:\n> I use <server>:<port> as <SQL-server> and a separate SET SCHEMA <name>\n> to set the database name as (I think) this is more consistent with\n> SQL/2.\n\nI don't think I like this:\n\n 12.3 <set schema statement>\n\n Function\n\n Set the default schema name for unqualified <schema qualified\n name>s in <preparable statement>s that are prepared in the\n current SQL-session by an <execute immediate statement> or a\n <prepare statement> and in <direct SQL statement>s that are invoked\n directly.\n\n...\n\nSo it means I have to do the following to really connect:\n\n\texec sql connect to server;\n\texec sql set scheme template1;\n\nThis only makes sense IMO if we support different schemes over one\nconnection. but we don't do this, do we?\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Fri, 8 May 1998 10:34:40 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Decicision needed for connect statement" } ]
[ { "msg_contents": "Great. Do we implement this in libpq?\n\nAnd do we implement the old style syntax with '@', too?\n\nMichael\n\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n> -----Original Message-----\n> From:\tPeter Mount [SMTP:[email protected]]\n> Sent:\tThursday, May 07, 1998 3:27 PM\n> To:\t'Michael Meskes'; 'Thomas G. Lockhart'\n> Cc:\t'PostgreSQL Hacker'\n> Subject:\tRE: [HACKERS] Decicision needed for connect statement\n> \n> > Is there any benefit to using a url-style spec?\n> > \n> > postgres://server:port/dbname\n> > \n> > Very fashionable :)\n> \n> This would be in line to JDBC's url-style:\n> \n> jdbc:postgresql://server:port/dbname?options\n> \n> --\n> Peter T Mount, [email protected], [email protected]\n> Please note that this is from my works email. If you reply, please cc\n> my\n> home address.\n> \n", "msg_date": "Thu, 7 May 1998 15:25:40 +0200", "msg_from": "\"Meskes, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Decicision needed for connect statement" } ]
[ { "msg_contents": "> Is there any benefit to using a url-style spec?\n> \n> postgres://server:port/dbname\n> \n> Very fashionable :)\n\nThis would be in line to JDBC's url-style:\n\n jdbc:postgresql://server:port/dbname?options\n\n--\nPeter T Mount, [email protected], [email protected]\nPlease note that this is from my works email. If you reply, please cc my\nhome address.\n\n\n", "msg_date": "Thu, 7 May 1998 14:27:21 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Decicision needed for connect statement" } ]
[ { "msg_contents": "The : is used in Informix as the delimiter to a remote tablename like:\n\tdns@dns1ifx:dnstab\nbut I like the look of your dbname syntax.\n\nI think it might still be a good Idea to start using a connection file like\nInformix (sqlhosts) and Oracle (tnsnames.ora) use it, so we could have short names \nfor our Instances and a flexible way to add additional config options in this file.\n\nInformix has:\n# shortname\tprotocol\t\thostname\tservice\t\tadditional \nzeusifx \t\tonsoctcp \tzeus \t\tsqlexec\nzeusifxshm \tonipcshm \tzeus \t\tzeusifxshm\nzvrentifx \tonsoctcp \ta1880104 \tsqlexec7\n\nTherefore:\n\tconnect to dbname@shortname\n\nAndreas\n\nThe say the connect statement has to have the following form:\n\nCONNECT TO <SQL-server> [AS <connection name>] [USER <user name>]\n\nIt is implementation dependant how to get database name, server name resp.\nnumber and port number from SQL-server. How will we do this?\n\nHow about this?\n\n<dbname>@<server>:<port>\n\nEach missing entry will be set to the default value:\ntemplate1@localhost:5432.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n\n", "msg_date": "Thu, 7 May 1998 15:28:49 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Decicision needed for connect statement" } ]
[ { "msg_contents": "I am (not) finding a missing file src/interfaces/libpq/fe-print.c in the\ncurrent CVSup'd cvs source tree, and psql is having trouble building\nbecause it is missing PQprint(). \n\nI assume they are related and that someone intended PQprint() to move\nfrom fe-exec.c to a new file fe-print.c.\n\nIs this the case? If so, where is fe-print.c??\n\nAssuming that I can get the tree to build, what is the state of the\ntree? Do the regression tests pass? Are there any known breakages?? I'd\nlike to move my type conversion development up from my 980408 rev-locked\ntree but not if it won't run :/\n\n - Tom\n", "msg_date": "Thu, 07 May 1998 13:33:16 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Broken source tree" }, { "msg_contents": "> \n> I am (not) finding a missing file src/interfaces/libpq/fe-print.c in the\n> current CVSup'd cvs source tree, and psql is having trouble building\n> because it is missing PQprint(). \n> \n> I assume they are related and that someone intended PQprint() to move\n> from fe-exec.c to a new file fe-print.c.\n> \n> Is this the case? If so, where is fe-print.c??\n> \n> Assuming that I can get the tree to build, what is the state of the\n> tree? Do the regression tests pass? Are there any known breakages?? I'd\n> like to move my type conversion development up from my 980408 rev-locked\n> tree but not if it won't run :/\n\nFixed. It was a new file from the Tom Lane patch that I missed a 'cvs\nadd'. Don't know that status, but I think things still run with this\nnew patch.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 7 May 1998 10:54:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Broken source tree" }, { "msg_contents": "> I am (not) finding a missing file src/interfaces/libpq/fe-print.c in the\n> current CVSup'd cvs source tree, and psql is having trouble building\n> because it is missing PQprint(). \n> I assume they are related and that someone intended PQprint() to move\n> from fe-exec.c to a new file fe-print.c.\n\nYes, I moved the print functions out of fe-exec, because they bulked it\nup to an unreasonable size, and because I felt that an application not\nusing them should not be forced to link them in.\n\nLooks like Bruce forgot to check in the new file when he applied the\nlibpq patches I sent him. Sorry about the glitch.\n\n> Assuming that I can get the tree to build, what is the state of the\n> tree? Do the regression tests pass? Are there any known breakages??\n\nAs far as the libpq changes go: the regression tests pass. The gripes\nI've been sending about missing regression tests have to do with things\nthat still didn't work after the regression tests all passed ;-). 
AFAIK\nthere is nothing broken now.\n\nAs to what other people may have been breaking, this deponent sayeth\nnot.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 May 1998 11:10:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Broken source tree " }, { "msg_contents": "> Yes, I moved the print functions out of fe-exec,\n> As far as the libpq changes go: the regression tests pass.\n> AFAIK there is nothing broken now.\n\nThanks Bruce I have the file now :)\n\nThere is one compiler warning in fe-print.c about a missing declaration\nfor ioctl(); the includes should have <sys/ioctl.h> as did the old\nfe-exec.c ??\n\n - Tom\n", "msg_date": "Thu, 07 May 1998 16:06:09 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Broken source tree" }, { "msg_contents": "> \n> > Yes, I moved the print functions out of fe-exec,\n> > As far as the libpq changes go: the regression tests pass.\n> > AFAIK there is nothing broken now.\n> \n> Thanks Bruce I have the file now :)\n> \n> There is one compiler warning in fe-print.c about a missing declaration\n> for ioctl(); the includes should have <sys/ioctl.h> as did the old\n> fe-exec.c ??\n\nFixed. fe-misc.c was missing errno.h on my OS too. I want to make\nthose backend changes so people can start testing the new features.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 7 May 1998 12:16:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Broken source tree" } ]
[ { "msg_contents": "I'm not sure about libpq as I've never used either with it. However,\nURL-style does seem to be the in thing at the moment.\n\n--\nPeter T Mount, [email protected], [email protected]\nPlease note that this is from my works email. If you reply, please cc my\nhome address.\n\n\n-----Original Message-----\nFrom: Meskes, Michael [mailto:[email protected]]\nSent: Thursday, May 07, 1998 2:42 PM\nTo: 'Peter Mount'; 'Michael Meskes'; 'Thomas G. Lockhart'\nCc: 'PostgreSQL Hacker'\nSubject: RE: [HACKERS] Decicision needed for connect statement\n\n\nGreat. Do we implement this in libpq?\n\nAnd do we implement the old style syntax with '@', too?\n\nMichael\n\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n> -----Original Message-----\n> From:\tPeter Mount [SMTP:[email protected]]\n> Sent:\tThursday, May 07, 1998 3:27 PM\n> To:\t'Michael Meskes'; 'Thomas G. Lockhart'\n> Cc:\t'PostgreSQL Hacker'\n> Subject:\tRE: [HACKERS] Decicision needed for connect statement\n>\n> > Is there any benefit to using a url-style spec?\n> >\n> > postgres://server:port/dbname\n> >\n> > Very fashionable :)\n>\n> This would be in line to JDBC's url-style:\n>\n> jdbc:postgresql://server:port/dbname?options\n>\n> --\n> Peter T Mount, [email protected], [email protected]\n> Please note that this is from my works email. If you reply, please cc\n> my\n> home address.\n>\n\n", "msg_date": "Thu, 7 May 1998 16:22:44 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Decicision needed for connect statement" } ]
[ { "msg_contents": "Forwarded to the HACKERS list.\n\t\t-DEJ\n\n> -----Original Message-----\n> From:\tJohn Edstrom [SMTP:[email protected]]\n> Sent:\tFriday, May 01, 1998 4:37 PM\n> To:\[email protected]\n> Subject:\t[QUESTIONS] admin question\n> \n> How is pg_log used?\n> \n> I don't see it mentioned in the docs.\n> \n> \n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n", "msg_date": "Thu, 7 May 1998 10:58:01 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] admin question" } ]
[ { "msg_contents": "Forwarded to the HACKER list.\n\t\t-DEJ\n\n> -----Original Message-----\n> From:\tAnton Stöckl [SMTP:[email protected]]\n> Sent:\tTuesday, May 05, 1998 7:57 AM\n> To:\tPostgreSQL Questions\n> Subject:\t[QUESTIONS] arrays\n> \n> Hi there,\n> \n> I just started playing around with arrays and have a question:\n> \n> Can I select all values of an array without the braces and delimiters?\n> \n> Like:\n> \n> apache_conf=> select directive_key from vhosts;\n> directive_key\n> -------------\n> {1,2} \n> (1 row)\n> \n> apache_conf=> select all_array_values(directive_key) from vhosts;\n> directive_key\n> -------------\n> 1\n> 2 \n> (2 row)\n> \n> I don't know how many elements the array holds, so I can't just use\n> the array[n] notation (would I need an array if I knew?).\n> I could parse it in my program, but in that case, I wouldn't need an\n> array, too (but could use a varchar type).\n> \n> There is an additional array question coming to my mind:\n> Can I insert values into the array (and delete from it), or do I have\n> to override it with the new values.\n> \n> Sorry if there are answers in the manuals, I just found the\n> description\n> how to create an array.\n> \n> Any pointers appreciated, Tony\n> \n> -- \n> ----------C-Y-B-E-R-S-O-L-U-T-I-O-N-S----------------\n> Anton Stöckl mailto:[email protected]\n> CyberSolutions GmbH http://www.cys.de\n> Frankfurter Ring 193A Phone +49 89 32369223\n> 80807 Muenchen Fax +49 89 32369220\n> ------W-E----M-A-K-E----I-T----P-O-S-S-I-B-L-E-------\n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n", "msg_date": "Thu, 7 May 1998 11:00:09 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] arrays" } ]
[ { "msg_contents": "> Hi !\n> Does anybody know, if there are any way to grant to user permissions\n> to\n> create 'C' function ? Psql says that only postgres can do that ....\nI think it means what it says.\n\t\t-DEJ\n\n> Thanx & Wishes !\n> ---------------------------\n> Sergei Chernev\n> Internet: [email protected]\n> Phone: +7-3832-397354\n> \n", "msg_date": "Thu, 7 May 1998 11:01:08 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] Permissions to external functions" }, { "msg_contents": "> \n> > Hi !\n> > Does anybody know, if there are any way to grant to user permissions\n> > to\n> > create 'C' function ? Psql says that only postgres can do that ....\n> I think it means what it says.\n> \t\t-DEJ\n\nYep, it is a security thing because a C function can bypass permission\nrestrictions.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 7 May 1998 12:19:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [QUESTIONS] Permissions to external functions" } ]
[ { "msg_contents": "> > CONNECT TO <SQL-server> [AS <connection name>] [USER <user name>]\nWhat about PASSWORD?\n\t\t-DEJ\n", "msg_date": "Thu, 7 May 1998 11:22:08 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Decicision needed for connect statement" }, { "msg_contents": "Jackson, DeJuan writes:\n> > > CONNECT TO <SQL-server> [AS <connection name>] [USER <user name>]\n> What about PASSWORD?\n> \t\t-DEJ\n\nIt's simply not listed. Oracle allows two was:\n\n1) List the username as <user>/<passwd>\n2) Add the clause 'IDENTIFIED BY <passwd>'.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Fri, 8 May 1998 10:35:42 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Decicision needed for connect statement" } ]
[ { "msg_contents": ">The inability to do insert seems reasonable, because there\n>would not be any safe way of knowing which table in the inheritance\n>tree was intended. However, update and delete could be expected to\n>work. I suspect that no-one ever asked for the facility!\n>\nThis and similar points have been pointed out by a number of poeple (usually\nin general terms) however somehow it seems OO features are of lower priority\nfor now.\n\nRegards,\n Maurice.\n\n\n", "msg_date": "Thu, 7 May 1998 19:20:36 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] inheritance questions " } ]
[ { "msg_contents": "I have found a problem with libpgtcl and cases where the linking brings\nin the old libpq libraries in the current installed directory, rather\nthan the one in the current source.\n\nHere is the link line:\n\ngcc2 -O2 -m486 -pipe -I../../include -I../../backend -I/u/readline \n-g -Wall -pg -I/usr/X11R6/include -I../../interfaces/libpgtcl -o pgtclsh\npgtclAppInit.o -L../../interfaces/libpgtcl -L/usr/local/pgsql/lib -lpgtcl \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^\n-L../../interfaces/libpq -L/usr/local/pgsql/lib -lpq -ltcl -lm -L/u/readline \n-L/usr/contrib/lib -lcompat -lln -lipc -ldl -lm -lreadline -lhistory -ltermcap \n-lcurses\n\nThe active items are highlighted. The link is looking to the source\nlibpgtcl first, then the one in the install directory. When the libpq\nlink comes up, it searches the install lib directory FIRST, rather than\nthe one in the current source tree.\n\nThe cause is that the libpgtcl is added as a group before the libpq\nstuff, and over-rides it.\n\nAny ideas on a solution? Someone else mentioned that perl has a problem\nwhere you can only create it AFTER you do an install. Any reason we are\nusing the install lib directory in the link?\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n\n", "msg_date": "Thu, 7 May 1998 23:53:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Bug in linking in old libraries" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have found a problem with libpgtcl and cases where the linking brings\n> in the old libpq libraries in the current installed directory, rather\n> than the one in the current source.\n\nThis strikes me as just plain brain fade in src/bin/pgtclsh/Makefile.\nInstead of \n\n# try to find libpgtcl.a in either directory\nLIBPGTCL= -L$(SRCDIR)/interfaces/libpgtcl -L$(LIBDIR) -lpgtcl\nLIBPQ= -L$(LIBPQDIR) -L$(LIBDIR) -lpq\n\nit should just have\n\nLIBPGTCL= -L$(SRCDIR)/interfaces/libpgtcl -lpgtcl\nLIBPQ= -L$(LIBPQDIR) -lpq\n\nThere's no good reason to be referencing the library install directory,\nsince even if it exists it may contain an incompatible down-rev library.\n\nOn some machines there is an issue of telling the executable to look for\nshared libraries in LIBDIR at run time, but that is handled by different\nswitches that Makefile.port supplies via LDFLAGS.\n\nA quick grep for LIBDIR shows no indication that the same mistake has\nbeen made anywhere else.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 May 1998 11:34:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug in linking in old libraries " }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > I have found a problem with libpgtcl and cases where the linking brings\n> > in the old libpq libraries in the current installed directory, rather\n> > than the one in the current source.\n> \n> This strikes me as just plain brain fade in src/bin/pgtclsh/Makefile.\n> Instead of \n> \n> # try to find libpgtcl.a in either directory\n> LIBPGTCL= -L$(SRCDIR)/interfaces/libpgtcl -L$(LIBDIR) -lpgtcl\n> LIBPQ= -L$(LIBPQDIR) -L$(LIBDIR) -lpq\n> \n> it should just have\n> \n> LIBPGTCL= -L$(SRCDIR)/interfaces/libpgtcl -lpgtcl\n> LIBPQ= -L$(LIBPQDIR) -lpq\n> \n> There's no good reason to be referencing the library install directory,\n> since 
even if it exists it may contain an incompatible down-rev library.\n> \n> On some machines there is an issue of telling the executable to look for\n> shared libraries in LIBDIR at run time, but that is handled by different\n> switches that Makefile.port supplies via LDFLAGS.\n> \n> A quick grep for LIBDIR shows no indication that the same mistake has\n> been made anywhere else.\n\nThanks for checking. I will make the change.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 8 May 1998 13:16:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Bug in linking in old libraries" } ]
[ { "msg_contents": "\nServer is upgraded now, and I sort of 'oops'd and forgot to move the\nmailing list/majordomo stuff over :(\n\nFixed now...\n\n\n", "msg_date": "Fri, 8 May 1998 01:06:41 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "I knew I missed *something*..." } ]
[ { "msg_contents": "Okay, I'm willing to add it to either libecpg or (preferable imn my\nmind) to libpq. I'd also like to add my (old style) syntax if no one\nobjects.\n\nMichael\n\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n> -----Original Message-----\n> From:\tPeter Mount [SMTP:[email protected]]\n> Sent:\tThursday, May 07, 1998 5:23 PM\n> To:\t'Meskes, Michael'; 'Peter Mount'; 'Thomas G. Lockhart'\n> Cc:\t'PostgreSQL Hacker'\n> Subject:\tRE: [HACKERS] Decicision needed for connect statement\n> \n> I'm not sure about libpq as I've never used either with it. However,\n> URL-style does seem to be the in thing at the moment.\n> \n> --\n> Peter T Mount, [email protected], [email protected]\n> Please note that this is from my works email. If you reply, please cc\n> my\n> home address.\n> \n> \n> -----Original Message-----\n> From: Meskes, Michael [mailto:[email protected]]\n> Sent: Thursday, May 07, 1998 2:42 PM\n> To: 'Peter Mount'; 'Michael Meskes'; 'Thomas G. Lockhart'\n> Cc: 'PostgreSQL Hacker'\n> Subject: RE: [HACKERS] Decicision needed for connect statement\n> \n> \n> Great. Do we implement this in libpq?\n> \n> And do we implement the old style syntax with '@', too?\n> \n> Michael\n> \n> --\n> Dr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\n> [email protected] | Europark A2, Adenauerstr. 20\n> [email protected] | 52146 Wuerselen\n> Go SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\n> Use Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n> \n> > -----Original Message-----\n> > From:\tPeter Mount [SMTP:[email protected]]\n> > Sent:\tThursday, May 07, 1998 3:27 PM\n> > To:\t'Michael Meskes'; 'Thomas G. Lockhart'\n> > Cc:\t'PostgreSQL Hacker'\n> > Subject:\tRE: [HACKERS] Decicision needed for connect statement\n> >\n> > > Is there any benefit to using a url-style spec?\n> > >\n> > > postgres://server:port/dbname\n> > >\n> > > Very fashionable :)\n> >\n> > This would be in line to JDBC's url-style:\n> >\n> > jdbc:postgresql://server:port/dbname?options\n> >\n> > --\n> > Peter T Mount, [email protected], [email protected]\n> > Please note that this is from my works email. If you reply, please\n> cc\n> > my\n> > home address.\n> >\n", "msg_date": "Fri, 8 May 1998 10:17:05 +0200", "msg_from": "\"Meskes, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Decicision needed for connect statement" } ]
[ { "msg_contents": "[resent as the local mail system screwed up - 10 points to guess what system\nit is ;-) ]\n\n> Okay, I'm willing to add it to either libecpg or (preferable imn my\n> mind) to libpq. I'd also like to add my (old style) syntax if no one\n> objects.\n\nI don't see why not. Handling both won't expand libpq by that much.\n\n--\nPeter T Mount, [email protected], [email protected]\nPlease note that this is from my works email. If you reply, please cc my\nhome address.\n\n> -----Original Message-----\n> From:\tPeter Mount [SMTP:[email protected]]\n> Sent:\tThursday, May 07, 1998 5:23 PM\n> To:\t'Meskes, Michael'; 'Peter Mount'; 'Thomas G. Lockhart'\n> Cc:\t'PostgreSQL Hacker'\n> Subject:\tRE: [HACKERS] Decicision needed for connect statement\n>\n> I'm not sure about libpq as I've never used either with it. However,\n> URL-style does seem to be the in thing at the moment.\n>\n> --\n> Peter T Mount, [email protected], [email protected]\n> Please note that this is from my works email. If you reply, please cc\n> my\n> home address.\n>\n>\n> -----Original Message-----\n> From: Meskes, Michael [mailto:[email protected]]\n> Sent: Thursday, May 07, 1998 2:42 PM\n> To: 'Peter Mount'; 'Michael Meskes'; 'Thomas G. Lockhart'\n> Cc: 'PostgreSQL Hacker'\n> Subject: RE: [HACKERS] Decicision needed for connect statement\n>\n>\n> Great. Do we implement this in libpq?\n>\n> And do we implement the old style syntax with '@', too?\n>\n> Michael\n>\n> --\n> Dr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\n> [email protected] | Europark A2, Adenauerstr. 20\n> [email protected] | 52146 Wuerselen\n> Go SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\n> Use Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n>\n> > -----Original Message-----\n> > From:\tPeter Mount [SMTP:[email protected]]\n> > Sent:\tThursday, May 07, 1998 3:27 PM\n> > To:\t'Michael Meskes'; 'Thomas G. Lockhart'\n> > Cc:\t'PostgreSQL Hacker'\n> > Subject:\tRE: [HACKERS] Decicision needed for connect statement\n> >\n> > > Is there any benefit to using a url-style spec?\n> > >\n> > > postgres://server:port/dbname\n> > >\n> > > Very fashionable :)\n> >\n> > This would be in line to JDBC's url-style:\n> >\n> > jdbc:postgresql://server:port/dbname?options\n> >\n> > --\n> > Peter T Mount, [email protected], [email protected]\n> > Please note that this is from my works email. If you reply, please\n> cc\n> > my\n> > home address.\n> >\n\n", "msg_date": "Fri, 8 May 1998 09:43:48 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Decicision needed for connect statement" } ]
[ { "msg_contents": "Let me try. Hmm, something from Microsoft? :-)\n\nMichael\n\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n> -----Original Message-----\n> From:\tPeter Mount [SMTP:[email protected]]\n> Sent:\tFriday, May 08, 1998 10:44 AM\n> To:\t'Meskes, Michael'; 'Peter Mount'; 'Thomas G. Lockhart'\n> Cc:\t'PostgreSQL Hacker'\n> Subject:\tRE: [HACKERS] Decicision needed for connect statement\n> \n> [resent as the local mail system screwed up - 10 points to guess what\n> system\n> it is ;-) ]\n> \n> > Okay, I'm willing to add it to either libecpg or (preferable imn my\n> > mind) to libpq. I'd also like to add my (old style) syntax if no one\n> > objects.\n> \n> I don't see why not. Handling both won't expand libpq by that much.\n> \n> --\n> Peter T Mount, [email protected], [email protected]\n> Please note that this is from my works email. If you reply, please cc\n> my\n> home address.\n> \n> > -----Original Message-----\n> > From:\tPeter Mount [SMTP:[email protected]]\n> > Sent:\tThursday, May 07, 1998 5:23 PM\n> > To:\t'Meskes, Michael'; 'Peter Mount'; 'Thomas G. Lockhart'\n> > Cc:\t'PostgreSQL Hacker'\n> > Subject:\tRE: [HACKERS] Decicision needed for connect statement\n> >\n> > I'm not sure about libpq as I've never used either with it. However,\n> > URL-style does seem to be the in thing at the moment.\n> >\n> > --\n> > Peter T Mount, [email protected], [email protected]\n> > Please note that this is from my works email. If you reply, please\n> cc\n> > my\n> > home address.\n> >\n> >\n> > -----Original Message-----\n> > From: Meskes, Michael [mailto:[email protected]]\n> > Sent: Thursday, May 07, 1998 2:42 PM\n> > To: 'Peter Mount'; 'Michael Meskes'; 'Thomas G. Lockhart'\n> > Cc: 'PostgreSQL Hacker'\n> > Subject: RE: [HACKERS] Decicision needed for connect statement\n> >\n> >\n> > Great. Do we implement this in libpq?\n> >\n> > And do we implement the old style syntax with '@', too?\n> >\n> > Michael\n> >\n> > --\n> > Dr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\n> > [email protected] | Europark A2, Adenauerstr.\n> 20\n> > [email protected] | 52146 Wuerselen\n> > Go SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\n> > Use Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n> >\n> > > -----Original Message-----\n> > > From:\tPeter Mount [SMTP:[email protected]]\n> > > Sent:\tThursday, May 07, 1998 3:27 PM\n> > > To:\t'Michael Meskes'; 'Thomas G. Lockhart'\n> > > Cc:\t'PostgreSQL Hacker'\n> > > Subject:\tRE: [HACKERS] Decicision needed for connect statement\n> > >\n> > > > Is there any benefit to using a url-style spec?\n> > > >\n> > > > postgres://server:port/dbname\n> > > >\n> > > > Very fashionable :)\n> > >\n> > > This would be in line to JDBC's url-style:\n> > >\n> > > jdbc:postgresql://server:port/dbname?options\n> > >\n> > > --\n> > > Peter T Mount, [email protected], [email protected]\n> > > Please note that this is from my works email. If you reply, please\n> > cc\n> > > my\n> > > home address.\n> > >\n", "msg_date": "Fri, 8 May 1998 11:06:32 +0200", "msg_from": "\"Meskes, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Decicision needed for connect statement" } ]
[ { "msg_contents": "I had t add a #include <sys/time.h> to get fe-misc.c to compile on my Debian\nGNU/Linux 2.0 machine.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Fri, 8 May 1998 12:11:10 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "compile problem in libpq" }, { "msg_contents": "> \n> I had t add a #include <sys/time.h> to get fe-misc.c to compile on my Debian\n> GNU/Linux 2.0 machine.\n> \n\nDo you have the #include <time.h> line, and you need sys/time.h too?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 8 May 1998 10:14:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compile problem in libpq" }, { "msg_contents": ">> I had t add a #include <sys/time.h> to get fe-misc.c to compile on my Debian\n>> GNU/Linux 2.0 machine.\n\n> Do you have the #include <time.h> line, and you need sys/time.h too?\n\nNow that he mentions it, I've seen the same thing on Linux boxen:\nyou need to include both <time.h> and <sys/time.h>. (I think the\nlatter pulls in some definitions needed to use select() on that OS.)\nSorry for not remembering about it. On my OS <time.h> just includes\n<sys/time.h> ...\n\n\nThe Autoconf manual says:\n\n - Macro: AC_HEADER_TIME\n If a program may include both `time.h' and `sys/time.h', define\n `TIME_WITH_SYS_TIME'. On some older systems, `sys/time.h'\n includes `time.h', but `time.h' is not protected against multiple\n inclusion, so programs should not explicitly include both files.\n This macro is useful in programs that use, for example, `struct\n timeval' or `struct timezone' as well as `struct tm'. It is best\n used in conjunction with `HAVE_SYS_TIME_H', which can be checked\n for using `AC_CHECK_HEADERS(sys/time.h)'.\n\n #if TIME_WITH_SYS_TIME\n # include <sys/time.h>\n # include <time.h>\n #else\n # if HAVE_SYS_TIME_H\n # include <sys/time.h>\n # else\n # include <time.h>\n # endif\n #endif\n\nI notice that configure.in invokes AC_HEADER_TIME, but does not check\nfor existence of <sys/time.h> ... and neither of these symbols are\ngetting exported into config.h anyway. But that's probably the most\nrobust advice you're going to find.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 May 1998 11:20:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compile problem in libpq " }, { "msg_contents": "> \n> >> I had t add a #include <sys/time.h> to get fe-misc.c to compile on my Debian\n> >> GNU/Linux 2.0 machine.\n> \n> > Do you have the #include <time.h> line, and you need sys/time.h too?\n> \n> Now that he mentions it, I've seen the same thing on Linux boxen:\n> you need to include both <time.h> and <sys/time.h>. (I think the\n> latter pulls in some definitions needed to use select() on that OS.)\n> Sorry for not remembering about it. 
On my OS <time.h> just includes\n> <sys/time.h> ...\n\nOK, I added sys/time.h to fe-misc.c postmaster.c has time.h and\nsys/time.h, so why not here too?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 8 May 1998 13:39:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] compile problem in libpq" } ]
[ { "msg_contents": "Bingo, you got it in one.\n\nThere are only two businesses who call their customers 'users'\n\n\n-----Original Message-----\nFrom: Meskes, Michael [mailto:[email protected]]\nSent: Friday, May 08, 1998 10:26 AM\nTo: 'Peter Mount'; 'Meskes, Michael'; 'Thomas G. Lockhart'\nCc: 'PostgreSQL Hacker'\nSubject: RE: [HACKERS] Decicision needed for connect statement\n\n\nLet me try. Hmm, something from Microsoft? :-)\n\nMichael\n\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n> -----Original Message-----\n> From:\tPeter Mount [SMTP:[email protected]]\n> Sent:\tFriday, May 08, 1998 10:44 AM\n> To:\t'Meskes, Michael'; 'Peter Mount'; 'Thomas G. Lockhart'\n> Cc:\t'PostgreSQL Hacker'\n> Subject:\tRE: [HACKERS] Decicision needed for connect statement\n> \n> [resent as the local mail system screwed up - 10 points to guess what\n> system\n> it is ;-) ]\n> \n> > Okay, I'm willing to add it to either libecpg or (preferable imn my\n> > mind) to libpq. I'd also like to add my (old style) syntax if no one\n> > objects.\n> \n> I don't see why not. Handling both won't expand libpq by that much.\n> \n> --\n> Peter T Mount, [email protected], [email protected]\n> Please note that this is from my works email. If you reply, please cc\n> my\n> home address.\n> \n> > -----Original Message-----\n> > From:\tPeter Mount [SMTP:[email protected]]\n> > Sent:\tThursday, May 07, 1998 5:23 PM\n> > To:\t'Meskes, Michael'; 'Peter Mount'; 'Thomas G. Lockhart'\n> > Cc:\t'PostgreSQL Hacker'\n> > Subject:\tRE: [HACKERS] Decicision needed for connect statement\n> >\n> > I'm not sure about libpq as I've never used either with it. However,\n> > URL-style does seem to be the in thing at the moment.\n> >\n> > --\n> > Peter T Mount, [email protected], [email protected]\n> > Please note that this is from my works email. If you reply, please\n> cc\n> > my\n> > home address.\n> >\n> >\n> > -----Original Message-----\n> > From: Meskes, Michael [mailto:[email protected]]\n> > Sent: Thursday, May 07, 1998 2:42 PM\n> > To: 'Peter Mount'; 'Michael Meskes'; 'Thomas G. Lockhart'\n> > Cc: 'PostgreSQL Hacker'\n> > Subject: RE: [HACKERS] Decicision needed for connect statement\n> >\n> >\n> > Great. Do we implement this in libpq?\n> >\n> > And do we implement the old style syntax with '@', too?\n> >\n> > Michael\n> >\n> > --\n> > Dr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\n> > [email protected] | Europark A2, Adenauerstr.\n> 20\n> > [email protected] | 52146 Wuerselen\n> > Go SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\n> > Use Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n> >\n> > > -----Original Message-----\n> > > From:\tPeter Mount [SMTP:[email protected]]\n> > > Sent:\tThursday, May 07, 1998 3:27 PM\n> > > To:\t'Michael Meskes'; 'Thomas G. Lockhart'\n> > > Cc:\t'PostgreSQL Hacker'\n> > > Subject:\tRE: [HACKERS] Decicision needed for connect statement\n> > >\n> > > > Is there any benefit to using a url-style spec?\n> > > >\n> > > > postgres://server:port/dbname\n> > > >\n> > > > Very fashionable :)\n> > >\n> > > This would be in line to JDBC's url-style:\n> > >\n> > > jdbc:postgresql://server:port/dbname?options\n> > >\n> > > --\n> > > Peter T Mount, [email protected], [email protected]\n> > > Please note that this is from my works email. 
If you reply, please\n> > cc\n> > > my\n> > > home address.\n> > >\n\n", "msg_date": "Fri, 8 May 1998 12:52:13 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Decision needed for connect statement" } ]
[ { "msg_contents": "Okay, I have changed ecpg to accept the following:\n\nCONNECT TO connection_target opt_connection_name opt_user\nCONNECT TO DEFAULT\nCONNECT ora_user\n\nwith\n\nconnection_target being either 'dbname[@server][:port]' or\n'{esql,ecpg,sql}:postgresql://server[:port][/dbname]'\n\nopt_connection_name is empty so far\n\nopt_user is 'USER ora_user' or empty\n\nFinally ora_user is one of the following:\n\nuser_name\nuser_name '/' password\nuser_name SQL_IDENTIFIED BY user_name\n\nThis should allow us to accept the standard connect calls as well as the\nOracle ones. Is there any major db system that uses a different syntax? For\ncompatibility I'd like to add as much as possible.\n\nI haven't patched the library yet. I will do so as soon as we agree on\nadding both version to libpq.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Fri, 8 May 1998 15:00:58 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "connection names" } ]
[ { "msg_contents": "To: [email protected] \n\n Is your site listed with the top search engines? ListMe will \n list you with 50 search engines and indexes for $90. \n Satisfaction guaranteed! \n\nSearch engines are the only way most people have to find internet sites.\nBut if your site is not listed, no one will find it.\n\nListMe will submit your site for listing in the top 50 search engines for $90.\n\nHere's how it works:\n\n1. Complete and return the form below.\n\n2. We'll post your site to 50 search engines within two business days, and \nwe'll send you a submission report when we're finished.\n\n3. Pay nothing now. We'll send you an invoice after we've completed the posting.\nYour satisfaction is guaranteed!\n\nThese are permanent listings. The $90 is a one-time fee.\n\nWHICH SEARCH ENGINES?\n\nHere's the list:\n\nInfoseek, Lycos, Excite, Alta Vista, Galaxy, HotBot, Magellan, Open \nText Web Index, Web Crawler, BizWeb, New Riders WWW Yellow Pages, \nYelloWWWeb, True North, Northern Light, LinkMonster, The Weekly Bookmark, \nSeven Wonders, Jayde Online Directory, Starting Point, Web 100, Web Walker, \nNew Page List, PeekABoo, One World Plaza, PageHost A-Z, Net-Announce, \nProject Cool, Where2Go, World Wide Business Yellow Pages, Sserv, \nWow! Web Wonders!, WWW Worm, JumpCity, The Galactic Galaxy, TurnPike, \nUnlock:The Information Exchange, Your WebScout, Manufacturers Information \nNetwork, Net Happenings, Net Mall, Web World Internet Directory, \nInfoSpace, BC Internet, BizCardz Business Directory, Scrub The Web, \nWebVenture, Hotlist, What's New, WhatUSeek, JumpLink, Linkcentre Directory. \n\nORDER FORM\n\nHit the REPLY button on your e-mail program and fill out the following information.\n (This information will be posted to the search engines/indexes):\n\n\nContact name: \nCompany Name:\nAddress:\nCity: State/Prov: Zip/Postal Code: \nTelephone: \nFax: \nEmail address: \n\nContact e-mail address (in case we have questions about this order): \n\nURL: http://\nSite Title: \nDescription (250 characters): \n\nKey words (250 characters, in descending order of importance):\n\nIf billing a different address, please complete the following:\n\nAddressee: \nCompany Name:\nAddress:\nCity: State/Prov: Zip/Postal Code: \nTelephone: \nFax: \nEmail address: \n\nTERMS\n\nTerms are net 15 days from date of invoice.\n\n______________________________________________________________________\nListMe, Inc.\n1127 High Ridge Road - Suite 184\nStamford CT 06905\nPhone: (203) 326-8519\nFax: (203) 322-4505\nE-mail: [email protected]\n", "msg_date": "Fri, 8 May 1998 11:18:42 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Your Search Engine Listing" } ]
[ { "msg_contents": "> user_name SQL_IDENTIFIED BY user_name\n> \n> This should allow us to accept the standard connect calls as well as the\n> Oracle ones. Is there any major db system that uses a different syntax?\n\nFor the password Informix uses:\n\tconnect to 'stores@zeusifx' USER 'informix' USING :passwd_host_variable;\n\nThe rest looks very compatible. Do you want me to send you the demo samples ?\n\nAndreas\n\n\n\n", "msg_date": "Fri, 8 May 1998 17:20:22 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] connection names" }, { "msg_contents": "Andreas Zeugswetter writes:\n> For the password Informix uses:\n> \tconnect to 'stores@zeusifx' USER 'informix' USING :passwd_host_variable;\n\nI like this. It's already added to my source tree as an additional option.\n\n> The rest looks very compatible. Do you want me to send you the demo samples ?\n\nYes, I like to get demo samples. I'm currently feeding ecpg with all demos\nthat come with ORACLE.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Mon, 11 May 1998 11:54:38 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] connection names" } ]
[ { "msg_contents": "Forwarded to HACKERS list.\n\t\t-DEJ\n\n> When I invoke psql, the default delimiter is the pipe \"|\"\n> character. I can't find the correct syntax to\n> change the delimiter back to a single space. If I\n> type :\n> \\f <whitespace> -- delimiter stays at |\n> \\f \\<whitespace> -- delimiter stays at |\n> \n> Enclosing the space within single or double quotes produces\n> a delimiter that actually includes the quotes. I tried\n> octal specification \\040 for the space which also failed.\n> \n> Version 6.3.2 on Linux i386 (Redhat 4).\n> \n> What's the correct syntax? \n> \n> Marc Zuckman\n> [email protected]\n> \n> _\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n> _ Visit The Home and Condo MarketPlace\t\t _\n> _ http://www.ClassyAd.com\t\t\t _\n> _\t\t\t\t\t\t\t _\n> _ FREE basic property listings/advertisements and searches. _\n> _\t\t\t\t\t\t\t _\n> _ Try our premium, yet inexpensive services for a real\t _\n> _ selling or buying edge!\t\t\t\t _\n> _\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n> \n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n", "msg_date": "Fri, 8 May 1998 10:57:57 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] Using psql \\f to change delimiter to space" } ]
[ { "msg_contents": "Forwarded to HACKERS list.\n\t\t-DEJ\n\n> -----Original Message-----\n> From:\[email protected] [SMTP:[email protected]]\n> Sent:\tWednesday, May 06, 1998 1:34 AM\n> To:\tpgsql questions; pgsql hackers\n> Subject:\t[QUESTIONS] FATAL: Backend cache invalidation\n> initialisation failed \n> \n> On a Digital dual PII/32MB RAM/8GB SCSI\n> running the same query 45 times I get:\n> \n> May 6 09:16:27 digital logger: NOTICE: SIAssignBackendId: discarding\n> tag\n> 2147483646\n> May 6 09:16:27 digital logger: FATAL 1: Backend cache invalidation\n> initialization failed\n> ..................\n> May 6 09:16:40 digital logger: FATAL 1: Backend cache invalidation\n> initialization failed\n> May 6 09:17:09 digital PAM_pwdb[397]: (login) session opened for user\n> root\n> by (uid=0)\n> May 6 09:17:09 digital PAM_pwdb[397]: ROOT LOGIN ON tty4\n> May 6 09:21:41 digital logger: NOTICE: LockRelease: find xid, table\n> corrupted\n> May 6 09:23:09 digital logger: NOTICE: Message from PostgreSQL\n> backend:\n> May 6 09:23:09 digital logger: ^IThe Postmaster has informed me that\n> some\n> other backend died abnormally and possibly corrupted shared memory.\n> May 6 09:23:09 digital logger: ^II have rolled back the current\n> transaction\n> and am going to terminate your database system connection and exit.\n> May 6 09:23:09 digital logger: ^IPlease reconnect to the database\n> system\n> and repeat your query.\n> \n> This kills my sistem.\n> Increasing the memory to 96MB didn't work it out.\n> In real life I expect more than 45 queries simultaneously.\n> \n> What can I do ?\n> \n> TIA\n> Claudiu << File: pg-log >> \n", "msg_date": "Fri, 8 May 1998 11:03:45 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] FATAL: Backend cache invalidation initialisation\n\tfailed" } ]
[ { "msg_contents": "Forwarded to HACKERS list.\n\t\t-DEJ\n\n> System Configuration\n> ---------------------\n> Architecture (example: Intel Pentium) : Sun SPARCstation-20\n> \n> Operating System (example: Linux 2.0.26 ELF) : Solaris 5.5.1\n> \n> PostgreSQL version (example: PostgreSQL-6.3.2) : PostgreSQL-6.3.2\n> \n> Compiler used (example: gcc 2.7.2) : gcc v2.7\n> \n> \n> Please enter a FULL description of your problem:\n> ------------------------------------------------\n> \n> 1. Compilation of Postgres 6.3.2 successful.\n> \n> 2. install successful.\n> \n> 3. initdb fails. following is the output of initdb command.\n> \n> Running with debug mode on.\n> initdb: using /app/edo/Postgres-6.3.2/lib/local1_template1.bki.source\n> as input\n> to create the template database.\n> initdb: using /app/edo/Postgres-6.3.2/lib/global1.bki.source as input\n> to create\n> the global classes.\n> initdb: using /app/edo/Postgres-6.3.2/lib/pg_hba.conf.sample as the\n> host-based\n> authentication control file.\n> \n> We are initializing the database system with username pswamy\n> (uid=41705).\n> This user will own all the files and must also own the server process.\n> \n> initdb: creating template database in\n> /app/edo/Postgres-6.3.2/data/base/template1\n> Running: postgres -boot -C -F -D/app/edo/Postgres-6.3.2/data -d\n> template1\n> \n> Creating global classes in /base\n> Running: postgres -boot -C -F -D/app/edo/Postgres-6.3.2/data -d\n> template1\n> \n> Adding template1 database to pg_database...\n> Running: postgres -boot -C -F -D/app/edo/Postgres-6.3.2/data -d\n> template1 <\n> /tmp/create.20796\n> Amopen: relation pg_database. attrsize 63\n> Segmentation Fault - core dumped\n> initdb: could not log template database\n> initdb: cleaning up.\n> \n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n", "msg_date": "Fri, 8 May 1998 11:06:52 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] initdb fail in v 6.3.2" } ]
[ { "msg_contents": "Forwarded to HACKERS and BUGS lists.\n\t\t-DEJ\n\n> -----Original Message-----\n> From:\tIntegration [SMTP:[email protected]]\n> Sent:\tWednesday, May 06, 1998 10:19 AM\n> To:\[email protected]\n> Subject:\t[QUESTIONS] Multiple tables?\n> \n> Hello,\n> \n> I have an Intel x86, with pgsql-6.3.2, and when I create a table, I\n> get a \n> gazillion copies (8 to be exact) of the table, on with user postgres\n> as \n> creator, and 7 with user eddie (a random user I created).\n> \n> Did I do something silly to cause this?\n> \n> Eddie\n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n", "msg_date": "Fri, 8 May 1998 11:11:27 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] Multiple tables?" } ]
[ { "msg_contents": "> > When I invoke psql, the default delimiter is the pipe \"|\"\n> > character. I can't find the correct syntax to\n> > change the delimiter back to a single space. If I\n> > type :\n> > \\f <whitespace> -- delimiter stays at |\n> > \\f \\<whitespace> -- delimiter stays at |\n> > \n> > Enclosing the space within single or double quotes produces\n> > a delimiter that actually includes the quotes. I tried\n> > octal specification \\040 for the space which also failed.\n> > \n> > Version 6.3.2 on Linux i386 (Redhat 4).\n> > \n> > What's the correct syntax? \n> \n> I just did\n> \n> \\f \\ \n> ^ space here\n> \n> See:\n> \n> ----------------------------------------------------------------------\n> -----\n> \n> test=> \\f \\ \n> field separator changed to ''\n> test=> select * from pg_user\n> test-> \\g\n> usename usesysidusecreatedbusetraceusesuperusecatupdpasswd valuntil\n> \n> ----------------------------------------------------------------------\n> ------------------\n> postgres 139t t t t ********Sat Jan 31\n> 01:00:00 2037 EST\n> (1 row)\n> \nSorry to correct you Bruce but that causes the delimiter to be set to\nnothing ('') not space (' ').\n\n\t\t-DEJ\n", "msg_date": "Fri, 8 May 1998 11:16:19 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] Using psql \\f to change delimiter to space" }, { "msg_contents": "> \n> > > When I invoke psql, the default delimiter is the pipe \"|\"\n> > > character. I can't find the correct syntax to\n> > > change the delimiter back to a single space. If I\n> > > type :\n> > > \\f <whitespace> -- delimiter stays at |\n> > > \\f \\<whitespace> -- delimiter stays at |\n> > > \n> > \n> Sorry to correct you Bruce but that causes the delimiter to be set to\n> nothing ('') not space (' ').\n\nI stand corrected. This has been asked before, and the only solution I\nknow is:\n\n\tpsql -F ' ' testdb\n\nIs that acceptable, or do you want it to work with \\f? The problem is\nthat the trailing spaces are removed from all command lines, so it is not\nseeing the space.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 8 May 1998 13:25:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Using psql \\f to change delimiter to space" } ]
[ { "msg_contents": "Forwarded to HACKERS list.\n\t\t-DEJ\n\n> -----Original Message-----\n> From:\tGiuliano P Procida [SMTP:[email protected]]\n> Sent:\tWednesday, May 06, 1998 12:54 PM\n> To:\[email protected]\n> Subject:\t[QUESTIONS] UInt types\n> \n> Hi.\n> \n> I would like a 32 bit unsigned integer type (to correspond to the SNMP\n> types Counter32 and Gauge32 - Counter64 exists as well). I would\n> rather not use text! I imagine someone has solved this one already? Or\n> has someone added bignums to PostgreSQL (say with the (LGPL) GNU MP\n> library)?\n> \n> Thanks in advance,\n> Giuliano.\n> -- \n> mail: [email protected] / [email protected] | public PGP key ID: 93898735\n> home: +44 1223 561237 / 547 Newmarket Road, Cambridge CB5 8PA, UK\n> work: +44 1223 332127 / Magdalene College, Cambridge CB3 0AG, UK\n> work: +44 1223 335333 / International Studies, Cambridge CB2 1QY, UK\n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n", "msg_date": "Fri, 8 May 1998 11:19:11 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] UInt types" } ]
[ { "msg_contents": "Forwarded to HACKERS list.\n\nYou might want to try a 'make clean' and then './configure', if you ran\nconfigure before you loaded the new package it could still be reporting\nincorrectly.\n\n\t\t-DEJ\n\n> -----Original Message-----\n> From:\tZsolt Varga [SMTP:[email protected]]\n> Sent:\tThursday, May 07, 1998 6:58 AM\n> To:\[email protected]\n> Subject:\t[QUESTIONS] Wrong include/config.h --> Glibc2.0.7pre1 +\n> linux 2.0.33\n> \n> \n> \n> hello!\n> \n> I just installed a fresh debian 2.0 *frozen development version*\n> it's includes a glibc2.0.7pre1 and gcc 2.7.2.3, bindutils 2.9.x \n> \n> i ran ./configure --with-template=linux-elf --enable-hba\n> --enable-locale\n> \n> and I see the correct statements while the configure runs,\n> but after it's ready and created the include/config.h\n> it's not the same ;)\n> \n> like:\n> ...\n> checking for limits.h... (cached) yes\n> checking for unistd.h... (cached) yes\n> checking for termios.h... (cached) yes\n> checking for values.h... (cached) yes\n> checking for sys/select.h... (cached) yes\n> checking for sys/resource.h... (cached) yes\n> checking for netdb.h... (cached) yes\n> checking for arpa/inet.h... (cached) yes\n> checking for getopt.h... (cached) yes\n> checking for readline.h... (cached) yes\n> checking for history.h... (cached) yes\n> ...\n> \n> and my include/config.h looks like this:\n> (this is the parts of the config.h)\n> \n> /* Set to 1 if you have <limits.h> */\n> #undef HAVE_LIMITS_H\n> \n> /* Set to 1 if you have <readline.h> */\n> #undef HAVE_READLINE_H\n> \n> /* Set to 1 if you have <history.h> */\n> #undef HAVE_HISTORY\n> \n> /* Set to 1 if you have <readline/history.h> */\n> #undef HAVE_READLINE_HISTORY_H\n> \n> /* Set to 1 if you have <readline/readline.h> */\n> #undef HAVE_READLINE_READLINE_H\n> \n> \n> and so on...\n> \tcould someone help me ?\n> \n> \tredax\n> \n> .----------------------------------------------------------.\n> |Zsolt Varga | tel/fax: +36 36 422811 |\n> | AgriaComputer LTD | email: [email protected] |\n> | System Administrator | URL: http://www.agria.hu/ |\n> `----------------------------------------------------------'\n> \n> \n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n", "msg_date": "Fri, 8 May 1998 11:24:21 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] Wrong include/config.h --> Glibc2.0.7pre1 + linux\n\t2.0.33" } ]
[ { "msg_contents": "Hello all,\n\nThere is a new odbc driver (version 6.30.0237) and source code at our\nwebsite (http://www.insightdist.com/psqlodbc). This one contains some\nbug fixes AND alot of setup options! You can now configure the driver\nmore closely to match your needs hopefully. Please click on the version\nlink and/or the \"dialog boxes\" link for more detailed information. Some\nquick highlights are:\n\n- By democratic vote, uses ISO datestyle, period! Automatically sets\nthis datestyle to backend on connection.\n- Advanced options for Driver and Datasource (and lots of them).\n- Ability to control whether cursors are used, thus emulating the old\ndriver/libpq behavior.\n- Ability to control how unknown sizes are reported: \"Longest\" emulates\nold driver/libpq behavior.\n- Recognizing Unique indexes is a driver option.\n- Some data type mappings and data type sizes are configurable.\n- SQLExtendedFetch implemented when not using cursors.\n\nThere is also a \"Defaults\" button to set all these options back to\noptimum settings.\n\nNote, there are performance issues involved with not using cursors,\nsince the driver must suck down all the rows in the result set.\nHowever, there are some advantages when it comes to updating tables\nsince the tables are not kept locked by the backend. If you are having\nproblems with locked tables or the driver hanging, try setting Use\nCursors to false. This is a workaround until whenever the backend\nlocking improves. Also, when not using cursors, the sizes of character\ndata types varchar and text can be known since all the tuples are\nretrieved in the result set. So if you set the option \"Longest\" on the\nAdvanced Options (Driver), this will be possible.\n\n\nFeedback on this new driver is appreciated.\n\nRegards,\n\nByron.\n\n", "msg_date": "Fri, 08 May 1998 12:35:19 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "NEW ODBC DRIVER" }, { "msg_contents": "I just downloaded it, and I cannot get any queries that have an\n\"ORDER BY\" in them to work. 
Access '97 keeps returning an ODBC call\nfailed error, and gives me the message:\n\n ERROR: The field being ordered must appear in the target list (#1)\n\nIf I cut and paste the SQL code directly into a psql session, it\nruns just fine.\n\n---\nChris Osborn, Network Administrator T3West/WebCow!\n707 255 9330 x225 - Voice 1804 Soscol Ave, #203\n707 224 9916 - Fax Napa, CA 94559\n<[email protected]> <http://t3west.com/>\n\n", "msg_date": "Fri, 8 May 1998 17:49:36 PDT", "msg_from": "\"Chris Osborn\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] NEW ODBC DRIVER" }, { "msg_contents": "It also keeps insisting that the connection is read only, even\nthough I unchecked the read only boxes in both the Driver settings\nand Advanced settings.\n\n---\nChris Osborn, Network Administrator T3West/WebCow!\n707 255 9330 x225 - Voice 1804 Soscol Ave, #203\n707 224 9916 - Fax Napa, CA 94559\n<[email protected]> <http://t3west.com/>\n\n", "msg_date": "Fri, 8 May 1998 18:12:56 PDT", "msg_from": "\"Chris Osborn\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] NEW ODBC DRIVER" }, { "msg_contents": "Access insists on using order by clauses like that, even though the\ndriver returns information saying it can't support it!\n\nPostgres simply can't handle order by clauses without the fields also\nbeing in the target.\n\nThe error you see is a legitimate error coming from the backend.\n\nWhen the backend can handle those kinds of order by clauses, the error\nwill stop happening.\n\nByron\n\n\nChris Osborn wrote:\n\n> I just downloaded it, and I cannot get any queries that have an\n> \"ORDER BY\" in them to work. Access '97 keeps returning an ODBC call\n> failed error, and gives me the message:\n>\n> ERROR: The field being ordered must appear in the target list (#1)\n>\n> If I cut and paste the SQL code directly into a psql session, it\n> runs just fine.\n>\n> ---\n> Chris Osborn, Network Administrator T3West/WebCow!\n> 707 255 9330 x225 - Voice 1804 Soscol Ave, #203\n> 707 224 9916 - Fax Napa, CA 94559\n> <[email protected]> <http://t3west.com/>\n\n\n\n", "msg_date": "Sat, 09 May 1998 20:12:21 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] NEW ODBC DRIVER" }, { "msg_contents": "Hello,\n\nFor Access to be able to update records it must have a unique key.\nAccess 95 and 97 can ask for the key if the driver doesn't automatically\nreturn it.\nAccess 2.0 can not ask and relies completely on the driver to return the\nunique key.\n\nThere is a setting in the Driver Options dialog which controls how this\nhappens.\nIt is called \"Recognize Unique Indexes\". If checked, the driver will\nautomatically return the information. If not checked (the default), it\nwill not return a unique index.\n\nHope this helps.\n\nByron\n\n\nChris Osborn wrote:\n\n> It also keeps insisting that the connection is read only, even\n> though I unchecked the read only boxes in both the Driver settings\n> and Advanced settings.\n>\n> ---\n> Chris Osborn, Network Administrator T3West/WebCow!\n> 707 255 9330 x225 - Voice 1804 Soscol Ave, #203\n> 707 224 9916 - Fax Napa, CA 94559\n> <[email protected]> <http://t3west.com/>\n\n\n\n", "msg_date": "Sat, 09 May 1998 20:15:37 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] NEW ODBC DRIVER" }, { "msg_contents": "So what can I do to sort in Access '97? 
I doubt the backend will\nbe getting changed anytime soon.\n\n---\nChris Osborn, Network Administrator T3West/WebCow!\n707 255 9330 x225 - Voice 1804 Soscol Ave, #203\n707 224 9916 - Fax Napa, CA 94559\n<[email protected]> <http://t3west.com/>\n\nOn May. 09 98, 17:12 PDT, \"Byron Nikolaidis\" <[email protected]>\nwrote:\n\n> Access insists on using order by clauses like that, even though\n> the driver returns information saying it can't support it!\n\n> Postgres simply can't handle order by clauses without the fields\n> also being in the target.\n\n> The error you see is a legitimate error coming from the backend.\n\n> When the backend can handle those kinds of order by clauses, the\n> error will stop happening.\n\n", "msg_date": "Mon, 11 May 1998 10:45:09 PDT", "msg_from": "\"Chris Osborn\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] NEW ODBC DRIVER" }, { "msg_contents": "\n\nJose' Soares Da Silva wrote:\n\n> I have a problem with types.\n> I created a table with a column of type money and another with type\n> bool, Access translates money to numeric (double precision) and bool to\n> text.\n> I thought that Access recognized these types as Money and Yes/No.\n> Is it an ODBC or a PostgreSQL problem ?\n> Thanks, Jose'\n\n\nIn my tests, Access never bothered to retrieve the information returned by\nthe driver which says that the field is a MONEY type. I chose to make it a\nnumeric, but I could make it character, which would allow you to see the\nmoney symbols, but I'm not sure if you could perform calculations on it?\n\nAs for the BOOL problem, I tried to return it as a SQL_BOOL, but Access\ndisplayed it as 0=FALSE, and (-1)=TRUE. Why does TRUE translate to a -1, I\nhave no idea. But for that reason, I chose to make it a character type\ninstead.\n\nI could add options to the setup dialog for handling these two types, if\nanyone's interested.\n\nByron\n\n", "msg_date": "Mon, 11 May 1998 08:35:29 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NEW ODBC DRIVER" }, { "msg_contents": "On Fri, 8 May 1998, Byron Nikolaidis wrote:\n\n> Hello all,\n> \n> There is a new odbc driver (version 6.30.0237) and source code at our\n> ...\n> Feedback on this new driver is appreciated.\n> \nI just downloaded the ODBC 6.30.0238\nand it seems that it works well with Access. Thanks to Byron. Great job!\n\nI have a problem with types.\nI created a table with a column of type money and another with type\nbool, Access translates money to numeric (double precision) and bool to\ntext.\nI thought that Access recognized these types as Money and Yes/No.\nIs it an ODBC or a PostgreSQL problem ?\n Thanks, Jose'\n\n", "msg_date": "Mon, 11 May 1998 13:25:03 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NEW ODBC DRIVER" }, { "msg_contents": "Hello,\n\nAt 08.35 11/05/98 -0400, Byron Nikolaidis wrote:\n>As for the BOOL problem, I tried to return it as a SQL_BOOL, but Access\n>displayed it as 0=FALSE, and (-1)=TRUE. Why does TRUE translate to a -1, I\n>have no idea. But for that reason, I chose to make it a character type\n>instead.\n\nThis is an MS brain damage implementation of Booleans. It is used this way\nstarting from MS Access 1.0 up to VB 5.0. I don't know why MS decided to\nuse this convention in the early MS Access 1.0 age but for compatibility\nreason they had to retain it up to the most recent version of their\ndevelopment programs.\n\nBye !\n\n\tDr. 
Sbragion Denis\n\tInfoTecna\n\tTel, Fax: +39 39 2324054\n\tURL: http://space.tin.it/internet/dsbragio\n", "msg_date": "Mon, 11 May 1998 18:15:24 +0200", "msg_from": "Sbragion Denis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: NEW ODBC DRIVER" }, { "msg_contents": "Sbragion Denis wrote:\n\n> Hello,\n>\n> At 08.35 11/05/98 -0400, Byron Nikolaidis wrote:\n> >As for the BOOL problem, I tried to return it as a SQL_BOOL, but Access\n> >displayed it as 0=FALSE, and (-1)=TRUE. Why does TRUE translate to a -1, I\n> >have no idea. But for that reason, I chose to make it a character type\n> >instead.\n>\n> This is an MS brain damage implementation of Booleans. It is used this way\n> starting from MS Access 1.0 up to VB 5.0. I don't know why MS decided to\n> use this convention in the early MS Access 1.0 age but for compatibility\n> reason they had to retain it up to the most recent version of their\n> development programs.\n>\n>\n\nOK,\n\nI'm gonna make it an option. But, as I mentioned before, there are some\nweirdnesses with Access. Here's another weird thing with the way it handles\nNULL SQL_BIT columns.\n\nIf I have my Postgres bool column, and it contains a NULL, Access automatically\ndisplays it as \"0\". Then if I try to update the record, it uses the \"0\" in the\nwhere clause. Well guess what, no records are updated because the \"0\" doesn't\nmatch the NULL in the record, and you get this ugly message about a user\nconflict!\n\nWhen BOOLS are handled as character data, this doesn't happen of course.\n\nAnybody got any ideas about this?\n\nByron\n\n\n", "msg_date": "Mon, 11 May 1998 14:42:16 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] Re: NEW ODBC DRIVER" }, { "msg_contents": "I suspect that it is only going to be the MS Access 97 users that are\ngoing to suffer from this weakness in the backend. I believe Access is\ntrying to optimize somehow by breaking a single multi-join statement into\nmultiple statements. To do this Access must be joining on the client\nside based on a relative row position rather than the specified join\ncolumns.\n\nUntil the problem is resolved in the backend, the workaround is to\nexplicitly include the missing attributes in the query. In my\nexperience, the missing attributes are usually from one or more sides of\nany join clauses. But, because Access is not showing the actual\nstatements it is sending to the backend, you will have to guess the\nattributes until the query succeeds. (You could also look at the log\nfile)\n\nIt is not very difficult to produce this problem in MS Access 97; I\nexpect my users to beat me up pretty good on this issue. Thus, I plan to\nlook into making the fix in the backend myself. Conceptually it does not\nseem too difficult.\n\n1. Add a hidden attribute to the target node structure.\n\n2. Modify the parser/analyzer to add any attributes in the GROUP/ORDER BY\nclause that are missing from the target list, to the target list with the\nhidden attribute set.\n\n3. Strip the hidden nodes from the target list projection of the query.\n\n4. Add the feature to the HAVING clause?\n\nAny hints, comments, or objections?\n\nChris Osborn wrote:\n\n> So what can I do to sort in Access '97? I doubt the backend will\n> be getting changed anytime soon.\n>\n> On May. 
09 98, 17:12 PDT, \"Byron Nikolaidis\" <[email protected]>\n> wrote:\n>\n> > Access insists on using order by clauses like that, even though\n> > the driver returns information saying it can't support it!\n>\n> > Postgres simply can't handle order by clauses without the fields\n> > also being in the target.\n>\n> > The error you see is a legitimate error coming from the backend.\n>\n> > When the backend can handle those kinds of order by clauses, the\n> > error will stop happening.", "msg_date": "Mon, 11 May 1998 15:20:58 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": false, "msg_subject": "Group/Order by not in target - Was [NEW ODBC DRIVER]" }, { "msg_contents": "> > At 08.35 11/05/98 -0400, Byron Nikolaidis wrote:\n> > >As for the BOOL problem, I tried to return it as a SQL_BOOL, but Access\n> > >displayed it as 0=FALSE, and (-1)=TRUE. Why does TRUE translate to a -1, I\n> > >have no idea. But for that reason, I chose to make it a character type\n> > >instead.\n> >\n> \n> If I have my Postgres bool column, and it contains a NULL, Access automatically\n> displays it as \"0\". Then if I try to update the record, it uses the \"0\" in the\n> where clause. Well guess what, no records are updated because the \"0\" doesn't\n> match the NULL in the record, and you get this ugly message about a user\n> conflict!\n> \n> When BOOLS are handled as character data, this doesnt happen of course.\n> \n> Anybody got any ideas about this?\n\nWhen migrating tables from Access 2.0 to an SQL Server (Informix,\nInterbase or PostgreSQL) I'm using INT4 to simulate boolean values\n(0=False, -1=True), all Access queries using boolean columns will\nwork as before with native Access tables:\n\n\"... WHERE (b=True);\" selects all rows with column b == -1\n\"... WHERE (b=False);\" selects all rows with column b == 0\n\"... WHERE (b is Null);\" selects all rows with column b == NULL\n\nOf course you can't issue queries like \"...where (b = true);\" on the \nUNIX side.\n\nRegards,\nOlaf\n--\nOlaf Mittelstaedt - IuK - [email protected]\nFachhochschule Ulm Prittwitzstr. 10 89075 Ulm\nTel.: +49 (0)731-502-8220 Fax: -8270\n\n Tertium non datur.\n\n", "msg_date": "Tue, 12 May 1998 09:50:22 +0100", "msg_from": "\"Olaf Mittelstaedt\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NEW ODBC DRIVER" }, { "msg_contents": "On Mon, 11 May 1998, Byron Nikolaidis wrote:\n\n> \n> \n> Jose' Soares Da Silva wrote:\n> \n> > I have a problem with types.\n> > I created a table with a column of type money and another with type\n> > bool, Access translate money to numeric (double precision) and bool to\n> > text.\n> > I thought that Access recognized this types as Money and Yes/No.\n> > Is it an ODBC or a PostgreSQL problem ?\n> > Thanks, Jose'\n> \n> \n> In my tests, Access never bothered to retrieve the information returned by\n> the driver which says that the field is a MONEY type. 
I chose to make it a\n> numeric, but I could make it character, which would allow you to see the\n> money symbols, but I'm not sure if you could perform calculations on it?\n\nNumeric should be OK, but Access doesn't read the data on money fields.\nAccess displays the word \"#deleted\" on the field instead of data.\n\n Jose'\n\n", "msg_date": "Tue, 12 May 1998 11:02:43 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: NEW ODBC DRIVER" }, { "msg_contents": "On Tue, 12 May 1998, Jose' Soares Da Silva wrote:\n\n> On Mon, 11 May 1998, Byron Nikolaidis wrote:\n> \n> > \n> > In my tests, Access never bothered to retrieve the information returned by\n> > the driver which says that the field is a MONEY type. I chose to make it a\n> > numeric, but I could make it character, which would allow you to see the\n> > money symbols, but I'm not sure if you could perform calculations on it?\n> \n> Numeric should be OK, but Access doesn't read the data on money fields.\n> Access displays the word \"#deleted\" on the field instead of data.\n\nPlease forget the above message, now it works. The reason for \"#deleted\"\nprobably was that Access put this field as the primary key.\n Jose'\n\n", "msg_date": "Tue, 12 May 1998 12:17:56 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: NEW ODBC DRIVER" }, { "msg_contents": "This sounds like a different problem. I have tested money columns and at least I\ncan see the data. Since the driver must convert the money type into a numeric,\nmaybe it is having trouble with your type of money. Didn't you say that you are\nnot using dollars?\n\nByron\n\nJose' Soares Da Silva wrote:\n\n> On Mon, 11 May 1998, Byron Nikolaidis wrote:\n>\n> >\n> >\n> > Jose' Soares Da Silva wrote:\n> >\n> > > I have a problem with types.\n> > > I created a table with a column of type money and another with type\n> > > bool, Access translate money to numeric (double precision) and bool to\n> > > text.\n> > > I thought that Access recognized this types as Money and Yes/No.\n> > > Is it an ODBC or a PostgreSQL problem ?\n> > > Thanks, Jose'\n> >\n> >\n> > In my tests, Access never bothered to retrieve the information returned by\n> > the driver which says that the field is a MONEY type. I chose to make it a\n> > numeric, but I could make it character, which would allow you to see the\n> > money symbols, but I'm not sure if you could perform calculations on it?\n>\n> Numeric should be OK, but Access doesn't read the data on money fields.\n> Access displays the word \"#deleted\" on the field instead of data.\n>\n> Jose'\n\n\n\n", "msg_date": "Tue, 12 May 1998 10:37:08 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] Re: NEW ODBC DRIVER" }, { "msg_contents": "On Mon, 11 May 1998, David Hartwig wrote:\n\n \n> It is not very difficult to produce this problem in MS Access 97; I\n> expect my users to beat me up pretty good on this issue. Thus, I plan to\n> look into making the fix in the backend myself. Conceptually it does not\n> seem too difficult.\n> \n> 1. Add a hidden attribute to the target node structure.\n> \n> 2. Modify the parser/analyzer to add any attributes in the GROUP/ORDER BY\n> clause that are missing from the target list, to the target list with the\n> hidden attribute set.\nThis would be a great enhancement! 
\nSQL92 specifies that columns in the ORDER BY must appear in the\nSELECT clause, but this limitation makes no sense; indeed many databases\nalready implement this enhancement.\nGo for it!\n\n", "msg_date": "Fri, 15 May 1998 13:34:04 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Group/Order by not in target - Was [NEW ODBC DRIVER]" }, { "msg_contents": "> \n> On Mon, 11 May 1998, David Hartwig wrote:\n> \n> \n> > It is not very difficult to produce this problem in MS Access 97; I\n> > expect my users to beat me up pretty good on this issue. Thus, I plan to\n> > look into making the fix in the backend myself. Conceptually it does not\n> > seem too difficult.\n> > \n> > 1. Add a hidden attribute to the target node structure.\n> > \n> > 2. Modify the parser/analyzer to add any attributes in the GROUP/ORDER BY\n> > clause that are missing from the target list, to the target list with the\n> > hidden attribute set.\n> This would be a great enhancement! \n> SQL92 specifies that columns in the ORDER BY must appear in the\n> SELECT clause, but this limitation makes no sense; indeed many databases\n> already implement this enhancement.\n> Go for it!\n\nThere already is code in the backend for Junk fields to be removed. Not\nsure what it does, though.\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 15 May 1998 09:51:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Group/Order by not in target - Was\n\t[NEW ODBC DRIVER]" } ]
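A rough sketch of how steps 2 and 3 of that plan might look, in the spirit of the junk-field code Bruce mentions (the function names and signatures below are illustrative only, not the actual source):

	/* step 2, in the parser/analyzer: append each ORDER BY/GROUP BY
	 * expression missing from the target list as a hidden entry */
	resdom = makeResdom(next_resno++, exprType(expr), attname,
	                    true /* resjunk: hidden attribute */);
	tlist = lappend(tlist, makeTargetEntry(resdom, expr));

	/* step 3: before tuples are returned to the client, a junk filter
	 * projects the resjunk columns back out of the result */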
[ { "msg_contents": "Forwarded to HACKERS and BUGS lists.\n\n> Dear postgreSQL developers,\n> \n> I like your postgreSQL very much, but I have some problems. Please\n> help\n> me!\n> \n> My first problem:\n> If I create end destroy a table several times (for instance\n> table\n> ANALOG), after some trials I can't create this table (ANJALOG) again.\n> When I list my tables with \\dt command (in psql), psql says that I\n> haven't got any tables at all. Despite the fact that I haven't got any\n> tables, I can't create table ANALOG, but I can create another table\n> (for\n> instance BOCI).\nI'm on a Redhat 5.0 system running PG6.3.1 and I cannot recreate your\nproblem. I tried 40 create table/drop table's in a row and had no\nproblem (thank goodness for readline). You might want to try a 'vacuum\nanalyze;' and try the create again.\n\n> My second problem:\n> I can't create a table with name group. \n> If I write:\n> create table group (puszi int);\n> then psql says:\n> ERROR: parser: parse error at or near 'group'\n> \n'group' is reserved in PGSQL. The developers might have a workaround\nfor you, but it might just be easier to change the name of the table.\nAdd a prefix or a suffix i.e. mygroup, user_group, t_group, group_man,\ngroup_t, ...\n\n> I can't decide, that I've committed some errors or there are some bugs\n> in postgreSQL. \n> Please help me, and tell me what to do! Please notify me, when you\n> will\n> issue your next version of postgreSQL (I have 6.3)!\nWe are up to 6.3.2 with some patches, you might want to upgrade it could\nsolve your problem with the table creation.\n\n> Thank you for your attention.\n> \n> Gyorgy Lendvary\n> \n> e-mail: [email protected]\n> \n> P.S.: I'm sorry my terrible English, but I'm Hungarian!\nNot a problem, I'm a Texan and I understand you just fine.\nI'll bet you every cent that I have that your English is better than my\nHungarian.\n\nHope this helps,\n\t\t-DEJ\n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n", "msg_date": "Fri, 8 May 1998 11:50:23 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [QUESTIONS] Important questions about PostgreSQL" } ]
[ { "msg_contents": "> \n> > \n> > OK, here is my argument for inlining tas().\n> \n> I am going out of town till Monday, so don't have time to give this\n> the thoughtful response it deserves. I will get back to you on this as\n> I am not quite convinced, but obviously in the face of this I need to\n> explain my reasoning.\n\nSure. I just know that the reduction from 0.28 to 0.08 was performed\none 0.01 at a time. See the MemSet() macro. I am sure you will hate it\ntoo, but it did reduce the number of calls to memset() and reduced\nwallclock execution time as measured from the client.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 8 May 1998 13:31:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Try again: S_LOCK reduced contentionh]" }, { "msg_contents": "Bruce Momjian:\n> > > OK, here is my argument for inlining tas().\n> David Gould: \n> > I am going out of town till Monday, so don't have time to give this\n> > the thoughtful response it deserves. I will get back to you on this as\n> > I am not quite convinced, but obviously in the face of this I need to\n> > explain my reasoning.\n> \n> Sure. I just know that the reduction from 0.28 to 0.08 was performed\n> one 0.01 at a time. See the MemSet() macro. I am sure you will hate it\n> too, but it did reduce the number of calls to memset() and reduced\n> wallclock execution time as measured from the client.\n\nThis is always how micro-optimization goes, 1% and 2% gains here and there.\nI am very familiar with it.\n\nAnyhow, I am just back, and it occured to me to ask you for an exact\nreproducable test setup, so that I can run the same thing and play around\nwith it a bit. I am not convinced by your result for a number of reasons,\nbut rather than just make assertions or speculations, I think I should do a\nlittle data collection.\n\nAlso, if you have any other cases that you believe useful to test with, I\nwould be very happy to try those too.\n\nHave you done call graph profile for this, or just flat profiling? I think\nyou may find the call graph (gprof) output revealing, although perhaps not\non this particular topic...\n\nOne last item, appropos start up time. Illustra uses what we call \"Server\nCaching\". Basically when a connect is terminated, the backend instead of\nexiting goes into a pool of idle servers. When next a connection comes in for\nthe same database, instead of forking and initing a new server, we merely\nreuse the old one. This saves lots of startup time. However, there are some\nproblems in practice so we might want to do something \"like this only\ndifferent\". The idea that occurred to me is to have the postmaster \n\"pre-spawn\" some servers in each (configurable) database. 
These would run\nall the initialization and then just wait for a socket to be handed to them.\nThe postmaster would during idle time replenish the pool of ready servers.\nI think this might have a lot more impact on startup time than turning things\ninto macros...\n\nThoughts?\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Sun, 10 May 1998 22:53:02 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Try again: S_LOCK reduced contentionh]" }, { "msg_contents": "[email protected] (David Gould) writes:\n> The idea that occurred to me is to have the postmaster \n> \"pre-spawn\" some servers in each (configurable) database. These would run\n> all the initialization and then just wait for a socket to be handed to them.\n> The postmaster would during idle time replenish the pool of ready servers.\n\nCool idea ... but how to get the socket passed off from postmaster to\nback end, other than through a fork?\n\nI think there is a facility in SYSV messaging to transmit a file\ndescriptor from one process to another, but that's not going to be a\nportable answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 May 1998 10:49:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contentionh] " }, { "msg_contents": "\nsame way that the current network socket is passed -- through an execv\nargument. hopefully, however, the non-execv()ing fork will be in 6.4.\n\ndoes anyone have any suggestions for postmaster->backend variable\npassing? Should it just pass an argv array for compatiblity reasons?\nThere will have to be some sort of arg parsing in any case,\nconsidering that you can pass configurable arguments to the backend..\n\nOn Mon, 11 May 1998, at 10:49:11, Tom Lane wrote:\n\n> Cool idea ... but how to get the socket passed off from postmaster to\n> back end, other than through a fork?\n> \n> I think there is a facility in SYSV messaging to transmit a file\n> descriptor from one process to another, but that's not going to be a\n> portable answer.\n> \n> \t\t\tregards, tom lane\n", "msg_date": "Mon, 11 May 1998 07:57:23 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contentionh] " }, { "msg_contents": "Brett McCormick <[email protected]> writes:\n> same way that the current network socket is passed -- through an execv\n> argument. hopefully, however, the non-execv()ing fork will be in 6.4.\n\nUm, you missed the point, Brett. David was hoping to transfer a client\nconnection from the postmaster to an *already existing* backend process.\nFork, with or without exec, solves the problem for a backend that's\nstarted after the postmaster has accepted the client socket.\n\nThis does lead to a different line of thought, however. Pre-started\nbackends would have access to the \"master\" connection socket on which\nthe postmaster listens for client connections, right? Suppose that we\nfire the postmaster as postmaster, and demote it to being simply a\nmanufacturer of new backend processes as old ones get used up. 
Have\none of the idle backend processes be the one doing the accept() on the\nmaster socket. Once it has a client connection, it performs the\nauthentication handshake and then starts serving the client (or just\nquits if authentication fails). Meanwhile the next idle backend process\nhas executed accept() on the master socket and is waiting for the next\nclient; and shortly the postmaster/factory/whateverwecallitnow notices\nthat it needs to start another backend to add to the idle-backend pool.\n\nThis'd probably need some interlocking among the backends. I have no\nidea whether it'd be safe to have all the idle backends trying to\ndo accept() on the master socket simultaneously, but it sounds risky.\nBetter to use a mutex so that only one gets to do it while the others\nsleep.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 May 1998 11:14:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contentionh] " }, { "msg_contents": "Meanwhile, *I* missed the point about Brett's second comment :-(\n\nBrett McCormick <[email protected]> writes:\n> There will have to be some sort of arg parsing in any case,\n> considering that you can pass configurable arguments to the backend..\n\nIf we do the sort of change David and I were just discussing, then the\npre-spawned backend would become responsible for parsing and dealing\nwith the PGOPTIONS portion of the client's connection request message.\nThat's just part of shifting the authentication handshake code from\npostmaster to backend, so it shouldn't be too hard.\n\nBUT: the whole point is to be able to initialize the backend before it\nis connected to a client. How much of the expensive backend startup\nwork depends on having the client connection options available?\nAny work that needs to know the options will have to wait until after\nthe client connects. If that means most of the startup work can't\nhappen in advance anyway, then we're out of luck; a pre-started backend\nwon't save enough time to be worth the effort. (Unless we are willing\nto eliminate or redefine the troublesome options...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 May 1998 11:26:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contentionh] " }, { "msg_contents": "On Mon, 11 May 1998, at 11:14:43, Tom Lane wrote:\n\n> Brett McCormick <[email protected]> writes:\n> > same way that the current network socket is passed -- through an execv\n> > argument. hopefully, however, the non-execv()ing fork will be in 6.4.\n> \n> Um, you missed the point, Brett. 
David was hoping to transfer a client\n> connection from the postmaster to an *already existing* backend process.\n> Fork, with or without exec, solves the problem for a backend that's\n> started after the postmaster has accepted the client socket.\n\nThat's what I get for jumping in on a thread I wasn't paying much\nattention to begin with.\n", "msg_date": "Mon, 11 May 1998 08:32:16 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contentionh] " }, { "msg_contents": "Tom Lane:\n> Meanwhile, *I* missed the point about Brett's second comment :-(\n> \n> Brett McCormick <[email protected]> writes:\n> > There will have to be some sort of arg parsing in any case,\n> > considering that you can pass configurable arguments to the backend..\n> \n> If we do the sort of change David and I were just discussing, then the\n> pre-spawned backend would become responsible for parsing and dealing\n> with the PGOPTIONS portion of the client's connection request message.\n> That's just part of shifting the authentication handshake code from\n> postmaster to backend, so it shouldn't be too hard.\n> \n> BUT: the whole point is to be able to initialize the backend before it\n> is connected to a client. How much of the expensive backend startup\n> work depends on having the client connection options available?\n> Any work that needs to know the options will have to wait until after\n> the client connects. If that means most of the startup work can't\n> happen in advance anyway, then we're out of luck; a pre-started backend\n> won't save enough time to be worth the effort. (Unless we are willing\n> to eliminate or redefine the troublesome options...)\n\nI was thinking that we would have a pool of ready servers _per_database_.\nThat is, we would be able to configure say 8 servers in a particular DB, and\nsay 4 in another DB etc. These servers could run most of the way through\ninitialization (open catalogs, read in syscache etc). Then they would wait\nuntil a connection for the desired DB was handed to them by the postmaster.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n\n", "msg_date": "Mon, 11 May 1998 15:09:49 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contentionh]" }, { "msg_contents": "> I was thinking that we would have a pool of ready servers _per_database_.\n> That is, we would be able to configure say 8 servers in a particular DB, and\n> say 4 in another DB etc. These servers could run most of the way through\n> initialization (open catalogs, read in syscache etc). Then they would wait\n> until a connection for the desired DB was handed to them by the postmaster.\n> \n\nOK, but how do you invalidate the catalog items that have changed from\nthe startup to the time it gets the client connection?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Mon, 11 May 1998 23:19:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contentionh]" }, { "msg_contents": "Bruce Momjian:\n> > I was thinking that we would have a pool of ready servers _per_database_.\n> > That is, we would be able to configure say 8 servers in a particular DB, and\n> > say 4 in another DB etc. These servers could run most of the way through\n> > initialization (open catalogs, read in syscache etc). Then they would wait\n> > until a connection for the desired DB was handed to them by the postmaster.\n> > \n> \n> OK, but how do you invalidate the catalog items that have changed from\n> the startup to the time it gets the client connection?\n\nSame way we always do?\n\nIs there any reason the \"ready\" servers can't track the Shared Invalidate\ncache like any other backend? Maybe they have to time out their wait for a\nsocket every few seconds and process SI updates, but it should be possible\nto make this work. Perhaps not as easy as all that, but certainly doable I\nwould guess.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Mon, 11 May 1998 23:03:10 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contentionh]" }, { "msg_contents": ">>>> I was thinking that we would have a pool of ready servers\n>>>> _per_database_. That is, we would be able to configure say 8\n>>>> servers in a particular DB, and say 4 in another DB etc. These\n>>>> servers could run most of the way through initialization (open\n>>>> catalogs, read in syscache etc). Then they would wait until a\n>>>> connection for the desired DB was handed to them by the postmaster.\n\nWhat I'm wondering is just how much work will actually be saved by the\nadditional complexity.\n\nWe are already planning to get rid of the exec() of the backend, right,\nand use only a fork() to spawn off the background process? How much of\nthe startup work consists only of recreating state that is lost by exec?\n\nIn particular, I'd imagine that the postmaster process already has open\n(or could have open) all the necessary files, shared memory, etc.\nThis state will be inherited automatically across the fork.\n\nTaking this a little further, one could imagine the postmaster\nmaintaining the same shared state as any backend (tracking SI cache,\nfor example). Then a forked copy should be Ready To Go with very\nlittle work except processing the client option string.\n\nHowever I can see a downside to this: bugs in the backend interaction\nstuff would become likely to take down the postmaster along with the\nbackends. The only thing that makes the postmaster more robust than\nthe backends is that it *isn't* doing as much as they do.\n\nSo probably the Apache-style solution (pre-started backends listen for\nclient connection requests) is the way to go if there is enough bang\nfor the buck to justify restructuring the postmaster/backend division\nof labor. 
Question is, how much will that buy that just getting rid\nof exec() won't?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 May 1998 10:30:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contentionh] " }, { "msg_contents": "> \n> Bruce Momjian:\n> > > > OK, here is my argument for inlining tas().\n> > David Gould: \n> > > I am going out of town till Monday, so don't have time to give this\n> > > the thoughtful response it deserves. I will get back to you on this as\n> > > I am not quite convinced, but obviously in the face of this I need to\n> > > explain my reasoning.\n> > \n> > Sure. I just know that the reduction from 0.28 to 0.08 was performed\n> > one 0.01 at a time. See the MemSet() macro. I am sure you will hate it\n> > too, but it did reduce the number of calls to memset() and reduced\n> > wallclock execution time as measured from the client.\n> \n> This is always how micro-optimization goes, 1% and 2% gains here and there.\n> I am very familiar with it.\n\nI said 0.01 seconds at a time, not 1% at a time. At this point, a 0.01\nsecond savings is 12%, because the total test takes 0.08 seconds.\n\n> \n> Anyhow, I am just back, and it occurred to me to ask you for an exact\n> reproducible test setup, so that I can run the same thing and play around\n> with it a bit. I am not convinced by your result for a number of reasons,\n> but rather than just make assertions or speculations, I think I should do a\n> little data collection.\n> \n> Also, if you have any other cases that you believe useful to test with, I\n> would be very happy to try those too.\n> \n> Have you done call graph profile for this, or just flat profiling? I think\n> you may find the call graph (gprof) output revealing, although perhaps not\n> on this particular topic...\n\nI ran gprof. I did not look at the call graph, just the total number of\ncalls. We have a very modular system, and the call overhead can get\nexcessive. gprof shows tas() getting called far more than any other\nfunction. It shows it as 0.01 seconds, on a 0.08 second test! Now, I\nrealize that gprof measurement is not perfect, but it certainly shows\ntas as being called a lot.\n\nThe test is easy. Execute this from psql:\n\n\tselect * from pg_type where oid = 234234;\n\nCompile with profiling, run this from psql, and run gprof on the\ngmon.out file in pgsql/data/base/testdb.\n\nI don't understand your hesitation. The code WAS inlined. It was\ninlined because gprof showed it as being called a lot. Most of them are\nASM anyway, so what does it matter if it sits in a *.c or *.h file, an\nasm() call looks the same in a macro or in a function.\n\nIf it makes you feel better, put it in something called tas.h, and add\nit as an include in all the files that include s_lock.h, or have\ns_lock.h include tas.h.\n\nI am not looking around for 1% optimization. I am using gprof output to\nimprove things that gprof shows need improving.\n\n> \n> One last item, apropos start up time. Illustra uses what we call \"Server\n> Caching\". Basically when a connect is terminated, the backend instead of\n> exiting goes into a pool of idle servers. When next a connection comes in for\n> the same database, instead of forking and initing a new server, we merely\n> reuse the old one. This saves lots of startup time. However, there are some\n> problems in practice so we might want to do something \"like this only\n> different\". 
The idea that occurred to me is to have the postmaster \n> "pre-spawn" some servers in each (configurable) database. These would run\n> all the initialization and then just wait for a socket to be handed to them.\n> The postmaster would during idle time replenish the pool of ready servers.\n> I think this might have a lot more impact on startup time than turning things\n> into macros...\n\n\nSure, it will have a lot more impact than making things into macros, and\nI am all for it, but inlining does improve things, and it was a macro\nthat worked on all platforms before it was changed. (Except\nlinux/alpha, which has to be a function.)\n\nWe have tons of macros already from Berkeley. ctags makes the macros\njust as easy for me to reference as functions.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 12 May 1998 12:38:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Try again: S_LOCK reduced contentionh]" }, { "msg_contents": "> \n> >>>> I was thinking that we would have a pool of ready servers\n> >>>> _per_database_. That is, we would be able to configure say 8\n> >>>> servers in a particular DB, and say 4 in another DB etc. These\n> >>>> servers could run most of the way through initialization (open\n> >>>> catalogs, read in syscache etc). Then they would wait until a\n> >>>> connection for the desired DB was handed to them by the postmaster.\n> \n> What I'm wondering is just how much work will actually be saved by the\n> additional complexity.\n> \n> We are already planning to get rid of the exec() of the backend, right,\n> and use only a fork() to spawn off the background process? How much of\n> the startup work consists only of recreating state that is lost by exec?\n\nYes, exec() should be gone by 6.4.\n\n> \n> In particular, I'd imagine that the postmaster process already has open\n> (or could have open) all the necessary files, shared memory, etc.\n> This state will be inherited automatically across the fork.\n\nYes, removal of exec() will allow us some further optimization. We can't\ninherit the open() across fork() because the file descriptor contains an\noffset, and if we shared them, then one fseek() would be seen by all\nbackends.\n\n> \n> Taking this a little further, one could imagine the postmaster\n> maintaining the same shared state as any backend (tracking SI cache,\n> for example). Then a forked copy should be Ready To Go with very\n> little work except processing the client option string.\n> \n> However I can see a downside to this: bugs in the backend interaction\n> stuff would become likely to take down the postmaster along with the\n> backends. The only thing that makes the postmaster more robust than\n> the backends is that it *isn't* doing as much as they do.\n> \n> So probably the Apache-style solution (pre-started backends listen for\n> client connection requests) is the way to go if there is enough bang\n> for the buck to justify restructuring the postmaster/backend division\n> of labor. Question is, how much will that buy that just getting rid\n> of exec() won't?\n\nNot sure. Removal of exec() takes 0.01 seconds off a 0.08 second test.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Tue, 12 May 1998 13:39:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contentionh]" }, { "msg_contents": "Bruce Momjian:\n> David Gould:\n> > This is always how micro-optimization goes, 1% and 2% gains here and there.\n> > I am very familiar with it.\n> \n> I said 0.01 seconds at a time, not 1% at a time. At this point, a 0.01\n> second savings is 12%, because the total test takes 0.08 seconds.\n\nI think I may have been unclear here. I was merely attempting to agree that\noptimization is a process of accumulating small gains.\n \n> > Have you done call graph profile for this, or just flat profiling? I think\n> > you may find the call graph (gprof) output revealing, although perhaps not\n> > on this particular topic...\n> \n> I ran gprof. I did not look at the call graph, just the total number of\n> calls. We have a very modular system, and the call overhead can get\n> excessive. gprof shows tas() getting called far more than any other\n> function. It shows it as 0.01 seconds, on a 0.08 second test! Now, I\n> realize that gprof measurement is not perfect, but it certainly shows\n> tas as being called a lot.\n\nI agree the system is sometimes excessively layered resulting in many\ntrivial calls.\n\nI agree tas() is called a lot. I am trying to understand if the overhead seen\nbelow is in the call itself, or in the actual work of synchronization. \n\n> % cumulative self self total\n> time seconds seconds calls ms/call ms/call name\n> 20.0 0.02 0.02 mcount (463)\n> 10.0 0.03 0.01 5288 0.00 0.00 _tas [31]\n> 10.0 0.04 0.01 2368 0.00 0.00 _hash_search [30]\n> 10.0 0.05 0.01 1631 0.01 0.02 _malloc [11]\n> 10.0 0.06 0.01 101 0.10 0.10 _sbrk [35]\n> 10.0 0.07 0.01 56 0.18 0.20 _heapgettup [25]\n> 10.0 0.08 0.01 4 2.50 2.50 _write [32]\n> 10.0 0.09 0.01 2 5.00 5.00 ___sysctl [33]\n> 10.0 0.10 0.01 1 10.00 10.41 _cnfify [28]\n> 0.0 0.10 0.00 1774 0.00 0.00 _call_hash [468]\n> 0.0 0.10 0.00 1604 0.00 0.00 _tag_hash [469]\n> 0.0 0.10 0.00 1380 0.00 0.00 _OrderedElemPush [470]\n> 0.0 0.10 0.00 1380 0.00 0.00 _OrderedElemPushHead [471]\n> 0.0 0.10 0.00 1380 0.00 0.00 _OrderedElemPushInto [472]\n> 0.0 0.10 0.00 1375 0.00 0.02 _AllocSetAlloc [12]\n> 0.0 0.10 0.00 1375 0.00 0.02 _MemoryContextAlloc [13]\n> 0.0 0.10 0.00 1353 0.00 0.02 _palloc [14]\n> 0.0 0.10 0.00 1322 0.00 0.01 _SpinAcquire [45]\n\nI asked about the call graph for two reasons (the first of which is part\nof another thread):\n\n1) I would expect that the 1353 calls to palloc() are also responsible for: \n\n 1375 _MemoryContextAlloc\n 1375 _AllocSetAlloc\n 1380 _OrderedElemPushInto\n 1380 _OrderedElemPush\n for a total of (1353 + 1375 + 1375 + 1380 + 1380) = 6863 calls. \n (not including the further 1631 _malloc and 101 _sbrk calls).\n\n I am curious why these calls do not seem to show up on the cumulative time.\n\n2) I wonder how fine the resolution of the profile is. I am assuming that all\n the overhead of tas comes from either:\n - the call overhead\n - or the actual work done in tas().\n Given that, I wonder if the call overhead is the major part, it could be\n that the bus/cache synchronization is the real overhead. As a function,\n it is easy to identify tas(). As a macro it does not show up on the\n profile and its contribution to the overhead is distributed among all the\n callers which makes it less obvious on the profile. 
I was hoping the call\n graph would help identify which was the case.\n\nIn any case, I will test with it as a macro and as a function. It may also\nbe instructive to make a dummy tas() that does nothing and see if that shows\nthe overhead to be in the actual synchronization, or in the calling. I will\ntest this too.\n\nMy intent here is not to be argumentative. My current mental model is that\nexcess calls are unfortunate but not especially harmful and not usually\nworth changing (sometimes of course an inline sequence allows the optimizer\nto do something clever and makes more difference than expected). If this view\nis incorrect, I would like to know and understand that so that I can adjust\nmy theory accordingly.\n\n> I don't understand your hesitation. The code WAS inlined. It was\n> inlined because gprof showed it as being called a lot. Most of them are\n> ASM anyway, so what does it matter if it sits in a *.c or *.h file, an\n> asm() call looks the same in a macro or in a function.\n\nI do not feel strongly about this. I prefer the function on the grounds of\nclarity and ease of maintenance and porting. But I would be happy to make it\na macro even if the performance difference is not significant.\n\n> If it makes you feel better, put it in something called tas.h, and add\n> it as an include in all the files that include s_lock.h, or have\n> s_lock.h include tas.h.\n\nI am fine with folding it all into s_lock.h. If you wish, I will do so. \n\n> I am not looking around for 1% optimization. I am using gprof output to\n> improve things that gprof shows need improving.\n\nPlease, I am not intending _any_ criticism. I have no disagreement with what\nyou are doing. I am glad to see us working on performance. \n \n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n"My life has been full of wonderful moments -\n it's only later that they become embarrassing."\n -- Gerhard Berger\n", "msg_date": "Tue, 12 May 1998 13:21:30 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contentionh]" }, { "msg_contents": "> \n> Bruce Momjian:\n> > David Gould:\n> > > This is always how micro-optimization goes, 1% and 2% gains here and there.\n> > > I am very familiar with it.\n> > \n> > I said 0.01 seconds at a time, not 1% at a time. At this point, a 0.01\n> > second savings is 12%, because the total test takes 0.08 seconds.\n> \n> I think I may have been unclear here. I was merely attempting to agree that\n> optimization is a process of accumulating small gains.\n\nTrue.\n\n> \n> > > Have you done call graph profile for this, or just flat profiling? I think\n> > > you may find the call graph (gprof) output revealing, although perhaps not\n> > > on this particular topic...\n> > \n> > I ran gprof. I did not look at the call graph, just the total number of\n> > calls. We have a very modular system, and the call overhead can get\n> > excessive. gprof shows tas() getting called far more than any other\n> > function. It shows it as 0.01 seconds, on a 0.08 second test! Now, I\n> > realize that gprof measurement is not perfect, but it certainly shows\n> > tas as being called a lot.\n> \n> I agree the system is sometimes excessively layered resulting in many\n> trivial calls.\n> \n> I agree tas() is called a lot. 
I am trying to understand if the overhead seen\n> below is in the call itself, or in the actual work of synchronization. \n\nYep. Much of the actual call time is locked up in the mcount line. It\nsays it was counting functions at the time of sampling, I think. So,\nmany times, inlining something that showed NO cpu time caused an\nimprovement because the mcount time went down.\n\nMy first attack was to reduce functions called for each column. When\nthose were gone, I went after ones that were called for each row. I am\ngoing to post timing on sequential scans that I think you will find\ninteresting.\n\n\n> \n> > % cumulative self self total\n> > time seconds seconds calls ms/call ms/call name\n> > 20.0 0.02 0.02 mcount (463)\n> > 10.0 0.03 0.01 5288 0.00 0.00 _tas [31]\n> > 10.0 0.04 0.01 2368 0.00 0.00 _hash_search [30]\n> > 10.0 0.05 0.01 1631 0.01 0.02 _malloc [11]\n> > 10.0 0.06 0.01 101 0.10 0.10 _sbrk [35]\n> > 10.0 0.07 0.01 56 0.18 0.20 _heapgettup [25]\n> > 10.0 0.08 0.01 4 2.50 2.50 _write [32]\n> > 10.0 0.09 0.01 2 5.00 5.00 ___sysctl [33]\n> > 10.0 0.10 0.01 1 10.00 10.41 _cnfify [28]\n> > 0.0 0.10 0.00 1774 0.00 0.00 _call_hash [468]\n> > 0.0 0.10 0.00 1604 0.00 0.00 _tag_hash [469]\n> > 0.0 0.10 0.00 1380 0.00 0.00 _OrderedElemPush [470]\n> > 0.0 0.10 0.00 1380 0.00 0.00 _OrderedElemPushHead [471]\n> > 0.0 0.10 0.00 1380 0.00 0.00 _OrderedElemPushInto [472]\n> > 0.0 0.10 0.00 1375 0.00 0.02 _AllocSetAlloc [12]\n> > 0.0 0.10 0.00 1375 0.00 0.02 _MemoryContextAlloc [13]\n> > 0.0 0.10 0.00 1353 0.00 0.02 _palloc [14]\n> > 0.0 0.10 0.00 1322 0.00 0.01 _SpinAcquire [45]\n> \n> I asked about the call graph for two reasons (the first of which is part\n> of another thread):\n> \n> 1) I would expect that the 1353 calls to palloc() are also responsible for: \n> \n> 1375 _MemoryContextAlloc\n> 1375 _AllocSetAlloc\n> 1380 _OrderedElemPushInto\n> 1380 _OrderedElemPush\n> for a total of (1353 + 1375 + 1375 + 1380 + 1380) = 6863 calls. \n> (not including the further 1631 _malloc and 101 _sbrk calls).\n> \n> I am curious why these calls do not seem to show up on the cumulative time.\n\n\nNot sure, but with such a quick test, the times are not significant. It\nis the number of calls that can get very large for a large table scan. \nI look more for a pattern of calls, and when certain handling causes a\nlot of function call overhead.\n\nFor example, if the table has 180k rows, and there are 180k calls to\nthe function, it is called once per row. If there are 360k calls, it is\ncalled twice per row. I believe tas is called multiple times per row.\n\n\n> \n> 2) I wonder how fine the resolution of the profile is. I am assuming that all\n> the overhead of tas comes from either:\n> - the call overhead\n> - or the actual work done in tas().\n> Given that, I wonder if the call overhead is the major part, it could be\n> that the bus/cache synchronization is the real overhead. As a function,\n> it is easy to identify tas(). As a macro it does not show up on the\n> profile and its contribution to the overhead is distributed among all the\n> callers which makes it less obvious on the profile. I was hoping the call\n> graph would help identify which was the case.\n\nThis is true.\n\n> \n> In any case, I will test with it as a macro and as a function. It may also\n> be instructive to make a dummy tas() that does nothing and see if that shows\n> the overhead to be in the actual synchronization, or in the calling. 
I will\n> test this too.\n\nInteresting, but again, with this type of test, I am only looking for\nareas of slowness, not actual function duration times. They are going\nto be meaningless in a small test.\n\n> \n> My intent here is not to be argumentative. My current mental model is that\n> excess calls are unfortunate but not especially harmful and not usually\n> worth changing (sometimes of course an inline sequence allows the optimizer\n> to do something clever and makes more difference than expected). If this view\n> is incorrect, I would like to know and understand that so that I can adjust\n> my theory accordingly.\n\nI understand. You want to know WHY it is improving performance. Not\nsure I can answer that. I will say that because SQL databases are so\ncomplicated, certain queries can generate very different call profiles,\nso I have tried to find cases where the call path is generating call\ntraffic, and inline it if it is a very simple function.\n\nI have inlined very complex functions, but only when they are called FOR\nEVERY COLUMN. These cases really cause a big win on call overhead. See\ninclude/access/heapam.h. I am not proud of that code, but it makes a\nlarge difference.\n\nI could clearly see improvements in client timing by inlining functions\nthat were called a lot, so I continued to decrease the number of calls,\neven when the timing did not show a decrease because the times had\nbecome so small.\n\n> \n> > I don't understand your hesitation. The code WAS inlined. It was\n> > inlined because gprof showed it as being called a lot. Most of them are\n> > ASM anyway, so what does it matter if it sits in a *.c or *.h file, an\n> > asm() call looks the same in a macro or in a function.\n> \n> I do not feel strongly about this. I prefer the function on the grounds of\n> clarity and ease of maintenance and porting. But I would be happy to make it\n> a macro even if the performance difference is not significant.\n\nI do not want to end up with most of our code in macros, nor to make the\ncode one big file. But, for items called a lot, macros seem to make\nsense, especially if the functions are small.\n\n> \n> > If it makes you feel better, put it in something called tas.h, and add\n> > it as an include in all the files that include s_lock.h, or have\n> > s_lock.h include tas.h.\n> \n> I am fine with folding it all into s_lock.h. If you wish, I will do so. \n> \n> > I am not looking around for 1% optimization. I am using gprof output to\n> > improve things that gprof shows need improving.\n> \n> Please, I am not intending _any_ criticism. I have no disagreement with what\n> you are doing. I am glad to see us working on performance. \n\nLet me know what you find in your testing.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 12 May 1998 17:16:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Try again: S_LOCK reduced contentionh]" } ]
[ { "msg_contents": "[email protected] wrote:\n> From [email protected] Fri May 8 09:25:01 1998\n> Date: Fri, 8 May 1998 11:18:42 -0400\n> Message-Id: <[email protected]>\n> From: [email protected]\n> To: [email protected]\n> Reply-To: [email protected]\n> Subject: [HACKERS] Your Search Engine Listing\n> Sender: [email protected]\n> Precedence: bulk\n> \n> To: [email protected] \n> \n> Is your site listed with the top search engines? ListMe will \n> list you with 50 search engines and indexes for $90. \n> Satisfaction guaranteed! \n> \n> Search engines are the only way most people have to find internet sites.\n> But if your site is not listed, no one will find it.\n\nBLAH BLAH BLAH BLAH\n\nIt seems that no fileter will make us truly immune from SPAM, so for\nbetter or worse, here is my proposed spam-hunter (tm) clause:\n\nPurpose of the mailing list\n\nThe primary purpose of this mailing list is the discussion of ongoing\nwork to the Postgres database. Discussions of other databases,\nespecially as they compare with Postgres are also 'on topic'. Any\ncommercial postings including, but not limited to: Chain letters,\nPyramid schemes, \"Make Money Fast\", direct marketing schemes, web site\nadvertising, software advertising or any other type of advertising are\nconsidered unsolicited email (SPAM).\n\nThese SPAM postings waste the time and resources of the volunteer\nPostgres developers. Therefore, in the interest of keeping the\nPostgres mailing list free SPAM, the Postgres organization will charge\na fee of $50 per line for all such postings. In addition to the $50\nper line, fees may be charged for expenses related to collecting the\nfee, including, but not limited to: postage, detective services and\ncollection services.\n\n\n\nI voulenteer myself as an Internet PI. I don't think it makes much\nsense to pay someone up-front to track down a spammer.\n\nSo what does everyone think?\n\nOcie\n", "msg_date": "Fri, 8 May 1998 14:24:32 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "[HACKERS] Your Search Engine Listing (fwd)" }, { "msg_contents": "> It seems that no fileter will make us truly immune from SPAM, so for\n> better or worse, here is my proposed spam-hunter (tm) clause:\n> \n> Purpose of the mailing list\n> \n> The primary purpose of this mailing list is the discussion of ongoing\n> work to the Postgres database. Discussions of other databases,\n> especially as they compare with Postgres are also 'on topic'. Any\n> commercial postings including, but not limited to: Chain letters,\n> Pyramid schemes, \"Make Money Fast\", direct marketing schemes, web site\n> advertising, software advertising or any other type of advertising are\n> considered unsolicited email (SPAM).\n> \n> These SPAM postings waste the time and resources of the volunteer\n> Postgres developers. Therefore, in the interest of keeping the\n> Postgres mailing list free SPAM, the Postgres organization will charge\n> a fee of $50 per line for all such postings. In addition to the $50\n> per line, fees may be charged for expenses related to collecting the\n> fee, including, but not limited to: postage, detective services and\n> collection services.\n> \n> \n> \n> I voulenteer myself as an Internet PI. 
I don't think it makes much\n> sense to pay someone up-front to track down a spammer.\n> \n> So what does everyone think?\n\n\tSounds like a plan to me...be interesting to see what happens :)\n\n\n", "msg_date": "Fri, 8 May 1998 17:31:16 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Your Search Engine Listing (fwd)" } ]
[ { "msg_contents": "I think that 50$ per line is a bit out there. Maybe $0.10 per line *\nnumber of people on the mailing list. Then we are charging for a\nmeasurable/growing number of people.\nI also think that you should run it by a lawyer for proofing.\n\t\t-DEJ \n\n> -----Original Message-----\n> From:\[email protected] [SMTP:[email protected]]\n> Sent:\tFriday, May 08, 1998 4:25 PM\n> To:\[email protected]\n> Subject:\t[HACKERS] Your Search Engine Listing (fwd)\n> \n> [email protected] wrote:\n> > From [email protected] Fri May 8 09:25:01 1998\n> > Date: Fri, 8 May 1998 11:18:42 -0400\n> > Message-Id: <[email protected]>\n> > From: [email protected]\n> > To: [email protected]\n> > Reply-To: [email protected]\n> > Subject: [HACKERS] Your Search Engine Listing\n> > Sender: [email protected]\n> > Precedence: bulk\n> > \n> > To: [email protected] \n> > \n> > Is your site listed with the top search engines? ListMe will \n> > list you with 50 search engines and indexes for $90. \n> > Satisfaction guaranteed! \n> > \n> > Search engines are the only way most people have to find internet\n> sites.\n> > But if your site is not listed, no one will find it.\n> \n> BLAH BLAH BLAH BLAH\n> \n> It seems that no fileter will make us truly immune from SPAM, so for\n> better or worse, here is my proposed spam-hunter (tm) clause:\n> \n> Purpose of the mailing list\n> \n> The primary purpose of this mailing list is the discussion of ongoing\n> work to the Postgres database. Discussions of other databases,\n> especially as they compare with Postgres are also 'on topic'. Any\n> commercial postings including, but not limited to: Chain letters,\n> Pyramid schemes, \"Make Money Fast\", direct marketing schemes, web site\n> advertising, software advertising or any other type of advertising are\n> considered unsolicited email (SPAM).\n> \n> These SPAM postings waste the time and resources of the volunteer\n> Postgres developers. Therefore, in the interest of keeping the\n> Postgres mailing list free SPAM, the Postgres organization will charge\n> a fee of $50 per line for all such postings. In addition to the $50\n> per line, fees may be charged for expenses related to collecting the\n> fee, including, but not limited to: postage, detective services and\n> collection services.\n> \n> \n> \n> I voulenteer myself as an Internet PI. I don't think it makes much\n> sense to pay someone up-front to track down a spammer.\n> \n> So what does everyone think?\n> \n> Ocie\n", "msg_date": "Fri, 8 May 1998 16:49:31 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Your Search Engine Listing (fwd)" }, { "msg_contents": "Jackson, DeJuan wrote:\n> \n> I think that 50$ per line is a bit out there. Maybe $0.10 per line *\n> number of people on the mailing list. Then we are charging for a\n> measurable/growing number of people.\n> I also think that you should run it by a lawyer for proofing.\n> \t\t-DEJ \n\nThat sounds like a good idea. As for the cost -- why don't we set it\nso that it comes out to $50 per line. What is $50 / number of\nsubsribers?\n\nOcie\n", "msg_date": "Fri, 8 May 1998 15:16:13 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Your Search Engine Listing (fwd)" }, { "msg_contents": "On Fri, 8 May 1998 [email protected] wrote:\n\n> Jackson, DeJuan wrote:\n> > \n> > I think that 50$ per line is a bit out there. Maybe $0.10 per line *\n> > number of people on the mailing list. 
Then we are charging for a\n> > measurable/growing number of people.\n> > I also think that you should run it by a lawyer for proofing.\n> > \t\t-DEJ \n> \n> That sounds like a good idea. As for the cost -- why don't we set it\n> so that it comes out to $50 per line. What is $50 / number of\n> subscribers?\n\n\tpgsql-questions == ~1000 subscribers +/-\n\n", "msg_date": "Fri, 8 May 1998 18:41:33 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Your Search Engine Listing (fwd)" } ]
[ { "msg_contents": "PGHOME/src/tutorial/funcs.c .....\n\nI compiled it by GCC ,but not excutable\n\ncommand: gcc -fPIC -c funcs.c\n ld -G -Bdynamic -o funcs.so funcs.o\n\nI met errer mesage ==> \" stat failed on file /../../../*.so\"\n++++++++\n*******************************************************************************\n\nQUERY: CREATE FUNCTION c_overpaid(EMP, int4) RETURNS bool\n AS '/user1/grad/whtak/postgres/src/funcs.so' LANGUAGE 'c';\n\n*******************************************************************************\n\npress return to continue ..\n\nCREATE\n\n*******************************************************************************\n\nQUERY: SELECT add_one(3) AS four;\n\n*******************************************************************************\n\npress return to continue ..\n\nERROR: stat failed on file /user1/grad/whtak/postgres/src/funcs.so\n\n++++++\n\nLet me know what means errer message.\n\nGood Luck!!\n.", "msg_date": "Sat, 09 May 1998 15:22:46 +0900", "msg_from": "\"Woohyun,Tak\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help me!!! Pleass" }, { "msg_contents": "Woohyun,Tak wrote:\n> \n> PGHOME/src/tutorial/funcs.c .....\n> \n> I compiled it by GCC ,but not excutable\n> \n> command: gcc -fPIC -c funcs.c\n> ld -G -Bdynamic -o funcs.so funcs.o\n> \n> I met errer mesage ==> \" stat failed on file /../../../*.so\"\n\nPlease cut the complete filename in this message (starting at the first\n\"/\") and do an ls -l on it. \"Can't stat\" means that there is either no\nfile at the specied location or it is unreadable to the backend which\nruns as \"postgres\".\n\nGene\n", "msg_date": "Sun, 10 May 1998 10:02:14 +0000", "msg_from": "\"Eugene Selkov Jr.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Help me!!! Pleass" } ]
[ { "msg_contents": "I've committed changes to allow more automatic type conversion. Lots of\nfiles were touched, mostly in backend/parser/.\n\nThe new code tries to do the right thing for conversions, and does\nhandle cases which were problematic before:\n\n-- there isn't a floating point factorial operator...\ntgl=> select (4.3 !);\n?column?\n--------\n 24\n(1 row)\n\n-- there isn't an integer exponentiation operator...\ntgl=> select 2 ^ 3;\n?column?\n--------\n 8\n(1 row)\n\n-- concatenation on unspecified types didn't used to work...\ntgl=> select '123' || '456';\n?column?\n--------\n 123456\n(1 row)\n\n-- didn't used to correctly truncate strings into tables...\ntgl=> create table cc (c char(4));\nCREATE\ntgl=> insert into cc select '123' || '456';\nINSERT 268073 1\ntgl=> select * from cc;\n c\n----\n1234\n(1 row)\n\nSo, it should fix longstanding issues. However, the main goal should be\nthat it doesn't do the WRONG thing at any time. So, test away and post\nany problems or issues that come up; we have lots of time to fix things\nbefore v6.4.\n\nOne change in behavior is that I defined (for builtin types) the concept\nof a \"preferred type\" in each category/class of types (e.g. float8 is\nthe preferred type for numerics, datetime is the preferred type for\ndate/times, etc.). And, unspecified types are preferentially resolved to\nuse this preferred type. So, the following behavior has changed:\n\n-- this is now done as a float8 calculation, used to be float4...\ntgl=> select '123.456'::float4 * '1.99999999999';\n ?column?\n----------------\n246.912002562242\n(1 row)\n\nBefore, unknown types, such as the second string above, were resolved to\nbe the same type as the other type, if available. So the calculation\nwould have been truncated at ~7 decimal places.\n\nThe good thing about this is that the behavior of the above is now the\nsame as if the second string was specified without the quotes:\n\ntgl=> select '123.456'::float4 * 1.99999999999;\n ?column?\n----------------\n246.912002562242\n(1 row)\n\nwhere before it was evaluated differently in the two cases.\n\nAnyway, try things out, and I'll be writing this up for the docs. Will\npost the topics on hackers along the way...\n\nI haven't yet changed the regression tests to reflect the new behavior,\njust in case it needs to be different. Also, all regression tests pass\nwith the only differences as mentioned above. btw, the code still has\nlots of cleanup needed, moving subroutines around and taking out defunct\ncode. But will do that later.\n\nHave fun.\n\n - Tom\n", "msg_date": "Sun, 10 May 1998 00:14:11 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Automatic type conversion" }, { "msg_contents": "> I've committed changes to allow more automatic type conversion.\n\nbtw, this requires a dump/reload...\n\n - Tom\n", "msg_date": "Sun, 10 May 1998 02:13:48 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Automatic type conversion" }, { "msg_contents": "On Sun, May 10, 1998 at 12:14:11AM +0000, Thomas G. Lockhart wrote:\n> -- there isn't a floating point factorial operator...\n> tgl=> select (4.3 !);\n> ?column?\n> --------\n> 24\n> (1 row)\n\nAm I the only one that thinks the above is wrong? 4.3 factorial is\nmathematically undefined and does NOT equal 24.\n\nI don't think the automatic type conversion should automatically\ntruncate values without at least a warning. 
Preferably I'd like to be\nforced to do the conversion myself for cases like the above.\n\n-- \nDave Chapeskie <[email protected]>, DDM Consulting\n", "msg_date": "Sun, 10 May 1998 02:23:22 -0400", "msg_from": "Dave Chapeskie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Automatic type conversion" }, { "msg_contents": "Dave Chapeskie wrote:\n> \n> On Sun, May 10, 1998 at 12:14:11AM +0000, Thomas G. Lockhart wrote:\n> > -- there isn't a floating point factorial operator...\n> > tgl=> select (4.3 !);\n> > ?column?\n> > --------\n> > 24\n> > (1 row)\n> \n> Am I the only one that thinks the above is wrong? 4.3 factorial is\n> mathematically undefined and does NOT equal 24.\n\nJust put the gamma function in there and assume the argument is always a\nfloat. A decent gamma function algorithm should make a special case for\nintegers.\n\n--Gene\n", "msg_date": "Sun, 10 May 1998 08:07:56 +0000", "msg_from": "\"Eugene Selkov Jr.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Automatic type conversion" }, { "msg_contents": "> > -- there isn't a floating point factorial operator...\n> > tgl=> select (4.3 !);\n> > ?column?\n> > --------\n> > 24\n> > (1 row)\n> \n> Am I the only one that thinks the above is wrong? 4.3 factorial is\n> mathematically undefined and does NOT equal 24.\n> \n> I don't think the automatic type conversion should automatically\n> truncate values without at least a warning. Preferably I'd like to be\n> forced to do the conversion myself for cases like the above.\n\nYes, I included this one to provoke discussion :) \n\nPostgres has type extensibility, so the algorithms for matching up types\nand functions need to be very general. In this case, there is only one\nfunction defined for factorial, and it takes an integer argument. But of\ncourse Postgres now says \"ah! I know how to make an int from a float!\"\nand goes ahead and does it. If there were more than one function defined\nfor factorial, and if none of the arguments matched a float, then\nPostgres would conclude that there are too many functions to choose from\nand throw an error.\n\nOne way to address this is to never allow Postgres to \"demote\" a type;\ni.e. Postgres would be allowed to promote arguments to a \"higher\" type\n(e.g. int->float) but never allowed to demote arguments (e.g.\nfloat->int). But this would severely restrict type matching. I wanted to\ntry the more flexible case first to see whether it really does the\n\"wrong thing\"; in the case of factorial, the only recourse for someone\nwanting to calculate a factorial from a float is to convert to an int\nfirst anyway.\n\nOr, again for this factorial case, we can implement a floating point\nfactorial with either the gamma function (whatever that is :) or with an\nexplicit routine which checks for non-integral values.\n\nCould also print a notice when arguments are being converted, but that\nmight get annoying for most cases which are probably trivial ones.\n\n - Tom\n", "msg_date": "Sun, 10 May 1998 15:35:53 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Automatic type conversion" }, { "msg_contents": "Tom: \n> One way to address this is to never allow Postgres to \"demote\" a type;\n> i.e. Postgres would be allowed to promote arguments to a \"higher\" type\n> (e.g. int->float) but never allowed to demote arguments (e.g.\n> float->int). But this would severely restrict type matching. 
I wanted to\n> try the more flexible case first to see whether it really does the\n> \"wrong thing\"; in the case of factorial, the only recourse for someone\n> wanting to calculate a factorial from a float is to convert to an int\n> first anyway.\n\nI think that never demoting is the best way to proceed here. If the type\nresolution search is too \"thorough\" it can be very confusing (see C++ for\nexample). As it is, the interaction of SQL and the type system can create\nsurprises. Promoting both \"up\" and \"down\" is likely to make it very hard\nto figure out what any given query will do.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Mon, 11 May 1998 11:33:44 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Automatic type conversion" } ]
[ { "msg_contents": "Hi...\nGigantic table woes again... I get\nsc=> vacuum test_detail;\nFATAL 1: palloc failure: memory exhausted\n\nThis is a very simple table too:\n| word_id | int4 |4 |\n| url_id | int4 |4 |\n| word_count | int2 |2 |\n\n\nwhile vacuuming a rather big table:\nsc=> select count(*) from test_detail;\nField| Value\n-- RECORD 0 --\ncount| 78444613\n(1 row)\n\nThere is lots of free space on that drive:\n/dev/sd1s1e 8854584 6547824 1598400 80% /scdb\nThe test_detail table is in a few files too...\n-rw------- 1 postgres postgres 2147483648 May 9 23:28 test_detail\n-rw------- 1 postgres postgres 2147483648 May 9 23:23 test_detail.1\n-rw------- 1 postgres postgres 949608448 May 9 23:28 test_detail.2\n\n\nI am not running out of swap space either...\n\n\n\nunder top the backend just keeps growing.\n 492 postgres 85 0 16980K 19076K RUN 1:43 91.67% 91.48% postgres\nwhen it hit about 20 megs, it craps out. Swap space is 0% used, and I am\nnot even convinced this is using all 128 megs of ram either. Could\nsomething like memory fragementation be an issue?\n\n\nDoes anyone have any ideas other than buying a gig of ram?\n\n", "msg_date": "Sat, 9 May 1998 23:37:29 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "Maybe a Vacuum bug in 6.3.2" }, { "msg_contents": "Michael Richards <[email protected]> writes:\n> I am not running out of swap space either...\n> under top the backend just keeps growing.\n> 492 postgres 85 0 16980K 19076K RUN 1:43 91.67% 91.48% postgres\n> when it hit about 20 megs, it craps out.\n\nSounds to me like you are hitting a kernel-imposed limit on process\nmemory size. This should be reconfigurable; check your kernel parameter\nsettings. You'll probably find it's set to 20Mb ... or possibly 16Mb\nfor data space, or some such. Set it to some more realistic fraction\nof your available swap space.\n\nIn the longer term, however, it's disturbing that vacuum evidently needs\nspace proportional to the table size. Can anything be done about that?\nSomeday I might want to have huge tables under Postgres...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 May 1998 13:20:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Maybe a Vacuum bug in 6.3.2 " }, { "msg_contents": "\nFiguring that, for now, this might be more an admin related question then\nanything, redirected appropriately...\n\n-hackers is not for bug reports, it is for development related issues...\n\n\nOn Sat, 9 May 1998, Michael Richards wrote:\n\n> under top the backend just keeps growing.\n> 492 postgres 85 0 16980K 19076K RUN 1:43 91.67% 91.48% postgres\n> when it hit about 20 megs, it craps out. Swap space is 0% used, and I am\n> not even convinced this is using all 128 megs of ram either. Could\n> something like memory fragementation be an issue?\n> \n> Does anyone have any ideas other than buying a gig of ram?\n\n\tJust a thought (long shot, at that)...what are your limits set at\nwhen you start up the postmaster? Are you hitting a limit, with the\npostmaster reporting it as an inability to allocate more memory?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 12 May 1998 22:29:00 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Maybe a Vacuum bug in 6.3.2" } ]
[ { "msg_contents": "\nHi...\nGigantic table woes again... I get\nsc=> vacuum test_detail;\nFATAL 1: palloc failure: memory exhausted\n\nThis is a very simple table too:\n| word_id | int4 |4 |\n| url_id | int4 |4 |\n| word_count | int2 |2 |\n\n\nwhile vacuuming a rather big table:\nsc=> select count(*) from test_detail;\nField| Value\n-- RECORD 0 --\ncount| 78444613\n(1 row)\n\nThere is lots of free space on that drive:\n/dev/sd1s1e 8854584 6547824 1598400 80% /scdb\nThe test_detail table is in a few files too...\n-rw------- 1 postgres postgres 2147483648 May 9 23:28 test_detail\n-rw------- 1 postgres postgres 2147483648 May 9 23:23 test_detail.1\n-rw------- 1 postgres postgres 949608448 May 9 23:28 test_detail.2\n\n\nI am not running out of swap space either...\n\n\n\nunder top the backend just keeps growing.\n 492 postgres 85 0 16980K 19076K RUN 1:43 91.67% 91.48% postgres\nwhen it hit about 20 megs, it craps out. Swap space is 0% used, and I am\nnot even convinced this is using all 128 megs of ram either. Could\nsomething like memory fragementation be an issue?\n\n\nDoes anyone have any ideas other than buying a gig of ram?\n\n\n-MIke\n\n", "msg_date": "Sun, 10 May 1998 01:33:10 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "A possible postgres 6.3.2 bug" } ]
[ { "msg_contents": "After making a complete fool of myself by posting a poorly\ntested (and conceived) patch to psql to enable \nspecification of a space or tab as a field separator, I \nhave posted a different and hopefully more intelligent\npatch to the patches list. A more detailed message\nhas been sent to the patches list.\n\nPlease review and fix it as necessary and as time permits.\n\nMarc Zuckman\[email protected]\n\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n_ Visit The Home and Condo MarketPlace\t\t _\n_ http://www.ClassyAd.com\t\t\t _\n_\t\t\t\t\t\t\t _\n_ FREE basic property listings/advertisements and searches. _\n_\t\t\t\t\t\t\t _\n_ Try our premium, yet inexpensive services for a real\t _\n_ selling or buying edge!\t\t\t\t _\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n\n", "msg_date": "Sun, 10 May 1998 10:46:03 -0400 (EDT)", "msg_from": "Marc Howard Zuckman <[email protected]>", "msg_from_op": true, "msg_subject": "psql.c patch" } ]
[ { "msg_contents": ">\n>Postgres has type extensibility, so the algorithms for matching up types\n>and functions need to be very general. In this case, there is only one\n>function defined for factorial, and it takes an integer argument. But of\n>course Postgres now says \"ah! I know how to make an int from a float!\"\n>and goes ahead and does it. If there were more than one function defined\n>for factorial, and if none of the arguments matched a float, then\n>Postgres would conclude that there are too many functions to choose from\n>and throw an error.\n\nMaking an int from a float is only defined for \"small\" values of the float.\nSo for the general case such a conversion would simply overflow the int,\ngiving it an undefined value. Does this make sense to you?\n\n>\n>One way to address this is to never allow Postgres to \"demote\" a type;\n>i.e. Postgres would be allowed to promote arguments to a \"higher\" type\n>(e.g. int->float) but never allowed to demote arguments (e.g.\n>float->int). But this would severely restrict type matching. I wanted to\n>try the more flexible case first to see whether it really does the\n>\"wrong thing\"; in the case of factorial, the only recourse for someone\n>wanting to calculate a factorial from a float is to convert to an int\n>first anyway.\n\nPlease bear with me since I haven't looked at the code. Are conversions\nbetween types defined in a way that is also extensible? I'm trying to say\nthat if I add a new type to the system, can I also specify which conversions\nare automatically allowed? (Something similar to the C++ \"explicite\"\nkeyword?).\n\n>\n>Or, again for this factorial case, we can implement a floating point\n>factorial with either the gamma function (whatever that is :) or with an\n>explicit routine which checks for non-integral values.\n\nAnd properly handles overflows.\n\n>\n>Could also print a notice when arguments are being converted, but that\n>might get annoying for most cases which are probably trivial ones.\n>\n> - Tom\n\nRegards,\n Maurice.\n\n\n", "msg_date": "Sun, 10 May 1998 18:22:49 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Automatic type conversion" }, { "msg_contents": "> Making an int from a float is only defined for \"small\" values of the \n> float. So for the general case such a conversion would simply overflow \n> the int, giving it an undefined value. Does this make sense to you?\n\nYes, it does. Look, I'm not saying everyone _should_ call factorial with\na float, only that if someone does, Postgres will try to accomplish it.\nDoesn't it make sense to you?\n\n> Are conversions between types defined in a way that is also \n> extensible? I'm trying to say that if I add a new type to the system, \n> can I also specify which conversions are automatically allowed? \n> (Something similar to the C++ \"explicite\" keyword?).\n\nYes, they are extensible in the sense that all conversions (except for a\nfew string type hacks at the moment) are done by looking for a function\nnamed with the same name as the target type, taking as a single argument\none with the specified source type. If you define one, then Postgres can\nuse it for conversions.\n\nAt the moment the primary mechanism uses the pg_proc table to look for\npossible conversion functions, along with a hardcoded notion of what\n\"preferred types\" and \"type categories\" are for the builtin types. 
For\nuser-defined types, explicit type conversion functions must be provided\n_and_ there must be a single path from source to possible targets for\nthe conversions. Otherwise multiple possible\nconversions will result and Postgres will ask you to use a cast, much as it does in\nv6.3.x and before.\n\n> >Or, again for this factorial case, we can implement a floating point\n> >factorial with either the gamma function (whatever that is :) or with \n> >an explicit routine which checks for non-integral values.\n> And properly handles overflows.\n\nHey, it doesn't do any worse than before...\n\n - Tom\n", "msg_date": "Mon, 11 May 1998 04:14:46 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Automatic type conversion" } ]
[ { "msg_contents": "Would people tell me what platforms do NOT support the MAP_ANON flag to\nthe mmap() system call? You should find it in the mmap() manual page.\n\n*BSD has it, but I am not sure of the others. I am researching cache\nsize issues and the use of mmap vs. SYSV shared memory.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 10 May 1998 23:26:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "mmap and MAP_ANON" }, { "msg_contents": "\nI can't find MAP_ANON on Solaris 2.5.1 or 2.5.6. The man\npage claims the following options are avaliable:\n\n MAP_SHARED Share changes.\n MAP_PRIVATE Changes are private.\n MAP_FIXED Interpret addr exactly.\n MAP_NORESERVE Don't reserve swap space.\n\n\nIf you'd like, I can send along the whole man page.\n\n--------- Received message begins Here ---------\n\n> \n> Would people tell me what platforms do NOT support the MAP_ANON flag to\n> the mmap() system call? You should find it in the mmap() manual page.\n> \n> *BSD has it, but I am not sure of the others. I am researching cache\n> size issues and the use of mmap vs. SYSV shared memory.\n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n> \n> \n\n-------------\nDiab Jerius Harvard-Smithsonian Center for Astrophysics\n 60 Garden St, MS 70, Cambridge MA 02138 USA\[email protected] vox: 617 496 7575 fax: 617 495 7356\n", "msg_date": "Mon, 11 May 1998 09:39:41 -0400", "msg_from": "[email protected] (Diab Jerius)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Would people tell me what platforms do NOT support the MAP_ANON flag to\n> the mmap() system call? You should find it in the mmap() manual page.\n> \n> *BSD has it, but I am not sure of the others. I am researching cache\n> size issues and the use of mmap vs. SYSV shared memory.\n\nSVR4 (at least older ones) does not support MMAP_ANON,\nbut the recommended in W. Richards Stevens' \n\"Advanced programming in the Unix environment\" (aka the Bible part 2)\nis to use /dev/zero.\n\nThis should be configurable with autoconf:\n\n<PSEUDO CODE>\n\nif (exists MAP_ANON) use it; else use /dev/zero\n\n------------\n\nflags = MAP_SHARED;\n#ifdef HAS_MMAP_ANON\nfd = -1;\nflags |= MAP_ANON;\n#else\nfd = open('/dev/zero, O_RDWR);\n#endif\narea = mmap(0, size, PROT_READ|PROT_WRITE, flags, fd, 0);\n\n</PSEUDO CODE>\n\n\n\tregards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n", "msg_date": "Mon, 11 May 1998 16:08:58 +0200", "msg_from": "\"G���ran Thyni\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Would people tell me what platforms do NOT support the MAP_ANON flag to\n> the mmap() system call? You should find it in the mmap() manual page.\n\nOn HPUX it seems to be spelled MAP_ANONYMOUS. At least if this means\nthe same thing as what you are talking about. 
The HP man page says\n\n: The MAP_FILE and MAP_ANONYMOUS flags control whether the region to be\n: mapped is a mapped file region or an anonymous shared memory region.\n: Exactly one of these flags must be selected.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 May 1998 10:42:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON " }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Would people tell me what platforms do NOT support the MAP_ANON flag to\n> the mmap() system call? You should find it in the mmap() manual page.\n\nDoesn't seem to appear in Linux (2.0.30 kernel). As another poster\ncommented, /dev/zero can be mapped for anonymous memory.\n\nOcie Mitchell\n", "msg_date": "Mon, 11 May 1998 13:56:25 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "Göran Thyni wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > Would people tell me what platforms do NOT support the MAP_ANON flag to\n> > the mmap() system call? You should find it in the mmap() manual page.\n> >\n> > *BSD has it, but I am not sure of the others. I am researching cache\n> > size issues and the use of mmap vs. SYSV shared memory.\n> \n> SVR4 (at least older ones) does not support MAP_ANON,\n> but the recommended approach in W. Richard Stevens'\n> \"Advanced programming in the Unix environment\" (aka the Bible part 2)\n> is to use /dev/zero.\n> \n> This should be configurable with autoconf:\n> \n> <PSEUDO CODE>\n> \n> if (exists MAP_ANON) use it; else use /dev/zero\n> \n> ------------\n> \n> flags = MAP_SHARED;\n> #ifdef HAS_MMAP_ANON\n> fd = -1;\n> flags |= MAP_ANON;\n> #else\n> fd = open(\"/dev/zero\", O_RDWR);\n> #endif\n> area = mmap(0, size, PROT_READ|PROT_WRITE, flags, fd, 0);\n> \n> </PSEUDO CODE>\n\nOuch, hate to say this but:\nI played around with this last night and\nI can't get either of the above techniques to work with Linux 2.0.33\n\nI will try it with the upcoming 2.2,\nbut for now, we can't lose shmem without losing\na large part of the users (including some developers).\n\n<PSEUDO CODE>\n#ifdef HAS_WORKING_MMAP\nflags = MAP_SHARED;\n#ifdef HAS_MMAP_ANON\nfd = -1;\nflags |= MAP_ANON;\n#else\nfd = open(\"/dev/zero\", O_RDWR);\n#endif\narea = mmap(0, size, PROT_READ|PROT_WRITE, flags, fd, 0);\n#else\nid = shmget(...);\narea = shmat(...);\n#endif\n</PSEUDO CODE>\n\n\tregards,\n-- \n---------------------------------------------\nGöran Thyni, sysadm, JMS Bildbasen, Kiruna\n", "msg_date": "Tue, 12 May 1998 08:22:23 +0200", "msg_from": "\"Göran Thyni\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > Would people tell me what platforms do NOT support the MAP_ANON flag to\n> > the mmap() system call? You should find it in the mmap() manual page.\n> \n> Doesn't seem to appear in Linux (2.0.30 kernel). As another poster\n> commented, /dev/zero can be mapped for anonymous memory.\n> \n\nOK, who doesn't have /dev/zero?\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Tue, 12 May 1998 12:50:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "> \n> Göran Thyni wrote:\n> > \n> > Bruce Momjian wrote:\n> > >\n> > > Would people tell me what platforms do NOT support the MAP_ANON flag to\n> > > the mmap() system call? You should find it in the mmap() manual page.\n> > >\n> > > *BSD has it, but I am not sure of the others. I am researching cache\n> > > size issues and the use of mmap vs. SYSV shared memory.\n> > \n> > SVR4 (at least older ones) does not support MAP_ANON,\n> > but the recommended approach in W. Richard Stevens'\n> > \"Advanced programming in the Unix environment\" (aka the Bible part 2)\n> > is to use /dev/zero.\n> > \n> > This should be configurable with autoconf:\n> > \n> > <PSEUDO CODE>\n> > \n> > if (exists MAP_ANON) use it; else use /dev/zero\n> > \n> > ------------\n> > \n> > flags = MAP_SHARED;\n> > #ifdef HAS_MMAP_ANON\n> > fd = -1;\n> > flags |= MAP_ANON;\n> > #else\n> > fd = open(\"/dev/zero\", O_RDWR);\n> > #endif\n> > area = mmap(0, size, PROT_READ|PROT_WRITE, flags, fd, 0);\n> > \n> > </PSEUDO CODE>\n> \n> Ouch, hate to say this but:\n> I played around with this last night and\n> I can't get either of the above techniques to work with Linux 2.0.33\n> \n> I will try it with the upcoming 2.2,\n> but for now, we can't lose shmem without losing\n> a large part of the users (including some developers).\n> \n> <PSEUDO CODE>\n> #ifdef HAS_WORKING_MMAP\n> flags = MAP_SHARED;\n> #ifdef HAS_MMAP_ANON\n> fd = -1;\n> flags |= MAP_ANON;\n> #else\n> fd = open(\"/dev/zero\", O_RDWR);\n> #endif\n> area = mmap(0, size, PROT_READ|PROT_WRITE, flags, fd, 0);\n> #else\n> id = shmget(...);\n> area = shmat(...);\n> #endif\n> </PSEUDO CODE>\n> \n\nWhat exactly did not work?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 12 May 1998 12:57:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > \n> > Bruce Momjian wrote:\n> > > \n> > > Would people tell me what platforms do NOT support the MAP_ANON flag to\n> > > the mmap() system call? You should find it in the mmap() manual page.\n> > \n> > Doesn't seem to appear in Linux (2.0.30 kernel). 
As another poster\n> > > commented, /dev/zero can be mapped for anonymous memory.\n> > > \n> > \n> > OK, who doesn't have /dev/zero?\n> \n> I have been playing around with mmap on Linux. I have been unable to\n> mmap /dev/zero or to use MAP_ANON in conjunction with MAP_SHARED.\n> There is no problem sharing memory when a real file is used.\n> Solaris-sparc seems to have no trouble sharing memory mapped from\n> /dev/zero. Very strange.\n\nAnd very bad. We have to have a 100% usable solution, or have some if\nANON code, else shared memory.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 12 May 1998 17:17:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "Bruce Momjian wrote:\n> > Göran Thyni wrote:\n> >\n> > Ouch, hate to say this but:\n> > I played around with this last night and\n> > I can't get either of the above techniques to work with Linux 2.0.33\n> >\n> > I will try it with the upcoming 2.2,\n> > but for now, we can't lose shmem without losing\n> > a large part of the users (including some developers).\n> >\n> > <PSEUDO CODE>\n> > #ifdef HAS_WORKING_MMAP\n> > flags = MAP_SHARED;\n> > #ifdef HAS_MMAP_ANON\n> > fd = -1;\n> > flags |= MAP_ANON;\n> > #else\n> > fd = open(\"/dev/zero\", O_RDWR);\n> > #endif\n> > area = mmap(0, size, PROT_READ|PROT_WRITE, flags, fd, 0);\n> > #else\n> > id = shmget(...);\n> > area = shmat(...);\n> > #endif\n> > </PSEUDO CODE>\n> >\n> \n> What exactly did not work?\n\nOK, here's the story:\n\nLinux can only MAP_SHARED if the file is a *real* file, \ndevices or tricks like MAP_ANON only work with MAP_PRIVATE.\n\n2.1.101 does not work either which means 2.2 will probably not\nimplement this feature (feature freeze is in effect for 2.2).\n\n*But*,\n(I was thinking about this,)\nwe should IMHO take a step backwards to get a better view\nover the whole memory subsystem.\n- Why and for what is shared memory used in the first place?\n- Could we use mmaping of files at a higher level than\n src/backend/storage/ipc/ipc.c to get even better performance\n and cleanness?\n\nI will, time permitting, look into cleaning up the shmem-init/exit\nroutines\nto work in a \"no-exec\" environment. I also have a hack to use\nmmap-shared/private,\nwhich of course is untested, since it does not work on my linux-boxen.\n\n\tregards,\n-- \n---------------------------------------------\nGöran Thyni, sysadm, JMS Bildbasen, Kiruna\n", "msg_date": "Wed, 13 May 1998 08:17:12 +0200", "msg_from": "\"Göran Thyni\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "> *But*,\n> (I was thinking about this,)\n> we should IMHO take a step backwards to get a better view\n> over the whole memory subsystem.\n> - Why and for what is shared memory used in the first place?\n> - Could we use mmaping of files at a higher level than\n> src/backend/storage/ipc/ipc.c to get even better performance\n> and cleanness?\n\nYes, we could use mmap() to map the actual files. 
I will post\ntimings on this soon.\n\nThe shared memory acts as a cache for us that can be locked, and it does\nnot need to be read in/out of the address space for each use, as happens\nwhen we use the OS buffer cache.\n\n> \n> I will, time permitting, look into cleaning up the shmem-init/exit\n> routines\n> to work in a \"no-exec\" environment. I also have a hack to use\n> mmap-shared/private,\n> which of course is untested, since it does not work on my linux-boxen.\n> \n> \tregards,\n> -- \n> ---------------------------------------------\n> Göran Thyni, sysadm, JMS Bildbasen, Kiruna\n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 13 May 1998 11:47:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "\"Göran Thyni\" <[email protected]> writes:\n> Linux can only MAP_SHARED if the file is a *real* file, \n> devices or tricks like MAP_ANON only work with MAP_PRIVATE.\n\nWell, this makes some sense: MAP_SHARED implies that the shared memory\nwill also be accessible to independently started processes, and\nto do that you have to have an openable filename to refer to the\ndata segment by.\n\nMAP_PRIVATE will *not* work for our purposes: according to my copy\nof mmap(2):\n\n: If MAP_PRIVATE is set in flags:\n: o Modification to the mapped region by the calling process is\n: not visible to other processes which have mapped the same\n: region using either MAP_PRIVATE or MAP_SHARED.\n: Modifications are not visible to descendant processes that\n: have inherited the mapped region across a fork().\n\nso privately mapped segments are useless for interprocess communication,\neven after we get rid of exec().\n\nmmaping /dev/zero, as has been suggested earlier in this thread,\nseems like a really bad idea to me. Would that not imply that\nany process anywhere in the system that also decides to mmap /dev/zero\nwould get its hands on the Postgres shared memory segment? You\ncan't restrict permissions on /dev/zero to prevent it.\n\nAm I right in thinking that the contents of the shared memory segment\ndo not need to outlive a particular postmaster run? (If they do, then\nwe have to mmap a real file anyway.) If so, then MAP_ANON(YMOUS) is\na reasonable solution on systems that support it. On those that\ndon't support it, we will have to mmap a real file owned by (and only\nreadable/writable by) the postgres user. 
Time for another configure\ntest.\n\nBTW, /dev/zero doesn't exist anyway on HPUX 9.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 May 1998 13:29:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON " }, { "msg_contents": "> \n> \"Göran Thyni\" <[email protected]> writes:\n> > Linux can only MAP_SHARED if the file is a *real* file, \n> > devices or tricks like MAP_ANON only work with MAP_PRIVATE.\n> \n> Well, this makes some sense: MAP_SHARED implies that the shared memory\n> will also be accessible to independently started processes, and\n> to do that you have to have an openable filename to refer to the\n> data segment by.\n> \n> MAP_PRIVATE will *not* work for our purposes: according to my copy\n> of mmap(2):\n\nRight.\n> so privately mapped segments are useless for interprocess communication,\n> even after we get rid of exec().\n\nYep.\n\n> \n> mmaping /dev/zero, as has been suggested earlier in this thread,\n> seems like a really bad idea to me. Would that not imply that\n> any process anywhere in the system that also decides to mmap /dev/zero\n> would get its hands on the Postgres shared memory segment? You\n> can't restrict permissions on /dev/zero to prevent it.\n\nGood point.\n\n> \n> Am I right in thinking that the contents of the shared memory segment\n> do not need to outlive a particular postmaster run? (If they do, then\n> we have to mmap a real file anyway.) If so, then MAP_ANON(YMOUS) is\n> a reasonable solution on systems that support it. On those that\n> don't support it, we will have to mmap a real file owned by (and only\n> readable/writable by) the postgres user. Time for another configure\n> test.\n\nMAP_ANON is the best, because it can be restricted to only postmaster\nchildren.\n\nThe problem with using a real file is that the filesystem is going to be\nflushing those dirty pages to disk, and that could really hurt\nperformance.\n\nActually, when I install Informix, I always have to modify the kernel to\nallow a larger amount of SYSV shared memory. Maybe we just need to give\npeople per-OS instructions on how to do that. Under BSD/OS, I now have\n32MB of shared memory, or 3900 8k shared buffers.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Wed, 13 May 1998 14:02:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Göran Thyni\" <[email protected]> writes:\n> > Linux can only MAP_SHARED if the file is a *real* file, \n> > devices or tricks like MAP_ANON only work with MAP_PRIVATE.\n> \n> Well, this makes some sense: MAP_SHARED implies that the shared memory\n> will also be accessible to independently started processes, and\n> to do that you have to have an openable filename to refer to the\n> data segment by.\n> \n> MAP_PRIVATE will *not* work for our purposes: according to my copy\n> of mmap(2):\n> \n> : If MAP_PRIVATE is set in flags:\n> : o Modification to the mapped region by the calling process is\n> : not visible to other processes which have mapped the same\n> : region using either MAP_PRIVATE or MAP_SHARED.\n> : Modifications are not visible to descendant processes that\n> : have inherited the mapped region across a fork().\n> \n> so privately mapped segments are useless for interprocess communication,\n> even after we get rid of exec().\n> \n> mmaping /dev/zero, as has been suggested earlier in this thread,\n> seems like a really bad idea to me. Would that not imply that\n> any process anywhere in the system that also decides to mmap /dev/zero\n> would get its hands on the Postgres shared memory segment? You\n> can't restrict permissions on /dev/zero to prevent it.\n\nOn some systems, mmaping /dev/zero can be shared with child processes\nas in this example:\n\n#include <sys/types.h>\n#include <sys/mman.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <unistd.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <sys/wait.h>\n\nint main()\n{\n int fd;\n caddr_t ma;\n int i;\n int pagesize = sysconf(_SC_PAGESIZE);\n\n /* map one page of anonymous memory backed by /dev/zero */\n fd=open(\"/dev/zero\",O_RDWR);\n if (fd==-1) {\n perror(\"open\");\n exit(1);\n }\n\n ma=mmap((caddr_t) 0,\n\t pagesize,\n\t (PROT_READ|PROT_WRITE), \n\t MAP_SHARED,\n\t fd,\n\t 0);\n\n if (ma == (caddr_t) -1) {\n perror(\"mmap\");\n exit(1);\n }\n\n memset(ma,0,pagesize);\n\n i=fork();\n \n if (i==-1) {\n perror(\"fork\");\n exit(1);\n }\n\n if (i==0) { /* child */\n ((char*)ma)[0]=1;\n sleep(1);\n printf(\"child %d %d\\n\",((char*)ma)[0],((char*)ma)[1]);\n sleep(1);\n return 0;\n } else { /* parent */\n ((char*)ma)[1]=1;\n sleep(1);\n printf(\"parent %d %d\\n\",((char*)ma)[0],((char*)ma)[1]);\n }\n\n wait(NULL);\n munmap(ma,pagesize);\n\n return 0;\n}\n\n\nThis works on Solaris and, as expected, both the parent and child are\nable to write into the memory and their changes are honored (the\nmemory is truly shared between the processes). We can certainly map a\nreal file, and this might even give us some interesting crash recovery\noptions. The nice thing about doing away with the exec is that the\nmemory mapped in the parent process is available at the same address\nregion in every process, so we don't have to do funky pointer tricks.\n\nThe only problem I see with mmap is that we don't know exactly when a\npage will be written to disk. I.e., if you make two writes, the page\nmight get synced between them, thus storing an inconsistent\nintermediate state to the disk. Perhaps with proper transaction\ncontrol, this is not a problem.\n\nThe question is should the individual database files be mapped into\nmemory, or should one \"pgmem\" file be mapped, with pages from\ndifferent files read into it. 
The first option would allow different\nbackend processes to map different pages of different files as they\nare needed. The postmaster could \"pre-map\" pages on behalf of the\nbackend processes as sort of an intelligent read-ahead mechanism.\n\nI'll try to write this separate from Postgres just to see how it works.\n\nOcie\n", "msg_date": "Wed, 13 May 1998 11:38:42 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Would people tell me what platforms do NOT support the MAP_ANON flag to\n> the mmap() system call? You should find it in the mmap() manual page.\n> \n> *BSD has it, but I am not sure of the others. I am researching cache\n> size issues and the use of mmap vs. SYSV shared memory.\n\nWell, I haven't noticed this discussion. However, I can't understand one\nthing:\n\nWhy do a lot of people investigate how to replace shared memory with\nanonymous mmapping, while there is no discussion on replacing\nreads/writes with memory mapping of heap files?\n\nThis way we would not only get better system cache\nutilisation but also less memory copying. For me it seems\nlike a more robust solution. I suggested it a few months ago.\n\nIf it's a bad idea, I wonder why?\nAre there any systems that cannot do mmaps at all? \n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n", "msg_date": "Wed, 13 May 1998 22:26:55 +0200", "msg_from": "Michal Mosiewicz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > Would people tell me what platforms do NOT support the MAP_ANON flag to\n> > the mmap() system call? You should find it in the mmap() manual page.\n> > \n> > *BSD has it, but I am not sure of the others. I am researching cache\n> > size issues and the use of mmap vs. SYSV shared memory.\n> \n> Well, I haven't noticed this discussion. However, I can't understand one\n> thing:\n> \n> Why do a lot of people investigate how to replace shared memory with\n> anonymous mmapping, while there is no discussion on replacing\n> reads/writes with memory mapping of heap files?\n> \n> This way we would not only get better system cache\n> utilisation but also less memory copying. For me it seems\n> like a more robust solution. I suggested it a few months ago.\n> \n> If it's a bad idea, I wonder why?\n> Are there any systems that cannot do mmaps at all? \n\nmmap'ing a file is not necessarily faster. I will post timings soon\nthat show this is not the case.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 14 May 1998 00:35:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "Michal Mosiewicz asks:\n> Why do a lot of people investigate how to replace shared memory with\n> anonymous mmapping, while there is no discussion on replacing\n> reads/writes with memory mapping of heap files?\n> \n> This way we would not only get better system cache\n> utilisation but also less memory copying. For me it seems\n> like a more robust solution. 
I suggested it a few months ago.\n> \n> If it's a bad idea, I wonder why?\n\nUnfortunately, it is probably a bad idea.\n\nThe postgres buffer cache is a shared pool of pages containing an assortment\nof blocks from all the different tables in use by all the different backends.\n\nThat is, if backend 'a' is reading table 'ta', and backend 'b' is reading\ntable 'tb', then the buffer cache will have blocks from both table 'ta'\nand table 'tb' in it.\n\nThe benefit occurs when backend 'x' starts reading either table 'ta' or 'tb'.\nRather than have to go to disk, it finds the pages already loaded in the\nshared buffer cache. Likewise, if backend 'a' should modify a page in table\n'ta', the change is then visible to all the other backends (ignoring locks\nfor this discussion) without any explicit communication between the backends.\n\nIf we started creating a separate mmapped region for each table, several\nproblems occur:\n\n - each time a backend wants to use a table it will have to somehow find out\n if it is already mapped, and then either map it (for the first time), or\n attach to an existing mapping created by another backend. This implies\n that the backends need to communicate with all the other backends to let\n them know what mappings they are using.\n\n - if two backends are using the same table, and the table is too big to\n map the whole thing, then each backend needs a \"window\" into the table.\n This becomes difficult if the two backends are using different parts of\n the table (ie, the first page and the last page).\n\n - there is a finite amount of memory available on the system for postgres\n to use. This will have to be split among all the open tables used by\n all the backends. If you have 50 backends each using 10 tables each with 3\n indexes, you now need 2,000 mappings in the system. Assuming that there\n are 2001 pages available for mapping, how do you decide which table gets\n to map 2 pages? How do you get all the backends to agree about this?\n\nEssentially, mapping tables separately creates a requirement for a huge\namount of communication and synchronization among the backends. And, even\nif this were not prohibitive, it ends up fragmenting the available memory\nfor buffers so badly that the caching becomes ineffective.\n\nSo, unless you are going to map whole tables and those tables are needed by\n_all_ the active backends, the idea of mmapping separate tables is unworkable.\n\nThat said, there are tables that meet this criterion, for instance the\ntransaction logs and anchors. Here mmapping might indeed be useful but even\nso it would take some thought and a fair amount of work to gain any benefit.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Thu, 14 May 1998 11:39:56 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "David Gould wrote:\n\n> - each time a backend wants to use a table it will have to somehow find out\n> if it is already mapped, and then either map it (for the first time), or\n> attach to an existing mapping created by another backend. 
This implies\n> that the backends need to communicate with all the other backends to let\n> them know what mappings they are using.\n\nWhy does a backend have to check if it's already mapped? Let's say that backend\nA maps the first page from file X using MAP_SHARED, then backend B maps\nthe first page using MAP_SHARED. 
So, at this moment they are pointing to the\nsame memory area without any communication. (at least that's the way it\nworks on Linux, in Linux even MAP_PRIVATE is the same memory region when\nyou mmap it twice until you write a byte in there - then it's copied).\nSo why would we check what other backends map? We use MAP_SHARED to not\nhave to check it.\n \n> - if two backends are using the same table, and the table is too big to\n> map the whole thing, then each backend needs a \"window\" into the table.\n> This becomes difficult if the two backends are using different parts of\n> the table (ie, the first page and the last page).\n\nWell, I wasn't even thinking of mapping anything more than just one page\nthat is needed. \n \n> - there is a finite amount of memory available on the system for postgres\n> to use. This will have to be split among all the open tables used by\n> all the backends. If you have 50 backends each using 10 tables each with 3\n> indexes, you now need 2,000 mappings in the system. 
Assuming that there\n> > are 2001 pages available for mapping, how do you decide with table gets\n> > to map 2 pages? How do you get all the backends to agree about this?\n> \n> IMHO, this is also not that much problem as it looks like. When the\n> system is running out of virtual memory, the occupied pages are\n> paged-out. The system does what actually buffer manager does - it writes\n> down the pages that are dirty, and simply frees memory from those that\n> are not modified on a last recently used basis. So the only thing that\n> costs are the memory structures that describe the bindings between disk\n> blocks and memory. And of course it's sometimes bad to use LRU\n> algorithm. Sometimes backend knows better which pages are best to\n> page-out. \n> \n> I have to admit that this point seems to be potential source of\n> performance drop-downs and all the backends have to communicate to\n> prevent it. But I don't think that this communication is huge. Note that\n> currently all backends use quite large communication channel (256 pages\n> large by default?) which is hardly used for communication purposes but\n> rather for storage.\n\nPerhaps. Still, to implement this would be a major task. I would prefer to\nspend that effort on adding page or row level locking for instance.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n", "msg_date": "Sat, 30 May 1998 21:49:16 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" } ]
[ { "msg_contents": "^_^) \n^_^) Please cut the complete filename in this message (starting at the first\n^_^) \"/\") and do an ls -l on it. \"Can't stat\" means that there is either no\n^_^) file at the specified location or it is unreadable to the backend which\n^_^) runs as \"postgres\".\n^_^) \n^_^) Gene\n^_^) \n\n Dear Sir\n\n Thank you for your favor.\n\n The reason is that my directory permission was 700.\n\n We changed from 700 to 755.\n\n Now postgres is running \n\n --------------\n\n Good luck!!\n \n Tak.\n", "msg_date": "Mon, 11 May 1998 13:16:26 +0900 (KST)", "msg_from": "Tak Woohyun <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Help me!!! Please" } ]
[ { "msg_contents": "Seems like SQL functions cannot be used for defining functional\nindexes. Is this a feature or a bug? (I couldn't find that restriction\nin the docs)\n\n> create table d1 (d datetime);\n> insert into d1 values('now'::datetime);\n> create index d1index1 on d1 (d);\n> create function date2month(datetime) returns datetime as ' select date_trunc(\'month\', datetime($1))' language 'sql';\n> create index d1index2 on d1 (date2month(d) datetime_ops);\n> ERROR: internal error: untrusted function not supported.\n\nNext, C functions work great for creating functional\nindexes. Good. Unfortunately, the functional index I have created\nseems never to be used. Any suggestion?\n\ncreate table d1(d date);\nCREATE FUNCTION date2month(date)\nRETURNS datetime\nAS '/mnt2/home/mgr/t-ishii/doc/PostgreSQL/functional_index/date2month/date2month.so'\nLANGUAGE 'c';\n(300 records insertion here)\ncreate index d1index on d1 using btree (date2month(d) datetime_ops);\nvacuum d1;\nexplain select * from d1 where date2month(d) = 'Mon Mar 01 00:00:00 1999 JST'::datetime;\nNOTICE: QUERY PLAN:\n\nSeq Scan on d1 (cost=13.96 size=166 width=4)\n\nEXPLAIN\n\n---------------------- date2month.c --------------------\n#include \"postgres.h\"\n#include \"utils/builtins.h\"\n\nDateTime *date2month(DateADT date)\n{\n static char *month = \"month\";\n DateTime *d,*ret;\n union {\n text unit;\n char buf[128];\n } v;\n\n d = date_datetime(date);\n strcpy(VARDATA(&v.unit),month);\n VARSIZE(&v.unit) = strlen(month)+VARHDRSZ;\n ret = datetime_trunc(&v.unit,d);\n return(ret);\n}\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Mon, 11 May 1998 14:25:25 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "functional index" } ]
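For comparison, the same C function can be written without the on-stack union buffer by palloc'ing the text argument. A sketch only, assuming the 6.3-era helpers date_datetime() and datetime_trunc() used above:

#include <string.h>
#include "postgres.h"
#include "utils/builtins.h"

DateTime *date2month(DateADT date)
{
    static char *month = "month";
    text     *unit;
    DateTime *ret;

    /* build a text value holding the word "month" */
    unit = (text *) palloc(strlen(month) + VARHDRSZ);
    VARSIZE(unit) = strlen(month) + VARHDRSZ;
    memcpy(VARDATA(unit), month, strlen(month));

    ret = datetime_trunc(unit, date_datetime(date));
    pfree(unit);
    return ret;
}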
[ { "msg_contents": "\n-----Original Message-----\nFrom: Thomas G. Lockhart <[email protected]>\nTo: Maurice Gittens <[email protected]>\nCc: Dave Chapeskie <[email protected]>; Postgres Hackers List\n<[email protected]>\nDate: Monday 11 May 1998 12:24\nSubject: Re: [HACKERS] Automatic type conversion\n\n\n>> Making an int from a float is only defined for \"small\" values of the\n>> float. So for the general case such a conversion would simply overflow\n>> the int, giving it an undefined value. Does this make sense to you?\n>\n>Yes, it does. Look, I'm not saying everyone _should_ call factorial with\n>a float, only that if someone does, Postgres will try to accomplish it.\n>Doesn't it make sense to you?\n\nIMO the issue is not related to the factorial function. I think we are (or\nshould be) discussing the general issue of how to handle conversions from a\ntype A to a type B when the conversion function F from A to B is not defined\nfor all values of A.\n\n>\n>> Are conversions between types defined in a way that is also\n>> extensible? I'm trying to say that if I add a new type to the system,\n>> can I also specify which conversions are automatically allowed?\n>> (Something similar to the C++ \"explicit\" keyword?).\n>\n>Yes, they are extensible in the sense that all conversions (except for a\n>few string type hacks at the moment) are done by looking for a function\n>named with the same name as the target type, taking as a single argument\n>one with the specified source type. If you define one, then Postgres can\n>use it for conversions.\n>\n>At the moment the primary mechanism uses the pg_proc table to look for\n>possible conversion functions, along with a hardcoded notion of what\n>\"preferred types\" and \"type categories\" are for the builtin types. For\n>user-defined types, explicit type conversion functions must be provided\n>_and_ there must be a single path from source to possible targets for\n>the conversions. Otherwise multiple possible conversions will result\n>and Postgres will ask you to use a cast, much as it does in\n>v6.3.x and before.\n\nThanks for the explanation.\n\n>\n>> >Or, again for this factorial case, we can implement a floating point\n>> >factorial with either the gamma function (whatever that is :) or with\n>> >an explicit routine which checks for non-integral values.\n>> And properly handles overflows.\n>\n>Hey, it doesn't do any worse than before...\nI don't know what the system used to do. I do however hope\nthat if a conversion is not defined that the system won't simply ignore\nthe error.\n\nDon't worry, I'll shut up now.\n\nWith regards from Maurice.\n\n\n", "msg_date": "Mon, 11 May 1998 09:44:51 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Automatic type conversion" }, { "msg_contents": "> Don't worry, I'll shut up now.\n\nWell, try things out if you have time and see if things work in general.\nIt took me quite some time to work through the issues and we'll need to\ntry many cases and have some back-and-forth discussion before everything\nis clear and we can determine what adjustments should be made.\n\nfyi, the original SQL developers for Postgres seemed to be divided on\nthe subject, but one camp concluded that almost no automatic type\nconversion was desirable. I think in practice that this is too extreme,\nsince many cases have an obvious best conversion strategy...\n\n - Tom\n", "msg_date": "Mon, 11 May 1998 14:47:09 +0000", "msg_from": "\"Thomas G. 
Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Automatic type conversion" } ]
[ { "msg_contents": "Maurice wrote:\n> Making an int from a float is only defined for \"small\" values of the float.\n> So for the general case such a conversion would simply overflow the int,\n> giving it an undefined value. Does this make sense to you?\n\nThis sure sounds good:\nselect (4.00000 !); -- would work, but \nselect (4.30000 !); -- would throw a runtime error like\nERROR: float to integer conversion error, value out of range.\n\nAndreas\n\n", "msg_date": "Mon, 11 May 1998 10:16:20 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "float --> int" } ]
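A sketch of the checked conversion being proposed here; dtoi_checked is a hypothetical name, and a backend version would report failure through its error mechanism rather than a flag:

#include <limits.h>
#include <math.h>

/* Convert a double to an int, refusing values that would overflow
 * or lose a fractional part (so 4.00000 passes, 4.30000 fails). */
int dtoi_checked(double x, int *ok)
{
    if (x < (double) INT_MIN || x > (double) INT_MAX) {
        *ok = 0;                /* out of range for an int */
        return 0;
    }
    if (x != floor(x)) {
        *ok = 0;                /* non-integral value */
        return 0;
    }
    *ok = 1;
    return (int) x;
}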
[ { "msg_contents": "Hi, There is some code in the libpq/backend communication that seems very\nstrange to me.\n\nin backend/access/common/printtup.c printtup():\n\n\t\t\tpq_putint(strlen(outputstr) + VARHDRSZ, VARHDRSZ);\n\t\t\tpq_putnchar(outputstr, strlen(outputstr));\n\nthe first line above sends the data length and the second actually sends\nthe data. My question is why the data length is \"strlen(outputstr) +\nVARHDRSZ\", not just strlen(outputstr). After some investigation, I\nfound a code fragment that might be an answer. In\ninterfaces/libpq/fe-exec.c getTuple():\n\n\t\t\t/* get the value length (the first four bytes are for length) */\n\t\t\tpqGetInt(&vlen, 4, pfin, pfdebug);\n\t\t\tif (binary == 0)\n\t\t\t{\n\t\t\t\tvlen = vlen - 4;\n\t\t\t}\n\nWoh! The mysterious 4 bytes are subtracted by libpq! It seems they\nhave remained just for historical reasons.\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Mon, 11 May 1998 18:18:38 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "questionable codes in libpq/backend communication" } ]
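The invariant the two quoted fragments maintain, restated as a sketch (the helper names are illustrative, not actual libpq routines):

#define VARHDRSZ 4

/* server side (printtup): length word placed on the wire */
static int wire_length(int datalen)
{
    return datalen + VARHDRSZ;
}

/* client side (getTuple): data bytes to actually read back */
static int data_length(int wirelen, int binary)
{
    return binary ? wirelen : wirelen - VARHDRSZ;
}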
[ { "msg_contents": "Hi\n\nDoes anybody know what a Portal is?\nPostgres' code is full of references to them. \nie: \n\n\tProcessPortal routine called inside the executor.\n\nWhat is the difference between a query processed by calling ProcessPortal and\na query processed in the standard way (ExecutorRun, ExecutorEnd)...\n\n\n\t\t\t\t\t\tThanks \n-- \n\n\t--------------------------------------------------------------\n\t|10 IF \"LAS RANAS\"=\"TIENEN PELO\" THEN PRINT \"Windows is good\"|\n\t--------------------------------------------------------------\n\t\nCarlos Navarro Garcia \t([email protected])\n\t\t\t([email protected])\n\nD6006 o D6113 , Igual tienes suerte.\nPhone: Phone? Fax: Faaaaaaaax?\n\n\n", "msg_date": "Mon, 11 May 1998 11:59:13 +0200 (MET DST)", "msg_from": "Carlos Navarro Garcia <[email protected]>", "msg_from_op": true, "msg_subject": "Portals" } ]
[ { "msg_contents": "\n-----Original Message-----\nFrom: Andreas Zeugswetter <[email protected]>\nTo: '[email protected]' <[email protected]>\nDate: maandag 11 mei 1998 21:31\nSubject: [HACKERS] float --> int\n\n\n>Maurice wrote:\n>> Making an int from a float is only defined for \"small\" values of the\nfloat.\n>> So for the general case such a conversion would simply overflow the int,\n>> giving it an undefined value. Does this make sense to you?\n>\n>This sure sounds good:\n>select (4.00000 !); -- would work, but\n>select (4.30000 !); -- would throw a runtime error like\n>ERROR: float to integer conversion error, value out of range.\n>\n\n\nI agree. I'm just trying to say that the error should be flagged as such.\n\nMaurice\n\n", "msg_date": "Mon, 11 May 1998 15:27:52 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] float --> int" } ]
[ { "msg_contents": "Hi all,\n\nAfter almost one year using PostgreSQL, I'm still discovering things about\nlocks, we have little documentation about it:\n\nI know that one can lock a table in the following ways:\n\n* BEGIN statement:\n If you don't explicitly lock a table using the LOCK statement, it will be\n implicitly locked only at the first UPDATE, INSERT or DELETE operation.\n\n* DECLARE statement:\n Currently, PostgreSQL doesn't support READ ONLY cursors, once a cursor\n is declared, other users can only read data referenced by the cursor.\n Write operations to the referenced table like UPDATE, INSERT,\n DELETE or DROP aren't allowed until the end of the transaction.\n\n* LOCK statement:\n LOCK doesn't allow read access to locked tables by other users.\n If another user tries to SELECT a locked table, he must wait\n until the locked table is released.\n\nJust wondering if there are other ways to lock tables.\nAny reply will be appreciated. Thanks, Jose'\n\n", "msg_date": "Mon, 11 May 1998 14:49:37 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "locks" }, { "msg_contents": "Jose' Soares Da Silva wrote:\n> \n> Hi all,\n> \n> After almost one year using PostgreSQL, I'm still discovering things about\n> locks, we have little documentation about it:\n> \n> I know that one can lock a table in the following ways:\n> \n> * BEGIN statement:\n> If you don't explicitly lock a table using the LOCK statement, it will be\n> implicitly locked only at the first UPDATE, INSERT or DELETE operation.\n> \n> * DECLARE statement:\n> Currently, PostgreSQL doesn't support READ ONLY cursors, once a cursor\n> is declared, other users can only read data referenced by the cursor.\n> Write operations to the referenced table like UPDATE, INSERT,\n> DELETE or DROP aren't allowed until the end of the transaction.\n> \n> * LOCK statement:\n> LOCK doesn't allow read access to locked tables by other users.\n> If another user tries to SELECT a locked table, he must wait\n> until the locked table is released.\n> \n> Just wondering if there are other ways to lock tables.\n> Any reply will be appreciated. Thanks, Jose'\n> \n\n\nI believe it to be so that BEGIN locks each table when it first\noccurs in a statement. With multiple statements per transaction\nthis may lead to a deadlock.\n\nRoelof\n\n----------------------------------------------------------------\nHome is where the http://eboa.com/ is.\n----------------------------------------------------------------\n", "msg_date": "Mon, 11 May 1998 22:06:49 +0200", "msg_from": "Roelof Osinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] locks" } ]
[ { "msg_contents": "\n\n>[email protected] (David Gould) writes:\n>> The idea that occurred to me is to have the postmaster\n>> \"pre-spawn\" some servers in each (configurable) database. These would run\n>> all the initialization and then just wait for a socket to be handed\n>> to them.\n>> The postmaster would during idle time replenish the pool of ready\n>> servers.\n>\n\nDoesn't Apache do something similar? It should be easy enough to borrow their\nimplementation.\n\nRegards,\n Maurice.\n\n\n\n", "msg_date": "Mon, 11 May 1998 17:18:38 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contentionh] " }, { "msg_contents": "\nThey do it by having all children perform a \"listen\" on the socket..\nwould the ipc stuff function as usual in this case? I'm not clear on\nhow the ipc stuff works.\n\nOn Mon, 11 May 1998, at 17:18:38, Maurice Gittens wrote:\n\n> Doesn't Apache do something similar? It should be easy enough to borrow their\n> implementation.\n> \n> Regards,\n> Maurice.\n> \n> \n> \n", "msg_date": "Mon, 11 May 1998 08:37:08 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] Try again: S_LOCK reduced contentionh] " } ]
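A minimal sketch of the Apache-style pre-fork scheme being discussed: every pre-spawned child inherits the listening socket and blocks in accept(), so the kernel hands each new connection to exactly one of them. This is a general illustration rather than Postgres code; port 5432 and NCHILDREN are arbitrary.

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NCHILDREN 5

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    int i;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5432);
    if (lfd == -1 ||
        bind(lfd, (struct sockaddr *) &addr, sizeof(addr)) == -1 ||
        listen(lfd, 16) == -1) {
        perror("socket/bind/listen");
        exit(1);
    }

    for (i = 0; i < NCHILDREN; i++) {
        if (fork() == 0) {
            /* expensive per-backend initialization would happen here,
             * once, before any connection arrives */
            for (;;) {
                int cfd = accept(lfd, NULL, NULL); /* all children block here */
                if (cfd == -1)
                    continue;
                /* ...serve one connection... */
                close(cfd);
            }
        }
    }
    for (;;)
        pause();    /* the parent could replenish the pool here instead */
}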
[ { "msg_contents": "We installed the 6.3.2 on our system and now the nature of our data\ncorruption has changed. It used to give us a \"BTP_CHAIN errors\" message\nbut now it is doing a core dump. I have two questions.\n\n1. Is there a way to check the status of the Index files to see if they\nare OK and not corrupted? Since I did not know of any way to test the\nindex files, we tried to start the system without reindexing and it\nfailed, yet after reindexing, it works fine for almost a day. The\ncorruption happens almost always after the peak hour every day.\n2. Do you know of a good way to read the Core Dump file? We have FreeBsd\nUnix and I have tried to read the file with \"gdb -c coredumpfile.core\"\ncommand, but it complains about the format of the file. Any clue?\n", "msg_date": "Mon, 11 May 1998 08:39:44 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Is there any way to check the status of the Index table." }, { "msg_contents": "[email protected] wrote:\n> \n> We installed the 6.3.2 on our system and now the nature of our data\n> corruption has changed. It used to give us a \"BTP_CHAIN errors\" message\n> but now it is doing a core dump. I have two questions.\n> \n> 1. Is there a way to check the status of the Index files to see if they\n> are OK and not corrupted? Since I did not know of any way to test the\n> index files, we tried to start the system without reindexing and it\n> failed, yet after reindexing, it works fine for almost a day. The\n> corruption happens almost always after the peak hour every day.\n> 2. Do you know of a good way to read the Core Dump file? We have FreeBsd\n> Unix and I have tried to read the file with \"gdb -c coredumpfile.core\"\n> command, but it complains about the format of the file. Any clue?\n\nIt also needs to know which executable produced the core file. I\nusually omit the -c, simply \"gdb <executable> <core file>\".\n\nHope this helps,\n\nOcie\n\n", "msg_date": "Mon, 11 May 1998 13:54:26 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Is there any way to check the status of the Index\n table." } ]
[ { "msg_contents": "> Would people tell me what platforms do NOT support the MAP_ANON flag to\n> the mmap() system call? You should find it in the mmap() manual page.\n> \nIRIX doesn't seem to have it (checked both Irix 5 and Irix 6)\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Mon, 11 May 1998 16:32:40 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" } ]
[ { "msg_contents": "Hi\n\nDoes anybody know what a Portal is?\nPostgres' code is full of references to them. \nie: \n\n ProcessPortal routine called inside the executor.\n\nWhat is the difference between a query processed by calling ProcessPortal and\na query processed in the standard way (ExecutorRun, ExecutorEnd)...\n\n\n Thanks \n--\n\nCarlos Navarro Garcia \t([email protected])\n\t\t\t([email protected])\n\nD6006 o D6113 , Igual tienes suerte.\nPhone: Phone? Fax: Faaaaaaaax?\n\n\n", "msg_date": "Tue, 12 May 1998 08:00:00 +0200 (MET DST)", "msg_from": "Carlos Navarro Garcia <[email protected]>", "msg_from_op": true, "msg_subject": "Portals again" } ]
[ { "msg_contents": "> My question is why the data length is \"strlen(outputstr) +\n> VARHDRSZ\", not just strlen(outputstr). \n\nI also think that this is a little annoying, but there seems to be no \neasy way out, since all user code has a reference to VARHDRSZ.\nTherefore all existing code would need to be ported. Since I don't\nhave lots of C code I would vote for doing this, but what would\nthose with lots of code say :( ?\n\nAndreas\n\nPS: Maybe some macros would be good for users, to hide VARHDRSZ\n\n\n", "msg_date": "Tue, 12 May 1998 09:51:12 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] questionable codes in libpq/backend communication" } ]
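A sketch of the kind of user-side macros suggested in that postscript (hypothetical names, not an existing Postgres header):

/* Hide the VARHDRSZ bookkeeping from user code. */
#define VARTEXTLEN(t)    (VARSIZE(t) - VARHDRSZ)        /* data bytes in t */
#define VARSETLEN(t, n)  (VARSIZE(t) = (n) + VARHDRSZ)  /* set from data bytes */

/* usage:
 *   text *t = (text *) palloc(len + VARHDRSZ);
 *   VARSETLEN(t, len);
 *   memcpy(VARDATA(t), src, len);
 */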
[ { "msg_contents": "I have hit the Access 97 problem of ORDER BY not in target list with a\nsimple, one-table query. The problem does not seem to have anything to do\nwith joins.\n\nI have logged the SQL sent to the back end with the CommLog option in the\n238 driver (very useful, thanks Byron). When using ORDER BY, Access 97\nfirst sends a query to retrieve just the key(s) and then sends another to\nget the required data. Hence, for the first query unless the fields used\nin ORDER BY are key fields they are not in the target list. Nice one MS! \nNothing like making your software do the obvious thing, is there?\n\nThere is a work around. First create your query the usual way (point and\nclick if you like). Next, display the SQL. Finally, convert it into a\nPASSTHROUGH query, so that the backend receives your SQL as written. This\nworked for my very simple test, but I have not checked it out for more\ncomplex queries.\n\nBTW, Access 2.0 does not seem to have this problem, so why this strange\nbehaviour has been introduced into 97 I cannot imagine.\n\nTony Cowderoy\n", "msg_date": "Tue, 12 May 1998 09:01:57 +0100", "msg_from": "Tony Cowderoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] Group/Order by not in target - Was [NEW ODBC DRIVER]" } ]
[ { "msg_contents": "\nI've just finished working on the type\nconversion algorithms so understand the current \"atttypmod\" field a bit\nbetter, but have not decided how to extend it to multiple fields.\n\ndivide it into two 16 bit integers ? \n\nA mathematical package exists for infinite scale decimals, I think\nit was part of a 56 bit RSA cracking effort. It has all thinkable \noperations defined (some I have never heard of, and I am no beginner in math)\nI think I wrote to the list about it in the past, but I can't find it anymore. \n\nAndreas\n\n\n\n", "msg_date": "Tue, 12 May 1998 10:46:05 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Re: [QUESTIONS] money or dollar type" }, { "msg_contents": ">> I've just finished working on the type\n>> conversion algorithms so understand the current \"atttypmod\" field a \n>> bit better, but have not decided how to extend it to multiple fields.\n> divide it into two 16 bit integers ?\n\nAt the moment it already _is_ a 16 bit integer, so it would have to be\ndivided into two 8 bit integers. Still OK, but then it must be a\npositive number, so one field can be only 7 bits. I was thinking of\ntrying to solve the problem generally so that a type definition can also\ndefine a \"type support type\" similar to the current atttypmod, but which\ncould be single or multiple numbers, or a string, or... \n\nDon't know if it would be generally useful though; still thinking about\nhow to implement different character sets and collation sequences for\nstrings and it seems like this might help.\n\n> A mathematical package exists for infinite scale decimals, I think\n> it was part of a 56 bit RSA cracking effort. It has all thinkable\n> operations defined...\n\nWell, if you find it again let us know ;) In the meantime, the 64-bit\nintegers are probably the best candidate implementation.\n\n - Tom\n", "msg_date": "Tue, 12 May 1998 13:16:04 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Re: [QUESTIONS] money or dollar type" }, { "msg_contents": "> \n> \n> I've just finished working on the type\n> conversion algorithms so understand the current \"atttypmod\" field a bit\n> better, but have not decided how to extend it to multiple fields.\n> \n> divide it into two 16 bit integers ? \n\natttypmod is only 16 bits, so it would be two 8-bit values. I can\nchange it to 32-bits if needed.\n\n> \n> A mathematical package exists for infinite scale decimals, I think\n> it was part of a 56 bit RSA cracking effort. It has all thinkable \n> operations defined (some I have never heard of, and I am no beginner in math)\n> I think I wrote to the list about it in the past, but I can't find it anymore. \n\nMaybe Marc can find it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Tue, 12 May 1998 12:59:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Re: [QUESTIONS] money or dollar type" }, { "msg_contents": "> \n> >> I've just finished working on the type\n> >> conversion algorithms so understand the current \"atttypmod\" field a \n> >> bit better, but have not decided how to extend it to multiple fields.\n> > divide it into two 16 bit integers ?\n> \n> At the moment it already _is_ a 16 bit integer, so it would have to be\n> divided into two 8 bit integers. Still OK, but then it must be a\n> positive number, so one field can be only 7 bits. I was thinking of\n> trying to solve the problem generally so that a type definition can also\n> define a \"type support type\" similar to the current atttypmod, but which\n> could be single or multiple numbers, or a string, or... \n\nuse an unsigned short, then each field can be a full 8 bits.\n\n> \n> Don't know if it would be generally useful though; still thinking about\n> how to implement different character sets and collation sequences for\n> strings and it seems like this might help.\n> \n> > A mathematical package exists for infinite scale decimals, I think\n> > it was part of a 56 bit RSA cracking effort. It has all thinkable\n> > operations defined...\n> \n> Well, if you find it again let us know ;) In the meantime, the 64-bit\n> integers are probably the best candidate implementation.\n\nYes, the 64-bit idea is good.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 12 May 1998 13:01:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Re: [QUESTIONS] money or dollar type" } ]
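A sketch of how two values could be packed into the existing 16-bit atttypmod. The macro names are hypothetical; since atttypmod must stay non-negative, only 15 bits are usable, giving 8 bits of precision and 7 bits of scale as discussed above:

typedef short int16;

#define TYPMOD_MAKE(prec, scale) ((int16) ((((prec) & 0xff) << 7) | ((scale) & 0x7f)))
#define TYPMOD_PREC(t)           (((t) >> 7) & 0xff)
#define TYPMOD_SCALE(t)          ((t) & 0x7f)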
[ { "msg_contents": "REGARDING Box operation algorithms\n\nHello,\n\nI was wondering if the algorithms used to implement the operations on\nbox/polygon/point datatypes are documented and available anywhere on the web?\n\nTIA,\nAndy\n\n", "msg_date": "12 May 1998 10:18:08 U", "msg_from": "\"Andy Farrell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Box operation algorithms" }, { "msg_contents": "> I was wondering if the algorithms used to implement the operations on\n> box/polygon/point datatypes are documented and available anywhere on \n> the web?\n\nUse the source, Luke...\n\n - Tom\n", "msg_date": "Tue, 12 May 1998 15:21:36 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Box operation algorithms" }, { "msg_contents": "Andy Farrell wrote:\n> \n> REGARDING Box operation algorithms\n> \n> Hello,\n> \n> I was wondering if the algorithms used to implement the operations on\n> box/polygon/point datatypes are documented and available anywhere on the web?\n> \n> TIA,\n> Andy\n\nMost of the geo functions are one-liners or very small. I think they are easy\nto understand. That's probably why no one (yet) felt an urge to document\nthem. If you have postgres sources, look in\n/usr/src/pgsql/src/backend/utils/adt/geo_ops.c and\n/usr/src/pgsql/src/backend/utils/adt/geo_selfuncs.c\n\nAn essential part of geo_ops.c is the input routines dedicated to parsing of\nexternal representations (box_in(), point_in() and friends). You can\nsafely ignore functions in this file whose names make no sense to you,\nunless you find an accidental feature in parsing.\n\n--Gene\n", "msg_date": "Tue, 12 May 1998 15:37:43 +0000", "msg_from": "\"Eugene Selkov Jr.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Box operation algorithms" } ]
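For a flavor of how small these functions are, here is an axis-aligned box overlap test of the kind found in geo_ops.c, written here from the standard geometry rather than copied from the source:

typedef struct {
    double xh, yh;    /* upper right corner */
    double xl, yl;    /* lower left corner  */
} Box;

/* nonzero if the two boxes share any area */
int box_overlap(Box *a, Box *b)
{
    return a->xl <= b->xh && b->xl <= a->xh &&
           a->yl <= b->yh && b->yl <= a->yh;
}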
[ { "msg_contents": "[email protected] wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > Would people tell me what platforms do NOT support the MAP_ANON flag to\n> > the mmap() system call? You should find it in the mmap() manual page.\n> \n> Doesn't seem to appear in Linux (2.0.30 kernel). As another poster\n> commented, /dev/zero can be mapped for anonymous memory.\n\nalthough 'man mmap' does not say it, it is present in sys/mman.h on \nlinux (at least 2.0.33)\n\nit is NOT present in Solaris x86 v.2.6\n\nit is NOT present in SINIX v5.42 (UNIX(r) System V Release 4.1)\n\n--------------\nHannu Krosing\n", "msg_date": "Tue, 12 May 1998 13:19:26 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "Hannu Krosing wrote:\n> \n> [email protected] wrote:\n> > \n> > Bruce Momjian wrote:\n> > >\n> > > Would people tell me what platforms do NOT support the MAP_ANON flag to\n> > > the mmap() system call? You should find it in the mmap() manual page.\n> > \n> > Doesn't seem to appear in Linux (2.0.30 kernel). As another poster\n> > commented, /dev/zero can be mapped for anonymous memory.\n> \n> although 'man mmap' does not say it, it is present in sys/mman.h on \n> linux (at least 2.0.33)\n\nIt appears there, but using it causes mmap to return EINVAL.\n\nOcie\n", "msg_date": "Tue, 12 May 1998 12:40:47 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" } ]
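A sketch of a configure-style run test for these reports: it exits 0 only if an anonymous (or /dev/zero) MAP_SHARED mapping actually succeeds, so a kernel that defines MAP_ANON in its headers but rejects it with EINVAL, as Linux 2.0 apparently does, would be caught at build time:

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int flags = MAP_SHARED;
    int fd = -1;
    void *p;

#ifdef MAP_ANON
    flags |= MAP_ANON;
#else
    fd = open("/dev/zero", O_RDWR);
    if (fd == -1)
        return 1;
#endif
    p = mmap(0, 4096, PROT_READ | PROT_WRITE, flags, fd, 0);
    return (p == (void *) -1) ? 1 : 0;   /* nonzero exit: no usable mapping */
}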
[ { "msg_contents": "I am using Byron's ODBC driver (version .0239 downloaded today) with\nAccess 7.00 under Win95 with Postgres 6.3. I am trying to link tables\ninto Access. Most tables work fine, but any field with the name 'name'\nor 'sortname' or even 'garbagename' may not be used as part of an\nindex. This occurs whether the index is picked up automatically by the\ndriver, or if you are asked to choose a unique field by Access.\n\nCan this be fixed? As a workaround, could the driver optionally not\ntell the client application about indices? In this way, I could tell\nAccess to ignore them and then (I think) I would be able to get at my\ndata.\n\nEwan Mellor.\n", "msg_date": "Tue, 12 May 1998 11:49:57 +0100", "msg_from": "Ewan Mellor <[email protected]>", "msg_from_op": true, "msg_subject": "MS Access & PsqlODBC: Invalid field name 'name'" }, { "msg_contents": "Set the \"Recognize Unique Indexes\" to disabled (unchecked).\nThen, when Access asks you for a unique field, don't select anything and\nhit ok.\nThus, you are telling access you have no index.\n\nThis should allow you to get at your data until we figure out what \"name\"\nhas to do with this problem.\n\nByron\n\nEwan Mellor wrote:\n\n> I am using Byron's ODBC driver (version .0239 downloaded today) with\n> Access 7.00 under Win95 with Postgres 6.3. I am trying to link tables\n> into Access. Most tables work fine, but any field with the name 'name'\n> or 'sortname' or even 'garbagename' may not be used as part of an\n> index. This occurs whether the index is picked up automatically by the\n> driver, or if you are asked to choose a unique field by Access.\n>\n> Can this be fixed? As a workaround, could the driver optionally not\n> tell the client application about indices? In this way, I could tell\n> Access to ignore them and then (I think) I would be able to get at my\n> data.\n>\n> Ewan Mellor.\n\n\n\n", "msg_date": "Tue, 12 May 1998 10:34:38 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] MS Access & PsqlODBC: Invalid field name 'name'" }, { "msg_contents": "Ewan Mellor wrote:\n> \n> I am using Byron's ODBC driver (version .0239 downloaded today) with\n> Access 7.00 under Win95 with Postgres 6.3. I am trying to link tables\n> into Access. Most tables work fine, but any field with the name 'name'\n> or 'sortname' or even 'garbagename' may not be used as part of an\n> index. This occurs whether the index is picked up automatically by the\n> driver, or if you are asked to choose a unique field by Access.\n\nStarting from v2.0 of Access the word \"name\" became a kind of reserved \nword in Access, as the table itself acquired an _attribute_ name, which \ncontains the name of the table.\n\nso having a field called name is a problem anyway (for Access).\n\nI have no idea why \"sortname\" or \"garbagename\" does not work.\n\n> Can this be fixed? As a workaround, could the driver optionally not\n> tell the client application about indices? 
In this way, I could tell\n> Access to ignore them and then (I think) I would be able to get at my\n> data.\n\nyou can still do \n\nALTER TABLE yourtable RENAME name TO not_name_any_more;\n\n-------------\n\nHannu\n", "msg_date": "Tue, 12 May 1998 17:53:05 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] MS Access & PsqlODBC: Invalid field name 'name'" }, { "msg_contents": "Hannu Krosing wrote:\n> \n> Ewan Mellor wrote:\n> >\n> > I am using Byron's ODBC driver (version .0239 downloaded today) with\n> > Access 7.00 under Win95 with Postgres 6.3. I am trying to link tables\n> > into Access. Most tables work fine, but any field with the name 'name'\n> > or 'sortname' or even 'garbagename' may not be used as part of an\n> > index. This occurs whether the index is picked up automatically by the\n> > driver, or if you are asked to choose a unique field by Access.\n> \n> Starting from v2.0 of Access the worn \"name\" became kind of reserved\n> word in Access, as the table itself aquired an _attribute_ name, which\n> contains the name of the table.\n> \n> so having a field called name is a problem in anyway (for Access).\n\nSo it's a \"we're Microsoft and we can do what we want\" reserved word,\nand not an \"internationally recognised standard SQL\" reserved word :-(\n\n> I have no idea why \"sortname\" or \"garbagename\" does not work.\n> \n> > Can this be fixed? As a workaround, could the driver optionally not\n> > tell the client application about indices? In this way, I could tell\n> > Access to ignore them and then (I think) I would be able to get at my\n> > data.\n> \n> you can still do\n> \n> ALTER TABLE yourtable RENAME name TO not_name_any_more;\n\nI'd rather not - there is a reasonable amount of code sitting on top of\nthis DB :-(\n\nThanks a lot for your help,\n\nEwan.\n", "msg_date": "Tue, 12 May 1998 17:08:58 +0100", "msg_from": "Ewan Mellor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] MS Access & PsqlODBC: Invalid field name 'name'" }, { "msg_contents": "Byron Nikolaidis wrote:\n> \n> Set the \"Recognize Unique Indexes\" to disabled (unchecked).\n> Then, when Access asks you for a unique field, don't select anything and\n> hit ok.\n> Thus, you are telling access you have no index.\n> \n> This should allow you to get at your data until we figure out what \"name\"\n> has to do with this problem.\n\nI tried that, but it does not seem to help. Perhaps Access is asking\nfor the index information of its own accord?\n\nI have just discovered it also objects to:\n\nrt_url,\nurl,\ngenre,\nname_en,\nstored,\nstored1,\naddress, and\nserver.\n\nNote that many have succeeded. Curiouser and curiouser...\n\nThanks, Byron, both for your help and for what looks like it will be a\nreally useful driver. If only the whole planet didn't use Access...\n\n> Byron\n> \n> Ewan Mellor wrote:\n> \n> > I am using Byron's ODBC driver (version .0239 downloaded today) with\n> > Access 7.00 under Win95 with Postgres 6.3. I am trying to link tables\n> > into Access. Most tables work fine, but any field with the name 'name'\n> > or 'sortname' or even 'garbagename' may not be used as part of an\n> > index. This occurs whether the index is picked up automatically by the\n> > driver, or if you are asked to choose a unique field by Access.\n> >\n> > Can this be fixed? As a workaround, could the driver optionally not\n> > tell the client application about indices? 
In this way, I could tell\n> > Access to ignore them and then (I think) I would be able to get at my\n> > data.\n> >\n> > Ewan Mellor.\n", "msg_date": "Tue, 12 May 1998 17:24:12 +0100", "msg_from": "Ewan Mellor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] MS Access & PsqlODBC: Invalid field name 'name'" }, { "msg_contents": "Ewan Mellor wrote:\n\n> Byron Nikolaidis wrote:\n> >\n> > Set the \"Recognize Unique Indexes\" to disabled (unchecked).\n> > Then, when Access asks you for a unique field, don't select anything and\n> > hit ok.\n> > Thus, you are telling access you have no index.\n> >\n> > This should allow you to get at your data until we figure out what \"name\"\n> > has to do with this problem.\n>\n> I tried that, but it does not seem to help. Perhaps Access is asking\n> for the index information of its own accord?\n>\n> I have just discovered it also objects to:\n>\n> rt_url,\n> url,\n> genre,\n> name_en,\n> stored,\n> stored1,\n> address, and\n> server.\n>\n\nWait, I think I have it!\n\nAccess will not allow you to index on LongVarchar data types OR character types\nthat are longer than 254 characters (255 with null). I bet these columns you\nare having trouble with are Postgres TEXT types or varchars/chars that are over\n254.\n\nCheck out the odbc driver setup options dialog. You can map TEXT fields to\nplain varchar; then set the LongVarChar size to 254, and it should work!\n\nByron\n\n\n", "msg_date": "Tue, 12 May 1998 13:00:47 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] MS Access & PsqlODBC: Invalid field name 'name'" }, { "msg_contents": "Byron Nikolaidis wrote:\n> \n> Ewan Mellor wrote:\n> \n> > Byron Nikolaidis wrote:\n> > >\n> > > Set the \"Recognize Unique Indexes\" to disabled (unchecked).\n> > > Then, when Access asks you for a unique field, don't select anything and\n> > > hit ok.\n> > > Thus, you are telling access you have no index.\n> > >\n> > > This should allow you to get at your data until we figure out what \"name\"\n> > > has to do with this problem.\n> >\n> > I tried that, but it does not seem to help. Perhaps Access is asking\n> > for the index information of its own accord?\n> >\n> > I have just discovered it also objects to:\n> >\n> > rt_url,\n> > url,\n> > genre,\n> > name_en,\n> > stored,\n> > stored1,\n> > address, and\n> > server.\n> >\n> \n> Wait, I think I have it!\n> \n> Access will not allow you to index on LongVarchar data types OR character types\n> that are longer than 254 characters (255 with null). I bet these columns you\n> are having trouble with are Postgres TEXT types or varchars/chars that are over\n> 254.\n> \n> Check out the odbc driver setup options dialog. You can map TEXT fields to\n> plain varchar; then set the LongVarChar size to 254, and it should work!\n\nWell done indeed! With that you have elevated yourself to the higher\nechelons of gurudom. Congratulations. :-)\n\nOne for the FAQ methinks...\n\nEwan.\n", "msg_date": "Tue, 12 May 1998 19:17:35 +0100", "msg_from": "Ewan Mellor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] MS Access & PsqlODBC: Invalid field name 'name'" }, { "msg_contents": "On Tue, 12 May 1998, Ewan Mellor wrote:\n\n> Hannu Krosing wrote:\n> > \n> > Ewan Mellor wrote:\n> > >\n> > > I am using Byron's ODBC driver (version .0239 downloaded today) with\n> > > Access 7.00 under Win95 with Postgres 6.3. I am trying to link tables\n> > > into Access. 
Most tables work fine, but any field with the name 'name'\n> > > or 'sortname' or even 'garbagename' may not be used as part of an\n> > > index. This occurs whether the index is picked up automatically by the\n> > > driver, or if you are asked to choose a unique field by Access.\n> > \nI'm using Byron's ODBC v6.30.0238 with M$-Access-97 under Win95 with\nPostgreSQL v6.3. I can successful link tables into M$-Access even if they have\nthe word 'name' as column name or table name.\nM$-Access picked column 'name' as unique index and it seems work. I can\nread and write data into may table named 'gname'.\nthis is my example:\n\nodbc=> create table gname ( name name, pname int);\nCREATE\nodbc=> insert into gname values ( 'name',1234);\nINSERT 528554 1\nodbc=> select * from gname;\nname|pname\n----+-----\nname| 1234\n(1 row)\n\nodbc=> \\d gname\n\nTable = gname\n+-------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+-------------------------------+----------------------------------+-------+\n| name | name | 32 |\n| pname | int4 | 4 |\n+-------------------------------+----------------------------------+-------+\n Jose'\n\n", "msg_date": "Wed, 13 May 1998 10:13:20 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] MS Access & PsqlODBC: Invalid field name 'name'" }, { "msg_contents": "Hi,\n\n I'm using PostgreSQL-6.3 / psqlodbc 06.30.0242 / M$-Access97.\nI created a REPORT with a leftjoin that takes a lot of time.\nThere are 3850 rows in the main table.\nPostgreSQL takes about..............: 960 secs to print all records.\nThe same test using MySQL takes only: 85 secs and the same\ntest using M$-Access takes about....: 45 secs.\nI configured ODBC drive to write the log file to sees what ODBC is doing\nbut seems that it writes log file only while fetching rows.\nIs there a way to know what ODBC is doing. To know why it takes so long time?\n Thanks, Jose'\n\n", "msg_date": "Thu, 28 May 1998 10:50:19 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "ODBC is slow with M$-Access Report" }, { "msg_contents": "\n\nJose' Soares Da Silva wrote:\n\n> Hi,\n>\n> I'm using PostgreSQL-6.3 / psqlodbc 06.30.0242 / M$-Access97.\n> I created a REPORT with a leftjoin that takes a lot of time.\n> There are 3850 rows in the main table.\n> PostgreSQL takes about..............: 960 secs to print all records.\n> The same test using MySQL takes only: 85 secs and the same\n> test using M$-Access takes about....: 45 secs.\n\nThis is never a simple comparison. Performance using Access/PostgreSQL can be\ngreatly effected by the driver settings. In particular, if you tell MS Access\nthat there is a unique index on a table, at link time, or to \"Recognize Unique\nIndexed\" (and there is one), Access will generate queries which the backend will\nnot respond to very optimally. Especially where outer joins are concerned.\nThese queries are characterized by numerous OR(s). Unfortunately under these\nconditions the backend does make use of the very index that Access is trying to\ntake advantage of.\n\nSo relinking the table without Access's recognition of the primary key (unique\nindex) may help performance. The down side is that you may not modify a table\nfrom Access without a specified primary key.\n\nThere is also another factor. Does MySql support outer joins? PostgreSQL does\nnot at this time. 
MS Access will hide this fact from the users and perform the\njoin within Access. Thus, creating the situation described above.\n\n\n\n\n\n> I configured ODBC drive to write the log file to sees what ODBC is doing\n> but seems that it writes log file only while fetching rows.\n> Is there a way to know what ODBC is doing. To know why it takes so long time?\n> Thanks, Jose'\n\nThe CommLog was created to log SQL statement communication with the server. A\nmuch more detailed log can be activated from the \"ODBC Data Source Administrator\"\ndialog under the \"Tracing\" tab. If you use this feature you may want to clear it\nout first. It will also bring processing to a craw.\n\n", "msg_date": "Thu, 28 May 1998 10:06:43 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] ODBC is slow with M$-Access Report" }, { "msg_contents": "We are working on a project that IMHO give more prestige to\nPostgreSQL.\nThe Hygea project concern the use of an Unix-like Operating sys-\ntem as \"back-end\" of a Client M$-windows application connected\nby ODBC that will be installed in about 80 Italian Helth Depart-\nments for the veterinary controls and prevention.\nTherefore...\n\nO.S.: We choose Linux for his proved reliability.\n\nClient: We choose to develop the Client with M$-Access because we\nneed (unfortunately) a complete integration with Micro$oft World.\n\nDatabase: We choose PostgreSQL for his reliability and for his\ncompatibility with SQL/92 standard recommendation and for his ex-\ncellent technical support provided by \"The PostgreSQL Development\nTeam\" and his mailing lists.\n\nNevertheless the union among M$-Access and PostgreSQL is quite\nsuffered for the following reasons:\n\n1. The PostgreSQL doesn't use the index with \"OR\" operator and\nso is not possible to define a multiple key to use with M$-Access\nand we need to retreat using OID as primary keys (thanks to Byron\nNikolaidis and David Hartwig of insightdist.com that are doing a\nreally great job with ODBC driver), but with the obvious consequences.\n\n2. As PostgreSQL doesn't allow an \"ORDER BY\" on columns not\nincluded in the target list of the \"SELECT\", (I know that it is\nSQL/92 standard, but IMO it's a fool thing), therefore, is not possible\nto have the \"dynaset \"sorted for any field that is different from\nthe key (in our case the useless OIDs).\n\n3. The times required to run complex reports (for example those that\ninclude LEFT JOINS) is very long (about 15 minutes to retrieve\n2850 rows).\n\nWe hope the PostgreSQL next release v6.4 may have some of these features\notherwise, we have to give up the project.\n\n> Jose' Soares Da Silva wrote:\n> \n> > Hi,\n> >\n> > I'm using PostgreSQL-6.3 / psqlodbc 06.30.0242 / M$-Access97.\n> > I created a REPORT with a leftjoin that takes a lot of time.\n> > There are 3850 rows in the main table.\n> > PostgreSQL takes about..............: 960 secs to print all records.\n> > The same test using MySQL takes only: 85 secs and the same\n> > test using M$-Access takes about....: 45 secs.\n> \n> This is never a simple comparison. Performance using Access/PostgreSQL can be\n> greatly effected by the driver settings. In particular, if you tell MS Access\n> that there is a unique index on a table, at link time, or to \"Recognize Unique\n> Indexed\" (and there is one), Access will generate queries which the backend will\n> not respond to very optimally. 
Especially where outer joins are concerned.\n> These queries are characterized by numerous OR(s). Unfortunately under these\n> conditions the backend does make use of the very index that Access is trying to\n> take advantage of.\n> \n> So relinking the table without Access's recognition of the primary key (unique\n> index) may help performance. The down side is that you may not modify a table\n> from Access without a specified primary key.\n> \n> There is also another factor. Does MySql support outer joins? PostgreSQL does\n> not at this time. MS Access will hide this fact from the users and perform the\n> join within Access. Thus, creating the situation described above.\n> \n> > I configured ODBC drive to write the log file to sees what ODBC is doing\n> > but seems that it writes log file only while fetching rows.\n> > Is there a way to know what ODBC is doing. To know why it takes so long time?\n> > Thanks, Jose'\n> \n> The CommLog was created to log SQL statement communication with the server. A\n> much more detailed log can be activated from the \"ODBC Data Source Administrator\"\n> dialog under the \"Tracing\" tab. If you use this feature you may want to clear it\n> out first. It will also bring processing to a craw.\n\n | |\n~~~~~~~~~~~~~~~~~~~~~~~~ | | ~~~~~~~~~~~~~~~~~~~~~~~~\n Progetto HYGEA ---- ---- www.sferacarta.com\n Sfera Carta Software ---- ---- [email protected]\n Via Bazzanese, 69 | | Fax. ++39 51 6131537\nCasalecchio R.(BO) Italy | | Tel. ++39 51 591054\n-----------------------------------------------------------------------------\n\n", "msg_date": "Mon, 1 Jun 1998 17:42:26 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] ODBC is slow with M$-Access Report" }, { "msg_contents": "\n\nJose' Soares Da Silva wrote:\n\n> We are working on a project that IMHO give more prestige to\n> PostgreSQL.\n> The Hygea project concern the use of an Unix-like Operating sys-\n> tem as \"back-end\" of a Client M$-windows application connected\n> by ODBC that will be installed in about 80 Italian Helth Depart-\n> ments for the veterinary controls and prevention.\n> Therefore...\n\n> O.S.: We choose Linux for his proved reliability.\n>\n> Client: We choose to develop the Client with M$-Access because we\n> need (unfortunately) a complete integration with Micro$oft World.\n>\n> Database: We choose PostgreSQL for his reliability and for his\n> compatibility with SQL/92 standard recommendation and for his ex-\n> cellent technical support provided by \"The PostgreSQL Development\n> Team\" and his mailing lists.\n>\n> Nevertheless the union among M$-Access and PostgreSQL is quite\n> suffered for the following reasons:\n>\n> 1. The PostgreSQL doesn't use the index with \"OR\" operator and\n> so is not possible to define a multiple key to use with M$-Access\n> and we need to retreat using OID as primary keys (thanks to Byron\n> Nikolaidis and David Hartwig of insightdist.com that are doing a\n> really great job with ODBC driver), but with the obvious consequences.\n\n I am currently working on a solution as time will allow. Hopefully part of 6.4\n\n>\n>\n> 2. 
As PostgreSQL doesn't allow an \"ORDER BY\" on columns not\n> included in the target list of the \"SELECT\", (I know that it is\n> SQL/92 standard, but IMO it's a fool thing), therefore, is not possible\n> to have the \"dynaset \"sorted for any field that is different from\n> the key (in our case the useless OIDs).\n>\n\nThis fix is in alpha and will be in the 6.4 release. I do not know when 6.4 is slated\nfor release, but I am willing to send you a patch if it is critical for you to proceed.\n\n> 3. The times required to run complex reports (for example those that\n> include LEFT JOINS) is very long (about 15 minutes to retrieve\n> 2850 rows).\n>\n\nThe solution to your first item will resolve this also.\n\n> We hope the PostgreSQL next release v6.4 may have some of these features\n> otherwise, we have to give up the project.\n>\n> > Jose' Soares Da Silva wrote:\n> >\n> > > Hi,\n> > >\n> > > I'm using PostgreSQL-6.3 / psqlodbc 06.30.0242 / M$-Access97.\n> > > I created a REPORT with a leftjoin that takes a lot of time.\n> > > There are 3850 rows in the main table.\n> > > PostgreSQL takes about..............: 960 secs to print all records.\n> > > The same test using MySQL takes only: 85 secs and the same\n> > > test using M$-Access takes about....: 45 secs.\n> >\n> > This is never a simple comparison. Performance using Access/PostgreSQL can be\n> > greatly effected by the driver settings. In particular, if you tell MS Access\n> > that there is a unique index on a table, at link time, or to \"Recognize Unique\n> > Indexed\" (and there is one), Access will generate queries which the backend will\n> > not respond to very optimally. Especially where outer joins are concerned.\n> > These queries are characterized by numerous OR(s). Unfortunately under these\n> > conditions the backend does make use of the very index that Access is trying to\n> > take advantage of.\n> >\n> > So relinking the table without Access's recognition of the primary key (unique\n> > index) may help performance. The down side is that you may not modify a table\n> > from Access without a specified primary key.\n> >\n> > There is also another factor. Does MySql support outer joins? PostgreSQL does\n> > not at this time. MS Access will hide this fact from the users and perform the\n> > join within Access. Thus, creating the situation described above.\n> >\n> > > I configured ODBC drive to write the log file to sees what ODBC is doing\n> > > but seems that it writes log file only while fetching rows.\n> > > Is there a way to know what ODBC is doing. To know why it takes so long time?\n> > > Thanks, Jose'\n> >\n> > The CommLog was created to log SQL statement communication with the server. A\n> > much more detailed log can be activated from the \"ODBC Data Source Administrator\"\n> > dialog under the \"Tracing\" tab. If you use this feature you may want to clear it\n> > out first. 
It will also bring processing to a craw.\n>\n\n", "msg_date": "Tue, 02 Jun 1998 11:47:31 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] ODBC is slow with M$-Access Report" }, { "msg_contents": "> \n> We are working on a project that IMHO give more prestige to\n> PostgreSQL.\n> The Hygea project concern the use of an Unix-like Operating sys-\n> tem as \"back-end\" of a Client M$-windows application connected\n> by ODBC that will be installed in about 80 Italian Helth Depart-\n> ments for the veterinary controls and prevention.\n> Therefore...\n> \n> O.S.: We choose Linux for his proved reliability.\n> \n> Client: We choose to develop the Client with M$-Access because we\n> need (unfortunately) a complete integration with Micro$oft World.\n> \n> Database: We choose PostgreSQL for his reliability and for his\n> compatibility with SQL/92 standard recommendation and for his ex-\n> cellent technical support provided by \"The PostgreSQL Development\n> Team\" and his mailing lists.\n\nGreat.\n\n> \n> Nevertheless the union among M$-Access and PostgreSQL is quite\n> suffered for the following reasons:\n> \n> 1. The PostgreSQL doesn't use the index with \"OR\" operator and\n> so is not possible to define a multiple key to use with M$-Access\n> and we need to retreat using OID as primary keys (thanks to Byron\n> Nikolaidis and David Hartwig of insightdist.com that are doing a\n> really great job with ODBC driver), but with the obvious consequences.\n\nYes, we need to work on this. I am sure performance really suffers\nbecause of this. Vadim, is this on your short list?\n\n> \n> 2. As PostgreSQL doesn't allow an \"ORDER BY\" on columns not\n> included in the target list of the \"SELECT\", (I know that it is\n> SQL/92 standard, but IMO it's a fool thing), therefore, is not possible\n> to have the \"dynaset \"sorted for any field that is different from\n> the key (in our case the useless OIDs).\n\nDavid at Insight just added this, so it certainly will be in 6.4.\n\n> \n> 3. The times required to run complex reports (for example those that\n> include LEFT JOINS) is very long (about 15 minutes to retrieve\n> 2850 rows).\n\nYea, we need this too. Not sure where we are with this. Can you give\nan example?\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 2 Jun 1998 12:21:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] ODBC is slow with M$-Access Report" }, { "msg_contents": "On Tue, 2 Jun 1998, David Hartwig wrote:\n\n> > O.S.: We choose Linux for his proved reliability.\n\n\t*quiet snicker*\n\n> > 2. As PostgreSQL doesn't allow an \"ORDER BY\" on columns not\n> > included in the target list of the \"SELECT\", (I know that it is\n> > SQL/92 standard, but IMO it's a fool thing), therefore, is not possible\n> > to have the \"dynaset \"sorted for any field that is different from\n> > the key (in our case the useless OIDs).\n> >\n> \n> This fix is in alpha and will be in the 6.4 release. I do not know when\n> 6.4 is slated for release, but I am willing to send you a patch if it is\n> critical for you to proceed. 
\n\n\t6.4 is slated for Oct 1st...we had thought Sep 1st, except that,\nbeing the tail end of the summer, alot of ppl tend to be in limbo \n\n\n", "msg_date": "Tue, 2 Jun 1998 12:34:40 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] ODBC is slow with M$-Access Report" }, { "msg_contents": "David Hartwig wrote:\n> \n> > 1. The PostgreSQL doesn't use the index with \"OR\" operator and\n> > so is not possible to define a multiple key to use with M$-Access\n> > and we need to retreat using OID as primary keys (thanks to Byron\n> > Nikolaidis and David Hartwig of insightdist.com that are doing a\n> > really great job with ODBC driver), but with the obvious consequences.\n> \n> I am currently working on a solution as time will allow. Hopefully part of 6.4\n> \n\nWill this solution be in ODBC driver (rewrite ORs to UNION) or in \nthe backend (fix the optimiser)?\n\n------\nHannu\n", "msg_date": "Tue, 02 Jun 1998 20:11:41 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] ODBC is slow with M$-Access Report" }, { "msg_contents": "\n\nHannu Krosing wrote:\n\n> David Hartwig wrote:\n> >\n> > > 1. The PostgreSQL doesn't use the index with \"OR\" operator and\n> > > so is not possible to define a multiple key to use with M$-Access\n> > > and we need to retreat using OID as primary keys (thanks to Byron\n> > > Nikolaidis and David Hartwig of insightdist.com that are doing a\n> > > really great job with ODBC driver), but with the obvious consequences.\n> >\n> > I am currently working on a solution as time will allow. Hopefully part of 6.4\n> >\n>\n> Will this solution be in ODBC driver (rewrite ORs to UNION) or in\n> the backend (fix the optimiser)?\n>\n\nThe short answer is that the rewrite on the driver side is problematic.\n\nI had hoped to be further along with my feasibility research before raising the issue\nfor again discussion. But, now is as good a time as any. Let me first clarify the\nproblem for any ppl jumping into the middle of this thread.\n\nMany general purpose database clients applications such as MS Access routinely generate\nqueries with the following signature:\n\nSELECT k1, k2, k3, a4, a5, ... FROM t WHERE\n (k1 = const01 AND k2 = const02 AND k3 = const03) OR\n (k1 = const11 AND k2 = const12 AND k3 = const13) OR\n (k1 = const21 AND k2 = const22 AND k3 = const23) OR\n (k1 = const31 AND k2 = const32 AND k3 = const33) OR\n (k1 = const41 AND k2 = const42 AND k3 = const43) OR\n (k1 = const51 AND k2 = const52 AND k3 = const53) OR\n (k1 = const61 AND k2 = const62 AND k3 = const63) OR\n (k1 = const71 AND k2 = const72 AND k3 = const73) OR\n (k1 = const81 AND k2 = const82 AND k3 = const73) OR\n (k1 = const91 AND k2 = const92 AND k3 = const93);\n\nWhere k(n) id is the attribute for a multi-part primary key and const(m)(n) is any\nconstant.\n\nPerformance on this kind of a query is crucial to these client side tools. These are\nused to maneuver through large tables without having to slurp in the entire table.\nCurrently the backend optimizer tries to arrange the WHERE clause into conjunctive\nnormal form (cnfify()). Unfortunatley this process leads to memory exhaustion.\n\nI have come up with 3 methods of attacking the problem.\n\n1. As Mr. Krosing mentioned we could rewrite the query on the driver side. before\nsending it to the backend. 
One could identify the signature of such a query and upon\nverification replace all the ORs with a \"UNION SELECT k1, k2, k3, a4, ... FROM t\nWHERE\" I have tested this substitution with up to 30 OR groupings and it performs\nlike a charm. Thanks Bruce. Here is the kicker. If you do some guestimations using\na table with say 50 attributes, you will see that very quickly you will be bumping into\nthe 8K message limit. I am finding that this is not unusual in our user community.\n\n2. Use a similar strategy to the first method except you do the rewrite the query in\nthe backend; some where after parsing and before optimizations. The exact location\ncan be debated. The basic idea is to pre-qualify the rewrite by requiring only one\ntable, no subselects, no unions, etc. Then, identify the AND/OR signature in the\nqualifier expression tree. For each OR grouping, clone the main query tree (minus the\nqualifier clause) onto a list of query trees hanging off the UNION structure element.\nAll the while, pruning the OR nodes off and attaching them to the cloned query tree.\nThe code required by this approach is very isolated and should be low risk as a\nresult. My concern is that this approach is too narrow and does not fit well into the\nlong term goals of the project. My guess is that performance will be even better than\nthe first method.\n\n3. Get out of the way and let Vadim do his thing.\n\nComments?\n\n", "msg_date": "Tue, 02 Jun 1998 14:43:14 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] ODBC is slow with M$-Access Report" }, { "msg_contents": "> 3. Get out of the way and let Vadim do his thing.\n> \n> Comments?\n\nYes, I have queried him to find out where this sits on his list. It\nwould be intestesting to what, if anything, he has planned for 6.4. I\nthink I have forgotten.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 2 Jun 1998 17:52:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] ODBC is slow with M$-Access Report" }, { "msg_contents": "On Tue, 2 Jun 1998, David Hartwig wrote:\n\n<DELETED>\n> > Nikolaidis and David Hartwig of insightdist.com that are doing a\n> > really great job with ODBC driver), but with the obvious consequences.\n> \n> I am currently working on a solution as time will allow. Hopefully part of 6.4\n> > 2. As PostgreSQL doesn't allow an \"ORDER BY\" on columns not\n> > included in the target list of the \"SELECT\", (I know that it is\n> > SQL/92 standard, but IMO it's a fool thing), therefore, is not possible\n> > to have the \"dynaset \"sorted for any field that is different from\n> > the key (in our case the useless OIDs).\n> \n> This fix is in alpha and will be in the 6.4 release. I do not know when 6.4 is slated\n> for release, but I am willing to send you a patch if it is critical for you to proceed.\n> \n> > 3. 
The times required to run complex reports (for example those that\n> > include LEFT JOINS) is very long (about 15 minutes to retrieve\n> > 2850 rows).\n> >\n> The solution to your first item will resolve this also.\n> \nThis is a great new David, Thank you very much for your work, this allow us\nto go on with this important project.\nFor us, is enough to know that it will be available *maybe* on next release.\n Thanks a lot, Jose'\n\n", "msg_date": "Wed, 3 Jun 1998 10:48:06 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] ODBC is slow with M$-Access Report" }, { "msg_contents": "On Tue, 2 Jun 1998, The Hermit Hacker wrote:\n\n> On Tue, 2 Jun 1998, David Hartwig wrote:\n> \n> > > O.S.: We choose Linux for his proved reliability.\n> \n> \t*quiet snicker*\nIf I understand, this mean incredibility.\nWe are using Linux since 1994 and we are satisfied. ;-)\n> > > 2. As PostgreSQL doesn't allow an \"ORDER BY\" on columns not\n> > > included in the target list of the \"SELECT\", (I know that it is\n> > > SQL/92 standard, but IMO it's a fool thing), therefore, is not possible\n> > > to have the \"dynaset \"sorted for any field that is different from\n> > > the key (in our case the useless OIDs).\n> > >\n> > \n> > This fix is in alpha and will be in the 6.4 release. I do not know when\n> > 6.4 is slated for release, but I am willing to send you a patch if it is\n> > critical for you to proceed. \n> \n> \t6.4 is slated for Oct 1st...we had thought Sep 1st, except that,\n> being the tail end of the summer, alot of ppl tend to be in limbo \nThank you,\n Jose'\n\n", "msg_date": "Wed, 3 Jun 1998 11:02:06 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] ODBC is slow with M$-Access Report" }, { "msg_contents": "On Tue, 2 Jun 1998, Bruce Momjian wrote:\n\n> > 3. The times required to run complex reports (for example those that\n> > include LEFT JOINS) is very long (about 15 minutes to retrieve\n> > 2850 rows).\n> \n> Yea, we need this too. Not sure where we are with this. Can you give\n> an example?\nOur problem is linked with using sub-reports, for now we solved this problem \nusing queries instead of sub-reports and it works well.\n Thanks any way,\n\t\t\t\t\t\t\t Jose'\n\n", "msg_date": "Thu, 4 Jun 1998 11:59:59 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] ODBC is slow with M$-Access Report" } ]
[ { "msg_contents": "I just tried creating a database in the current source tree, and got:\n\n\tNOTICE: _outNode: don't know how to print type 631 \n\tNOTICE: _outNode: don't know how to print type 601 \n\nThis is after a fresh initdb. 601 is:\n\n\t#define PG_TYPE_LSEG 601\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 12 May 1998 15:03:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "outnode error in current source" }, { "msg_contents": "> I just tried creating a database in the current source tree, and got:\n> \n> NOTICE: _outNode: don't know how to print type 631\n> NOTICE: _outNode: don't know how to print type 601\n> \n> This is after a fresh initdb. 601 is:\n> \n> #define PG_TYPE_LSEG 601\n\nProblem fixed. I have left a bunch of parser debugging statements in the\nvarious parser routines which are enabled by a -DPARSEDEBUG. I left it\nin the Makefile by mistake. Sorry about that...\n\n - Tom\n", "msg_date": "Wed, 13 May 1998 05:00:41 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] outnode error in current source" } ]
[ { "msg_contents": "\nI've just synced myself to cvs, an rebuilt. So far, everything has gone\nfine, except when giving the postgres super user a password:\n\ntemplate1=> alter user postgres with password ******;\nNOTICE: _outNode: don't know how to print type 644 \nNOTICE: _outNode: don't know how to print type 603 \nNOTICE: _outNode: don't know how to print type 610 \nALTER USER\n\nAny ideas?\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Tue, 12 May 1998 20:11:10 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Unusual notices in latest snapshot" }, { "msg_contents": "I am getting similar stuff. Thomas will have to comment.\n\n> \n> \n> I've just synced myself to cvs, an rebuilt. So far, everything has gone\n> fine, except when giving the postgres super user a password:\n> \n> template1=> alter user postgres with password ******;\n> NOTICE: _outNode: don't know how to print type 644 \n> NOTICE: _outNode: don't know how to print type 603 \n> NOTICE: _outNode: don't know how to print type 610 \n> ALTER USER\n> \n> Any ideas?\n> \n> -- \n> Peter T Mount [email protected] or [email protected]\n> Main Homepage: http://www.retep.org.uk\n> ************ Someday I may rebuild this signature completely ;-) ************\n> Work Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 12 May 1998 15:34:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unusual notices in latest snapshot" }, { "msg_contents": "> I am getting similar stuff. Thomas will have to comment.\n\nHey, what makes you think I have anything to do with it?\n\nHowever, I have heard that someone probably left a -DPARSEDEBUG in the\nbackend/parser/Makefile by mistake :(\n\nTake it out and recompile by doing a make clean in that directory and\nthen a make install in src...\n\nI'll fix the source tree now. Sorry for the problem.\n\n - Tom\n\n> > I've just synced myself to cvs, an rebuilt. So far, everything has gone\n> > fine, except when giving the postgres super user a password:\n> >\n> > template1=> alter user postgres with password ******;\n> > NOTICE: _outNode: don't know how to print type 644\n> > NOTICE: _outNode: don't know how to print type 603\n> > NOTICE: _outNode: don't know how to print type 610\n> > ALTER USER\n> >\n> > Any ideas?\n", "msg_date": "Wed, 13 May 1998 04:55:42 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Unusual notices in latest snapshot" } ]
[ { "msg_contents": "I need to translate some geometric objects and discovered that only a\nfew provide the plus/minus operators for translation. However, it\nseems like a trivial job to add or subtract an (x,y) pair from any\ngeometric object since they are all based on a series of points. I\ndon't mind working on the code, but I cannot seem to find out where\nthe code is.\n\nCan anyone give me a little guidance on what files to modify (and\nwhether there are any tricks to integrating it) so that I can expand\nthe range of geometric objects that can be translated?\n\nThanks for your help.\n\nCheers,\nBrook\n", "msg_date": "Tue, 12 May 1998 16:50:31 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "translation of geometric objects" }, { "msg_contents": "> I need to translate some geometric objects and discovered that only a\n> few provide the plus/minus operators for translation. However, it\n> seems like a trivial job to add or subtract an (x,y) pair from any\n> geometric object since they are all based on a series of points. I\n> don't mind working on the code, but I cannot seem to find out where\n> the code is.\n> Can anyone give me a little guidance on what files to modify (and\n> whether there are any tricks to integrating it) so that I can expand\n> the range of geometric objects that can be translated?\n\nIt looks like point, box, path, and circle are supported, but perhaps\nonly in one combination (e.g. box + point but not point + box). Looks\nlike lseg, line, and polygon are missing altogether. I'd like to have\nthe parser able to match up commutative operators by swapping arguments\nso we only would need to provide one of the combinations for the \"+\"\noperator, for example. Haven't done this yet and don't know how\ndifficult it will be.\n\nAnyway, the files to modify are:\n\nsrc/backend/utils/adt/geo_ops.c\nsrc/include/utils/geo_decls.h\nsrc/include/catalog/pg_proc.h\nsrc/include/catalog/pg_oper.h\n\nThere are some tricks to adding things to the catalogs; if you want to\ngenerate code for the geo_xxx files I'll contribute the catalog stuff.\nSend me patches on the current development source tree and I'll send\nback patches on the catalog for same.\n\nIf you want to do the catalog stuff yourself, use ./unused_oids and\n./duplicate_oids to help select which oids are available for new entries\nin the built-in catalogs.\n\n - Tom\n", "msg_date": "Wed, 13 May 1998 05:14:23 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] translation of geometric objects" } ]
[ { "msg_contents": "I just got thelatest version from cvs and tried to initdb. But I got a core\ndump:\n\n#0 0x4011d7fc in ?? () from /lib/libc.so.6\n#1 0x807b3a9 in IsSharedSystemRelationName (relname=0x817fdc8 \"pg_class\")\n at catalog.c:106\n#2 0x807b27e in relpath (relname=0x817fdc8 \"pg_class\") at catalog.c:34\n#3 0x80de602 in mdopen (reln=0x817fd80) at md.c:293\n#4 0x80df35d in smgropen (which=0, reln=0x817fd80) at smgr.c:189\n#5 0x8101fdb in RelationNameCacheGetRelation (\n relationName=0x8133c77 \"pg_class\") at relcache.c:1187\n#6 0x8102061 in RelationNameGetRelation (relationName=0x8133c77 \"pg_class\")\n at relcache.c:1264\n#7 0x806a56c in heap_openr (relationName=0x8133c77 \"pg_class\") at\nheapam.c:573\n#8 0x80ff6f9 in CatalogCacheInitializeCache (cache=0x81a0e90, relation=0x0)\n#9 0x8100362 in SearchSysCache (cache=0x81a0e90, v1=135476301, v2=0, v3=0, \n v4=0) at catcache.c:840\n#10 0x8103711 in SearchSysCacheTuple (cacheId=10, key1=135476301, key2=0, \n key3=0, key4=0) at syscache.c:435\n#11 0x8100b25 in getmyrelids () at inval.c:253\n#12 0x8100d2c in RelationInvalidateRelationCache (relation=0x8186620, \n tuple=0x81a0dd0, function=0x8100ad0 <RelationIdRegisterLocalInvalid>)\n at inval.c:480\n#13 0x8100e3c in RelationInvalidateHeapTuple (relation=0x8186620, \n tuple=0x81a0dd0) at inval.c:649\n#14 0x806b1b0 in heap_insert (relation=0x8186620, tup=0x81a0dd0)\n at heapam.c:1175\n#15 0x807a7e0 in InsertOneTuple (objectid=1550) at bootstrap.c:645\n#16 0x80786fd in Int_yyparse () at bootparse.y:209\n#17 0x807a264 in BootstrapMain (argc=6, argv=0xbffff948) at bootstrap.c:430\n#18 0x80a1d1d in main (argc=7, argv=0xbffff944) at main.c:106\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Wed, 13 May 1998 11:45:36 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "core dump during initdb" }, { "msg_contents": "> I just got the latest version from cvs and tried to initdb. But I got \n> a core dump:\n\nI'm not seeing that here with a snapshot from 980513 14:30UTC (a few\nminutes ago).\n\nRegression tests run with only a few \"failures\" which are artifacts of\nthe new type conversion/coersion capabilities.\n\n - Tom\n", "msg_date": "Wed, 13 May 1998 14:42:29 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] core dump during initdb" }, { "msg_contents": "Thomas G. Lockhart writes:\n> > I just got the latest version from cvs and tried to initdb. But I got \n> > a core dump:\n> \n> I'm not seeing that here with a snapshot from 980513 14:30UTC (a few\n> minutes ago).\n\nStrange. I tried running it under gdb, but it seems to stop in\nInt_yy_get_next_buffer. However, it went through IsSharedSystemRelationName\nfor pg_class already once. But pg_class was not listed in\nSharedSystemRelationNames.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! 
| Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 14 May 1998 10:25:38 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] core dump during initdb" } ]
[ { "msg_contents": "Hi,\n\n\tIs there any plan to deal with the problem of Postgres server going\nthrough inefficiencies of the filesystem (because the separate tables are\nplaced into separate files)?\n\n\tIf on Solaris, or Sun, there is a big enough table (say 300MB), it\ntakes 600MB of the disk space, and grows proportionally (2 times!). So we could\nobserve: 1) loss of disk space, and 2) loss of performance as well, because of\ngoing through file system block search (for big files) may not be very\nefficient, compare to Sybase&Co. (they use partitions, and deal with their own\n'file system'...)\n\n\tWell, one recommendation could be to go with the efficient\nfilesystem... But I believe it may not be appropriate for most of users.\n\n\tIf anybody knows how to overcome this problem, please let me know.\n\nThanks, regards,\n\n\n-- \nMikhail Routchiev ----- Credit Suisse First Boston Securities (Japan) Ltd. \nShiroyama Hills 4-3-1 Toranomon Minato-ku, Tokyo 105 ---------------------\nVoice:81-3-5404-9514 Fax:81-3-5404-9822 Email:[email protected]\n", "msg_date": "Wed, 13 May 1998 18:46:21 +0900", "msg_from": "\"Mikhail Routchiev\" <[email protected]>", "msg_from_op": true, "msg_subject": "One problem with Postgres..." }, { "msg_contents": "> \n> Hi,\n> \n> \tIs there any plan to deal with the problem of Postgres server going\n> through inefficiencies of the filesystem (because the separate tables are\n> placed into separate files)?\n> \n> \tIf on Solaris, or Sun, there is a big enough table (say 300MB), it\n> takes 600MB of the disk space, and grows proportionally (2 times!). So we could\n> observe: 1) loss of disk space, and 2) loss of performance as well, because of\n> going through file system block search (for big files) may not be very\n> efficient, compare to Sybase&Co. (they use partitions, and deal with their own\n> 'file system'...)\n> \n> \tWell, one recommendation could be to go with the efficient\n> filesystem... But I believe it may not be appropriate for most of users.\n> \n> \tIf anybody knows how to overcome this problem, please let me know.\n> \n\nWe have discussed this, and it is not as big a win as you may think. An\nInformix guy feels this is true, so it is not an idle thought. \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 13 May 1998 11:48:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] One problem with Postgres...\\" } ]
[ { "msg_contents": "\nI dropped and recreated a table and then tried to access it from a previously\nopened database connection (mod_perl). I did do a pg_dump -z -s -t for\nthe table info, and just changed a not null attribute. (anyway to change\nwithout recreating table?)\n\nI could not reproduce it in another db, but -- I was using two different\nusers in this case, and the user of the previously opened connection\nwas affected by the grant.\n\nIt would appear that we have some catalog cache updating problems..\nClosing the connection and re-opening always seems to fix it.\n", "msg_date": "Wed, 13 May 1998 17:19:24 -0700", "msg_from": "Brett McCormickS <[email protected]>", "msg_from_op": true, "msg_subject": "ERROR: RelationCatalogInformation: Relation 20705 not found" } ]
[ { "msg_contents": "\nsorry -- of course, the view stored the query plan which had the oid of the\nrelation, and since I'd recreated it, I couldn't select from the view.\nsorry. (sheepishly)\n", "msg_date": "Wed, 13 May 1998 17:21:56 -0700", "msg_from": "Brett McCormickS <[email protected]>", "msg_from_op": true, "msg_subject": "relation not found -- I was selecting from a view" } ]
[ { "msg_contents": "Someone was complaining about sequential scan speed, so I decided to run\na test.\n\nI have run the test by performing a total read-through of a 177MB table.\nThis exceeds all my cache sizes by over two times, so the cache is\ntotally useless (\"cache wipe\"). I have timed PostgreSQL's sequential\nscan (returning no rows), and various Unix methods of reading a 177MB\nfile.\n\nI have found that a sequential scan by PostgreSQL is almost as fast or\nfaster than various other Unix methods of reading files. In fact,\nmmap() is very slow, perhaps because you are changing the process\nvirtual table maps for each chunk you read in, and faulting them in,\nrather than using the file system for I/O.\n\nBasically, we beat 'wc', which is pretty good considering how little\n'wc' does.\n\nMy conclusion from this is that we really are not going to gain a lot of\nspeed by exploring some async solution, because if the data we need is\nnot in the cache, we really are going to spend most of our time waiting\nfor disk I/O.\n\nComments?\n\n---------------------------------------------------------------------------\n\n\n177MB file, BSD/OS 3.1, 64MB RAM, PostgreSQL current\n\nwc\t\t\t\t\t\t\t41 sec\nwc -l\t\t\t\t\t\t\t31 sec\ndd if=/u/pg/data/base/test/testv of=/dev/null bs=512\t32 sec\ndd if=/u/pg/data/base/test/testv of=/dev/null bs=8k\t31 sec\ndd if=/u/pg/data/base/test/testv of=/dev/null bs=256k\t31 sec\ndd if=/u/pg/data/base/test/testv of=/dev/null bs=1m\t30 sec\nmmap() of file in 8k chunks\t\t\t\t99 sec\nmmap() of file in 8mb chunks\t\t\t\t40 sec\nmmap() of file in 32mb chunks\t\t\t\t56 sec\n\nPostgreSQL sequential scan\t\t\t\t37 sec\n\n\n---------------------------------------------------------------------------\n\n/* mmap() test program */\n#include <stdio.h>\n#include <fcntl.h>\n#include <assert.h>\n#include <sys/types.h>\n#include <sys/mman.h>\n\n#define MMAP_SIZE 8192 /* chunk size */\n\nint main(int argc, char *argv[], char *envp[])\n{\n\tint i, j, fd, spaces = 0;\n\tint off;\n\tchar *addr;\n\n\tfd = open(\"/u/pg/data/base/test/testv\", O_RDONLY, 0);\n\tassert(fd != 0);\n\n\tfor (off = 0; 1; off += MMAP_SIZE)\n\t{\n\t\taddr = mmap(0, MMAP_SIZE, PROT_READ, 0, fd, off);\n\t\tassert(addr != NULL);\n\n\t\tfor (j = 0; j < MMAP_SIZE; j++)\n\t\t\tif (*(addr + j)\t!= ' ')\n\t\t\t\tspaces++;\n\t\tmunmap(addr,MMAP_SIZE);\n\t}\n\tprintf(\"%d\\n\",spaces);\n\treturn 0;\n}\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Thu, 14 May 1998 00:49:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Sequential scan speed, mmap, disk i/o" }, { "msg_contents": "> \n> Someone was complaining about sequential scan speed, so I decided to run\n> a test.\n\n> wc\t\t\t\t\t\t\t41 sec\n> wc -l\t\t\t\t\t\t\t31 sec\n> dd if=/u/pg/data/base/test/testv of=/dev/null bs=512\t32 sec\n> dd if=/u/pg/data/base/test/testv of=/dev/null bs=8k\t31 sec\n> dd if=/u/pg/data/base/test/testv of=/dev/null bs=256k\t31 sec\n> dd if=/u/pg/data/base/test/testv of=/dev/null bs=1m\t30 sec\n> mmap() of file in 8k chunks\t\t\t\t99 sec\n> mmap() of file in 8mb chunks\t\t\t\t40 sec\n> mmap() of file in 32mb chunks\t\t\t\t56 sec\n> \n> PostgreSQL sequential scan\t\t\t\t37 sec\n\nLet me add, these times are on a PP200, with SCSI Ultra Barracuda\ndrives, BSD/OS 3.1, 64MB RAM.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 14 May 1998 01:29:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Sequential scan speed, mmap, disk i/o" }, { "msg_contents": "> > Someone was complaining about sequential scan speed, so I decided to run\n> > a test.\n> \n> > wc\t\t\t\t\t\t\t41 sec\n> > wc -l\t\t\t\t\t\t\t31 sec\n> > dd if=/u/pg/data/base/test/testv of=/dev/null bs=512\t32 sec\n> > dd if=/u/pg/data/base/test/testv of=/dev/null bs=8k\t31 sec\n> > dd if=/u/pg/data/base/test/testv of=/dev/null bs=256k\t31 sec\n> > dd if=/u/pg/data/base/test/testv of=/dev/null bs=1m\t30 sec\n> > mmap() of file in 8k chunks\t\t\t\t99 sec\n> > mmap() of file in 8mb chunks\t\t\t\t40 sec\n> > mmap() of file in 32mb chunks\t\t\t\t56 sec\n> > \n> > PostgreSQL sequential scan\t\t\t\t37 sec\n> \n> Let me add, these times are on a PP200, with SCSI Ultra Barracuda\n> drives, BSD/OS 3.1, 64MB RAM.\n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n\nVery interesting. Is it possible to get the schema, the query, and a\na sample of the data or a generator program for the data? 
I am quite surprised\nto see us do so well, I would have guess that the per row overhead would\nhave us down far below wc.\n\nAlthough, on second though, dd has to write the data as well as read it, and\nwe don't, and wc has to examine every character, where if the \"where clause\"\napplies to only a portion of the row, we don't.\n\nStill, It would be nice to see more info about this test.\n\nBtw, I hope to post tommorrow some hopefully interesting results about the\nspeed of TAS and S_LOCK...\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n\n", "msg_date": "Thu, 14 May 1998 00:27:15 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequential scan speed, mmap, disk i/o" }, { "msg_contents": "Bruce Momjian wrote:\n> In fact,\n> mmap() is very slow, perhaps because you are changing the process\n> virtual table maps for each chunk you read in, and faulting them in,\n> rather than using the file system for I/O.\n\nHuh, very slow? I wouldn't agree. I rewrote your mmap program to allow\nfor using reads or mmaps.\n\nI tested it on 111MB file. I decided to use 8192 bytes buffer size\n(standard postgres page size). My system is Linux, P166, 64MBs of RAM\n(note that I have a lot of software running currently so the cache size\nis less than 25MBs. I also changed the for(j..) step size to j+=256 just\nto make sure that it won't influence the results too much and you will\nsee the difference better. mmap was run with (PROT_READ, MAP_SHARED)\n\nAverage results are (for sequential reading):\nUsing reads: total time - 21.39 (0.44user, 6.09system, 31%CPU)\nUsing mmaps: total time - 21.10 (0.57user, 4.92system, 25%CPU)\n\nNote, that in case of reads the program spends much more time in system\ncalls and uses more CPU. You may notice that in case of Linux using mmap\nis about 20% cheapper than read. In case of random reading it's slightly\nmore than 20% as I remember. Total time is in both cases similiar since\nthe throughput limit of my HD. \n\nBTW. Are you sure, that your program was counting mmaps properly? When I\nrun it on my system it counts much more than what it should. On my\nsystem offset crossed over file's boundary then it worked a minute or\nmore before it stopped. I attach my version (with hardcoded 111MBs file\nsize to prevent it, of course you may change it)\n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. 
Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND", "msg_date": "Thu, 14 May 1998 16:05:00 +0200", "msg_from": "Michal Mosiewicz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequential scan speed, mmap, disk i/o" }, { "msg_contents": "> \n> > \n> > Someone was complaining about sequential scan speed, so I decided to run\n> > a test.\n> \n> > wc\t\t\t\t\t\t\t41 sec\n> > wc -l\t\t\t\t\t\t\t31 sec\n> > dd if=/u/pg/data/base/test/testv of=/dev/null bs=512\t32 sec\n> > dd if=/u/pg/data/base/test/testv of=/dev/null bs=8k\t31 sec\n> > dd if=/u/pg/data/base/test/testv of=/dev/null bs=256k\t31 sec\n> > dd if=/u/pg/data/base/test/testv of=/dev/null bs=1m\t30 sec\n> > mmap() of file in 8k chunks\t\t\t\t99 sec\n> > mmap() of file in 8mb chunks\t\t\t\t40 sec\n> > mmap() of file in 32mb chunks\t\t\t\t56 sec\n> > \n> > PostgreSQL sequential scan\t\t\t\t37 sec\n> \n> Let me add, these times are on a PP200, with SCSI Ultra Barracuda\n> drives, BSD/OS 3.1, 64MB RAM.\n\nAlso, the table was very small, with two ints, a char(10), and a\nvarchar(50), so PostgreSQL was processing most of the 177MB of data in\nterms of having to read most of each block.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 14 May 1998 10:51:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Sequential scan speed, mmap, disk i/o" }, { "msg_contents": "> Very interesting. Is it possible to get the schema, the query, and a\n> a sample of the data or a generator program for the data? I am quite surprised\n> to see us do so well, I would have guess that the per row overhead would\n> have us down far below wc.\n\nSure.\n\n\tcreate table test (x1 int, x2 int, x3 char(10), x4 varchar(50));\n\tinsert into test values (3, 8, 'asdf','asdf');\n\n\tinsert into test select * from test; <- continue until test is large\n\n\tselect * from test where x1 = 23423; <- this is what I timed\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 14 May 1998 10:54:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Sequential scan speed, mmap, disk i/o" }, { "msg_contents": "Bruce Momjian wrote:\n\n> My conclusion from this is that we really are not going to gain a lot of\n> speed by exploring some async solution, because if the data we need is\n> not in the cache, we really are going to spend most of our time waiting\n> for disk I/O.\n> \n> Comments?\n\nWell, I've just found an intersting article on AFP (Asynchronous\nPrefetch) at Sybase site.\n\nhttp://www.sybase.com/Partners/sun/apftech.html\n\nWhat is worth to note, and you seem to forget. If your app is spending\nit's time on waiting for single IO operation, you want save anything.\nHowever, if you manage to have multiple I/O requests served\nasynchronically you may get better performance on RAID systems, also\nyour I/O hardware may work better since the controllers may batch\nrequests, requeue them and optimise them (Of course not in case of IDE\ndisks).\n\nAlso, somebody asked about clustered indexes. 
I was looking for\ninformation on this technique at Sybase (which is a great source of\ninformation on various DB hints). If you read the above document between\nthe lines, the conclusion is that a clustered index is something that\nallows the data from the table to be mixed with the index. I suppose that index\npages are clustered with data pages, so if you find the appropriate record in\nthe index, the data that this index entry points to is on the same page or\nclose.\n\nAt Sybase I have also found some interesting materials on Bitmap Indexes\n(this idea is relatively simple), which look very interesting for some\ntypes of queries.\n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND", "msg_date": "Sat, 16 May 1998 02:08:27 +0200", "msg_from": "Michal Mosiewicz <[email protected]>", "msg_from_op": false, "msg_subject": "Async I/O" }, { "msg_contents": "> > mmap() is very slow, perhaps because you are changing the process\n> > virtual table maps for each chunk you read in, and faulting them in,\n> > rather than using the file system for I/O.\n> \n> Huh, very slow? I wouldn't agree. I rewrote your mmap program to allow\n> for using reads or mmaps.\n> \n> I tested it on a 111MB file. I decided to use an 8192-byte buffer size\n> (the standard postgres page size). My system is Linux, P166, 64MBs of RAM\n> (note that I have a lot of software running currently, so the cache size\n> is less than 25MBs). I also changed the for(j..) step size to j+=256 just\n> to make sure that it won't influence the results too much and you will\n> see the difference better. mmap was run with (PROT_READ, MAP_SHARED).\n> \n> Average results are (for sequential reading):\n> Using reads: total time - 21.39 (0.44user, 6.09system, 31%CPU)\n> Using mmaps: total time - 21.10 (0.57user, 4.92system, 25%CPU)\n> \n> Note that in the case of reads the program spends much more time in system\n> calls and uses more CPU. You may notice that in the case of Linux, using mmap\n> is about 20% cheaper than read. In the case of random reading it's slightly\n> more than 20% as I remember. Total time is similar in both cases since\n> both hit the throughput limit of my HD. \n> \n> BTW, are you sure that your program was counting mmaps properly? When I\n> ran it on my system it counted much more than it should. On my\n> system the offset crossed over the file's boundary and it then ran a minute or\n> more before it stopped. I attach my version (with a hardcoded 111MB file\n> size to prevent it, of course you may change it)\n\nOK, here are my results using your test program:\n\nBasically, Linux is double my speed for 8k mmap'ed chunks. Around 32k\nchunks, I get closer, and 8mb chunks are the same. Glad to hear Linux\nhas optimized mmap() recently, because BSD/OS looks much slower than\nLinux on this.\n\nNow, why does PostgreSQL sequentially scan a 160MB file in 37 seconds,\nusing its standard 8k buffers, when even your read test for me using 8k\nbuffers takes 54 seconds?\n\nIn storage/file/fd.c, I see it using read(), and I assume they are 8k\nchunks being read:\n\n returnCode = read(VfdCache[file].fd, buffer, amount);\n\n\nAlso attached is a modified version of my mmap() program, that uses\nfstat() to check the file size to know when to stop. 
However, I have\nalso modified it to use a file size to match your file size.\n\nNot sure what to conclude from these numbers.\n\n---------------------------------------------------------------------------\n\nmmap, 8k\n 47.81 real 0.66 user 33.12 sys\n\nread, 8k\n 54.60 real 0.51 user 46.80 sys\n\nmmap, 32k\n 29.80 real 0.23 user 13.81 sys\n\nread, 32k\n 26.80 real 0.12 user 14.82 sys\n\nmmap, 8mb\n 21.25 real 0.03 user 5.49 sys\n\nread, 8mb\n 20.43 real 0.14 user 3.60 sys\n\n\nmy mmap, 8k, your file size\n 64.67 real 15.99 user 34.00 sys\n\nmy mmap, 32k, your file size\n 43.12 real 15.95 user 14.29 sys\n\nmy mmap, 8mb, your file size\n 34.31 real 15.88 user 5.39 sys\n\n\n---------------------------------------------------------------------------\n\n#include <stdio.h>\n#include <fcntl.h>\n#include <assert.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <sys/mman.h>\n\n#define MMAP_SIZE (8192 * 1024)\n\nint main(int argc, char *argv[], char *envp[])\n{\n\tint i, j, fd, spaces = 0;\n\tint off;\n\tchar *addr;\n\tstruct stat filestat;\n\n\tfd = open(\"/u/pg/data/base/test/test\", O_RDONLY, 0);\n\tassert(fd != -1);\n\tassert(fstat(fd, &filestat) == 0);\n\n\t/* override with Michal's 111MB file size */\n\tfilestat.st_size = 111329280;\n\n\tfor (off = 0; 1; off += MMAP_SIZE)\n\t{\n\t\taddr = mmap(0, MMAP_SIZE, PROT_READ, MAP_SHARED, fd, off);\n\t\tassert(addr != MAP_FAILED);\t/* mmap() reports failure as MAP_FAILED, not NULL */\n\t\tmadvise(addr, MMAP_SIZE, MADV_SEQUENTIAL);\n\n\t\t/* note: 'spaces' actually counts the non-blank characters */\n\t\tfor (j = 0; j < MMAP_SIZE; j++)\n\t\t{\n\t\t\tif (*(addr + j) != ' ')\n\t\t\t\tspaces++;\n\t\t\tif (off + j + 1 == filestat.st_size)\n\t\t\t\tgoto done;\n\t\t}\n\t\tmunmap(addr,MMAP_SIZE);\n\t}\ndone:\n\tprintf(\"%d\\n\",spaces);\n\treturn 0;\n}\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 15 May 1998 21:06:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Sequential scan speed, mmap, disk i/o" }, { "msg_contents": "> What is worth noting, and you seem to forget: if your app is spending\n> its time waiting for a single I/O operation, you won't save anything.\n> However, if you manage to have multiple I/O requests served\n> asynchronously you may get better performance on RAID systems; also\n> your I/O hardware may work better since the controllers may batch\n> requests, requeue them and optimise them (of course not in the case of IDE\n> disks).\n\nYes, perhaps using readv would be a win, and perhaps easy to do.\n\n> \n> Also, somebody asked about clustered indexes. I was looking for\n> information on this technique at Sybase (which is a great source of\n> information on various DB hints). If you read the above document between\n> the lines, the conclusion is that a clustered index is something that\n> allows the data from the table to be mixed with the index. I suppose that index\n> pages are clustered with data pages, so if you find the appropriate record in\n> the index, the data that this index entry points to is on the same page or\n> close.\n\nSounds a lot like ISAM to me. And ISAM is a big win for static tables\nif you need throughput. Remember the word fragment issue, which\nCLUSTER fixed. I had an Ingres word fragment app that was terrible on\nbtree, and Marteen experienced the same. CLUSTER does simulate that for static\ntables, so it may not be such a big win. 
I suppose if you needed such\nperformance, and the table changed a lot, it may be good.\n\n> \n> At Sybase I have also found some interesting materials on Bitmap Indexes\n> (this idea is relatively simple), which look very interesting for some\n> types of queries.\n> \n> Mike\n> \n> -- \n> WWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\n> add: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 15 May 1998 22:29:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Async I/O" }, { "msg_contents": "> Basically, Linux is double my speed for 8k mmap'ed chunks. Around 32k\n> chunks, I get closer, and 8mb chunks are the same. Glad to hear Linux\n> has optimized mmap() recently, because BSD/OS looks much slower than\n> Linux on this.\n\nWell Bruce, don't be too happy. Most people aren't yet running the\noptimized kernel; don't know if any of the benchmarks came from someone\nrunning a bleeding-edge development version, which is what 2.1.99 would\nbe; first feature-freeze release in preparation for v2.2 afaik :)\n\nAnd scrappy, no need to note that _all_ Linux kernels are bleeding edge\nreleases :)\n\n - Tom (from his Linux box...)\n", "msg_date": "Sat, 16 May 1998 02:40:14 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequential scan speed, mmap, disk i/o" }, { "msg_contents": "> \n> > Basically, Linux is double my speed for 8k mmap'ed chunks. Around 32k\n> > chunks, I get closer, and 8mb chunks are the same. Glad to hear Linux\n> > has optimized mmap() recently, because BSD/OS looks much slower than\n> > Linux on this.\n> \n> Well Bruce, don't be too happy. Most people aren't yet running the\n> optimized kernel; don't know if any of the benchmarks came from someone\n> running a bleeding-edge development version, which is what 2.1.99 would\n> be; first feature-freeze release in preparation for v2.2 afaik :)\n> \n> And scrappy, no need to note that _all_ Linux kernels are bleeding edge\n> releases :)\n\n[FYI, the Linux dev. release is beating BSDI for 8k mmaps() by 2x.]\n\nI must say, when I saw Linux beating BSDI by 2x, I started wondering if\nthe great BSDI engineers were sleeping or something. Now that I\nunderstand that this improvement is a somewhat new effort, I feel a\nlittle better.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 16 May 1998 00:43:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Sequential scan speed, mmap, disk i/o" }, { "msg_contents": "On Sat, 16 May 1998, Thomas G. Lockhart wrote:\n\n> And scrappy, no need to note that _all_ Linux kernels are bleeding edge\n> releases :)\n\n\tMoi? *innocent look*\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 16 May 1998 02:41:39 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequential scan speed, mmap, disk i/o" }, { "msg_contents": "\nrunning a relatively 'bleeding edge' FreeBSD on hub.org, do you want to\ntry the same tests there? Not sure how much memory it will require,\nbut I'm running pretty much the same revision at home as at the\noffice...how are you generating your 117Meg file for testing with? I'm\nwilling to run through the tests here and report on it...\n\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 16 May 1998 02:45:35 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequential scan speed, mmap, disk i/o" }, { "msg_contents": "Michal Mosiewicz wrote a few weeks ago:\n> \n> Also, somebody asked about clustered indexes. I was looking for\n> information on this technique at Sybase (which is a great source of\n> information on various DB hints). If you read the above document between\n> the lines, the conclusion is that a clustered index is something that\n> allows the data from the table to be mixed with the index. I suppose that index\n> pages are clustered with data pages, so if you find the appropriate record in\n> the index, the data that this index entry points to is on the same page or\n> close.\n\nSybase clustered indexes are pretty standard stuff.\n\nOur B-tree indexes have the index leaf pages storing index rows containing a\nkey and a 'tid' that points to a data row in a separate heap.\n\nA clustered index is the same in the upper levels as our B-tree, but the leaf\npages contain the actual data rows. Thus the data is maintained in sorted\norder for the clustering key. Also, in Sybase, all table pages are chained\ntogether. This has the side effect of possibly speeding up sequential scans.\n\nVery nice, except that key updates, or even index splits, cause rows to move, so\nall the secondary indexes must be updated. And, maintaining the page chain\nlinks not only adds overhead, but also proves to be very error prone, leading\nto crosslinked tables and other horrors.\n\nStill, for a huge class of applications clustered indexes are a big win. It\nwould be well worth adding this to pgsql. I would not do the page chaining\nthough.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\nError 605 [tm] is a trademark of Sybase Inc. -- dg\n", "msg_date": "Wed, 27 May 1998 01:11:48 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Async I/O" }, { "msg_contents": "> > Very interesting. Is it possible to get the schema, the query, and a\n> > sample of the data or a generator program for the data? 
I am quite surprised\n> > to see us do so well, I would have guessed that the per row overhead would\n> > have us down far below wc.\n> \n> Sure.\n> \n> \tcreate table test (x1 int, x2 int, x3 char(10), x4 varchar(50));\n> \tinsert into test values (3, 8, 'asdf','asdf');\n> \n> \tinsert into test select * from test; <- continue until test is large\n> \n> \tselect * from test where x1 = 23423; <- this is what I timed\n> \n\nSo I finally got around to playing with this a little, and here is what I get on my\n\nP133 (HX mb) 32 Mb mem, Linux 2.0.32 (glibc) with Quantum Atlas 2.1G disk\non NCR810 SCSI\n\nfor test at 1048576 rows, file size is 80281600 bytes.\n\n- time cat pg/test/data/base/dg/test >/dev/null\n 0.02user 3.38system 0:14.34elapsed 23%CPU = 5467 KB per second.\n \n- time wc pg/test/data/base/dg/test\n 9.12user 2.83system 0:15.38elapsed 77%CPU = 5098 KB per second.\n\n- time psql -c \"select * from test where x1 = 23423;\"\n 0:30.59elapsed (cpu for psql not meaningful, but top said 95% for postgres)\n = 2563 KB per second.\n Not bad!\n\n- time psql -c \"select count(*) from test;\"\n 0:50.46elapsed = 1554 KB per second.\n (trivial aggregate adds 20 seconds or 65%)\n\n- time psql -c \"select count(*) from test where x1 = 3;\"\n 1:03.22elapsed = 1240 KB per second.\n (trivial where clause adds another 13 seconds)\n\n- time psql -c \"select count(*) from test where x4 = 'asdf';\"\n 1:10.96elapsed = 1105 KB per second.\n (varchar compare vs int compare adds only 7.7 seconds).\n\n\nBtw, during all this, the disk hardly even made any noise (seeking).\next2 seems to lay things out pretty well. The data dir right now is on a\n/home, which is 86% full and it still managed to stream the 'cat' at about\nfull disk bandwidth.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"I believe OS/2 is destined to be the most important operating\n system, and possibly program, of all time\" - Bill Gates, Nov, 1987.\n", "msg_date": "Thu, 28 May 1998 02:14:36 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sequential scan speed, mmap, disk i/o" }, { "msg_contents": "> \n> > > Very interesting. Is it possible to get the schema, the query, and a\n> > > sample of the data or a generator program for the data? I am quite surprised\n> > > to see us do so well, I would have guessed that the per row overhead would\n> > > have us down far below wc.\n> > \n> > Sure.\n> > \n> > \tcreate table test (x1 int, x2 int, x3 char(10), x4 varchar(50));\n> > \tinsert into test values (3, 8, 'asdf','asdf');\n> > \n> > \tinsert into test select * from test; <- continue until test is large\n> > \n> > \tselect * from test where x1 = 23423; <- this is what I timed\n> > \n> \n> So I finally got around to playing with this a little, and here is what I get on my\n> \n> P133 (HX mb) 32 Mb mem, Linux 2.0.32 (glibc) with Quantum Atlas 2.1G disk\n> on NCR810 SCSI\n\nOK, I have a Barracuda drive, which is probably the same speed as the\nAtlas (Ultra SCSI), but I have a PP200, which may be why my PostgreSQL\ncould keep up better with the disks.\n\nMy dd's showed ~6,000 KB/sec, postgresql was 4,800 KB/sec, and wc was\n4,500 KB/sec. Interesting how the speed fell off with the count(). 
\nThat is executor overhead, I am sure.\n\n> \n> for test at 1048576 rows, file size is 80281600 bytes.\n> \n> - time cat pg/test/data/base/dg/test >/dev/null\n> 0.02user 3.38system 0:14.34elapsed 23%CPU = 5467 KB per second.\n> \n> - time wc pg/test/data/base/dg/test\n> 9.12user 2.83system 0:15.38elapsed 77%CPU = 5098 KB per second.\n> \n> - time psql -c \"select * from test where x1 = 23423;\"\n> 0:30.59elapsed (cpu for psql not meaningful, but top said 95% for postgres)\n> = 2563 KB per second.\n> Not bad!\n> \n> - time psql -c \"select count(*) from test;\"\n> 0:50.46elapsed = 1554 KB per second.\n> (trivial aggregate adds 20 seconds or 65%)\n> \n> - time psql -c \"select count(*) from test where x1 = 3;\"\n> 1:03.22elapsed = 1240 KB per second.\n> (trivial where clause adds another 13 seconds)\n> \n> - time psql -c \"select count(*) from test where x4 = 'asdf';\"\n> 1:10.96elapsed = 1105 KB per second.\n> (varchar compare vs int compare adds only 7.7 seconds).\n> \n> \n> Btw, during all this, the disk hardly even made any noise (seeking).\n> ext2 seems to lay things out pretty well. The data dir right now is on a\n> /home, which is 86% full and it still managed to stream the 'cat' at about\n> full disk bandwidth.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 29 May 1998 13:28:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Sequential scan speed, mmap, disk i/o]" } ]
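For comparison with the mmap() programs in the thread above, a minimal read()-based scanner over the same kind of test file might look like the sketch below. This is a reconstruction under assumptions, not one of the programs actually posted; the file path is the same hypothetical test table used earlier, and like the mmap version it counts the non-blank bytes.

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <assert.h>

#define BUF_SIZE 8192			/* one postgres-sized page per read() */

int main(void)
{
	int fd, n, j;
	long count = 0;
	char buf[BUF_SIZE];

	/* hypothetical test table file, as in the programs above */
	fd = open("/u/pg/data/base/test/test", O_RDONLY, 0);
	assert(fd != -1);

	/* scan sequentially; read() stops at EOF, so no size check is needed */
	while ((n = read(fd, buf, BUF_SIZE)) > 0)
		for (j = 0; j < n; j++)
			if (buf[j] != ' ')
				count++;

	printf("%ld\n", count);
	close(fd);
	return 0;
}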
[ { "msg_contents": "unsubscribe\n\n", "msg_date": "Thu, 14 May 1998 09:08:50 +0000", "msg_from": "Fabrizio Sciarra <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "Argh! No surprise it hang. I just managed to start it without its input.\nStupid me!\n\nAnyway, I still have core dumps. \n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 14 May 1998 11:13:25 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "It must be too early" } ]
[ { "msg_contents": ">> - Could we use mmap:ing of files at a higher level then\n>> src/backend/strorage/ipc/ipc.c to get even better performance\n>> and cleaness?\n>\n>Yes, we could use mmap() to map the actual files. I will post time\n>timings on this soon.\n\nI do not think this will be a practicable solution, since it would mean the whole db \nhas to mmap'ed. This means there has to be enough virtual memory to hold\nthe complete database, or at least one table at a time. Or do I understand this wrong ??\n\nAndreas\n\n", "msg_date": "Thu, 14 May 1998 11:27:51 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "> \n> >> - Could we use mmap:ing of files at a higher level then\n> >> src/backend/strorage/ipc/ipc.c to get even better performance\n> >> and cleaness?\n> >\n> >Yes, we could use mmap() to map the actual files. I will post time\n> >timings on this soon.\n> \n> I do not think this will be a practicable solution, since it would mean the whole db \n> has to mmap'ed. This means there has to be enough virtual memory to hold\n> the complete database, or at least one table at a time. Or do I understand this wrong ??\n\nWe can map parts of the table, even in 8k chunks. However, looking at\nmy sequential scan timing tests, it would be slower.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 14 May 1998 10:55:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] mmap and MAP_ANON" }, { "msg_contents": "Andreas Zeugswetter wrote:\n> \n> >> - Could we use mmap:ing of files at a higher level then\n> >> src/backend/strorage/ipc/ipc.c to get even better performance\n> >> and cleaness?\n> >\n> >Yes, we could use mmap() to map the actual files. I will post time\n> >timings on this soon.\n> \n> I do not think this will be a practicable solution, since it would mean the whole db\n> has to mmap'ed. This means there has to be enough virtual memory to hold\n> the complete database, or at least one table at a time. Or do I understand this wrong ??\n\nWhy would we map the whole database or even a whole table?\nYou can map the section of a file you are interested in.\n\n# man mmap\n\nBesides a sensible memory manager does not actually map the pages\nuntil they are access, unfort. not all OSes are sensible.\n\n\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n", "msg_date": "Thu, 14 May 1998 17:25:10 +0200", "msg_from": "\"G���ran Thyni\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] mmap and MAP_ANON" } ]
[ { "msg_contents": "> The problem with using a real file is that the filesystem is going to be\n> flushing those dirty pages to disk, and that could really hurt\n> performance.\ndefinitely\n> Actually, when I install Informix, I always have to modify the kernel to\n> allow a larger amount of SYSV shared memory. Maybe we just need to give\n> people per-OS instructions on how to do that. Under BSD/OS, I now have\n> 32MB of shared memory, or 3900 8k shared buffers.\nThis I think would be the best solution. There are actually not that many systems \nwith too low limits.\n\tAIX: per segment 256Mb max 10 segments per process (AIX 4.3 any number of segments)\n\nAndreas\n\t \n\n\n\n", "msg_date": "Thu, 14 May 1998 11:46:46 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] mmap and MAP_ANON" } ]
[ { "msg_contents": "This was an unexpected difference between these two types and I wonder\nif it was meant to be this way. Previously, a char8 field with the\nstring 'abc' would return 'abc' as expected. Now, with char(8), I get\nback 'abc ' instead. You can see this with my PygreSQL module\nor the C interface (which my module uses, of course.) This causes a\nlot of my programs to break.\n\nI have made a quick change to my Python module to handle this. Should\nI clean it up or can I expect the behaviour to go back the way it was?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 14 May 1998 10:40:39 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "char(8) vs char8" }, { "msg_contents": "\n\nD'Arcy J.M. Cain wrote:\n\n> This was an unexpected difference between these two types and I wonder\n> if it was meant to be this way. Previously, a char8 field with the\n> string 'abc' would return 'abc' as expected. Now, with char(8), I get\n> back 'abc ' instead. You can see this with my PygreSQL module\n> or the C interface (which my module uses, of course.) This causes a\n> lot of my programs to break.\n>\n\nchar(x) is the datatype 'bpchar' (blank padded char). Thus it is padded\nwith spaces to the field width.\n\nCouldn't you use something like \"select rtrim(column) from table\". This\nwill trim the spaces off.\n\nByron\n\n", "msg_date": "Thu, 14 May 1998 10:55:08 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] char(8) vs char8" }, { "msg_contents": "> This was an unexpected difference between these two types and I wonder\n> if it was meant to be this way. Previously, a char8 field with the\n> string 'abc' would return 'abc' as expected. Now, with char(8), I get\n> back 'abc ' instead. You can see this with my PygreSQL module\n> or the C interface (which my module uses, of course.) This causes a\n> lot of my programs to break.\n> \n> I have made a quick change to my Python module to handle this. Should\n> I clean it up or can I expect the behaviour to go back the way it was?\n\nThe behavior you want is varchar() rather than char().\n\n - Tom\n", "msg_date": "Thu, 14 May 1998 15:27:18 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] char(8) vs char8" }, { "msg_contents": "Thus spake Byron Nikolaidis\n> > string 'abc' would return 'abc' as expected. Now, with char(8), I get\n> > back 'abc ' instead. You can see this with my PygreSQL module\n> \n> Couldn't you use something like \"select rtrim(column) from table\". This\n> will trim the spaces off.\n\nIt wouldn't be as convenient as this example.\n\nimport pg\nfor d in pg.connect('database').query('select * from table').dictresult():\n print \"Num: %(field1)3d, '%(field2)s'\" % d\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 14 May 1998 13:10:44 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] char(8) vs char8" }, { "msg_contents": "Thus spake Thomas G. Lockhart\n> > string 'abc' would return 'abc' as expected. Now, with char(8), I get\n> > back 'abc ' instead. 
You can see this with my PygreSQL module\n> The behavior you want is varchar() rather than char().\n\nRight. Dump and reload time I guess. >:-/\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 14 May 1998 13:12:21 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] char(8) vs char8" } ]
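For client code that has to cope with the blank padding in the meantime, trimming on the way out of the result set is about all a driver can do (presumably what the quick PygreSQL change amounts to). A minimal sketch in C; the helper name is hypothetical:

#include <string.h>

/*
 * Strip the blank padding that char(n) (bpchar) values come back with.
 * Trims in place and returns its argument.
 */
char *
rtrim_bpchar(char *s)
{
	size_t len = strlen(s);

	while (len > 0 && s[len - 1] == ' ')
		s[--len] = '\0';
	return s;
}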
[ { "msg_contents": "\n> This was an unexpected difference between these two types and I wonder\n> if it was meant to be this way. Previously, a char8 field with the\n> string 'abc' would return 'abc' as expected. Now, with char(8), I get\n> back 'abc ' instead. You can see this with my PygreSQL module\n> or the C interface (which my module uses, of course.) This causes a\n> lot of my programs to break.\n\nThis is the expected behavior for the char() datatype. If you have other variable \nlength fields in the table, you can simply use varchar() which does not blank pad\nto the specified length. If you don't have other variable length fields then you\nloose some performance if you switch to varchar(). \n\n> I have made a quick change to my Python module to handle this. Should\n> I clean it up or can I expect the behaviour to go back the way it was?\n\nNo, it will stay.\n\nAndreas\n\n\n", "msg_date": "Thu, 14 May 1998 16:58:50 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] char(8) vs char8" } ]
[ { "msg_contents": "INTERNET WIRE: http://www.internetwire.com\n\"Connecting Business to the World\"\n\nNOTE: List instructions and contact information at the end of this email.\n\nTo Get Full-Text Stories, click on the link associated with the story you wish to read.\n\nMay 14,1998 Headlines:\n\nAgents Technologies: Signs With Barry Friedman Enterprises For Worldwide Representation\nhttp://www.internetwire.com/technews/tn/tn980520.htx\n\nBusinessTech: Baby Bell Merger Not What The Congress Intended, Says Businesstech's Becker\nhttp://www.internetwire.com/technews/tn/tn980518.htx\n\nCyberjunction: Announces Internet Promotion Program For Travel Suppliers\nhttp://www.internetwire.com/technews/tn/tn980514.htx\n\nThe Cyber Media Show: Extranets Discussed On The Cyber Media Show\nWith Kim Bayne\nhttp://www.internetwire.com/technews/tn/tn980515.htx\n\nE*Trade Canada And Balisoft: Launch The Internet's First Web-Based Customer Service For Investors\nhttp://www.internetwire.com/technews/tn/tn980523.htx\n\nThe Irish Trade Board: Irish Software Database Launched On CD-ROM And Internet\nhttp://www.internetwire.com/technews/tn/tn980519.htx\n\nLGC Wireless & AG Communication: To Work Together On Wireless In-Building Solutions\nhttp://www.internetwire.com/technews/tn/tn980513.htx\n\nLockergnome: Free Newsgroups for Windows Users\nhttp://www.internetwire.com/technews/tn/tn980517.htx\n\nMecklermedia: Announces Launch Of UK.Internet.Com And Australia.Internet.Com\nhttp://www.internetwire.com/technews/tn/tn980516.htx\n\nTritium Network: Launches Free Internet Service In Dallas/Ft. Worth\nhttp://www.internetwire.com/technews/tn/tn980521.htx\n\nTritium Network: Launches Free Internet Service In Houston\nhttp://www.internetwire.com/technews/tn/tn980522.htx\n\n===============================================================\nDaily Debuts\n===============================================================\n\nMoon-Watch.com -- http://www.moon-watch.com/\nMoon-related photos, news, and information.\n\nApollo Eighteen -- http://www.apolloeighteen.com/\nResource for space history, education, news and debate.\n\nRace City USA -- http://www.racecitywebpages.com/\nRace City USA is Mooresville, NC.\n\nSweet Tooth -- http://www.sweettooth1.com/\nOnline candy store.\n\nThe Children's Home -- http://www.childrens-home.org/\nProviding special education and residential services to disadvantaged youth.\n\n===============================================================\n\nTO CHANGE YOUR EMAIL ADDRESS\n\nEmail [email protected]\nOn the Subject line, type: Address Change\nIn the BODY of the message, type: change [old_address] [new_address]\ne.g. change [email protected]@newplace.com\n\n===============================================================\n\nTO UNSUBSCRIBE\n\nEmail [email protected]\nOn the Subject line, type: Remove Internet Wire\n\n===============================================================\n\nGeneral Information\n\nEmail: [email protected]\nOr visit:\nhttp://www.internetwire.com\n\n\n\n", "msg_date": "Thu, 14 May 1998 12:35:49 -0700", "msg_from": "Internet Wire <[email protected]>", "msg_from_op": true, "msg_subject": "Internet Wire" } ]
[ { "msg_contents": "(moved to hackers list)\n\n> I am working on extending locale support for char/varchar types.\n> Q1. I touched ...src/include/utils/builtins.h to insert the following\n> macros:\n> -----\n> #ifdef USE_LOCALE\n> #define pgstrcmp(s1,s2,l) strcoll(s1,s2)\n> #else\n> #define pgstrcmp(s1,s2,l) strncmp(s1,s2,l)\n> #endif\n> -----\n> Is it right place? I think so, am I wrong?\n\nProbably the right place. Probably the wrong code; see below...\n\n> Q2. Bartunov said me I should read varlena.c. I read it and found\n> that for every strcoll() for both strings there are calls to allocate\n> memory (to make them null-terminated). Oleg said I need the same for\n> varchar.\n> Do I really need to allocate space for varchar? What about char? Is it\n> 0-terminated already?\n\nNo, neither bpchar nor varchar are guaranteed to be null terminated.\nYes, you will need to allocate (palloc()) local memory for this. Your\npgstrcmp() macros are not equivalent, since strncmp() will stop the\ncomparison at the specified limit (l) where strcoll() requires a null\nterminated string.\n\nIf you look in varlena.c you will find several places with\n #if USE_LOCALE\n ...\n #else\n ...\n #endif\n\nThose blocks will need to be replicated in varchar.c for both bpchar and\nvarchar support routines.\n\nThe first example I looked at in varlena.c seems to have trouble in that\nthe code looks a bit troublesome :( In the code snippet below (from\ntext_lt), both input strings are replicated and copied to the same\noutput length, even though the input lengths can be different. Looks\nwrong to me:\n\n memcpy(a1p, VARDATA(arg1), len);\n *(a1p + len) = '\\0';\n memcpy(a2p, VARDATA(arg2), len);\n *(a2p + len) = '\\0';\n\nInstead of \"len\" in each expression it should probably be \n len1 = VARSIZE(arg1)-VARHDRSZ\n len2 = VARSIZE(arg2)-VARHDRSZ\n\nAnother possibility for implementation is to write a string comparison\nroutine (e.g. varlena_cmp()) which takes two arguments and returns -1,\n0, or 1 for less than, equals, and greater than. All of the comparison\nroutines can call that one (which would have the #if USE_LOCALE), rather\nthan having USE_LOCALE spread through each comparison routine.\n\n - Tom\n", "msg_date": "Fri, 15 May 1998 13:18:13 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] char/varchar locale support" }, { "msg_contents": "Hi!\n\nOn Fri, 15 May 1998, Thomas G. Lockhart wrote:\n> Another possibility for implementation is to write a string comparison\n> routine (e.g. varlena_cmp()) which takes two arguments and returns -1,\n> 0, or 1 for less than, equals, and greater than. All of the comparison\n> routines can call that one (which would have the #if USE_LOCALE), rather\n> than having USE_LOCALE spread through each comparison routine.\n\n Yes, I thinked about this recently. It seems the best solution, perhaps.\n Thank you. I'll continue my work.\n\nOleg.\n---- \n Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Fri, 15 May 1998 17:43:07 +0400 (MSK DST)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] char/varchar locale support" }, { "msg_contents": "Oleg Broytmann wrote:\n> \n> Hi!\n> \n> On Fri, 15 May 1998, Thomas G. Lockhart wrote:\n> > Another possibility for implementation is to write a string comparison\n> > routine (e.g. 
varlena_cmp()) which takes two arguments and returns -1,\n> > 0, or 1 for less than, equals, and greater than. All of the comparison\n> > routines can call that one (which would have the #if USE_LOCALE), rather\n> > than having USE_LOCALE spread through each comparison routine.\n> \n> Yes, I thought about this recently. It seems the best solution, perhaps.\n> Thank you. I'll continue my work.\n> \n> Oleg.\n> ----\n> Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n\n\nShouldn't this be done only for NATIONAL CHAR?\n\n/* m */\n", "msg_date": "Mon, 18 May 1998 12:11:55 +0200", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] char/varchar locale support" }, { "msg_contents": "Hi!\n\nOn Mon, 18 May 1998, Mattias Kregert wrote:\n> > > Another possibility for implementation is to write a string comparison\n> > > routine (e.g. varlena_cmp()) which takes two arguments and returns -1,\n> > > 0, or 1 for less than, equals, and greater than. All of the comparison\n> > > routines can call that one (which would have the #if USE_LOCALE), rather\n> > > than having USE_LOCALE spread through each comparison routine.\n> \n> Shouldn't this be done only for NATIONAL CHAR?\n\n It is what USE_LOCALE is intended for, isn't it?\n\nOleg.\n---- \n Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Mon, 18 May 1998 14:28:50 +0400 (MSK DST)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] char/varchar locale support" }, { "msg_contents": "> > Shouldn't this be done only for NATIONAL CHAR?\n> It is what USE_LOCALE is intended for, isn't it?\n\nSQL92 defines NATIONAL CHAR/VARCHAR as the data type to support implicit\nlocal character sets. The usual CHAR/VARCHAR would use the default\nSQL_TEXT character set. I suppose we could extend it to include NATIONAL\nTEXT also...\n\nAdditionally, SQL92 allows one to specify an explicit character set and\nan explicit collating sequence. The standard is not explicit on how one\nactually makes these known to the database, but Postgres should be well\nsuited to accomplishing this.\n\nAnyway, I'm not certain how common and wide-spread the NATIONAL CHAR\nusage is. Would users with installations having non-English data find\nusing NCHAR/NATIONAL CHAR/NATIONAL CHARACTER an inconvenience? Or would\nmost non-English installations find this better and more solid??\n\nAt the moment we have support for Russian and Japanese character sets,\nand these would need the maintainers to agree to changes.\n\nbtw, if we do implement NATIONAL CHARACTER I would like to do so by\nhaving it fit in with the full SQL92 character sets and collating\nsequences capabilities. 
Then one could specify what NATIONAL CHAR means\nfor an installation or perhaps at run time without having to\nrecompile...\n\n - Tom\n", "msg_date": "Mon, 18 May 1998 15:20:35 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [PATCHES] char/varchar locale support" }, { "msg_contents": "Thomas G. Lockhart wrote:\n\n> btw, if we do implement NATIONAL CHARACTER I would like to do so by\n> having it fit in with the full SQL92 character sets and collating\n> sequences capabilities. Then one could specify what NATIONAL CHAR means\n> for an installation or perhaps at run time without having to\n> recompile...\n\nI fully agree that there should be a CREATE COLLATION syntax or similar,\nwith the ability to add a collation keyword in every place that needs a\ncharacter comparison, like btree indexes, ordering, or simple comparison\noperators.\n\nThis means that we should probably start by creating three-parameter\ncomparison functions, with the third parameter selecting the collation.\n\nAdditionally, it's worth noting that using strcoll is highly expensive.\nI've got some reports from people who used postgreSQL with national\ncharacters and noticed performance drops of up to 20 times (Linux). So\nwe need to create cheap comparison functions that preserve\ntheir translation tables across sessions.\n\nAnyhow, if anybody wants to try the inefficient strcoll, a long time ago I\nsent a patch to sort chars/varchars using it. But I don't recommend it.\n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n", "msg_date": "Mon, 18 May 1998 18:35:47 +0200", "msg_from": "Michal Mosiewicz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] char/varchar locale support" }, { "msg_contents": ">> > Shouldn't this be done only for NATIONAL CHAR?\n>> It is what USE_LOCALE is intended for, isn't it?\n\nLOCALE is not very useful for multi-byte speakers.\n\n>SQL92 defines NATIONAL CHAR/VARCHAR as the data type to support implicit\n>local character sets. The usual CHAR/VARCHAR would use the default\n>SQL_TEXT character set. I suppose we could extend it to include NATIONAL\n>TEXT also...\n>\n>Additionally, SQL92 allows one to specify an explicit character set and\n>an explicit collating sequence. The standard is not explicit on how one\n>actually makes these known to the database, but Postgres should be well\n>suited to accomplishing this.\n>\n>Anyway, I'm not certain how common and wide-spread the NATIONAL CHAR\n>usage is. Would users with installations having non-English data find\n>using NCHAR/NATIONAL CHAR/NATIONAL CHARACTER an inconvenience? Or would\n>most non-English installations find this better and more solid??\n\nThe capability to specify implicit character sets for CHAR (that's\nwhat MB does) looks sufficient for multi-byte speakers, except for\ncollation sequences.\n\nOne question about SQL92's NCHAR is how one can specify several\ncharacter sets at one time. As you might know, Japanese, Chinese, and\nKorean use multiple character sets. For example, EUC_JP, a widely used\nJapanese encoding system on Unix, includes 4 character sets: ASCII,\nJISX0201, JISX0208 and JISX0212.\n\n>At the moment we have support for Russian and Japanese character sets,\n>and these would need the maintainers to agree to changes.\n\nAdditionally we have support for Chinese and Korean. Moreover, if the mule\ninternal code or unicode is preferred for the internal encoding system,\none could use almost any language in the world:-)\n\n>btw, if we do implement NATIONAL CHARACTER I would like to do so by\n>having it fit in with the full SQL92 character sets and collating\n>sequences capabilities. 
Then one could specify what NATIONAL CHAR means\n>for an installation or perhaps at run time without having to\n>recompile...\n\nCollating sequences look very useful.\nAlso it would be nice if we could specify default character sets when\ncreating a database, table, or field.\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Tue, 19 May 1998 11:33:04 +0900", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] char/varchar locale support " } ]
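A sketch of the single comparison routine proposed in this thread, with the USE_LOCALE split confined to one place. VARSIZE/VARDATA/VARHDRSZ and palloc()/pfree() are the backend's own macros and allocator (from the backend headers); everything else here is an assumption about shape, not committed code.

/* Assumes the backend environment: the postgres headers supply
 * struct varlena, VARSIZE/VARDATA/VARHDRSZ, and palloc()/pfree(). */
#include <string.h>

static int
varlena_cmp(struct varlena *arg1, struct varlena *arg2)
{
	int		len1 = VARSIZE(arg1) - VARHDRSZ;
	int		len2 = VARSIZE(arg2) - VARHDRSZ;
	int		result;

#ifdef USE_LOCALE
	/* strcoll() needs null-terminated copies, hence the pallocs */
	char   *a1p = (char *) palloc(len1 + 1);
	char   *a2p = (char *) palloc(len2 + 1);

	memcpy(a1p, VARDATA(arg1), len1);
	a1p[len1] = '\0';
	memcpy(a2p, VARDATA(arg2), len2);
	a2p[len2] = '\0';

	result = strcoll(a1p, a2p);
	pfree(a1p);
	pfree(a2p);
#else
	result = strncmp(VARDATA(arg1), VARDATA(arg2),
					 (len1 < len2) ? len1 : len2);
	if (result == 0 && len1 != len2)
		result = (len1 < len2) ? -1 : 1;	/* shorter string sorts first */
#endif

	return (result < 0) ? -1 : ((result > 0) ? 1 : 0);
}

Each comparison operator (text_lt and friends, and the bpchar/varchar equivalents) could then just test this routine's return value, and note that it also uses both input lengths, avoiding the single-len bug pointed out above.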
[ { "msg_contents": "Someone posted a (readonly) benchtest of mmap vs\nread/write I/O using the following code:\n\n for (off = 0; 1; off += MMAP_SIZE)\n {\n addr = mmap(0, MMAP_SIZE, PROT_READ, 0, fd, off);\n assert(addr != NULL);\n\n for (j = 0; j < MMAP_SIZE; j++)\n if (*(addr + j) != ' ')\n spaces++;\n munmap(addr,MMAP_SIZE);\n }\n\nThis is unfair to mmap since mmap is called once\nper page. Better to mmap large regions (many\npages at once), then use msync() to force \nwrite any modified pages. Access purely in\nmemory mmap'd I/O is _many_ times faster than\nread/write under Solaris or Linux later\nthan 2.1.99 (prior to 2.1.99, Linux had\nslow mmap performance).\n\nLimitation on mmap is mainly that you\ncan't map more than 2Gb of data at once\nunder most existing O.S.s, (including\nheap and stack), so simplistic mapping\nof entire DBMS data files doesn't\nscale for large databases, and you\nneed to cache region mappings to\navoid running out of PTEs.\n\nThe need to collocate information in\nadjacent pages could be why Informix has\nclustered indexes, the internal structure\nof which I'd like to know more about.\n\n\t-Huw\n", "msg_date": "Sat, 16 May 1998 05:26:53 +0900", "msg_from": "Huw Rogers <[email protected]>", "msg_from_op": true, "msg_subject": "mmap vs read/write" }, { "msg_contents": "> This is unfair to mmap since mmap is called once\n> per page. Better to mmap large regions (many\n> pages at once), then use msync() to force \n> write any modified pages. Access purely in\n> memory mmap'd I/O is _many_ times faster than\n> read/write under Solaris or Linux later\n> than 2.1.99 (prior to 2.1.99, Linux had\n> slow mmap performance).\n\nThis makes me feel better. Linux is killing BSD/OS in mapping tests.\n\nSee my other posting.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 15 May 1998 18:16:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mmap vs read/write" }, { "msg_contents": "Huw Rogers wrote:\n> \n> Someone posted a (readonly) benchtest of mmap vs\n> read/write I/O using the following code:\n> \n> for (off = 0; 1; off += MMAP_SIZE)\n> {\n> addr = mmap(0, MMAP_SIZE, PROT_READ, 0, fd, off);\n> assert(addr != NULL);\n> \n> for (j = 0; j < MMAP_SIZE; j++)\n> if (*(addr + j) != ' ')\n> spaces++;\n> munmap(addr,MMAP_SIZE);\n> }\n> \n> This is unfair to mmap since mmap is called once\n> per page. Better to mmap large regions (many\n> pages at once), then use msync() to force \n> write any modified pages. Access purely in\n\nBetter yet, request the pages ahead of time and have another process\nmap them in \"asynchronously\". By the time the process is ready to map\nthe page in for itself, the page will have already been read in from\nthe disk, and a memory buffer will be allocated for it.\n\nI want to try and implement this in a simple demo program when I get a\nchance.\n\nOcie\n", "msg_date": "Fri, 15 May 1998 15:50:45 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] mmap vs read/write" } ]
[ { "msg_contents": "INTERNET WIRE: http://www.internetwire.com\n\nNOTE: List instructions and contact information at the end of this email.\n\nTo Get Full-Text Stories, click on the link associated with the story you wish to read.\n\nHeadlines for May 15,1998\n\nAmerican Heart Association: Heart-Healthy Greeting Cards for Every Occasion\nhttp://www.internetwire.com/technews/tn/tn980525.htx\n\nAris Software: Announces Availability Of NoetixViews For Oracle HRMS Applications\nhttp://www.internetwire.com/technews/wd/wd980344.htx\n\nCyberjunction: Announces Plan For Significant \"Point Of Sale\" Growth In The Next 36 Months\nhttp://www.internetwire.com/technews/tn/tn980524.htx\n\nDonaldson, Lufkin & Jenrette, Inc: Sprout Group InvestS $5 Million In Skila Inc.\nhttp://www.internetwire.com/technews/tn/tn980527.htx\n\nDonaldson, Lufkin & Jenrette, Inc: Declares Regular Quarterly Common Dividend\nhttp://www.internetwire.com/technews/tn/tn980528.htx\n\nDonegal Group Inc: Announces Agreement To Acquire Southern Heritage Insurance Company\nhttp://www.internetwire.com/technews/tn/tn980526.htx\n\nProNetLink(R): The World's First Import-Export 'Webtool', Launches May 15th\nhttp://www.internetwire.com/technews/wd/wd980343.htx\n\nSymantec: And TeleAdapt Help Norton Mobile Essentials Customers Connect From Anywhere\nhttp://www.internetwire.com/technews/wd/wd980345.htx\n\n===============================================================\nDaily Debuts\n===============================================================\n\nHastaLaVista - http://www.hastalavista.de/\nVirtual postcards.\n\nIT News - http://www.it-news.com/\nIT related information\n\nETA Online Review - http://www.exam-ta.ac.uk/onlinere.htm\nFeaturing reviews of latest hi-tech products\n\nCephalon, Inc. - http://www.cephalon.com/\nDiscovers, develops and markets neurological products.\n\nWelcomeTo Musician's Depot - http://www.musicians-depot.com/\nThe online source for musical equipment.\n\n===============================================================\n\nTO CHANGE YOUR EMAIL ADDRESS\n\nEmail [email protected]\nOn the Subject line, type: Address Change\nIn the BODY of the message, type: change [old_address] [new_address]\ne.g. change [email protected]@newplace.com\n\n===============================================================\n\nTO UNSUBSCRIBE\n\nEmail [email protected]\nOn the Subject line, type: Remove Internet Wire\n\n===============================================================\n\nGeneral Information\n\nEmail: [email protected]\nOr visit:\nhttp://www.internetwire.com\n\n\n\n", "msg_date": "Fri, 15 May 1998 13:30:28 -0700", "msg_from": "Internet Wire <[email protected]>", "msg_from_op": true, "msg_subject": "Internet Wire" } ]
[ { "msg_contents": "I think soon people are going to start calling me Mr. Big...Tables...\n\nI have a big table. 40M rows.\nOn the disk, it's size is:\n 2,090,369,024 bytes. So 2 gigs. On a 9 gig drive I can't sort this table.\nHow should one decide based on table size how much room is needed?\n\nAlso, this simple table consisting of only 2 int4 values is the exact size\nof an equally sized table consisting of only one int2. There seems to be\ntoo much overhead here. I realise there are extra things that have to be\nsaved, but I am not getting the size/performance I had hoped for... I am\nstarting to think this segment of the database would be better implemented\nwithout a dbms because it is not expected to change at all...\n\n-Mike\n\n", "msg_date": "Fri, 15 May 1998 20:09:31 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "sorting big tables :(" }, { "msg_contents": "> \n> I think soon people are going to start calling me Mr. Big...Tables...\n> \n> I have a big table. 40M rows.\n> On the disk, it's size is:\n> 2,090,369,024 bytes. So 2 gigs. On a 9 gig drive I can't sort this table.\n> How should one decide based on table size how much room is needed?\n> \n> Also, this simple table consisting of only 2 int4 values is the exact size\n> of an equally sized table consisting of only one int2. There seems to be\n> too much overhead here. I realise there are extra things that have to be\n> saved, but I am not getting the size/performance I had hoped for... I am\n> starting to think this segment of the database would be better implemented\n> without a dbms because it is not expected to change at all...\n> \n\nIt is taking so much disk space because it is using a TAPE sorting\nmethod, by breaking the file into tape chunks and sorting in pieces, the\nmerging.\n\nCan you try increasing your postgres -S parameter to some huge amount like 32MB\nand see if that helps? It should.\n\ni.e.\n\n\tpostmaster -i -B 400 $DEBUG -o '-F -S 1024' \"$@\" >server.log 2>&1\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 15 May 1998 20:00:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "On Fri, 15 May 1998, Bruce Momjian wrote:\n\n> > I have a big table. 40M rows.\n> > On the disk, it's size is:\n> > 2,090,369,024 bytes. So 2 gigs. On a 9 gig drive I can't sort this table.\n> > How should one decide based on table size how much room is needed?\n\n> It is taking so much disk space because it is using a TAPE sorting\n> method, by breaking the file into tape chunks and sorting in pieces, the\nThe files grow until I have 6 files of almost a gig each. At that point, I\nstart running out of space...\nThis TAPE sotring method. It is a simple merge sort? Do you know of a way\nthis could be done while using constant space and no more complexity in\nthe algorithim. 
Even if it is a little slower, the DBMS could decide based\non the table size whether it should use the tape sort or another one...\nBubble sort would not be my first choice tho :)\n\n-Mike\n\n", "msg_date": "Sat, 16 May 1998 12:47:52 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "> \n> On Fri, 15 May 1998, Bruce Momjian wrote:\n> \n> > > I have a big table. 40M rows.\n> > > On the disk, its size is:\n> > > 2,090,369,024 bytes. So 2 gigs. On a 9 gig drive I can't sort this table.\n> > > How should one decide based on table size how much room is needed?\n> \n> > It is taking so much disk space because it is using a TAPE sorting\n> > method, by breaking the file into tape chunks and sorting in pieces, then\n> The files grow until I have 6 files of almost a gig each. At that point, I\n> start running out of space...\n> This TAPE sorting method: is it a simple merge sort? Do you know of a way\n> this could be done while using constant space and no more complexity in\n> the algorithm? Even if it is a little slower, the DBMS could decide based\n> on the table size whether it should use the tape sort or another one...\n> Bubble sort would not be my first choice tho :)\n\nTape sort is a standard Knuth sort. It basically sorts in pieces,\nand merges. If you don't do this, the accessing around gets very poor\nas you page fault all over the file, and the cache becomes useless.\n\nThere is something optimal about having seven sort files. Not sure what\nto suggest. No one has complained about this before.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 17 May 1998 00:22:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "> \n> > \n> > On Fri, 15 May 1998, Bruce Momjian wrote:\n> > \n> > > > I have a big table. 40M rows.\n> > > > On the disk, its size is:\n> > > > 2,090,369,024 bytes. So 2 gigs. On a 9 gig drive I can't sort this table.\n> > > > How should one decide based on table size how much room is needed?\n> > \n> > > It is taking so much disk space because it is using a TAPE sorting\n> > > method, by breaking the file into tape chunks and sorting in pieces, then\n> > The files grow until I have 6 files of almost a gig each. At that point, I\n> > start running out of space...\n> > This TAPE sorting method: is it a simple merge sort? Do you know of a way\n> > this could be done while using constant space and no more complexity in\n> > the algorithm? Even if it is a little slower, the DBMS could decide based\n> > on the table size whether it should use the tape sort or another one...\n> > Bubble sort would not be my first choice tho :)\n> \n> Tape sort is a standard Knuth sort. It basically sorts in pieces,\n> and merges. If you don't do this, the accessing around gets very poor\n> as you page fault all over the file, and the cache becomes useless.\n> \n> There is something optimal about having seven sort files. Not sure what\n> to suggest. No one has complained about this before.\n\nI think this is a bug. There is no reason to use more than a little bit over\nthree times the input size for a sort. This is: input file, run files, output\nfile. 
If we are not able to sort a 2 gig table on a 9 gig partition we need\nto fix it. I suspect we have a bug in the implementation, but perhaps we\nneed to look at our choice of algorithm. In any case this problem should go\non the todo list.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Sun, 17 May 1998 01:18:49 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "On Sun, 17 May 1998, David Gould wrote:\n\n> I think this is a bug. There is no reason to use more than a little bit over\n> three times the input size for a sort. This is: input file, run files, output\n> file. If we are not able to sort a 2 gig table on a 9 gig partition we need\n> to fix it. I suspect we have a bug in the implementation, but perhaps we\n> need to look at our choice of algorithm. In any case this problem should go\n> on the todo list.\n\n\tHave to agree here...\n\n\tMichael...if you were to dump that table into a text file, how big\nwould it turn out to be? Much smaller than 2gig, no? Then perform a Unix\nsort on that, how long would that take? Then reload the data...\n\n\tNeeding more than 7gig to sort a 2gig table sounds slightly off to\nme as well :(\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 17 May 1998 13:35:45 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "On Sun, 17 May 1998, Bruce Momjian wrote:\n\n> > > > I have a big table. 40M rows.\n> > > > On the disk, its size is:\n> > > > 2,090,369,024 bytes. So 2 gigs. 
On a 9 gig drive I can't sort this table.\n> > > > > How should one decide based on table size how much room is needed?\n> > \n> > Tape sort is a standard Knuth sorting. It basically sorts in pieces,\n> > and merges. If you don't do this, the accessing around gets very poor\n> > as you page fault all over the file, and the cache becomes useless.\n> Right. I wasn't reading the right chapter. Internal sorting is much\n> different than external sorts. Internal suggests the use of a Quicksort\n> algorithim.\n> Marc and I discussed over lunch. If I did a select * into, would it not\n> make more sense to sort the results into the resulting table rather than\n> into pieces and then copy into a table? From my limited knowlege, I think\n> this should save 8/7 N the space.\n> In this issue, I think there must be a lot more overhead than necessary.\n\nNot sure if the internal tape is the same structure as a real table, but\nI doubt it. I seem to remember there is less overhead.\n\n> The table consists of only\n> int4, int4, int2\n> I read 10 bytes / row of actual data here.\n> Instead, 40M/2gigs is about\n> 50 bytes / record\n> What is there other than oid (4? bytes)\n\nInternal stuff so it looks like a real table, even though it is a\nresult, I think.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 19 May 1998 21:50:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "On Tue, 19 May 1998, Bruce Momjian wrote:\n\n> > \n> > On Sun, 17 May 1998, Bruce Momjian wrote:\n> > \n> > > > > > I have a big table. 40M rows.\n> > > > > > On the disk, it's size is:\n> > > > > > 2,090,369,024 bytes. So 2 gigs. On a 9 gig drive I can't sort this table.\n> > > > > > How should one decide based on table size how much room is needed?\n> > > \n> > > Tape sort is a standard Knuth sorting. It basically sorts in pieces,\n> > > and merges. If you don't do this, the accessing around gets very poor\n> > > as you page fault all over the file, and the cache becomes useless.\n> > Right. I wasn't reading the right chapter. Internal sorting is much\n> > different than external sorts. Internal suggests the use of a Quicksort\n> > algorithim.\n> > Marc and I discussed over lunch. If I did a select * into, would it not\n> > make more sense to sort the results into the resulting table rather than\n> > into pieces and then copy into a table? From my limited knowlege, I think\n> > this should save 8/7 N the space.\n> > In this issue, I think there must be a lot more overhead than necessary.\n> \n> Not sure if the internal tape is the same structure as a real table, but\n> I doubt it. I seem to remember there is less overhead.\n> \n> > The table consists of only\n> > int4, int4, int2\n> > I read 10 bytes / row of actual data here.\n> > Instead, 40M/2gigs is about\n> > 50 bytes / record\n> > What is there other than oid (4? 
bytes)\n> \n> Internal stuff so it looks like a real table, even though it is a\n> result, I think.\n\nOkay...I get to jump in here with both feet and arms flailing :)\n\nMichael and I had lunch today and talked about this, and I asked him to\nsend an email in to the list about it...unfortunately, he didn't translate\nour chat very well for here :) \n\nThis whole thing makes absolutely no sense to me, as far as why it takes\n2.5 times more space to *sort* the table than the table size itself.\n\nHe starts with a 2gig table, and it runs out of disk space on a 9gig file\nsystem...\n\nNow, looking at question 3.26 in the FAQ, we have:\n\n40 bytes + each row header (approximate)\n10 bytes + two int4 fields + one int2 field\n 4 bytes + pointer on page to tuple\n-------- =\n54 bytes per row\n\nThe data page size in PostgreSQL is 8192 (8k) bytes, so:\n\n8192 bytes per page\n------------------- = 151 rows per database page (rounded down)\n 54 bytes per row\n\n40000000 data rows\n----------------- = 264901 database pages (rounded up)\n151 rows per page\n\n264901 database pages * 8192 bytes per page = 2,170,068,992 or ~2.0gig\n\nNow, as a text file, this would amount to, what...~50MB?\n\nSo, if I were to do a 'copy out' to a text file, a Unix sort and then a\n'copy in', I would use up *less* disk space (by several orders of\nmagnitude) than doing the sort inside of PostgreSQL?\n\nWhy? \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 19 May 1998 23:53:44 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> Now, as a text file, this would amount to, what...~50MB?\n40M of records to produce a 50MB text file? How would you sort such a\n*compressed* file? ;-)\n \n> So, if I were to do a 'copy out' to a text file, a Unix sort and then a\n> 'copy in', I would use up *less* disk space (by several orders of\n> magnitude) than doing the sort inside of PostgreSQL?\n\nWell, I think it might be optimised slightly. Am I right that postgres\nuses heap (i.e. they look like tables) files during sorting? While this\nis a merge sort, those files don't have to be table-like files.\nCertainly, they might be variable-length records without pages (aren't they\nused sequentially?). Moreover we would consider packing tape files before\nwriting them down if necessary. Of course it will result in some\nperformance drop. However it's better to have less performance than\nbeing unable to sort it at all.\n\nLast question... What's the purpose of such a big sort? If somebody gets\n40M of sorted records in a result of some query, what would he do with\nit? Is he going to spend the next few years reading it? I mean,\nisn't it worth querying the database for only the necessary information and\nthen sorting it?\n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n", "msg_date": "Wed, 20 May 1998 14:12:15 +0200", "msg_from": "Michal Mosiewicz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "On Wed, 20 May 1998, Michal Mosiewicz wrote:\n\n> The Hermit Hacker wrote:\n> \n> > Now, as a text file, this would amount to, what...~50MB?\n> 40M of records to produce a 50MB text file? How would you sort such a\n> *compressed* file? 
;-)\n\nMy math off? 40M rows at 11bytes each (2xint4+int2+\\n?) oops...ya, just\noff by a factor of ten...still, 500MB is a quarter of the size of the 2gig\nfile we started with...\n\n> > So, if I were to do a 'copy out' to a text file, a Unix sort and then a\n> > 'copy in', I would use up *less* disk space (by several orders of\n> > magnitude) then doing the sort inside of PostgreSQL?\n> \n> Well, I think it might be optimised slightly. Am I right that postgres\n> uses heap (i.e. they look like tables) files during sorting? While this\n> is a merge sort, those files doesn't have to be a table-like files.\n> Certainly, they might variable length records without pages (aren't they\n> used sequentially). Moreover we would consider packing tape files before\n> writting them down if necessary. Of course it will result in some\n> performance dropdown. However it's better to have less performance that\n> being unable to sort it at all.\n> \n> Last question... What's the purpose of such a big sort? If somebody gets\n> 40M of sorted records in a result of some query, what would he do with\n> it? Is he going to spent next years on reading this lecture? I mean,\n> isn't it worth to query the database for necessary informations only and\n> then sort it?\n\n\tthis I don't know...I never even really thought about that,\nactually...Michael? :) Only you can answer that one.\n\n\n", "msg_date": "Wed, 20 May 1998 08:24:19 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "> \n> On Wed, 20 May 1998, Michal Mosiewicz wrote:\n> \n> > The Hermit Hacker wrote:\n> > \n> > > Now, as a text file, this would amount to, what...~50MB?\n> > 40M of records to produce a 50MB text file? How would you sort such a\n> > *compressed* file? ;-)\n> \n> My math off? 40M rows at 11bytes each (2xint4+int2+\\n?) oops...ya, just\n> off by a factor of ten...still, 500MB is a quarter of the size of the 2gig\n> file we started with...\n\nActually, my description of the use of tape files was somewhat off. \nActually, the file is sorted by putting several batches in each tape\nfile, then reading the batches make another tape file with bigger\nbatches until there is one tape file and one big sorted batch. Also, if\nthe data is already sorted, it can do it in one pass, without making all\nthose small batches because of the way the data structure sorts them in\nmemory. Only Knuth can do the description justice, but suffice it to\nsay that the data can appear up to two places at once.\n\nThis is the first time I remember someone complaining about it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 20 May 1998 10:22:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "> > Well, I think it might be optimised slightly. Am I right that postgres\n> > uses heap (i.e. they look like tables) files during sorting? While this\n> > is a merge sort, those files doesn't have to be a table-like files.\n> > Certainly, they might variable length records without pages (aren't they\n> > used sequentially). Moreover we would consider packing tape files before\n> > writting them down if necessary. Of course it will result in some\n> > performance dropdown. 
However it's better to have less performance that\n> > being unable to sort it at all.\n> > \n> > Last question... What's the purpose of such a big sort? If somebody gets\n> > 40M of sorted records in a result of some query, what would he do with\n> > it? Is he going to spent next years on reading this lecture? I mean,\n> > isn't it worth to query the database for necessary informations only and\n> > then sort it?\n> \n> \tthis I don't know...I never even really thought about that,\n> actually...Michael? :) Only you can answer that one.\n\nI have an idea. Can he run CLUSTER on the data? If so, the sort will\nnot use small batches, and the disk space during sort will be reduced. \nHowever, I think CLUSTER will NEVER finish on such a file, unless it is\nalready pretty well sorted.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 20 May 1998 10:23:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "On Wed, 20 May 1998, Bruce Momjian wrote:\n\n> > > Well, I think it might be optimised slightly. Am I right that postgres\n> > > uses heap (i.e. they look like tables) files during sorting? While this\n> > > is a merge sort, those files doesn't have to be a table-like files.\n> > > Certainly, they might variable length records without pages (aren't they\n> > > used sequentially). Moreover we would consider packing tape files before\n> > > writting them down if necessary. Of course it will result in some\n> > > performance dropdown. However it's better to have less performance that\n> > > being unable to sort it at all.\n> > > \n> > > Last question... What's the purpose of such a big sort? If somebody gets\n> > > 40M of sorted records in a result of some query, what would he do with\n> > > it? Is he going to spent next years on reading this lecture? I mean,\n> > > isn't it worth to query the database for necessary informations only and\n> > > then sort it?\n> > \n> > \tthis I don't know...I never even really thought about that,\n> > actually...Michael? :) Only you can answer that one.\n> \n> I have an idea. Can he run CLUSTER on the data? If so, the sort will\n> not use small batches, and the disk space during sort will be reduced. \n> However, I think CLUSTER will NEVER finish on such a file, unless it is\n> already pretty well sorted.\n\n\tOkay...then we *do* have a table size limit problem? Tables that\njust get too large to be manageable? Maybe this is one area we should be\nlooking at as far as performance is concerned?\n\n\tOne thing that just pop'd to mind, concerning the above CLUSTER\ncommand...what would it take to have *auto-cluster'ng*? Maybe provide a\nmeans of marking a field in a table for this purpose?\n\n\tOne of the things that the Unix FS does is auto-defragmenting, at\nleast the UFS one does. Whenever the system is idle (from my\nunderstanding), the kernel uses that time to clean up the file systems, to\nreduce the file system fragmentation.\n\n\tThis is by no means SQL92, but it would be a neat\n\"extension\"...let me specify a \"CLUSTER on\" field. Then, as I'm entering\ndata into the database, periodically check for fragmentation of the data\nand clean up accordingly. 
If done by the system, reasonably often, it\nshouldn't take up *too* much time, as most of the data should already be\nin order...\n\n\tThat would have the side-benefit of speeding up the \"ORDER by\" on\nthat field also...\n\n\t\n\n", "msg_date": "Wed, 20 May 1998 10:50:11 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "> > I have an idea. Can he run CLUSTER on the data? If so, the sort will\n> > not use small batches, and the disk space during sort will be reduced. \n> > However, I think CLUSTER will NEVER finish on such a file, unless it is\n> > already pretty well sorted.\n> \n> \tOkay...then we *do* have a table size limit problem? Tables that\n> just get too large to be manageable? Maybe this is one area we should be\n> looking at as far as performance is concerned?\n\nWell, cluster moves one row at a time, so if the table is very\nfragmented, the code is slow because it is seeking all over the table. \nSee the cluster manual pages for an alternate solution that uses ORDER\nBY.\n\n\n> \n> \tOne thing that just pop'd to mind, concerning the above CLUSTER\n> command...what would it take to have *auto-cluster'ng*? Maybe provide a\n> means of marking a field in a table for this purpose?\n\nHard to do. That's what we have indexes for.\n\n> \n> \tOne of the things that the Unix FS does is auto-defragmenting, at\n> least the UFS one does. Whenever the system is idle (from my\n> understanding), the kernel uses that time to clean up the file systems, to\n> reduce the file system fragmentation.\n> \n> \tThis is by no means SQL92, but it would be a neat\n> \"extension\"...let me specify a \"CLUSTER on\" field. Then, as I'm entering\n> data into the database, periodically check for fragmentation of the data\n> and clean up accordingly. If done by the system, reasonably often, it\n> shouldn't take up *too* much time, as most of the data should already be\n> in order...\n> \n> \tThat would have the side-benefit of speeding up the \"ORDER by\" on\n> that field also...\n\nWe actually can have a CLUSTER ALL command that does this. No one has\nimplemented it yet.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 20 May 1998 11:02:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "\nOn Wed, 20 May 1998, The Hermit Hacker wrote:\n\n> \tOne of the things that the Unix FS does is auto-defragmenting, at\n> least the UFS one does. 
Whenever the system is idle (from my\n> > understanding), the kernel uses that time to clean up the file systems, to\n> > reduce the file system fragmentation.\n> \n> No, that doesn't happen. The only way to eliminate fragmentation is a\n> dump/newfs/restore cycle. UFS does do fragmentation avoidance (which is\n> reason UFS filesystems have a 10% reserve).\n\n\tOkay, then we have two different understandings of this. My\nunderstanding was that the 10% reserve gave the OS a 'temp area' in which\nto move blocks to/from so that it could defrag on the fly...\n\n\tAm CC'ng this into [email protected] for a \"third\nopinion\"...am willing to admit I'm wrong *grin*\n\n\n", "msg_date": "Wed, 20 May 1998 13:17:34 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "> \tOkay, then we have two different understandings of this. My\n> understanding was that the 10% reserve gave the OS a 'temp area' in which\n> to move blocks to/from so that it could defrag on the fly...\n> \n> \tAm CC'ng this into [email protected] for a \"third\n> opinion\"...am willing to admit I'm wrong *grin*\n\nYou are wrong.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 20 May 1998 13:27:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "On Wed, 20 May 1998, Bruce Momjian wrote:\n\n> > \tOkay, then we have two different understandings of this. My\n> > understanding was that the 10% reserve gave the OS a 'temp area' in which\n> > to move blocks to/from so that it could defrag on the fly...\n> > \n> > \tAm CC'ng this into [email protected] for a \"third\n> > opinion\"...am willing to admit I'm wrong *grin*\n> \n> You are wrong.\n\n\tI just love short answers *roll eyes*\n\n\n", "msg_date": "Wed, 20 May 1998 13:28:07 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "> \n> On Wed, 20 May 1998, Bruce Momjian wrote:\n> \n> > > \tOkay, then we have two different understandings of this. My\n> > > understanding was that the 10% reserve gave the OS a 'temp area' in which\n> > > to move blocks to/from so that it could defrag on the fly...\n> > > \n> > > \tAm CC'ng this into [email protected] for a \"third\n> > > opinion\"...am willing to admit I'm wrong *grin*\n> > \n> > You are wrong.\n> \n> \tI just love short answers *roll eyes*\n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 20 May 1998 13:30:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "On Wed, May 20, 1998 at 01:17:34PM -0400, The Hermit Hacker wrote:\n> On Wed, 20 May 1998, Tom wrote:\n> > No, that doesn't happen. The only way to eliminate fragmentation is a\n> > dump/newfs/restore cycle. UFS does do fragmentation avoidance (which is\n> > reason UFS filesystems have a 10% reserve).\n> \n> \tOkay, then we have two different understandings of this. 
My\n> understanding was that the 10% reserve gave the OS a 'temp area' in which\n> to move blocks to/from so that it could defrag on the fly...\n\nNo. What is done is (quite correctly) fragmentation avoidance. Big\nfiles are even sometimes fragmented on purpose, to allow small files\nthat are written later to avoid being fragmented.\n\nEivind.\n", "msg_date": "Wed, 20 May 1998 22:44:30 +0200", "msg_from": "Eivind Eklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "\n> Last question... What's the purpose of such a big sort? If somebody gets\n> 40M of sorted records in a result of some query, what would he do with\n> it? Is he going to spend the next few years reading it? I mean,\n> isn't it worth querying the database for only the necessary information and\n> then sorting it?\n\nNot all query results are for human eyes. I'd venture to say that more\nqueries are fed into report generators for formatting than are looked at\ndirectly from psql.\n\nA sort is required in some cases where not explicitly requested.\n\nFor example, a GROUP BY clause. You _could_ get the data back ungrouped,\nbut then you'd have to pipe it to another application or script to do the\nsorting and then the grouping (or perhaps group on the fly). But then\nperhaps that app/script will eat all the memory or disk space and you'd\nbe in the same pickle as before.\n\ndarrenk\n", "msg_date": "Wed, 20 May 1998 19:44:31 -0400", "msg_from": "\"Stupor Genius\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] sorting big tables :(" }, { "msg_contents": "Hello!\n\nOn Wed, 20 May 1998, The Hermit Hacker wrote:\n> > No, that doesn't happen. The only way to eliminate fragmentation is a\n> > dump/newfs/restore cycle. UFS does do fragmentation avoidance (which is\n> > reason UFS filesystems have a 10% reserve).\n> \n> \tOkay, then we have two different understandings of this. My\n> understanding was that the 10% reserve gave the OS a 'temp area' in which\n> to move blocks to/from so that it could defrag on the fly...\n\n No, you are wrong. This 10% is a temp area reserved for emergency\nsituations - when root brings the system down to single-user and does system\nmaintenance.\n\nOleg.\n---- \n Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 21 May 1998 10:12:50 +0400 (MSK DST)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" }, { "msg_contents": "On Thu, 21 May 1998, Oleg Broytmann wrote:\n\n> Hello!\n> \n> On Wed, 20 May 1998, The Hermit Hacker wrote:\n> > > No, that doesn't happen. The only way to eliminate fragmentation is a\n> > > dump/newfs/restore cycle. UFS does do fragmentation avoidance (which is\n> > > reason UFS filesystems have a 10% reserve).\n> > \n> > \tOkay, then we have two different understandings of this. My\n> > understanding was that the 10% reserve gave the OS a 'temp area' in which\n> > to move blocks to/from so that it could defrag on the fly...\n> \n> No, you are wrong. This 10% is a temp area reserved for emergency\n> situations - when root brings the system down to single-user and does system\n> maintenance.\n\n\tActually, in this one you are only partly right. Only root has\n*access* to using that extra 10%, but, as I've been corrected by several\nppl, including a couple on the FreeBSD list, that 10% is meant to\n*prevent/reduce* fragmentation. 
\n\n", "msg_date": "Thu, 21 May 1998 07:47:13 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sorting big tables :(" } ]
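For readers skimming this thread, here is a minimal sketch of the external ("tape") merge sort being discussed, assuming fixed-size int records; the two-way merge and all helper names are simplifications chosen for illustration, not the backend's actual psort.c code. The point it demonstrates is David Gould's space bound: if each merge pass consumes its input runs as it produces the merged one, you never need much more than the input, the live runs, and the output at once.

#include <stdio.h>
#include <stdlib.h>

#define RUN_LEN 4096            /* records that fit "in memory" */

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *) a, y = *(const int *) b;
    return (x > y) - (x < y);
}

/* Phase 1: cut the input into sorted runs, one temp file per run.
 * Error handling omitted for brevity. */
static int make_runs(FILE *in, FILE **runs, int max_runs)
{
    static int buf[RUN_LEN];
    int nruns = 0;
    size_t n;

    while (nruns < max_runs &&
           (n = fread(buf, sizeof(int), RUN_LEN, in)) > 0)
    {
        qsort(buf, n, sizeof(int), cmp_int);
        runs[nruns] = tmpfile();
        fwrite(buf, sizeof(int), n, runs[nruns]);
        rewind(runs[nruns]);
        nruns++;
    }
    return nruns;
}

/* Phase 2: two-way merge of two sorted runs; the inputs are closed
 * (and their temp files reclaimed) as soon as they are consumed. */
static FILE *merge2(FILE *a, FILE *b)
{
    FILE *out = tmpfile();
    int va, vb;
    int have_a = fread(&va, sizeof(int), 1, a) == 1;
    int have_b = fread(&vb, sizeof(int), 1, b) == 1;

    while (have_a || have_b)
    {
        if (have_a && (!have_b || va <= vb))
        {
            fwrite(&va, sizeof(int), 1, out);
            have_a = fread(&va, sizeof(int), 1, a) == 1;
        }
        else
        {
            fwrite(&vb, sizeof(int), 1, out);
            have_b = fread(&vb, sizeof(int), 1, b) == 1;
        }
    }
    fclose(a);
    fclose(b);
    rewind(out);
    return out;
}

int main(int argc, char **argv)
{
    FILE *runs[64];
    FILE *in;
    int n, i;

    if (argc != 2 || (in = fopen(argv[1], "rb")) == NULL)
    {
        fprintf(stderr, "usage: extsort file-of-binary-ints\n");
        return 1;
    }
    n = make_runs(in, runs, 64);
    fclose(in);
    while (n > 1)               /* merge pairs until one run remains */
    {
        int m = 0;
        for (i = 0; i + 1 < n; i += 2)
            runs[m++] = merge2(runs[i], runs[i + 1]);
        if (i < n)              /* odd run out: carry it forward */
            runs[m++] = runs[i];
        n = m;
    }
    /* runs[0] (if any) now holds all records in sorted order */
    return 0;
}

A production sort merges many runs per pass (hence the seven "tape" files) to cut the number of times the whole data set is reread and rewritten; the blow-up Michael saw suggests intermediate generations were not being reclaimed as eagerly as this sketch reclaims them.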
[ { "msg_contents": "\n\n\tI took the time today to take a close look at the regression tests\nthat fail when they are run on Linux/Alpha. I used the May 16th snapshot\non my RH4.0 UDB for the tests.\n\n\tThere are four types of failures that I could determine....\n\n* Error message mismatch: One error message is in the expected regression\nresults, but the actual results generate a different one, though they both\nhave the same meaning. Though for float{4,8} the expected on some was an\nunderflow, and instead a zero was silently inserted. Might be due to\ncatching the error at different points, or due to #ifdefs. These occurred\nmainly for int2, oidint2, and float{4,8} tests. Harmless for now. \n\n* Range mismatch: Apparently Linux/Alpha thinks that such things as int4\nand oidint4 have a larger range than the expected results do. The result\nis that some inserts of larger numbers that should have failed, did not,\nand caused additional rows returned on selects. Probably due to 32bit\nexpected results vs. 64bit actual results. Needs to be fixed, but not\nfatal for now.\n\n* Complete failures: These are the tests that resulted in postgres\nthrowing an arithmetic trap (as reported by the kernel), and sometimes\nseg faults by psql. These only happened on the time and date related\ntests, such as datetime, abstime, etc... Apparently date and time are\ntotally broken for Linux/Alpha at this time. These are the first problems\nthat need to be solved.\n\n* Cascade Effects: Simply more advanced tests that depend on lower level\ntests to succeed. If they don't, then these don't. This includes both\nHorology and Random. Should go away if we fix the above (two, three?)\nproblems.\n\n\tThat is about where things stand at the moment. I plan to start\nlooking at the date/time stuff soon, but I don't know how far I will get,\ndue to my limited C hacking abilities (nothing like learning by doing\nthough... :). Any advice/pointers/suggestions to get especially date/time\nworking, or on fixing any of these above problems would be greatly\nappreciated. Thanks!\n\n\tPS. If anyone is just dying to fix any of these problems\nthemselves, then feel free to do so. :) Just tell me, so we don't\nduplicate work, and make sure I get any patches you make so I can test\nthings on my end! \n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n", "msg_date": "Sat, 16 May 1998 10:48:17 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Regression Test Analysis for Linux/Alpha..." } ]
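For what it's worth, the int4 "range mismatch" above is easy to reproduce in isolation if one assumes the input parser accumulates into a C long, which is 64 bits on Linux/Alpha; this is a stand-alone sketch of that assumption, not the backend's actual pg_atoi() code.

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <limits.h>

int main(void)
{
    const char *s = "3000000000";   /* past 2^31-1, fits a 64-bit long */
    long l;

    errno = 0;
    l = strtol(s, NULL, 10);
    if (errno == ERANGE)
        printf("strtol itself overflowed (32-bit long)\n");
    else
        printf("strtol accepted %ld; a 64-bit long hides the overflow\n", l);

    /* an explicit int4 bounds check catches it on either platform */
    if (l > INT_MAX || l < INT_MIN)
        printf("rejected: out of int4 range\n");
    return 0;
}

If that is indeed what is happening, adding the explicit bounds check to the int4 input routine would make 32- and 64-bit platforms reject the same inputs, and the extra regression rows would disappear.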
[ { "msg_contents": "OK, thanks to Tom Lane's many patches, I have query cancel working on my\nmachine. However, it is not working with Unix domain sockets. I get:\n\n\tCannot send cancel request:\n\tPQrequestCancel() -- couldn't send OOB data: errno=45\n\tOperation not supported\n\nThis is under BSDI 3.1.\n\nDo Unix domain sockets support OOB (out-of-band) data?\n\nI will commit patches soon.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n\n", "msg_date": "Mon, 18 May 1998 00:25:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Query cancel and OOB data" } ]
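Errno 45 on BSD-derived systems is EOPNOTSUPP, which matches what Stevens predicts: OOB data is a TCP stream-socket feature, not an AF_UNIX one. A minimal sketch that reproduces the failure without a backend follows; exact errno value and wording will vary by platform.

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

int main(void)
{
    int sv[2];
    char c = 0;

    /* a connected pair of Unix domain stream sockets */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
    {
        perror("socketpair");
        return 1;
    }
    if (send(sv[0], &c, 1, MSG_OOB) < 0)
        printf("send OOB: errno=%d (%s)\n", errno, strerror(errno));
    else
        printf("send OOB unexpectedly succeeded\n");
    return 0;
}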
[ { "msg_contents": "I'm currently feeding ecpg with Oracle's examples. It accepts almost all of\nit, except the prepare statement and the typedef stuff. I'd like to do the\nsame with examples from other DBs. So if you have embedded SQL stuff from\nIngres, Informix, Sybase, whatever, please try it with ecpg and send me\nthose statements that cause a parse error. Of course you can send me your\nwhole examples files, and I run the tests myself.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Mon, 18 May 1998 10:47:59 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "example code" } ]
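For anyone curious what trips the parser, the Oracle demos exercise dynamic SQL along roughly these lines; the connection target, table, and host-variable names below are invented for the example, so only the PREPARE/EXECUTE shape matters.

#include <stdio.h>

EXEC SQL BEGIN DECLARE SECTION;
    char stmt_buf[256];
    int  empno;
EXEC SQL END DECLARE SECTION;

int main(void)
{
    EXEC SQL CONNECT TO testdb;

    /* build a statement at run time, then hand it to the server */
    sprintf(stmt_buf, "DELETE FROM emp WHERE empno = ?");
    EXEC SQL PREPARE del_stmt FROM :stmt_buf;   /* the construct ecpg rejects */

    empno = 7499;
    EXEC SQL EXECUTE del_stmt USING :empno;

    EXEC SQL COMMIT;
    EXEC SQL DISCONNECT;
    return 0;
}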
[ { "msg_contents": "Hi everybody,\n\nCan anybody who has access to the www.postgresql.org website add a \nmention of the pgsql-interfaces mailing list?\n\nI wrote to the webmaster there with no results.\n\nI think it should reduce the number of people who try to subscribe to \nthe old (officially dead) postodbc list ;)\n\nHannu\n", "msg_date": "Mon, 18 May 1998 14:48:27 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "WWW site has no mention of interfaces list" } ]
[ { "msg_contents": "I can't remember what the outcome was, but what about UNICODE?\n\nOne of the partially implemented bits of JDBC is the handling of UNICODE\nstrings (which Java uses all the time).\n\n--\nPeter T Mount, [email protected], [email protected]\nJDBC FAQ: http://www.retep.org.uk/postgres\nPlease note that this is from my works email. If you reply, please cc my\nhome address.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On\nBehalf Of Thomas G. Lockhart\nSent: Monday, May 18, 1998 4:43 PM\nTo: [email protected]\nCc: Mattias Kregert; Postgres Hackers List; [email protected]; Tatsuo\nIshii\nSubject: Re: [HACKERS] Re: [PATCHES] char/varchar locale support\n\n\n> > Shouldn't this be done only for NATIONAL CHAR?\n> It is what USE_LOCALE is intended for, isn't it?\n\nSQL92 defines NATIONAL CHAR/VARCHAR as the data type to support implicit\nlocal character sets. The usual CHAR/VARCHAR would use the default\nSQL_TEXT character set. I suppose we could extend it to include NATIONAL\nTEXT also...\n\nAdditionally, SQL92 allows one to specify an explicit character set and\nan explicit collating sequence. The standard is not explicit on how one\nactually makes these known to the database, but Postgres should be well\nsuited to accomplishing this.\n\nAnyway, I'm not certain how common and wide-spread the NATIONAL CHAR\nusage is. Would users with installations having non-English data find\nusing NCHAR/NATIONAL CHAR/NATIONAL CHARACTER an inconvenience? Or would\nmost non-English installations find this better and more solid??\n\nAt the moment we have support for Russian and Japanese character sets,\nand these would need the maintainers to agree to changes.\n\nbtw, if we do implement NATIONAL CHARACTER I would like to do so by\nhaving it fit in with the full SQL92 character sets and collating\nsequences capabilities. Then one could specify what NATIONAL CHAR means\nfor an installation or perhaps at run time without having to\nrecompile...\n\n - Tom\n\n", "msg_date": "Mon, 18 May 1998 16:50:39 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: [PATCHES] char/varchar locale support" }, { "msg_contents": "> I can't remember what the outcome was, but what about UNICODE?\n> One of the partially implemented bits of JDBC is the handling of \n> UNICODE strings (which Java uses all the time).\n\nI can't remember the outcome either, but when this was discussed on the\nlist earlier I had posted a url reference to a character coding\ndiscussion from the DocBook SGML folks. I vaguely recall that (for their\ntypesetting purposes) UNICODE didn't solve all problems.\n\nI also vaguely recall that the most common extended-byte encoding\nsequence is that used in Japan (EUC-jp?).\n\nAre we ready to gear up for another discussion on this topic? If so,\nsomeone should go through the archives and summarize the previous\ndiscussions so we don't re-invent the wheel...\n\n - Tom\n", "msg_date": "Mon, 18 May 1998 16:29:13 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] char/varchar locale support" }, { "msg_contents": "\nspeaking of archives, the digest archives are a little hard to use..\na standard mailing list archive would be grand -- I can probably point\nto some software if need be. are the postgres lists archived in the\nstandard majordomo way? (as in, berkeley mail format?)\n\nOn Mon, 18 May 1998, at 16:29:13, Thomas G. 
Lockhart wrote:\n\n> > I can't remember what the outcome was, but what about UNICODE?\n> > One of the partially implemented bits of JDBC is the handling of \n> > UNICODE strings (which Java uses all the time).\n> \n> I can't remember the outcome either, but when this was discussed on the\n> list earlier I had posted a url reference to a character coding\n> discussion from the DocBook SGML folks. I vaguely recall that (for their\n> typesetting purposes) UNICODE didn't solve all problems.\n> \n> I also vaguely recall that the most common extended-byte encoding\n> sequence is that used in Japan (EUC-jp?).\n> \n> Are we ready to gear up for another discussion on this topic? If so,\n> someone should go through the archives and summarize the previous\n> discussions so we don't re-invent the wheel...\n> \n> - Tom\n", "msg_date": "Mon, 18 May 1998 09:31:08 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] char/varchar locale support" } ]
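Some byte-level background for the UNICODE question: Java strings are Unicode, so a JDBC driver and the server must agree on a wire encoding (UTF-8, EUC-jp, ...). The sketch below assumes nothing about how Postgres would actually store such data; it just encodes one code point to UTF-8 (three-byte forms at most, i.e. the BMP) to show the kind of conversion any such support involves.

#include <stdio.h>

static int utf8_encode(unsigned int cp, unsigned char *out)
{
    if (cp < 0x80)                       /* plain ASCII: one byte */
    {
        out[0] = cp;
        return 1;
    }
    if (cp < 0x800)                      /* two-byte form */
    {
        out[0] = 0xC0 | (cp >> 6);
        out[1] = 0x80 | (cp & 0x3F);
        return 2;
    }
    out[0] = 0xE0 | (cp >> 12);          /* three-byte form */
    out[1] = 0x80 | ((cp >> 6) & 0x3F);
    out[2] = 0x80 | (cp & 0x3F);
    return 3;
}

int main(void)
{
    unsigned char buf[4];
    int i, n = utf8_encode(0x30C6, buf); /* KATAKANA LETTER TE */

    for (i = 0; i < n; i++)
        printf("%02X ", buf[i]);
    printf("\n");                        /* prints: E3 83 86 */
    return 0;
}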
[ { "msg_contents": "> Bruce Momjian wrote:\n> \n> > OK, thanks to Tom Lane's many patches, I have query cancel working on my\n> > machine. However, it is not working with Unix domain sockets. I get:\n> >\n> > Cannot send cancel request:\n> > PQrequestCancel() -- couldn't send OOB data: errno=45\n> > Operation not supported\n> >\n> > This is under BSDI 3.1.\n> >\n> > Do Unix domain sockets support OOB (out-of-band) data?\n> >\n> \n> Unix domain sockets don't support OOB (Stevens, Unix Network Programming).\n\nYea, I found that too, late last night, Section 6.14, page 332.\n\nI basically need some way to 'signal' the backend of a cancellation\nrequest. Polling the socket is not an option because it would impose\ntoo great a performance penalty. Maybe async-io on a read(), but that\nis not going to be very portable.\n\nI could pass the backend pid to the front end, and send a kill(SIG_URG)\nto that pid on a cancel, but the frontend can be running as a different\nuser than the backend. Problem is, the only communication channel is\nthat unix domain socket.\n\nWe basically need some way to get the attention of the backend,\nhopefully via some signal.\n\nAny ideas?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 18 May 1998 13:35:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "> > Bruce Momjian wrote:\n> > \n> > > OK, thanks to Tom Lane's many patches, I have query cancel working on my\n> > > machine. However, it is not working with Unix domain sockets. I get:\n> > >\n> > > Cannot send cancel request:\n> > > PQrequestCancel() -- couldn't send OOB data: errno=45\n> > > Operation not supported\n> > >\n> > > This is under BSDI 3.1.\n> > >\n> > > Do Unix domain sockets support OOB (out-of-band) data?\n> > >\n> > \n> > Unix domain sockets don't support OOB (Stevens, Unix Network Programming).\n> \n> Yea, I found that too, late last night, Section 6.14, page 332.\n> \n> I basically need some way to 'signal' the backend of a cancellation\n> request. Polling the socket is not an option because it would impose\n> too great a performance penalty. Maybe async-io on a read(), but that\n> is not going to be very portable.\n> \n> I could pass the backend pid to the front end, and send a kill(SIG_URG)\n> to that pid on a cancel, but the frontend can be running as a different\n> user than the backend. Problem is, the only communication channel is\n> that unix domain socket.\n> \n> We basically need some way to get the attention of the backend,\n> hopefully via some signal.\n> \n> Any ideas?\n\nUse TCP. On most modern systems (eg Linux ;-) ), TCP especially on the local\nmachine is very efficient. 
Not quite as efficient as a Unix domain socket,\nbut close enough that no one will notice.\n\nTo investigate this, see Larry McVoy's wonderful lmbench suite...\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Mon, 18 May 1998 10:59:29 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "> \n> > > Bruce Momjian wrote:\n> > > \n> > > > OK, thanks to Tom Lane's many patches, I have query cancel working on my\n> > > > machine. However, it is not working with Unix domain sockets. I get:\n> > > >\n> > > > Cannot send cancel request:\n> > > > PQrequestCancel() -- couldn't send OOB data: errno=45\n> > > > Operation not supported\n> > > >\n> > > > This is under BSDI 3.1.\n> > > >\n> > > > Do Unix domain sockets support OOB (out-of-band) data?\n> > > >\n> > > \n> > > Unix domain sockets don't support OOB (Stevens, Unix Network Programming).\n> > \n> > Yea, I found that too, late last night, Section 6.14, page 332.\n> > \n> > I basically need some way to 'signal' the backend of a cancellation\n> > request. Polling the socket is not an option because it would impose\n> > too great a performance penalty. Maybe async-io on a read(), but that\n> > is not going to be very portable.\n> > \n> > I could pass the backend pid to the front end, and send a kill(SIG_URG)\n> > to that pid on a cancel, but the frontend can be running as a different\n> > user than the backend. Problem is, the only communication channel is\n> > that unix domain socket.\n> > \n> > We basically need some way to get the attention of the backend,\n> > hopefully via some signal.\n> > \n> > Any ideas?\n> \n> Use TCP. On most modern systems (eg Linux ;-) ), TCP especially on the local\n> machine is very efficient. Not quite as efficient as a Unix domain socket,\n> but close enough that no one will notice.\n> \n> To investigate this, see Larry McVoy's wonderful lmbench suite...\n\nWe implemented Unix domain sockets for performance, and security. Hard\nto beat a Unix domain socket's security. I need the SIG_URG signal.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 18 May 1998 22:17:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Yea, I found that too, late last night, Section 6.14, page 332.\n> \n> I basically need some way to 'signal' the backend of a cancellation\n> request. Polling the socket is not an option because it would impose\n> too great a performance penalty. Maybe async-io on a read(), but that\n> is not going to be very portable.\n> \n> I could pass the backend pid to the front end, and send a kill(SIG_URG)\n> to that pid on a cancel, but the frontend can be running as a different\n> user than the backend. 
Problem is, the only communication channel is\n> that unix domain socket.\n> \n> We basically need some way to get the attention of the backend,\n> hopefully via some signal.\n> \n> Any ideas?\n\npostmaster could be listening (adding to select()) on a \"signal socket\"\nfor cancel requests and shoot down its children on request.\n\nhow do we make such a scheme secure ??\n\n\tregards,\n-- \n---------------------------------------------\nGöran Thyni, sysadm, JMS Bildbasen, Kiruna", "msg_date": "Tue, 19 May 1998 14:29:50 +0200", "msg_from": "\"Göran Thyni\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "> > We basically need some way to get the attention of the backend,\n> > hopefully via some signal.\n> > \n> > Any ideas?\n> \n> postmaster could be listening (adding to select()) on a \"signal socket\"\n> for cancel requests and shoot down its children on request.\n> \n> how do we make such a scheme secure ??\n\n\nI think that is the big question.\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 19 May 1998 15:33:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "> I basically need some way to 'signal' the backend of a cancellation\n> request. Polling the socket is not an option because it would impose\n> too great a performance penalty. Maybe async-io on a read(), but that\n> is not going to be very portable.\n> \n> I could pass the backend pid to the front end, and send a kill(SIG_URG)\n> to that pid on a cancel, but the frontend can be running as a different\n> user than the backend. Problem is, the only communication channel is\n> that unix domain socket.\n> \n> We basically need some way to get the attention of the backend,\n> hopefully via some signal.\n\nOK, I think I have a solution. I recommend we pass the backend pid to\nthe client as part of connection startup. Then, when the client wants\nto cancel a query, it sends a cancel packet to its backend (new packet\ntype), and then sends that pid to the postmaster with a new packet type.\n\nWhen the postmaster receives the packet with the pid, it sends a signal\nto that pid/backend. The backend does a recv(MSG_PEEK) to see if it has\na pending packet with a cancel request. If it does, it cancels, if not,\nit ignores it. In the read loop of the backend, all cancel requests are\nignored.\n\nSo the cancel packet to the postmaster only causes the backend to look\nfor a pending cancel packet.\n\nThis does a few things for us. It allows us to use cancel in unix\ndomain sockets, and in Java or anything that can't support OOB. In\nfact, I would recommend discarding OOB in favor of this method.\n\nAlso, it does not require the postmaster to authenticate the cancel\nrequest. This could be hard, especially if the user has to type in a\npassword. No one wants to type in a password to cancel a query.\n\nComments?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Fri, 22 May 1998 10:48:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, I think I have a solution. I recommend we pass the backend pid to\n> the client as part of connection startup. Then, when the client wants\n> to cancel a query, it sends a cancel packet to its backend (new packet\n> type), and then sends that pid to the postmaster with a new packet type.\n\n> When the postmaster receives the packet with the pid, it sends a signal\n> to that pid/backend. The backend does a recv(MSG_PEEK) to see if it has\n> a pending packet with a cancel request. If it does, it cancels, if not,\n> it ignores it. In the read loop of the backend, all cancel requests are\n> ignored.\n\nOK, I guess the point of sending the normal-channel packet is to\nauthenticate the cancel request? Otherwise anyone could send a cancel\nrequest to the postmaster, if they know the backend PID.\n\nI see a few flaws however:\n\n1. What if the postmaster/signal/backend path is completed before the\nnormal-channel cancel packet arrives? The backend looks, sees no\npacket, and ignores the request. Oops. This scenario is not at all\nimplausible across a remote connection, since the first transmission\nof the normal-channel packet might be lost to a data glitch. By the\ntime the client-side TCP stack decides to retransmit, it's too late.\n\n2. I don't think you could use this to abort out of a COPY IN transfer,\nbecause the confirmation packet would be impossible to distinguish\nfrom data reliably. In general there's a risk of confusion if the\nserver might be looking for the confirmation packet when the client\nthinks it's in the middle of sending a regular request.\n\n3. There's still a possibility of a denial-of-service attack.\nA bad guy could send a flood of cancel requests with the right PID,\nand he'd slow down the server substantially even if nothing ever gets\ncancelled. (Also, because of point 2, some of the forged cancels\nmight succeed...)\n\n\n> This does a few things for us. It allows us to use cancel in unix\n> domain sockets, and in Java or anything that can't support OOB. In\n> fact, I would recommend discarding OOB in favor of this method.\n\nThe real advantage of OOB for this purpose is that there's no\npossibility of confusing the cancel request with normal data.\n\n\nI still like the idea I floated a couple days ago: have the initial\nhandshake provide both the PID of the backend and a \"secret code\"\nrandomly generated by the server for that connection. The client\nmust transmit both the PID and the code to the postmaster for the\ncancel request to be accepted. That method has all the advantages:\n\n1. The client doesn't have to supply a password; libpq will retain\nall the necessary info internally.\n\n2. The probability of defeating the scheme can be made arbitrarily\nsmall (much smaller than guessing a password, say) with a long enough\nsecret code. 8 or so random bytes ought to do.\n\n3. There's no problem with synchronization between the client/postmaster\nand client/backend data paths, because no data need be sent across the\nclient/backend path. This is just as good as using OOB to keep the\ncancel separate from normal traffic.\n\n4. 
Don't have to depend on having OOB facility.\n\nThe only disadvantage I can see is having to open a new postmaster\nconnection every time you want to cancel; but hopefully that won't\nbe very often, so performance shouldn't be much of an issue.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 May 1998 11:19:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data " }, { "msg_contents": "\nI have taken some time to think about this some more.\n\n> Bruce Momjian <[email protected]> writes:\n> > OK, I think I have a solution. I recommend we pass the backend pid to\n> > the client as part of connection startup. Then, when the client wants\n> > to cancel a query, it sends a cancel packet to its backend (new packet\n> > type), and then sends that pid to the postmaster with a new packet type.\n> \n> > When the postmaster receives the packet with the pid, it sends a signal\n> > to that pid/backend. The backend does a recv(MSG_PEEK) to see if it has\n> > a pending packet with a cancel request. If it does, it cancels, if not,\n> > it ignores it. In the read loop of the backend, all cancel requests are\n> > ignored.\n> \n> OK, I guess the point of sending the normal-channel packet is to\n> authenticate the cancel request? Otherwise anyone could send a cancel\n> request to the postmaster, if they know the backend PID.\n\nYes, that is the intent of the normal-channel packet.\n\n> \n> I see a few flaws however:\n> \n> 1. What if the postmaster/signal/backend path is completed before the\n> normal-channel cancel packet arrives? The backend looks, sees no\n> packet, and ignores the request. Oops. This scenario is not at all\n> implausible across a remote connection, since the first transmission\n> of the normal-channel packet might be lost to a data glitch. By the\n> time the client-side TCP stack decides to retransmit, it's too late.\n\nYes, this could happen, but it is only a cancel request. Another way to\nproceed is to have the server query the client after it receives the\nrequest from the postmaster, but that seems odd.\n\n> \n> 2. I don't think you could use this to abort out of a COPY IN transfer,\n> because the confirmation packet would be impossible to distinguish\n> from data reliably. In general there's a risk of confusion if the\n> server might be looking for the confirmation packet when the client\n> thinks it's in the middle of sending a regular request.\n\nYes, that is a good point.\n\n> \n> 3. There's still a possibility of a denial-of-service attack.\n> A bad guy could send a flood of cancel requests with the right PID,\n> and he'd slow down the server substantially even if nothing ever gets\n> cancelled. (Also, because of point 2, some of the forged cancels\n> might succeed...)\n\nYes, but does this increase our denial-of-service vulnerability, or just\ngive the person one more way to slow things down?\n\n> \n> \n> > This does a few things for us. It allows us to use cancel in unix\n> > domain sockets, and in Java or anything that can't support OOB. In\n> > fact, I would recommend discarding OOB in favor of this method.\n> \n> The real advantage of OOB for this purpose is that there's no\n> possibility of confusing the cancel request with normal data.\n\nYes, that is true. 
I just am grasping for a unix domain and\njava/non-oob solution.\n\n> \n> \n> I still like the idea I floated a couple days ago: have the initial\n> handshake provide both the PID of the backend and a \"secret code\"\n> randomly generated by the server for that connection. The client\n> must transmit both the PID and the code to the postmaster for the\n> cancel request to be accepted. That method has all the advantages:\n> \n> 1. The client doesn't have to supply a password; libpq will retain\n> all the necessary info internally.\n> \n> 2. The probability of defeating the scheme can be made arbitrarily\n> small (much smaller than guessing a password, say) with a long enough\n> secret code. 8 or so random bytes ought to do.\n> \n> 3. There's no problem with synchronization between the client/postmaster\n> and client/backend data paths, because no data need be sent across the\n> client/backend path. This is just as good as using OOB to keep the\n> cancel separate from normal traffic.\n> \n> 4. Don't have to depend on having OOB facility.\n> \n> The only disadvantage I can see is having to open a new postmaster\n> connection every time you want to cancel; but hopefully that won't\n> be very often, so performance shouldn't be much of an issue.\n\nYes, the overhead of opening a new postmaster connection is very small,\nespecially because no backend is started. I was trying to avoid the\n'magic cookie' solution for a few reasons:\n\n\t1) generating a random secret codes can be slow (I may be wrong)\n\n\t2) the random key is sent across the network with a cancel\nrequest, so once it is used, it can be used by a malcontent to cancel\nany query for that backend. He doesn't need to spoof any packets to\ninsert it into the TCP/IP stream, he just connects to the postmaster and\nsends the secret key. For long-running queries, that may be a problem. \nNot sure how much of a vulnerability that is.\n\n\t3) I hesitate to add the bookkeeping in the postmaster and libpq\nof that pid/secret key combination. Seems like some bloat we could do\nwithout.\n\n\t4) You have to store the secret key in the client address space,\npossibly open to snooping.\n\nHowever, in thinking about it, I don't think there is any way to avoid\nyour solution of pid/secret key. The postmaster, on receiving the\nsecret key, can send a signal to the backend, and the query will be\ncancelled. Nothing will be sent along the backend/client channel. All\nother interfaces that want cancel handling will have to add some code\nfor this too.\n\nThis basically simulates OOB by sending a message to the postmaster,\nwhich is always listening, and having it send a signal, which is\npossible because they are owned by the same user.\n\nActually, in:\n\n\tConnCreate()\n\nI see a call to:\n\n RandomSalt(port->salt);\n\nAny idea what that is used for? Maybe we can use that? And why is it\nbeing generated for non-crypt connections? Seems like a waste if\nrandom() is an expensive function. He calls it twice, once for each of\nthe two salt characters. Looks like a cheap function on BSDI from the\nlooks of the library code.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Sun, 24 May 1998 01:06:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I was trying to avoid the\n> 'magic cookie' solution for a few reasons:\n\n> \t1) generating a random secret codes can be slow (I may be wrong)\n\nNot really. A typical system rand() subroutine is a multiply and an\nadd. For the moment I'd recommend generating an 8-byte random key with\nsomething like\n\n\tfor (i=0; i<8; i++)\n\t\tkey[i] = rand() & 0xFF;\n\nwhich isn't going to take enough time to notice.\n\nThe above isn't cryptographically secure (which means that a person who\nreceives a \"random\" key generated this way might be able to predict the\nnext one you generate). But it will do to get the protocol debugged,\nand we can improve it later. I have Schneier's \"Applied Cryptography\"\nand will study its chapter on secure random number generators.\n\n> \t2) the random key is sent across the network with a cancel\n> request, so once it is used, it can be used by a malcontent to cancel\n> any query for that backend.\n\nTrue, if you have a packet sniffer then you've got big troubles ---\non the other hand, a packet sniffer can also grab your password,\nmake his own connection to the server, and wreak much more havoc\nthan just issuing a cancel. I don't see that this adds any\nvulnerability that wasn't there before.\n\n> \t3) I hesitate to add the bookkeeping in the postmaster and libpq\n> of that pid/secret key combination. Seems like some bloat we could do\n> without.\n\nThe libpq-side bookkeeping is trivial. I'm not sure about the\npostmaster though. Does the postmaster currently keep track of\nall operating backend processes, or not? If it does, then another\nfield per process doesn't seem like a problem.\n\n> \t4) You have to store the secret key in the client address space,\n> possibly open to snooping.\n\nSee password. In any case, someone with access to the client address\nspace can probably manage to send packets from the client, too. So\n\"security\" based on access to the client/backend connection isn't any\nbetter.\n\n> This basically simulates OOB by sending a message to the postmaster,\n> which is always listening, and having it send a signal, which is\n> possible because they are owned by the same user.\n\nRight.\n\nMaybe we should look at this as a fallback that libpq uses if it\ntries OOB and that doesn't work? Or is it not a good idea to have\ntwo mechanisms?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 May 1998 11:29:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data " }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > I was trying to avoid the\n> > 'magic cookie' solution for a few reasons:\n> \n> > \t1) generating a random secret codes can be slow (I may be wrong)\n> \n> Not really. A typical system rand() subroutine is a multiply and an\n> add. 
For the moment I'd recommend generating an 8-byte random key with\n> something like\n> \n> \tfor (i=0; i<8; i++)\n> \t\tkey[i] = rand() & 0xFF;\n> \n> which isn't going to take enough time to notice.\n\nActually, just sending a random int as returned from random() is enough.\nrandom() returns a long here, but just cast it to int.\n\n> \n> The above isn't cryptographically secure (which means that a person who\n> receives a \"random\" key generated this way might be able to predict the\n> next one you generate). But it will do to get the protocol debugged,\n> and we can improve it later. I have Schneier's \"Applied Cryptography\"\n> and will study its chapter on secure random number generators.\n\nYes, that may be true. Not sure if having a single random() value can\npredict the next one. If we just use on random() return value, I don't\nthink that is possible.\n\n> \n> > \t2) the random key is sent across the network with a cancel\n> > request, so once it is used, it can be used by a malcontent to cancel\n> > any query for that backend.\n> \n> True, if you have a packet sniffer then you've got big troubles ---\n> on the other hand, a packet sniffer can also grab your password,\n> make his own connection to the server, and wreak much more havoc\n> than just issuing a cancel. I don't see that this adds any\n> vulnerability that wasn't there before.\n\nYes.\n\n> \n> > \t3) I hesitate to add the bookkeeping in the postmaster and libpq\n> > of that pid/secret key combination. Seems like some bloat we could do\n> > without.\n> \n> The libpq-side bookkeeping is trivial. I'm not sure about the\n> postmaster though. Does the postmaster currently keep track of\n> all operating backend processes, or not? If it does, then another\n> field per process doesn't seem like a problem.\n\nYes. The backend does already have such a per-connection structure, so\nadding it is trivial too.\n\n> \n> > \t4) You have to store the secret key in the client address space,\n> > possibly open to snooping.\n> \n> See password. In any case, someone with access to the client address\n> space can probably manage to send packets from the client, too. So\n> \"security\" based on access to the client/backend connection isn't any\n> better.\n\nYep.\n\n> \n> > This basically simulates OOB by sending a message to the postmaster,\n> > which is always listening, and having it send a signal, which is\n> > possible because they are owned by the same user.\n> \n> Right.\n> \n> Maybe we should look at this as a fallback that libpq uses if it\n> tries OOB and that doesn't work? Or is it not a good idea to have\n> two mechanisms?\n\nYou have convinced me. Let's bag OOB, and use this new machanism. I\ncan do the backend changes, I think.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 24 May 1998 13:20:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> on the other hand, a packet sniffer can also grab your password,\n> make his own connection to the server, and wreak much more havoc\n> than just issuing a cancel. I don't see that this adds any\n> vulnerability that wasn't there before.\n\nAhem. 
Not true for those of us who use Kerberos authentication.\nWe never send our passwords over the network, instead using them\nas (part of) a key that's used to encrypt other data.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "24 May 1998 20:47:01 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "> \n> Tom Lane <[email protected]> writes:\n> \n> > on the other hand, a packet sniffer can also grab your password,\n> > make his own connection to the server, and wreak much more havoc\n> > than just issuing a cancel. I don't see that this adds any\n> > vulnerability that wasn't there before.\n> \n> Ahem. Not true for those of us who use Kerberos authentication.\n> We never send our passwords over the network, instead using them\n> as (part of) a key that's used to encrypt other data.\n\nOK, let's review this, with thought about our various authentication\noptions:\n\n\ttrust, password, ident, crypt, krb4, krb5\n\nAs far as I know, they all transmit queries and results as clear text\nacross the network. They encrypt the passwords and tickets, but not the\ndata. [Even kerberos does not encrypt the data stream, does it?]\n\nSo, if someone snoops the network, they will see the query and results,\nand see the cancel secret key. Of course, once they see the cancel\nsecret key, it is trivial for them to send that to the postmaster to\ncancel a query. However, if they are already snooping, how much harder\nis it for them to insert their own query into the tcp stream? If it is\nas easy as sending the cancel secret key, then the additional\nvulnerability of being able to replay the cancel packet is trivial\ncompared to the ability to send your own query, so we don't lose\nanything by using a non-encrypted cancel secret key.\n\nOf course, if the stream were encrypted, they could not see the secret key;\nit would need to be accepted and sent in an encrypted format.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 24 May 1998 23:57:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> OK, let's review this, with thought about our various authentication\n> options:\n> \n> \ttrust, password, ident, crypt, krb4, krb5\n> \n> As far as I know, they all transmit queries and results as clear text\n> across the network. They encrypt the passwords and tickets, but not the\n> data. [Even kerberos does not encrypt the data stream, does it?]\n\nTrue. Encrypted communication should be an option, though. With\nKerberos, the ability to do this securely is already there in the\nlibrary, so it would be natural to use it. Adding encryption to the\ncommunication between client and postmaster is probably a good thing\neven if we don't (yet) encrypt that between client and backend, and\nwould also be a good, simple way to start implementing it.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. 
--Niles Crane, \"Frasier\"\n", "msg_date": "25 May 1998 07:30:35 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yes, that may be true. Not sure if having a single random() value lets you\n> predict the next one. If we just use one random() return value, I don't\n> think that is possible.\n\nIn typical rand() implementations, having the whole of one output value\nis sufficient to give you all future outputs. That's why I recommended\nusing only 8 bits from each of several outputs. I believe that is still\nbreakable, but less trivially so. (I will be going on vacation\nWednesday morning and don't have time to research better methods before\nthen, but I do know they exist.)\n\nThe real question we need to ask here is not the details of generating\na one-time secret key, but what attacks we need to defend against and\nhow to do that. A simple secret code per my original proposal is clearly\nnot proof against a packet-sniffing attacker. Should we beef up the\ncoding, or consider that such an attacker must be met directly by\nencrypting communications? If the latter, how do we encrypt the first\npacket sent to or from the postmaster?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 May 1998 12:14:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data " }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> > I was trying to avoid the\n> > 'magic cookie' solution for a few reasons:\n> \n> > \t1) generating random secret codes can be slow (I may be wrong)\n> \n> Not really. A typical system rand() subroutine is a multiply and an\n> add. For the moment I'd recommend generating an 8-byte random key with\n> something like\n> \n> \tfor (i=0; i<8; i++)\n> \t\tkey[i] = rand() & 0xFF;\n> \n> which isn't going to take enough time to notice.\n> \n> The above isn't cryptographically secure (which means that a person who\n> receives a \"random\" key generated this way might be able to predict the\n> next one you generate). But it will do to get the protocol debugged,\n> and we can improve it later. I have Schneier's \"Applied Cryptography\"\n> and will study its chapter on secure random number generators.\n\nA neat feature of linux is that it has a kernel random number\ngenerator which is fed random data from interrupt times. The only\ndrawback is that this is sort of a \"pool\", so when the pool is full,\ndrawing 8 bytes from it is not a problem, but when the pool is\ndrained, it can take some time to generate more data. At any rate, it\nmight be a good starting point for a postgres random number generator\n-- sample usage of shared memory and perform a hash on this. From\n\"Applied Cryptography\":\n\n\"In effect, the system degrades gracefully from perfect to practical\nrandomness when the demand exceeds the supply. In this case it\nbecomes theoretically possible .. to determine a previous or\nsubsequent result. But this requires inverting MD5, which is\ncomputationally infeasible\"\n\nApplied Cryptography, 2nd edition, p427.\n\nThis is sort of what we want. 
As random as the key can be, but able\nto generate a pseudo-random key if we're short on time.\n\nOcie\n", "msg_date": "Tue, 26 May 1998 14:00:57 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" }, { "msg_contents": "\n/dev/urandom performs a similar function without the wait. Of course\nnot all of the data is new, but it should still be pretty secure.\n\nOn Tue, 26 May 1998, at 14:00:57, [email protected] wrote:\n\n> A neat feature of linux is that it has a kernel random number\n> generator which is fed random data from interrupt times. The only\n> drawback is that this is sort of a \"pool\", so when the pool is full,\n> drawing 8 bytes from it is not a problem, but when the pool is\n> drained, it can take some time to generate more data. At any rate, it\n> might be a good starting point for a postgres random number generator\n> -- sample usage of shared memory and perform a hash on this. From\n> \"Applied Cryptography\":\n", "msg_date": "Tue, 26 May 1998 14:11:18 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" } ]
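To make the key-generation ideas from the thread above concrete, here is a minimal C sketch. It is illustrative only: the function name is invented and this is not actual PostgreSQL source. It prefers the non-blocking kernel pool via /dev/urandom where that device exists, and otherwise falls back to taking 8 bits from each of several rand() outputs, per the suggestion above.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    /* Illustrative sketch: generate an 8-byte cancel key. */
    static void
    generate_cancel_key(unsigned char key[8])
    {
        FILE   *f = fopen("/dev/urandom", "rb");
        int     i;

        if (f != NULL)
        {
            size_t  n = fread(key, 1, 8, f);

            fclose(f);
            if (n == 8)
                return;         /* got 8 random bytes from the kernel pool */
        }

        /* Fallback: 8 bits from each of several rand() outputs.  Not
         * cryptographically secure, but harder to invert than handing
         * out one whole rand() value. */
        srand((unsigned int) time(NULL) ^ (unsigned int) getpid());
        for (i = 0; i < 8; i++)
            key[i] = rand() & 0xFF;
    }

The srand() seeding shown here is only a placeholder; a real implementation would want to mix in more entropy, along the lines of the Schneier material quoted above.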
[ { "msg_contents": "\nIt appears that compiling with Kerberos 5 support turned on against MIT\nKerberos 1.0.5 produces some breakage.\n\nAnyone seen this?\n\n(I'm keen to use Kerberos 5 as it's deployed company-wide and Sybase does\nnot support it so it would make PostgreSQL look really good if this\nworked.)\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Mon, 18 May 1998 15:48:50 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": true, "msg_subject": "Kerberos 5 breakage." }, { "msg_contents": "> \n> \n> It appears that compiling with Kerberos 5 support turned on against MIT\n> Kerberos 1.0.5 produces some breakage.\n> \n> Anyone seen this?\n> \n> (I'm keen to use Kerberos 5 as it's deployed company-wide and Sybase does\n> not support it so it would make PostgreSQL look really good if this\n> worked.)\n> \n\nLast I heard, Kerberos worked, but that was a while ago. Feel free to\nsend in some patches.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 19 May 1998 14:25:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Kerberos 5 breakage." }, { "msg_contents": "On Tue, 19 May 1998, Bruce Momjian wrote:\n> Last I heard, Kerberos worked, but that was a while ago. Feel free to\n> send in some patches.\n\nI've compiled with kerberos 4 compatibility mode libraries in kerberos 5\nand it appears to compile, link and run but I've not got a good testbed\nfor kerberos 4.\n\nWhile Kerberos 5 authentication and authorization is nice, I'd like to\ninvestigate the possibility of adding encryption as well.\n\nI've got to complete the setup of a test environment for this before I can\nstart on the code.\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Tue, 19 May 1998 14:30:42 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Kerberos 5 breakage." }, { "msg_contents": "\"Matthew N. Dodd\" <[email protected]> writes:\n\n> I've compiled with kerberos 4 compatibility mode libraries in kerberos 5\n> and it appears to compile, link and run but I've not got a good testbed\n> for kerberos 4.\n\nThe Kerberos 4 stuff works fine with real Kerberos 4 libraries.\n\n> While Kerberos 5 authentication and authorization is nice, I'd like to\n> investigate the possibility of adding encryption as well.\n\nAbsolutely. This should be specified in the pg_hba.conf file, so that\nyou could demand Kerberos authentication plus encryption for sensitive\ndata. When not demanded by pg_hba.conf, it should be a client option.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "20 May 1998 19:03:31 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Kerberos 5 breakage." 
}, { "msg_contents": "On 20 May 1998, Tom Ivar Helbekkmo wrote:\n> > While Kerberos 5 authentication and authorization is nice, I'd like to\n> > investigate the possibility of adding encryption as well.\n> \n> Absolutely. This should be specified in the pg_hba.conf file, so that\n> you could demand Kerberos authentication plus encryption for sensitive\n> data. When not demanded by pg_hba.conf, it should be a client option.\n\nI read through the SSL patch and am convinced that we need a little more\ncoherent arrangement of interface methods. Allowing direct manipulation of\nthe file descriptors is really going to make adding stuff like this (SSL,\nKerb5 encryption etc) next to impossible.\n\nTake a look at Apache 1.2 vs. 1.3 for an idea of what I'm talking about.\n\nAlso, allowing writes of single characters is bad; you incur a context\nswitch each write. The client and server should be writing things into\nlargish buffers and writing those instead of doing small writes.\n\nThe existence of the following scares me...\n\npqPutShort(int integer, FILE *f)\npqPutLong(int integer, FILE *f)\npqGetShort(int *result, FILE *f)\npqGetLong(int *result, FILE *f)\npqGetNBytes(char *s, size_t len, FILE *f)\npqPutNBytes(const char *s, size_t len, FILE *f)\npqGetString(char *s, size_t len, FILE *f)\npqPutString(const char *s, FILE *f)\npqGetByte(FILE *f)\npqPutByte(int c, FILE *f)\n\n(from src/backend/libpq/pqcomprim.c)\n\nA select-based I/O buffering system would seem to be in order here...\n\nI'd like to see these routines passing around a connection information\nstruct that contains the file handle and other connection options as well.\n\nI'll not bother beating on this anymore as I'm unlikely to cover anything\nthat has not already been covered. Regardless, this issue needs some\ncritical analysis before any code is changed.\n\nFailing to address this issue really raises the cost of adding stuff like\nSSL and Kerberos5 encryption.\n\nTake a look at src/main/buff.c and src/include/buff.h in Apache 1.3 at how\nthey use their 'struct buff_struct' for some interesting examples.\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Wed, 20 May 1998 14:02:08 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Kerberos 5 breakage." }, { "msg_contents": "At 11:02 AM -0700 5/20/98, Matthew N. Dodd wrote:\n>Also, allowing writes of single characters is bad; you incur a context\n>switch each write. The client and server should be writing things into\n>largish buffers and writing those instead of doing small writes.\n>\n>The existence of the following scares me...\n>\n>pqPutShort(int integer, FILE *f)\n.\n.\n.\n\nCan't these be defined as macros the way get/put stuff is done in stdio.h?\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n\n\n", "msg_date": "Wed, 20 May 1998 11:38:51 -0700", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Kerberos 5 breakage." }, { "msg_contents": "On Wed, 20 May 1998, Henry B. 
Hotz wrote:\n> Can't these be defined as macros the way get/put stuff is done in stdio.h?\n\nWhich macros?\n\nLooking at stdio.h and the FILE struct/typedef; I wonder if it's possible\nto override the _read and _write function pointers and sub in our own\ndepending on which encryption scheme is in use.\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Wed, 20 May 1998 14:46:59 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Kerberos 5 breakage." }, { "msg_contents": "\"Matthew N. Dodd\" <[email protected]> writes:\n> Also, allowing writes of single characters is bad; you incur a context\n> switch each write. The client and server should be writing things into\n> largish buffers and writing those instead of doing small writes.\n\n> The existence of the following scares me...\n> pqPutShort(int integer, FILE *f)\n> pqPutLong(int integer, FILE *f)\n> [etc]\n\nLook again. Those functions use <stdio.h>, which provides buffering.\nThey don't need to do it themselves.\n\nIt might be good to put a layer underneath these functions to allow\ninsertion of encryption or something like that, but efficiency is not\na valid argument for doing it.\n\nOn the client side, in the recent libpq rewrite I took out usage of\nstdio and did my own buffering instead, but that was just so that\nI could control when and how the client would block for input.\nI don't think it bought any speedup.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 May 1998 17:15:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Kerberos 5 breakage. " }, { "msg_contents": "At 11:46 AM -0700 5/20/98, Matthew N. Dodd wrote:\n>On Wed, 20 May 1998, Henry B. Hotz wrote:\n>> Can't these be defined as macros the way get/put stuff is done in stdio.h?\n>\n>Which macros?\n>\nI haven't actually looked, but I think it's pretty standard for\ngetchar/putchar to just do I/O from some local-to-the-program buffers.\nOnly when they overflow does it become a real system/library call, but they\nlook like function calls to the C program.\n\nIt's also true that you can play games with incremental linking and symbol\ntable stripping to insert your own wrapper on a system routine, but I would\nnot recommend that. It's much too likely to create portability problems.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n\n\n", "msg_date": "Wed, 20 May 1998 14:15:57 -0700", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Kerberos 5 breakage." } ]
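To sketch the connection-struct approach argued for in the thread above: the following is hypothetical, with invented names (PgComm, pq_putbytes), not the actual libpq interface. The pq* primitives would take such a struct instead of a bare FILE *, small puts would accumulate in a buffer, and SSL or Kerberos 5 encryption could be slotted in by swapping the raw_write pointer, much like Apache's struct buff_struct.

    #include <string.h>
    #include <sys/types.h>

    typedef struct PgComm
    {
        int     sock;               /* underlying descriptor */
        /* pluggable transport: plain, SSL, krb5-encrypted, ... */
        ssize_t (*raw_write) (struct PgComm *c, const void *buf, size_t len);
        unsigned char outbuf[8192]; /* batches small puts into one write */
        size_t  outlen;
    } PgComm;

    /* Flush the buffer through whatever transport is installed. */
    static int
    pq_flush(PgComm *c)
    {
        if (c->outlen > 0 &&
            c->raw_write(c, c->outbuf, c->outlen) != (ssize_t) c->outlen)
            return -1;          /* short writes glossed over in this sketch */
        c->outlen = 0;
        return 0;
    }

    /* Buffered put; pqPutByte, pqPutShort, etc. would sit on top of this,
     * so single-character puts no longer cost a system call apiece. */
    static int
    pq_putbytes(PgComm *c, const void *s, size_t len)
    {
        const unsigned char *src = s;

        while (len > 0)
        {
            size_t  room = sizeof(c->outbuf) - c->outlen;

            if (room == 0)
            {
                if (pq_flush(c) < 0)
                    return -1;
                continue;
            }
            if (room > len)
                room = len;
            memcpy(c->outbuf + c->outlen, src, room);
            c->outlen += room;
            src += room;
            len -= room;
        }
        return 0;
    }

As Tom Lane notes in the thread, stdio already buffers, so the point of such a layer is not speed but having a single place to interpose encryption.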
[ { "msg_contents": " It just occurred to me, while browsing through the last hacker's\ndigest, that OOB might be more trouble than it's worth. What about\nhaving the Postmaster listen on a second socket as an alternative?\nCertain commands could be issued there, outside of the main connection\n(possibly even via UDP) as one-shot deals (not requiring a persistent\nconnection) and the postmaster could pass the information on to the\nappropriate backend in a nice, intrusive fashion. :)\n\n-Brandon :)\n", "msg_date": "Mon, 18 May 1998 22:57:08 -0500 (CDT)", "msg_from": "Brandon Ibach <[email protected]>", "msg_from_op": true, "msg_subject": "Query cancellation and OOB" }, { "msg_contents": "> \n> It just occurred to me, while browsing through the last hacker's\n> digest, that OOB might be more trouble than it's worth. What about\n> having the Postmaster listen on a second socket as an alternative?\n> Certain commands could be issued there, outside of the main connection\n> (possibly even via UDP) as one-shot deals (not requiring a persistent\n> connection) and the postmaster could pass the information on to the\n> appropriate backend in a nice, intrusive fashion. :)\n\nYes, but you have to make sure the cancel is ONLY coming from the proper\nclient. The signal thing is nice, so how do you simulate that, unless\nyou have the postmaster send the signal to the proper child?\n\nI hate to add lots of stuff just to get CANCEL to work on unix domain\nsockets.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 19 May 1998 01:15:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancellation and OOB" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> It just occurred to me, while browsing through the last hacker's\n>> digest, that OOB might be more trouble than it's worth. What about\n>> having the Postmaster listen on a second socket as an alternative?\n\nI kinda like this. You could eliminate the need for signal() at all,\nwhich seems like a good idea --- the postmaster could just set the\ncancel flag directly in shared memory.\n\n> Yes, but you have to make sure the cancel is ONLY coming from the proper\n> client.\n\nOr his authorized designee. In a multi-process application I think it\nmight be legitimate for a thread other than the one talking to the\nbackend to want to issue the cancel.\n\nHow about this: during the startup protocol, the client is sent the PID\nof the backend, as well as some random number custom-generated for that\nconnection. To execute a cancel request, the postmaster must be handed\nback both the PID of a live backend and the matching random number.\n\nFurther protection could be provided by requiring the cancel requester\nto go through a full authorization handshake.\n\nBTW I see no need for the postmaster to listen on a separate socket for\nthis purpose. The main connection-accepting socket would do fine. This\nis just a different kind of request message that can arrive there.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 May 1998 10:36:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query cancellation and OOB " } ]
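A sketch of the postmaster-side bookkeeping described in the thread above; purely illustrative, since BackendEntry and the function names are invented rather than taken from the sources. The postmaster remembers each backend's PID together with the random key sent to the client at startup, and delivers a signal only when a cancel request presents a matching pair.

    #include <signal.h>
    #include <stddef.h>
    #include <sys/types.h>

    typedef struct BackendEntry
    {
        pid_t   pid;            /* backend process */
        long    cancel_key;     /* random key sent to the client at startup */
        struct BackendEntry *next;
    } BackendEntry;

    static BackendEntry *backend_list;

    /* Honor a cancel request only if both PID and key match a live
     * backend; a request with the wrong key is silently ignored. */
    static void
    process_cancel_request(pid_t pid, long key)
    {
        BackendEntry *be;

        for (be = backend_list; be != NULL; be = be->next)
        {
            if (be->pid == pid)
            {
                if (be->cancel_key == key)
                    kill(be->pid, SIGINT); /* backend treats this as cancel */
                return;
            }
        }
    }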
[ { "msg_contents": "Hi, ALL!\n\nI came back\n\nVadim\n", "msg_date": "Tue, 19 May 1998 13:27:06 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "hi!" } ]
[ { "msg_contents": ">Use TCP. On most modern systems (eg Linux ;-) ), TCP especially on the\nlocal\n>machine is very efficient. Not quite as efficient as a Unix domain socket,\n>but close enough that no one will notice.\n>\n\n\nUnix domain sockets are not only more efficient. They also are less of a\nsecurity\nrisk IMO.\n\nWith regards from Maurice.\n\n\n", "msg_date": "Tue, 19 May 1998 09:47:15 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Query cancel and OOB data" } ]
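One way to make the security point above concrete: a Unix domain socket lives in the filesystem, so on many systems ordinary file and directory permissions can restrict who may connect at all, something localhost TCP cannot offer. A hypothetical sketch, with invented names and abbreviated error handling (note that some kernels ignore socket file modes, so a mode-0700 parent directory is the safer arrangement):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/stat.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Illustrative only: bind a listening Unix domain socket and
     * tighten its permissions so only the owning user can connect. */
    static int
    listen_unix(const char *path)
    {
        struct sockaddr_un addr;
        int     fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        unlink(path);           /* remove any stale socket file */
        if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0 ||
            chmod(path, 0700) < 0 ||
            listen(fd, 5) < 0)
        {
            close(fd);
            return -1;
        }
        return fd;
    }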
[ { "msg_contents": "\n> Any ideas?\n\nUse TCP. On most modern systems (eg Linux ;-) ), TCP especially on the local\nmachine is very efficient. Not quite as efficient as a Unix domain socket,\nbut close enough that no one will notice.\n\nI have not experienced notable performance differences on my AIX box either.\nOf course when doing client/server communication I always suggest PVM3.\nIt can be configured to do nearly all current communication methods,\nincluding hardware-based ones on MPP/NUMA systems.\n\nAndreas\n\n\n", "msg_date": "Tue, 19 May 1998 10:11:22 +0200", "msg_from": "Andreas Zeugswetter <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Query cancel and OOB data" } ]
[ { "msg_contents": "> Date: Mon, 18 May 1998 10:47:59 +0200 (CEST)\n> From: Michael Meskes <[email protected]>\n> Subject: example code\n> \n> I'm currently feeding ecpg with Oracle's examples. It accepts almost all of\n> it, except the prepare statement and the typedef stuff. I'd like to do the\n> same with examples from other DBs. So if you have embedded SQL stuff from\n> Ingres, Informix, Sybase, whatever, please try it with ecpg and send me\n> those statements that cause a parse error. Of course you can send me your\n> whole example files, and I'll run the tests myself.\n\nYou might want to download Interbase for RedHat Linux 4.2 from \nhttp://www.interbase.com/ .\n\nI haven't installed it yet (they claim it has known problems with RH\n5.0), \nbut it also seems to have embedded SQL included. \n\nAnd I assume it has some examples included.\n\nHannu\n", "msg_date": "Tue, 19 May 1998 12:33:05 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hackers-digest V1 #820" }, { "msg_contents": "Hannu Krosing writes:\n> You might want to download Interbase for RedHat Linux 4.2 from \n> http://www.interbase.com/ .\n> \n> I haven't installed it yet (they claim it has known problems with RH\n> 5.0), \n> but it also seems to have embedded SQL included. \n> \n> And I assume it has some examples included.\n\nI'll try. The download is pretty slow at the moment (668 bytes/sec). :-)\n\nAnyway, thanks.\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Tue, 19 May 1998 12:08:58 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hackers-digest V1 #820" } ]
[ { "msg_contents": "Andreas Zeugswetter writes:\n> Here is the Informix demo stuff.\n> I guess only demo[1-3].ec are of any significance. I guess these are all copyrighted,\n\nThanks a lot. Now I have: Oracle, Informix, YardSQL (if they have some, I\nhaven't checked yet) and Interbase (once the download finishes).\n\n> so please don't distribute. \n\nNo problem. I'll just try to preprocess them.\n\n> Kind regards\n\nLikewise, kind regards\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Tue, 19 May 1998 12:10:43 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: [HACKERS] example code" } ]
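For context, the embedded SQL test input being collected in this thread looks roughly like the following. This is a made-up minimal file, not one of the copyrighted vendor demos; the database, table, and column names are invented.

    /* tiny.pgc -- an invented minimal embedded-SQL test case */
    exec sql include sqlca;

    int
    main(void)
    {
        exec sql begin declare section;
        int     id = 1;
        char    name[64];
        exec sql end declare section;

        exec sql connect to testdb;
        exec sql select name into :name from person where id = :id;
        exec sql disconnect;
        return sqlca.sqlcode;
    }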