[ { "msg_contents": "After Bruce's fine piece of detective work in finding a bogus keylist\ncomparison routine, the Postgres optimizer runs a *lot* faster than\nbefore.\n\n>> We have to bump the default value of GEQO threshold up again...\n>> it's way too low now...\n\n> Yes. I need to know what value to set it at. Do you have some way\n> to test that.\n\nI ran some variants of Charles Hornberger's multiway join that started\nthe whole discussion. Run times (with profiling on, but that shouldn't\naffect the ratios much) now look like\n\nGEQO off\t\t\t# Indexes available\n\n# Tables\t\t0\t12\t13\t14\t15\t16\n\n7\t\t\t1.6\t2.0\n8\t\t\t3.6\t4.5\t4.3\n9\t\t\t10.7\t12.3\t\t12.3\n10\t\t\t51.2\t55.0\t\t\t54.2\n11\t\t\t224.4\t227.6\t\t\t\t213.9\n\n(For reference, the comparable run time for the 7t/12i case was 2630 sec\nbefore Bruce fixed it! It's not every day that you see a 1300:1 speedup\nfrom changing a couple lines of code...)\n\nAs you can see, the number of indexes is no longer a significant factor\nin the optimizer's runtime. I therefore recommend that we revert the\nGEQO threshold computation back to the way it was: just use the number\nof tables involved. Simple, quick, easy to understand.\n\nThe next question is what the default GEQO threshold value ought to be.\nI ran the same tests with and without GEQO; with GEQO on, the runtimes\nlook like\n\nGEQO on\t\t\t\t# Indexes available\n\n# Tables\t\t0\t12\t13\t14\t15\t16\n\n7\t\t\t9.4\t12.3\n8\t\t\t17.8\t22.8\t23.1\n9\t\t\t45.9\t61.9\t\t59.6\n10\t\t\t58.5\t74.9\t\t\t72.9\n11\t\t\t71.6\t79.3\t\t\t\t77.9\n\nSo, assuming this is a reasonably representative case, it looks like\nGEQO should kick in at a threshold of 11 tables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Feb 1999 20:25:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "New results for GEQO threshold" } ]
[ { "msg_contents": "Hello Hackers ...\n\nThe following patch is needed to compile the current development tree with perl \nsupport. The addition of MemoryContextAlloc, MemoryContextFree, and \nMemoryContexRealloc in ./src/include/utils/palloc.h require these additional \nheader files.\n\nAlso where is the current TODO list, I'd like to pick a small project and help \nout when I can.\n\nThanks,\n-Ryan \n\nP.S.\n\nIf I created this patch wrong, or I posted it to the wrong place let me know and \nI'll correct it next time.\n\n*** ./src/interfaces/libpq/Makefile.in.orig Wed Feb 10 19:11:55 1999\n--- ./src/interfaces/libpq/Makefile.in Wed Feb 10 19:09:25 1999\n***************\n*** 106,121 ****\n $(HEADERDIR)/utils/elog.h\n $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/utils/palloc.h \\\n $(HEADERDIR)/utils/palloc.h\n- $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/utils/mcxt.h \\\n- $(HEADERDIR)/utils/mcxt.h\n- $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/nodes/memnodes.h \\\n- $(HEADERDIR)/nodes/memnodes.h\n- $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/nodes/nodes.h \\\n- $(HEADERDIR)/nodes/nodes.h\n- $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/lib/fstack.h \\\n- $(HEADERDIR)/lib/fstack.h\n- $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/utils/memutils.h \\\n- $(HEADERDIR)/utils/memutils.h\n $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/access/attnum.h \\\n $(HEADERDIR)/access/attnum.h\n $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/executor/spi.h \\\n--- 106,111 ----\n***************\n*** 139,146 ****\n @if [ ! -d $(HEADERDIR)/libpq ]; \\\n then mkdir $(HEADERDIR)/libpq; fi\n @if [ ! -d $(HEADERDIR)/utils ]; \\\n- then mkdir $(HEADERDIR)/nodes; fi\n- @if [ ! -d $(HEADERDIR)/nodes ]; \\\n then mkdir $(HEADERDIR)/utils; fi\n @if [ ! -d $(HEADERDIR)/access ]; \\\n then mkdir $(HEADERDIR)/access; fi\n--- 129,134 ----\n\n", "msg_date": "Wed, 10 Feb 1999 19:28:58 -0700 (MST)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "interface libpq Makefile.in patch" }, { "msg_contents": "See the /doc directory or the web site under Support.\n\n> Hello Hackers ...\n> \n> The following patch is needed to compile the current development tree with perl \n> support. 
The addition of MemoryContextAlloc, MemoryContextFree, and \n> MemoryContexRealloc in ./src/include/utils/palloc.h require these additional \n> header files.\n> \n> Also where is the current TODO list, I'd like to pick a small project and help \n> out when I can.\n> \n> Thanks,\n> -Ryan \n> \n> P.S.\n> \n> If I created this patch wrong, or I posted it to the wrong place let me know and \n> I'll correct it next time.\n> \n> *** ./src/interfaces/libpq/Makefile.in.orig Wed Feb 10 19:11:55 1999\n> --- ./src/interfaces/libpq/Makefile.in Wed Feb 10 19:09:25 1999\n> ***************\n> *** 106,121 ****\n> $(HEADERDIR)/utils/elog.h\n> $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/utils/palloc.h \\\n> $(HEADERDIR)/utils/palloc.h\n> - $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/utils/mcxt.h \\\n> - $(HEADERDIR)/utils/mcxt.h\n> - $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/nodes/memnodes.h \\\n> - $(HEADERDIR)/nodes/memnodes.h\n> - $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/nodes/nodes.h \\\n> - $(HEADERDIR)/nodes/nodes.h\n> - $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/lib/fstack.h \\\n> - $(HEADERDIR)/lib/fstack.h\n> - $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/utils/memutils.h \\\n> - $(HEADERDIR)/utils/memutils.h\n> $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/access/attnum.h \\\n> $(HEADERDIR)/access/attnum.h\n> $(INSTALL) $(INSTLOPTS) $(SRCDIR)/include/executor/spi.h \\\n> --- 106,111 ----\n> ***************\n> *** 139,146 ****\n> @if [ ! -d $(HEADERDIR)/libpq ]; \\\n> then mkdir $(HEADERDIR)/libpq; fi\n> @if [ ! -d $(HEADERDIR)/utils ]; \\\n> - then mkdir $(HEADERDIR)/nodes; fi\n> - @if [ ! -d $(HEADERDIR)/nodes ]; \\\n> then mkdir $(HEADERDIR)/utils; fi\n> @if [ ! -d $(HEADERDIR)/access ]; \\\n> then mkdir $(HEADERDIR)/access; fi\n> --- 129,134 ----\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 Feb 1999 22:07:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] interface libpq Makefile.in patch" }, { "msg_contents": "Ryan Bradetich <[email protected]> writes:\n> The following patch is needed to compile the current development tree\n> with perl support. The addition of MemoryContextAlloc,\n> MemoryContextFree, and MemoryContexRealloc in\n> ./src/include/utils/palloc.h require these additional header files.\n\nSomething wrong here ... palloc should not be visible outside the\nbackend. libpq used to have vestigial dependencies on some backend\nheader files, but I thought I'd got rid of them.\n\nI have not compiled the perl module in a while; I'll check this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Feb 1999 00:18:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] interface libpq Makefile.in patch" }, { "msg_contents": "On Thu, Feb 11, 1999 at 12:18:06AM -0500, Tom Lane wrote:\n> Ryan Bradetich <[email protected]> writes:\n> > The following patch is needed to compile the current development tree\n> > with perl support. The addition of MemoryContextAlloc,\n> > MemoryContextFree, and MemoryContexRealloc in\n> > ./src/include/utils/palloc.h require these additional header files.\n> \n> Something wrong here ... palloc should not be visible outside the\n> backend. 
libpq used to have vestigial dependencies on some backend\n> header files, but I thought I'd got rid of them.\n> \n> I have not compiled the perl module in a while; I'll check this.\n\nThey are back in. And even worse palloc.h include mcxt.h and this one isn't\neven installed.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Thu, 11 Feb 1999 07:26:49 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] interface libpq Makefile.in patch" }, { "msg_contents": ">> The following patch is needed to compile the current development tree\n>> with perl support. The addition of MemoryContextAlloc,\n>> MemoryContextFree, and MemoryContexRealloc in\n>> ./src/include/utils/palloc.h require these additional header files.\n\n> Something wrong here ... palloc should not be visible outside the\n> backend.\n\nI have fixed the immediate symptom of the problem by making the Perl5\nmodule not require libpq-int.h, which is better programming practice\nanyway.\n\nHowever, it is true that including libpq-int.h now requires access\nto backend include files that are not currently being installed into\n/usr/local/pgsql/include. Thus, compiling an outside application\nthat uses libpq-int.h will presently fail.\n\nI had intended all along to someday stop exporting libpq-int.h, but I\ndidn't really want to break code dependent on it this soon :-(.\nIn any case, I think that the very same problem will occur for backend\nextension code (SPI) compiled outside the Postgres source tree --- the\nreal problem is that \"postgres.h\" can't be included from the install\ntree anymore.\n\nI think we have two reasonable alternatives:\n\n(1) Install a bunch more backend-internals header files, along the lines\nof Ryan's proposed patch. Evidently we need\n\tinclude/utils/mcxt.h\n\tinclude/nodes/memnodes.h\n\tinclude/nodes/nodes.h\n\tinclude/lib/fstack.h\n\tinclude/utils/memutils.h\nand possibly other stuff.\n\n(2) Try to clean up the palloc macros so that they don't need quite as\nmany random include files to be available. (Jan? Any chance of\nreducing the tonnage a little?)\n\n\nBTW, it'd really be a good idea to stop using libpq's makefile as the\nplace where backend header files are installed...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Feb 1999 18:42:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] interface libpq Makefile.in patch " } ]
[ { "msg_contents": "Hi,\n\nI have what I first thought would be a trivial problem, in that I require\nthe 2 VARCHAR columns in the following table to have the data stored in\nupper case.\n\ntest_table\n+-----------------------------+------------------------------+------+\n| Field | Type |Length|\n+-----------------------------+------------------------------+------+\n| user_id | int4 | 4 |\n| name | varchar() | 10 |\n| password | varchar() | 10 |\n+-----------------------------+------------------------------+------+\n\n\nI considered just using UPPER() in every SELECT statement, or creating a\nview that SELECTed the 2 VARCHAR cols using UPPER but I assume that this\nwould have more performance issues than having the data converted to\nuppercase only once during an insert or update. (Please correct me if I am\nwrong).\n\nAfter looking at triggers and rules I came up with the following solution:\n\nCREATE VIEW test AS SELECT * FROM test_table;\n\nCREATE RULE insert_test AS\nON INSERT TO test DO INSTEAD\nINSERT INTO test_table (user_id, name, password) VALUES(new.user_id,\nUPPER(new.name), UPPER(new.password));\n\nCREATE RULE update_test AS\nON UPDATE TO test DO INSTEAD\nUPDATE test_table SET name = UPPER(new.name), password = UPPER(new.password)\nWHERE user_id = new.user_id;\n\nwhich means that any insert or update to the test view is stored in upper\ncase as required.\n\nHowever, I still have two concerns about this.\n\n1) What impact on performance does using a VIEW in this way have?\n2) Users can still enter data straight into test_table in lower case\nbypassing the \"rules\"\n\nFirst off, is there an easier way to ensure that data is stored in uppercase\nfor certain columns (not the whole table). And if not does anyone have\ncomments on performance issues, or ways of stopping users accidentally or\nintentionally inserting lower case data straight into the table rather than\nthe view?\n\nMany thanks in advance,\n\n---[ Neil Burrows ]-----------------------------------------------------\nE-mail: [email protected] British Telecom Plc.\n : [email protected] Glasgow Engineering Centre\nWeb : http://www.remo.demon.co.uk/ Highburgh Rd. Glasgow UK\n-----------< Any views expressed are not those of my employer >-----------\n\n", "msg_date": "Thu, 11 Feb 1999 10:15:27 -0000", "msg_from": "\"Neil Burrows\" <[email protected]>", "msg_from_op": true, "msg_subject": "RULE questions." }, { "msg_contents": ">\n> Hi,\n>\n> I have what I first thought would be a trivial problem, in that I require\n> the 2 VARCHAR columns in the following table to have the data stored in\n> upper case.\n>\n> test_table\n> +-----------------------------+------------------------------+------+\n> | Field | Type |Length|\n> +-----------------------------+------------------------------+------+\n> | user_id | int4 | 4 |\n> | name | varchar() | 10 |\n> | password | varchar() | 10 |\n> +-----------------------------+------------------------------+------+\n>\n>\n> I considered just using UPPER() in every SELECT statement, or creating a\n> view that SELECTed the 2 VARCHAR cols using UPPER but I assume that this\n> would have more performance issues than having the data converted to\n> uppercase only once during an insert or update. 
(Please correct me if I am\n> wrong).\n\n It's right.\n\n>\n> After looking at triggers and rules I came up with the following solution:\n>\n> CREATE VIEW test AS SELECT * FROM test_table;\n>\n> CREATE RULE insert_test AS\n> ON INSERT TO test DO INSTEAD\n> INSERT INTO test_table (user_id, name, password) VALUES(new.user_id,\n> UPPER(new.name), UPPER(new.password));\n>\n> CREATE RULE update_test AS\n> ON UPDATE TO test DO INSTEAD\n> UPDATE test_table SET name = UPPER(new.name), password = UPPER(new.password)\n> WHERE user_id = new.user_id;\n\n 1. Make sure user_id is unique or extend the WHERE clause in\n the UPDATE rule. To explain why:\n\n user_id | name\n --------+----------\n 1 | aaa\n 1 | bbb\n 2 | ccc\n\n UPDATE test SET name = 'ddd' WHERE name = 'aaa';\n\n user_id | name\n --------+----------\n 1 | ddd\n 1 | ddd\n 2 | ccc\n\n This is because the rule will find the user_id 1 for name\n 'aaa' and then updates any row with user_id 1.\n\n 2. Change the WHERE clause in the UPDATE rule to compare\n against old.user_id and add \"user_id = new.user_id\" to\n the SET clause. Otherwise it would not be possible to\n change the user_id because this thrown away by the rule.\n\n 3. Don't forget the ON DELETE rule. Maybe you don't want\n once given user_id's to be changed or deleted. Then 2.\n and 3. aren't right.\n\n>\n> which means that any insert or update to the test view is stored in upper\n> case as required.\n>\n> However, I still have two concerns about this.\n>\n> 1) What impact on performance does using a VIEW in this way have?\n\n Only the rewriting overhead per query. The rewrite system\n changes the querytree generated by the parser in such a way\n that the planner/optimizer will get the same input as if the\n query really was the SELECT from test_table. If you have a\n view\n\n CREATE VIEW test AS SELECT * FROM test_table;\n\n the two statements\n\n SELECT * FROM test;\n SELECT * FROM test_table;\n\n are totally equivalent from the planners/optimizers (and so\n from the executors) point of view. The rewriting overhead\n depends on how complex the statements and rule definitions\n are. But not on the number of rows affected in the statement.\n Selecting thousands of rows has the same speed than doing it\n from the real tables behind a view. It's very small because\n compared against parser/planner/optimizer it has to do very\n few system cache lookups and works mostly with the data that\n is already in memory.\n\n> 2) Users can still enter data straight into test_table in lower case\n> bypassing the \"rules\"\n\n Not necessarily. Since v6.4 rule actions (in contrast to\n triggers up to now) inherit the access permissions of the\n owner of the relation they're fired on.\n\n CREATE TABLE test_table ...;\n CREATE VIEW test AS SELECT * FROM test_table;\n\n REVOKE ALL ON test_table FROM public;\n GRANT ALL ON test_table TO me;\n\n REVOKE ALL ON test FROM public;\n GRANT ALL ON test TO me;\n GRANT SELECT, INSERT, UPDATE, DELETE ON test TO public;\n\n Now any user can access test, but nobody but me can access\n test_table. Not even a SELECT does work. They can do most\n things on test. But the rule actions are executed under the\n permissions of me, so they work silently.\n\n YOU MUST NOT GRANT ALL TO PUBLIC. ALL includes RULE\n permission, so a user could change the rules on test, do some\n things (maybe on any of your other tables) and reinstall the\n original state of rules!\n\n In addition to that, consider the case you really don't want\n once given user_id's ever to change. 
Nor you like them to be\n ever reused. But they should disappear on DELETE.\n\n CREATE TABLE test_table (user_id int,\n name varchar(10),\n pass varchar(10),\n alive bool);\n\n CREATE UNIQUE INDEX test_user_id ON test_table (user_id);\n\n CREATE VIEW test AS SELECT * FROM test_data\n WHERE alive;\n\n CREATE RULE ins_test AS ON INSERT TO test\n DO INSTEAD INSERT INTO test_table\n VALUES (new.user_id, UPPER(new.name), UPPER(new.pass), 't');\n\n CREATE RULE upd_test AS ON UPDATE TO test\n DO INSTEAD UPDATE test_table\n SET name = UPPER(new.name), pass = UPPER(new.pass)\n WHERE user_id = old.user_id AND alive;\n\n CREATE RULE del_test AS ON DELETE TO test\n DO INSTEAD UPDATE test_table\n SET alive = 'f'\n WHERE user_id = old.user_id AND alive;\n\n Plus all the REVOKE and GRANT. This setup denies changes to\n user_id, makes the row's disappear on DELETE but throw's an\n error 'cannot insert duplicate ...' if someone tries to reuse\n a user_id. Only the owner of the test_table can reincarnate a\n once deleted account.\n\n>\n> First off, is there an easier way to ensure that data is stored in uppercase\n> for certain columns (not the whole table). And if not does anyone have\n> comments on performance issues, or ways of stopping users accidentally or\n> intentionally inserting lower case data straight into the table rather than\n> the view?\n\n The Postgres rewrite rule system is the most powerful way to\n do that.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 11 Feb 1999 13:13:37 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [SQL] RULE questions." }, { "msg_contents": "Thus spake Neil Burrows\n> First off, is there an easier way to ensure that data is stored in uppercase\n> for certain columns (not the whole table). And if not does anyone have\n> comments on performance issues, or ways of stopping users accidentally or\n> intentionally inserting lower case data straight into the table rather than\n> the view?\n\nThis makes me think of two features missing in PostgreSQL that I would\nlove to see. I know it's probably to late to think about it now for\n6.5 but I wonder what others think about this.\n\nFirst, as suggested above, how about an option to automatically convert\ndata to upper case on entry? I realize that triggers can do the job but\nit seems to be needed often enough that putting it into the definition\nfor the field seems useful. I guess a lower option would make sense too.\n\nSecond, an option to CREATE INDEX to make the index case insensitive.\nOther RDBMS systems do this and it is nice not to depend on users being\nconsistent when entering names. Consider (\"albert\", \"Daniel\", \"DENNIS\")\nwhich would sort exactly opposite. Also, in a primary key field (or\nunique index) it would be nice if \"A\" was rejected if \"a\" already was\nin the database.\n\nThoughts?\n\nFollowups to hackers.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 11 Feb 1999 07:33:00 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] RULE questions." 
}, { "msg_contents": "Hi,\n\n\n> > I have what I first thought would be a trivial problem, in that\n> I require\n> > the 2 VARCHAR columns in the following table to have the data stored in\n> > upper case.\n\n\n> 1. Make sure user_id is unique or extend the WHERE clause in\n> the UPDATE rule. To explain why:\n\nThis is actually just a small test table, and the real one has quite a few\nmore columns, but I did mean to make user_id unique, just forgot. :)\n\n\n> 2. Change the WHERE clause in the UPDATE rule to compare\n> against old.user_id and add \"user_id = new.user_id\" to\n> the SET clause. Otherwise it would not be possible to\n> change the user_id because this thrown away by the rule.\n\nThe thinking behind it was that user_id shouldn't be able changed but I\naccidentally neglected to mention that.\n\n> > 2) Users can still enter data straight into test_table in lower case\n> > bypassing the \"rules\"\n\n> Not necessarily. Since v6.4 rule actions (in contrast to\n> triggers up to now) inherit the access permissions of the\n> owner of the relation they're fired on.\n\nAhh, I see. I thought that the rule actions used the current users access\npermissions, not the owners. That's much handier, thanks.\n\n\n> In addition to that, consider the case you really don't want\n> once given user_id's ever to change. Nor you like them to be\n> ever reused. But they should disappear on DELETE.\n>\n> CREATE TABLE test_table (user_id int,\n> name varchar(10),\n> pass varchar(10),\n> alive bool);\n>\n\nAnd that's a great way of doing what I was going to start looking at next.\n:)\n\n> The Postgres rewrite rule system is the most powerful way to\n> do that.\n\nThanks very much for your time and comments here. It's certainly made\nthings clearer.\n\nThanks again,\n\n---[ Neil Burrows ]-----------------------------------------------------\nE-mail: [email protected] British Telecom Plc.\n : [email protected] Glasgow Engineering Centre\nWeb : http://www.remo.demon.co.uk/ Highburgh Rd. Glasgow UK\n-----------< Any views expressed are not those of my employer >-----------\n\n", "msg_date": "Thu, 11 Feb 1999 12:34:12 -0000", "msg_from": "\"Neil Burrows\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [SQL] RULE questions." }, { "msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> Second, an option to CREATE INDEX to make the index case insensitive.\n\nThat, at least, we can already do: build the index on lower(field) not\njust field. Or upper(field) if that seems more natural to you.\n\n> Also, in a primary key field (or\n> unique index) it would be nice if \"A\" was rejected if \"a\" already was\n> in the database.\n\nMaking either of the above a UNIQUE index should accomplish that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Feb 1999 10:35:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] RULE questions. " }, { "msg_contents": "D'Arcy J.M. Cain wrote:\n> \n> This makes me think of two features missing in PostgreSQL that I would\n> love to see. I know it's probably to late to think about it now for\n> 6.5 but I wonder what others think about this.\n> \n> First, as suggested above, how about an option to automatically convert\n> data to upper case on entry? I realize that triggers can do the job but\n> it seems to be needed often enough that putting it into the definition\n> for the field seems useful. 
I guess a lower option would make sense too.\n\nThese could probably be implemened more effectively using rules. Having\nthe \nrules generated automatically for simple cases would of course be nice,\nbut a warning at least should be given to user about creating the rule, \nlike it's currently done with primary key.\n\nOr maybe it would be better to support virtual fields, like this :\n\ncreate table people(\n first_name varchar(25),\n last_name varchar(25),\n upper_first_name VIRTUAL upper(first_name),\n upper_last_name VIRTUAL upper(last_name),\n full_name VIRTUAL (upper_first_name || ' ' || upper_last_name)\nprimary key\n);\n\nand then untangle this in the backend and create required rules and\nindexes automatically ?\n\n> Second, an option to CREATE INDEX to make the index case insensitive.\n\nIf you have this option on idex, how do you plan to make sure that the \nindex is actually used ?\n\nIt may be better to do it explicitly -\n\n1. create index on upper(field)\n\n2. use where upper(field) = 'MYDATA'\n\n---------------\nHannu\n", "msg_date": "Thu, 11 Feb 1999 19:39:47 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] RULE questions." }, { "msg_contents": ">\n> D'Arcy J.M. Cain wrote:\n> >\n> > This makes me think of two features missing in PostgreSQL that I would\n> > love to see. I know it's probably to late to think about it now for\n> > 6.5 but I wonder what others think about this.\n> >\n> > First, as suggested above, how about an option to automatically convert\n> > data to upper case on entry? I realize that triggers can do the job but\n> > it seems to be needed often enough that putting it into the definition\n> > for the field seems useful. I guess a lower option would make sense too.\n>\n> These could probably be implemened more effectively using rules. Having\n> the\n> rules generated automatically for simple cases would of course be nice,\n> but a warning at least should be given to user about creating the rule,\n> like it's currently done with primary key.\n\n No it can't.\n\n Such a rule would look like\n\n CREATE RULE xxx AS ON INSERT TO this_table\n DO INSTEAD INSERT INTO this_table ...\n\n The rule system will be triggerd on an INSERT INTO\n this_table, rewrite and generate another parsetree that is an\n INSERT INTO this_table, which is recursively rewritten again\n applying rule xxx...\n\n That's an endless recursion. A rule can never do the same\n operation to a table it is fired for.\n\n The old pre-Postgres95 university version (Postgres release\n 4.2) had the possibility to define rules that UPDATE NEW.\n They where buggy and didn't worked sometimes at all. Instead\n of fixing them, this functionality got removed when Postgres\n became 95 :-(\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 11 Feb 1999 19:01:49 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] RULE questions." }, { "msg_contents": "Thus spake Tom Lane\n> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > Second, an option to CREATE INDEX to make the index case insensitive.\n> \n> That, at least, we can already do: build the index on lower(field) not\n> just field. 
Or upper(field) if that seems more natural to you.\n\nAlmost. I guess I wasn't completely clear. Here's an example.\n\ndarcy=> create table x (a int, t text);\nCREATE\ndarcy=> create unique index ti on x (lower(t) text_ops);\nCREATE\ndarcy=> insert into x values (1, 'abc');\nINSERT 19021 1\ndarcy=> insert into x values (2, 'ABC');\nERROR: Cannot insert a duplicate key into a unique index\ndarcy=> insert into x values (2, 'Def');\nINSERT 19023 1\ndarcy=> select * from x;\na|t \n-+---\n1|abc\n2|Def\n(2 rows)\n\ndarcy=> select * from x where t = 'ABC';\na|t\n-+-\n(0 rows)\n\nNote that it prevented me from adding the upper case dup just fine. The\nlast select is the issue. It's necessary for the user to know how it is\nstored before doing the select. I realize that you can do this.\n\ndarcy=> select * from x where lower(t) = 'abc';\n\nBut other systems make this more convenient by just making 'ABC' and 'abc'\nequivalent.\n\nMind you, it may not be possible in our system without creating a new,\ncase-insensitive type.\n\n> > Also, in a primary key field (or\n> > unique index) it would be nice if \"A\" was rejected if \"a\" already was\n> > in the database.\n> \n> Making either of the above a UNIQUE index should accomplish that.\n\nTrue. I'm thinking of the situation where you want the primary key to\nbe case-insensitive. You can't control that on the auto-generated\nunique index so you have to add a second unique index on the same\nfield. Again, perhaps a new type is the proper way to handle this.\n\nSpeaking of primary keys, there's one more thing needed to make primary\nsupport complete, I think. Didn't we originally say that a primary\nkey field was immutable? We should be able to delete the record but\nnot change the value of the field in an update. Would this be hard\nto do?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 11 Feb 1999 14:03:48 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] RULE questions." }, { "msg_contents": "D'Arcy J.M. Cain wrote:\n\n> But other systems make this more convenient by just making 'ABC' and 'abc'\n> equivalent.\n>\n> Mind you, it may not be possible in our system without creating a new,\n> case-insensitive type.\n\n And that wouldn't be too hard. For example, implementing\n citext (case insensitive text) could use text's input/output\n functions and all the things for lower/upper case conversion,\n concatenation, substring etc (these are SQL language wrappers\n as we already have tons of). Only new comparision operators\n have to be built that compare case insensitive and then\n creating a new operator class for it. All qualifications and\n the sorting in indices, order by, group by are done with the\n operators defined for the type.\n\n Also comparision wrappers like to compare text = citext would\n be useful, which simply uses citext_eq().\n\n> > Making either of the above a UNIQUE index should accomplish that.\n>\n> True. I'm thinking of the situation where you want the primary key to\n> be case-insensitive. You can't control that on the auto-generated\n> unique index so you have to add a second unique index on the same\n> field. 
Again, perhaps a new type is the proper way to handle this.\n\n The above citext type would inherit this auto.\n\n>\n> Speaking of primary keys, there's one more thing needed to make primary\n> support complete, I think. Didn't we originally say that a primary\n> key field was immutable? We should be able to delete the record but\n> not change the value of the field in an update. Would this be hard\n> to do?\n\n No efford on that. I'm planning to reincarnate attribute\n specification for rules and implement a RAISE statement. The\n attributes (this time it will be multiple) suppress rule\n action completely if none of the attributes appear in the\n queries targetlist (what they must on UPDATE to change).\n\n So at create table time, a rule like\n\n CREATE RULE somename AS ON UPDATE TO table\n ATTRIBUTE att1, att2\n WHERE old.att1 != new.att1 OR old.att2 != old.att2\n DO RAISE EXCEPTION 'Primary key of \"table\" cannot be changed';\n\n could be installed. As long as nobody specifies the fields of\n the primary key in it's update, the rewrite system will not\n add the RAISE query to the querytree list, so no checking is\n done at all.\n\n But as soon as one of the attributes appears in the UPDATE,\n there will be one extra query RAISE executed prior to the\n UPDATE itself and check that all the new values are the old\n ones. This would have the extra benefit, that the transaction\n would abort BEFORE any changes have been made to the table at\n all (remember that UPDATE in Postgres means another heap\n tuple for each touched row and one more invalid tuple for\n vacuum to throw away and for in-the-middle-aborted updates it\n means so-far-I-came more never committed heap tuples that\n vacuum has to send to byte-hell).\n\n This will not appear in v6.5 (hopefully in v6.6). But it's\n IMHO the best solution. With the mentioned RAISE, plus the\n currently discussed deferred queries etc. we would have the\n rule system ready to support ALL the constraint stuff\n (cascaded delete, foreign key). But the more we use the rule\n system, the more important it becomes that we get rid of the\n block limit for tuples.\n\n I think it would be better to spend your efford on that\n issue.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 11 Feb 1999 21:04:42 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] RULE questions." }, { "msg_contents": "Jan Wieck wrote:\n>\n> > These could probably be implemened more effectively using rules. Having\n> > the\n> > rules generated automatically for simple cases would of course be nice,\n> > but a warning at least should be given to user about creating the rule,\n> > like it's currently done with primary key.\n> \n> No it can't.\n> \n> Such a rule would look like\n> \n> CREATE RULE xxx AS ON INSERT TO this_table\n> DO INSTEAD INSERT INTO this_table ...\n> \n> The rule system will be triggerd on an INSERT INTO\n> this_table, rewrite and generate another parsetree that is an\n> INSERT INTO this_table, which is recursively rewritten again\n> applying rule xxx...\n> \n> That's an endless recursion. 
A rule can never do the same\n> operation to a table it is fired for.\n\nBut when doing that at the table creation time, then the table can\nactually \nbe defined as a view on storage table and rules for insert update and\ndelete\nbe defined for this view that do the actual data manipulation on the \nstorage table.\n\nOr is the rule system currently not capable for this ?\n\nWhen some field is changed to UPPER-ONLY status using alter table, the\ntable \ncould be renamed to staorage table and all the rules be created ?\n\n\nAnd the other question - what is the status of ALTER TABLE commands - \ncan we add/remove/disable constraints without recreating the table ?\n\nIs constraint and index disabling supported at all ?\n\n-------------------\nHannu\n", "msg_date": "Fri, 12 Feb 1999 19:09:41 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RULE (and ALTER TABLE) questions " }, { "msg_contents": "> But when doing that at the table creation time, then the table can\n> actually\n> be defined as a view on storage table and rules for insert update and\n> delete\n> be defined for this view that do the actual data manipulation on the\n> storage table.\n\n That's IMHO a too specific case to do it generally with the\n rule system. Should be some kind of constraint handled by\n the parser in putting an UPPER() func node around the\n targetlist expression.\n\n There could be more general support implemented, in that a\n user can allways tell that a custom function should be called\n with the result of the TLE-expr before the value is dropped\n into the tuple on INSERT/UPDATE.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 12 Feb 1999 20:16:57 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RULE (and ALTER TABLE) questions" } ]
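Condensing the functional-index approach that Tom Lane and D'Arcy converge on in this thread into one self-contained sketch (table and values are taken from D'Arcy's own session; the final query is the form that can actually use the index):

    CREATE TABLE x (a int, t text);
    CREATE UNIQUE INDEX ti ON x (lower(t) text_ops);
    INSERT INTO x VALUES (1, 'abc');
    INSERT INTO x VALUES (2, 'ABC');                -- rejected: duplicate key
    SELECT * FROM x WHERE lower(t) = lower('ABC');  -- case-insensitive match

Both the uniqueness check and the lookup go through lower(), so 'abc' and 'ABC' collide and match; a plain "WHERE t = 'ABC'" bypasses the index entirely, which is exactly the inconvenience D'Arcy points out.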
[ { "msg_contents": "My current PyGreSQL module croaks on this on older versions of PostgreSQL.\nCan someone tell me in exactly which release this function was added? I\ncouldn't find anything in the change logs.\n\nThanks.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 11 Feb 1999 07:44:33 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": true, "msg_subject": "PQsocket" }, { "msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> My current PyGreSQL module croaks on this on older versions of PostgreSQL.\n> Can someone tell me in exactly which release this function was added?\n\n6.4. Prior versions of libpq didn't have any support for asynchronous\noperations at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Feb 1999 10:36:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PQsocket " } ]
[ { "msg_contents": "Hello!\n\n Next time you'll send a patch could you use tools in\n .../src/tools/make_diff\n\n I've applied the patch to 6.4.2 on Debian 2.0 and ran locale test on\nkoi8-r locale. The locale test before the patch passed and test after patch\npassed as well. I didn't note any difference. What difference you expected?\n\n Please supply data for locale test (look into .../src/test/locale). This\nis not related to your patch, we're just collecting test data.\n\nOn Wed, 10 Feb 1999, Angelos Karageorgiou wrote:\n\n> I am using postgres 6.4.2 on BSD/OS 3.1 with a Greek locale that I\n> have developed. I knew that regexes with postgress would not work because\n> of something I did but a posting from another follow from Sweden gave me a\n> clue that the problem must be with the regex package and not the locale.\n> \n> So I investigated the code and found out the pg_isdigit(int ch),\n> pg_isalpha(int ch) and the associated functions do a comparison of\n> characters as ints. I changed a few crucial points with a cast to\n> (unsigned char) and voila , regexs in Greek with full locale support. My\n> guess is that an int != unsigned char when comparing, the sign bit is\n> probably the culprit.\n> \n> Please test the patch on some other language too, Swedish or Finish\n> would be a nice touch.\n> \n> Patch follows, but it is trivial really.\n> ---------------------------------------------------------------------------------\n> *** regcomp.c\tTue Sep 1 07:31:25 1998\n> --- regcomp.c.patched\tWed Feb 10 19:57:11 1999\n> ***************\n> *** 1038,1046 ****\n> {\n> \tassert(pg_isalpha(ch));\n> \tif (pg_isupper(ch))\n> ! \t\treturn tolower(ch);\n> \telse if (pg_islower(ch))\n> ! \t\treturn toupper(ch);\n> \telse\n> /* peculiar, but could happen */\n> \t\treturn ch;\n> --- 1038,1046 ----\n> {\n> \tassert(pg_isalpha(ch));\n> \tif (pg_isupper(ch))\n> ! \t\treturn tolower((unsigned char)ch);\n> \telse if (pg_islower(ch))\n> ! \t\treturn toupper((unsigned char)ch);\n> \telse\n> /* peculiar, but could happen */\n> \t\treturn ch;\n> ***************\n> *** 1055,1067 ****\n> static void\n> bothcases(p, ch)\n> struct parse *p;\n> ! int\t\t\tch;\n> {\n> \tpg_wchar *oldnext = p->next;\n> \tpg_wchar *oldend = p->end;\n> \tpg_wchar\tbracket[3];\n> \n> ! \tassert(othercase(ch) != ch);/* p_bracket() would recurse */\n> \tp->next = bracket;\n> \tp->end = bracket + 2;\n> \tbracket[0] = ch;\n> --- 1055,1067 ----\n> static void\n> bothcases(p, ch)\n> struct parse *p;\n> ! int\t\tch;\n> {\n> \tpg_wchar *oldnext = p->next;\n> \tpg_wchar *oldend = p->end;\n> \tpg_wchar\tbracket[3];\n> \n> ! \tassert(othercase(ch) != (unsigned char)ch);/* p_bracket() would recurse */\n> \tp->next = bracket;\n> \tp->end = bracket + 2;\n> \tbracket[0] = ch;\n> ***************\n> *** 1084,1090 ****\n> {\n> \tcat_t\t *cap = p->g->categories;\n> \n> ! \tif ((p->g->cflags & REG_ICASE) && pg_isalpha(ch) && othercase(ch) != ch)\n> \t\tbothcases(p, ch);\n> \telse\n> \t{\n> --- 1084,1090 ----\n> {\n> \tcat_t\t *cap = p->g->categories;\n> \n> ! \tif ((p->g->cflags & REG_ICASE) && pg_isalpha(ch) && othercase(ch) != (unsigned char)ch)\n> \t\tbothcases(p, ch);\n> \telse\n> \t{\n> ***************\n> *** 1862,1868 ****\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isdigit(c));\n> #else\n> ! \treturn (isdigit(c));\n> #endif\n> }\n> \n> --- 1862,1868 ----\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isdigit(c));\n> #else\n> ! 
\treturn (isdigit((unsigned char)c));\n> #endif\n> }\n> \n> ***************\n> *** 1872,1878 ****\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isalpha(c));\n> #else\n> ! \treturn (isalpha(c));\n> #endif\n> }\n> \n> --- 1872,1878 ----\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isalpha(c));\n> #else\n> ! \treturn (isalpha((unsigned char)c));\n> #endif\n> }\n> \n> ***************\n> *** 1882,1888 ****\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isupper(c));\n> #else\n> ! \treturn (isupper(c));\n> #endif\n> }\n> \n> --- 1882,1888 ----\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && isupper(c));\n> #else\n> ! \treturn (isupper((unsigned char)c));\n> #endif\n> }\n> \n> ***************\n> *** 1892,1897 ****\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && islower(c));\n> #else\n> ! \treturn (islower(c));\n> #endif\n> }\n> --- 1892,1897 ----\n> #ifdef MULTIBYTE\n> \treturn (c >= 0 && c <= UCHAR_MAX && islower(c));\n> #else\n> ! \treturn (islower((unsigned char)c));\n> #endif\n> }\n> \n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n\n", "msg_date": "Thu, 11 Feb 1999 18:30:13 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: your mail" } ]
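Reduced to SQL, the symptom Angelos reports looks like this (the table name is hypothetical, and the English letters stand in for the Greek omicron and sigma, as in his example). With signed chars the two queries returned different rows; after the unsigned-char fix they should match the same set:

    SELECT * FROM msgs WHERE message ~* 'os';
    SELECT * FROM msgs WHERE message ~* 'OS';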
[ { "msg_contents": "Hi!\n\nOn Thu, 11 Feb 1999, Angelos Karageorgiou wrote:\n> > I've applied the patch to 6.4.2 on Debian 2.0 and ran locale test on\n> > koi8-r locale. The locale test before the patch passed and test after patch\n> > passed as well. I didn't note any difference. What difference you expected?\n> > \n> \n> Are you using the multibyte character set or the sigle byte ? I was having\n\n Single byte.\n\n> problems with the sigle byte char set where select * where message ~* \"os\" \n> would give me different results than select * where message ~* \"OS\". of\n> course \"OS\" is the iso-8859-7 greek letters omikron and sigma, I just used\n> the english letters here to demostrate the problem\n\n If you look into .../src/test/locale/koi8-r, you'll find there exactly\nthe same tests. These tests are working right in my locale without your\npatch.\n For me it seems like a compiler (or compiler option) problem - signed\nvs. unsigned chars.\n\n> > Please supply data for locale test (look into .../src/test/locale). This\n> > is not related to your patch, we're just collecting test data.\n> \n> I could post some strings in Greek , but it would be meaningless to you I\n> am afraid, and worse without a font you would not be able to see them ,\n> that is why I called upon a swedish of finish fellow to test the\n> differences out.\n\n Tests in .../src/test/locale/koi8-r are meaningless to non-Russian, yet\nthey are in test suite. :)\n We are collecting test to help people test their native locales, not\nforeign locales, really.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 11 Feb 1999 18:43:57 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: your mail" } ]
[ { "msg_contents": "Hi!\n\nOn Thu, 11 Feb 1999, Angelos Karageorgiou wrote:\n> > For me it seems like a compiler (or compiler option) problem - signed\n> > vs. unsigned chars.\n> \n> Yes you are right , the problem is BSD/OS specific , and indeed it has to\n> do with unsigned chars vs signed chars. I just did not know if others had\n> the problem too and since and a cast to (unsigned char) has no effect to\n> an 8bit char I decided to post the patch. \n> \n> Even test-ctype gives out different results when cp is cast as unsigned\n> chat and not a plain char. would you like the output from test-ctype for\n> unsigned chars ?\n\n I am not sure. This should be discussed among other developers. What we\nshould use: signed or unsigned chars, anyone has an idea?\n\n> BTW i appreciate the work on postgres it is an awesome package \n\n Welcome!\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 11 Feb 1999 19:02:32 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: your mail" }, { "msg_contents": "Thus spake Oleg Broytmann\n> On Thu, 11 Feb 1999, Angelos Karageorgiou wrote:\n> > > For me it seems like a compiler (or compiler option) problem - signed\n> > > vs. unsigned chars.\n> > \n> > Yes you are right , the problem is BSD/OS specific , and indeed it has to\n> > do with unsigned chars vs signed chars. I just did not know if others had\n> > the problem too and since and a cast to (unsigned char) has no effect to\n> > an 8bit char I decided to post the patch. \n> > \n> > Even test-ctype gives out different results when cp is cast as unsigned\n> > chat and not a plain char. would you like the output from test-ctype for\n> > unsigned chars ?\n> \n> I am not sure. This should be discussed among other developers. What we\n> should use: signed or unsigned chars, anyone has an idea?\n\nIn all my own code, I always set the compiler option to make char an\nunsigned type. For portability I like to know that the behaviour\nwon't change as long as I carry over my compiler options. I like\nthat way better than casting since I don't get conflict warnings\nfor sending unsigned (or signed) char to library functions. Remember,\nchar, signed char and unsigned char are 3 distinct types even though\nchar has to behave exactly like one of the other two. Setting it up on\nthe compiler command line gets around that.\n\nAs for signed vs. unsigned, I don't think it matters that much. I chose\nunsigned since I never do signed arithmetic on char and if I ever did I\nwould like to have the extra keywork to draw attention to it.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 11 Feb 1999 14:17:24 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: your mail" }, { "msg_contents": "On Thu, 11 Feb 1999, D'Arcy J.M. Cain wrote:\n> > should use: signed or unsigned chars, anyone has an idea?\n> \n> In all my own code, I always set the compiler option to make char an\n> unsigned type. For portability I like to know that the behaviour\n> won't change as long as I carry over my compiler options. I like\n> that way better than casting since I don't get conflict warnings\n> for sending unsigned (or signed) char to library functions. 
Remember,\n> char, signed char and unsigned char are 3 distinct types even though\n> char has to behave exactly like one of the other two. Setting it up on\n> the compiler command line gets around that.\n> \n> As for signed vs. unsigned, I don't think it matters that much. I chose\n> unsigned since I never do signed arithmetic on char and if I ever did I\n> would like to have the extra keywork to draw attention to it.\n\n That is what I think of, and what I usually use - tweak compiler options\nto unsigned char.\n So, my conclusion - reject the patch and teach people to change compiler\noptions.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n\n", "msg_date": "Fri, 12 Feb 1999 14:23:01 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: your mail" }, { "msg_contents": "Ah, here is the e-mail objecting to the unsigned patch.\n\n\n> Hi!\n> \n> On Thu, 11 Feb 1999, Angelos Karageorgiou wrote:\n> > > For me it seems like a compiler (or compiler option) problem - signed\n> > > vs. unsigned chars.\n> > \n> > Yes you are right , the problem is BSD/OS specific , and indeed it has to\n> > do with unsigned chars vs signed chars. I just did not know if others had\n> > the problem too and since and a cast to (unsigned char) has no effect to\n> > an 8bit char I decided to post the patch. \n> > \n> > Even test-ctype gives out different results when cp is cast as unsigned\n> > chat and not a plain char. would you like the output from test-ctype for\n> > unsigned chars ?\n> \n> I am not sure. This should be discussed among other developers. What we\n> should use: signed or unsigned chars, anyone has an idea?\n> \n> > BTW i appreciate the work on postgres it is an awesome package \n> \n> Welcome!\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 10:05:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: your mail" }, { "msg_contents": "\nAh, here is an even clearer statement on unsigned.\n\n\n\n> On Thu, 11 Feb 1999, D'Arcy J.M. Cain wrote:\n> > > should use: signed or unsigned chars, anyone has an idea?\n> > \n> > In all my own code, I always set the compiler option to make char an\n> > unsigned type. For portability I like to know that the behaviour\n> > won't change as long as I carry over my compiler options. I like\n> > that way better than casting since I don't get conflict warnings\n> > for sending unsigned (or signed) char to library functions. Remember,\n> > char, signed char and unsigned char are 3 distinct types even though\n> > char has to behave exactly like one of the other two. Setting it up on\n> > the compiler command line gets around that.\n> > \n> > As for signed vs. unsigned, I don't think it matters that much. 
I chose\n> > unsigned since I never do signed arithmetic on char and if I ever did I\n> > would like to have the extra keywork to draw attention to it.\n> \n> That is what I think of, and what I usually use - tweak compiler options\n> to unsigned char.\n> So, my conclusion - reject the patch and teach people to change compiler\n> options.\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 10:10:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: your mail" } ]
[ { "msg_contents": "I�m making tests for data types and I wonder if the following is a bug!\n\nusing age() function I get: @ 1 hour 13 mins 27.88 secs\nusing date_part() over age() I get:\n\n1 for 'hour' OK!\n13 for 'minute' OK!\n27.8765430000003 for 'second' OK!\n0 for 'day' OK!\n0 for 'month' OK!\n0 for 'year' OK!\n\nBut\n\n1 for 'decade' NOT OK!\n1 for 'century' NOT OK!\n1 for 'millenium' NOT OK!\n\nSee trancrition bellow\n\ncd=> \\d th\n\nTable =\nth\n+----------------------------------+----------------------------------+--\n-----+\n| Field | Type\n |\nLength|\n+----------------------------------+--------------------------------\n--+-------+\n| data | date\n | 4 |\n| hora | time\n | 8 |\n| ms | int4\n | 4 |\n| dt | datetime\n | 8\n|\n+----------------------------------+----------------------------------+---\n----+\ncd=> select * from th\ncd-> \\g\ndata |hora | ms|dt\n\n\n----------+--------+----------+--------------------------\n11/02/1999|10:33:\n31|1234567890|11/02/1999 10:33:31.12\nEDT\n11/02/1999|10:33:31|1234567890|11/02/1999 10:33:31.13 EDT\n(2\nrows)\n\ncd=> select age('now', dt) , date_part('millenium', age('now',\ndt)::timespan), date_part('century', age('now', dt)::timespan),\ndate_part('decade', age('now', dt)::timespan) , date_part('year',\nage('now', dt)::timespan), date_part('month', age('now', dt)::timespan),\ndate_part('day', age('now', dt)::timespan) , date_part('hour', age('now',\ndt)::timespan) , date_part('minute', age('now', dt)::timespan) ,\ndate_part('second', age('now', dt)::timespan) from th\\g\nage\n\n|date_part|date_part|date_part|date_part|date_part|date_part|date_part|date_\npart|\ndate_part\n---------------------------+---------+---------+---------+--------\n-+---------+---------+---------+---------+----------------\n@ 1 hour 13 mins\n27.88 secs| 1| 1| 1| 0| 0| 0|\n 1| 13|27.8765430000003\n@ 1 hour 13 mins 27.87 secs| 1|\n 1| 1| 0| 0| 0| 1|\n13|27.8654319999996\n(2 rows)\n\ncd=> \n\n------------------------------------------------------------------\nEng. Roberto Jo�o Lopes Garcia E-mail: [email protected]\nF. 55 11 848 9906 FAX 55 11 848 9955\n\nMHA Engenharia Ltda\nE-mail: [email protected] WWW: http://www.mha.com.br\n\nAv Maria Coelho Aguiar, 215 Bloco D 2 Andar\nCentro Empresarial de Sao Paulo\nSao Paulo - BRASIL - 05805 000\n-------------------------------------------------------------------\n\n", "msg_date": "Thu, 11 Feb 1999 14:26:12 -0200", "msg_from": "Roberto Joao Lopes Garcia <[email protected]>", "msg_from_op": true, "msg_subject": "date_part() BUG?" }, { "msg_contents": "> I�m making tests for data types and I wonder if the following is a \n> bug!\n> using age() function I get: @ 1 hour 13 mins 27.88 secs\n> using date_part() over age() I get:\n<snip tests for years through seconds>\n> But\n> 1 for 'decade' NOT OK!\n> 1 for 'century' NOT OK!\n> 1 for 'millenium' NOT OK!\n\nI can see that here. Will look at it. Thanks for the report...\n\n - Tom\n", "msg_date": "Thu, 11 Feb 1999 17:08:59 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] date_part() BUG?" }, { "msg_contents": "> > using age() function I get: @ 1 hour 13 mins 27.88 secs\n> > using date_part() over age() I get:\n> <snip tests for years through seconds>\n> > But\n> > 1 for 'decade' NOT OK!\n> > 1 for 'century' NOT OK!\n> > 1 for 'millenium' NOT OK!\n> I can see that here. Will look at it. Thanks for the report...\n\nSorry, it was a cut-and-paste error, with an explicit \"+ 1\" where it\nshouldn't be. 
Patch enclosed, which includes another recent fix for date\ninput having mixed US/Euro-style formats and text months. If you've\nalready applied that one, then strip it out of the patch before applying\n(or tell patch the right thing when it complains).\n\nThanks again for the report.\n\n - Tom", "msg_date": "Thu, 11 Feb 1999 17:38:12 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] date_part() BUG?" } ]
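A quick re-check of the fields that misbehaved, using the report's own interval and cast (assuming the fix is applied, every field larger than the interval itself should now come back 0; 'millenium' is spelled as the report spells it):

    SELECT date_part('decade',    '@ 1 hour 13 mins 27.88 secs'::timespan);  -- expect 0
    SELECT date_part('century',   '@ 1 hour 13 mins 27.88 secs'::timespan);  -- expect 0
    SELECT date_part('millenium', '@ 1 hour 13 mins 27.88 secs'::timespan);  -- expect 0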
[ { "msg_contents": "Bruce, did you ever commit the temp table stuff? If so what was the\nsyntax?\nFYI using snapshot 2/10/1999.\n\t-DEJ\n", "msg_date": "Thu, 11 Feb 1999 11:15:09 -0600", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Temp tables" }, { "msg_contents": "> Bruce, did you ever commit the temp table stuff? If so what was the\n> syntax?\n> FYI using snapshot 2/10/1999.\n> \t-DEJ\n> \n> \n\nCREATE TEMP TABLE. See src/test/regress/sql/temp.sql.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Feb 1999 12:32:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Temp tables" } ]
[ { "msg_contents": "subscribe\n", "msg_date": "Thu, 11 Feb 1999 15:18:30 -0200 (BSC)", "msg_from": "\"Allan C. Lemos\" <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": ">From [email protected] Thu Feb 11 14:58:39 1999\nReceived: from orion.SAPserv.Hamburg.dsh.de (Tpolaris2.sapham.debis.de [53.2.131.8])\n\tby hub.org (8.9.2/8.9.1) with SMTP id OAA55995\n\tfor <[email protected]>; Thu, 11 Feb 1999 14:58:38 -0500 (EST)\n\t(envelope-from [email protected])\nReceived: by orion.SAPserv.Hamburg.dsh.de \n\tfor [email protected] \n\tid m10B2Lq-000EBRC; Thu, 11 Feb 99 21:04 MET\nMessage-Id: <[email protected]>\nFrom: [email protected] (Jan Wieck)\nSubject: Re: [HACKERS] Re: [SQL] RULE questions.\nTo: [email protected] (D'Arcy\" \"J.M.\" Cain)\nDate: Thu, 11 Feb 1999 21:04:42 +0100 (MET)\nCc: [email protected], [email protected]\nReply-To: [email protected] (Jan Wieck)\nIn-Reply-To: <[email protected]> from \"D'Arcy\" \"J.M.\" Cain\" at Feb 11, 99 02:03:48 pm\nX-Mailer: ELM [version 2.4 PL25]\nContent-Type: text\n\nD'Arcy J.M. Cain wrote:\n\n> But other systems make this more convenient by just making 'ABC' and 'abc'\n> equivalent.\n>\n> Mind you, it may not be possible in our system without creating a new,\n> case-insensitive type.\n\n And that wouldn't be too hard. For example, implementing\n citext (case insensitive text) could use text's input/output\n functions and all the things for lower/upper case conversion,\n concatenation, substring etc (these are SQL language wrappers\n as we already have tons of). Only new comparision operators\n have to be built that compare case insensitive and then\n creating a new operator class for it. All qualifications and\n the sorting in indices, order by, group by are done with the\n operators defined for the type.\n\n Also comparision wrappers like to compare text = citext would\n be useful, which simply uses citext_eq().\n\n> > Making either of the above a UNIQUE index should accomplish that.\n>\n> True. I'm thinking of the situation where you want the primary key to\n> be case-insensitive. You can't control that on the auto-generated\n> unique index so you have to add a second unique index on the same\n> field. Again, perhaps a new type is the proper way to handle this.\n\n The above citext type would inherit this auto.\n\n>\n> Speaking of primary keys, there's one more thing needed to make primary\n> support complete, I think. Didn't we originally say that a primary\n> key field was immutable? We should be able to delete the record but\n> not change the value of the field in an update. Would this be hard\n> to do?\n\n No efford on that. I'm planning to reincarnate attribute\n specification for rules and implement a RAISE statement. The\n attributes (this time it will be multiple) suppress rule\n action completely if none of the attributes appear in the\n queries targetlist (what they must on UPDATE to change).\n\n So at create table time, a rule like\n\n CREATE RULE somename AS ON UPDATE TO table\n ATTRIBUTE att1, att2\n WHERE old.att1 != new.att1 OR old.att2 != old.att2\n DO RAISE EXCEPTION 'Primary key of \"table\" cannot be changed';\n\n could be installed. As long as nobody specifies the fields of\n the primary key in it's update, the rewrite system will not\n add the RAISE query to the querytree list, so no checking is\n done at all.\n\n But as soon as one of the attributes appears in the UPDATE,\n there will be one extra query RAISE executed prior to the\n UPDATE itself and check that all the new values are the old\n ones. 
This would have the extra benefit that the transaction\n would abort BEFORE any changes have been made to the table at\n all (remember that UPDATE in Postgres means another heap\n tuple for each touched row and one more invalid tuple for\n vacuum to throw away and for in-the-middle-aborted updates it\n means so-far-I-came more never committed heap tuples that\n vacuum has to send to byte-hell).\n\n This will not appear in v6.5 (hopefully in v6.6). But it's\n IMHO the best solution. With the mentioned RAISE, plus the\n currently discussed deferred queries etc. we would have the\n rule system ready to support ALL the constraint stuff\n (cascaded delete, foreign key). But the more we use the rule\n system, the more important it becomes that we get rid of the\n block limit for tuples.\n\n I think it would be better to spend your effort on that\n issue.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n\n", "msg_date": "Thu, 11 Feb 1999 14:58:46 -0500 (EST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "BOUNCE [email protected]: Imbalanced parentheses or angle\n\tbrackets" } ]
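Pending a real citext type, the case-insensitive comparison Jan sketches can be approximated today over plain text; the function name and the =~= operator below are invented for illustration:

    CREATE FUNCTION ci_eq(text, text) RETURNS bool
        AS 'select lower($1) = lower($2)' LANGUAGE 'sql';
    CREATE OPERATOR =~= (leftarg = text, rightarg = text,
        procedure = ci_eq, commutator = =~=);
    SELECT 'ABC'::text =~= 'abc'::text;   -- returns 't'

A full citext would add the remaining comparison operators and an operator class, so that sorting and unique indexes (and hence primary keys) compare case-insensitively, as discussed above.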
[ { "msg_contents": "Hi,\n\nI have find some data type conversion problems with Pgsql (or with me).\n\n1) \tQuery: select (1::float8 * 1::float8)::text;\n\tResult: Fri 31 Dec 22:00:01 1999\n\n2)\tQuery: select (2000000::int8 * 2000000::int8)::text;\n\tResult: ERROR: int8 conversion to int4 is out of range.\nNote: I need this conversion to use as argument of function.\n\n3)\tQuery: select '01/01/1999'::date + '1 month'::timespan;\n\tResult OK: 02/01/1999 00:00:00\n\tQuery: select '01/01/1999'::date + '2 month'::timespan;\n\tResult not OK: 02/28/1999 23:00:00\nNote: I think it's using daylight saving (02/13/1999), but for financial uses the result should be: 03/01/1999\n\n4)\tQuery: select sum(int4_field::int8)::text from table_a;\n\tResult: Pgsql don't know how to transform node 107 (or something like this).\n\n5)\tTable_a (query 4) has 400,000 rows. Is it normal postmaster allocate 125MB of memory ?\n\nThanks,\n\nRicardo Coelho.\n\n", "msg_date": "Thu, 11 Feb 1999 19:51:21 -0200", "msg_from": "\"Ricardo J.C.Coelho\" <[email protected]>", "msg_from_op": true, "msg_subject": "Type conversion" }, { "msg_contents": "> I have some data type conversion problems with Pgsql\n> 1) Query: select (1::float8 * 1::float8)::text;\n> Result: Fri 31 Dec 22:00:01 1999\n\ntgl=> select (1::float8 * 1::float8)::text;\nResult: 1\n\nFixed in the current development sources. It requires a couple of new\nroutines in the system tables to work correctly, so has not been applied\nto the v6.4.x tree. You can disable this incorrect coersion in v6.4.x\nbut it will not learn how to do it correctly without these extra\nroutines.\n\n> 2) Query: select (2000000::int8 * 2000000::int8)::text;\n> Result: ERROR: int8 conversion to int4 is out of range.\n> Note: I need this conversion to use as argument of function.\n\nHmm. Needs a new routine in the system tables to avoid trying to convert\ndown to int4. Will look at it.\n\n> 3) Query: select '01/01/1999'::date + '1 month'::timespan;\n> Result OK: 02/01/1999 00:00:00\n> Query: select '01/01/1999'::date + '2 month'::timespan;\n> Result not OK: 02/28/1999 23:00:00\n\ntgl=> select '01/01/1999'::date + '2 month'::timespan;\nResult: Mon Mar 01 00:00:00 1999 PST\n\nI think there was a problem in the date->datetime conversion wrt time\nzone. Look on the web site in /pub/patches for some \"dt.c\" patches. It\nis also fixed in the current and v6.4.x sources. If you aren't running\nv6.4.2, it may be fixed there, and if you are running v6.4.2 then look\nfor the patches.\n\n> 4) Query: select sum(int4_field::int8)::text from table_a;\n> Result: Pgsql don't know how to transform node 107\n\ntgl=> select sum(i::int8)::text from t1;\nERROR: Function 'text(int8)' does not exist\n\nBut we already knew that from example (2), right?\n\n> 5) Table_a (query 4) has 400,000 rows. Is it normal postmaster \n> allocate 125MB of memory ?\n\nMaybe. What query were you executing exactly? How many intermediate\nvalues would be floating around? To help with speed, not all internal\nallocations are freed before the end of the transaction.\n\nThanks for the report.\n\n - Tom\n", "msg_date": "Fri, 12 Feb 1999 03:10:02 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Type conversion" } ]
[ { "msg_contents": "> > Bruce, did you ever commit the temp table stuff? If so what was the\n> > syntax?\n> > FYI using snapshot 2/10/1999.\n> > \t-DEJ\n> > \n> > \n> \n> CREATE TEMP TABLE. See src/test/regress/sql/temp.sql.\n\nSo, it's not in the snapshot... Anyone know why or am I just wrong?\n\t-DEJ\n", "msg_date": "Thu, 11 Feb 1999 15:52:18 -0600", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Temp tables" }, { "msg_contents": "> > > Bruce, did you ever commit the temp table stuff? If so what was the\n> > > syntax?\n> > > FYI using snapshot 2/10/1999.\n> > > \t-DEJ\n> > > \n> > > \n> > \n> > CREATE TEMP TABLE. See src/test/regress/sql/temp.sql.\n> \n> So, it's not in the snapshot... Anyone know why or am I just wrong?\n\nBeats me. Should be in there.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Feb 1999 17:46:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Temp tables" } ]
[ { "msg_contents": "I am working on it now, but it is currently not working properly.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Feb 1999 16:56:28 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "optimizer is broken" }, { "msg_contents": "I have fixed the optimizer, and it is working properly again, and faster\ntoo.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Feb 1999 00:56:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizer is fixed, and faster" }, { "msg_contents": "On Fri, 12 Feb 1999, Bruce Momjian wrote:\n\n> I have fixed the optimizer, and it is working properly again, and faster\n> too.\n\nWhat about cheaper? 8^)\n\n--\nTodd Graham Lewis 32�49'N,83�36'W (800) 719-4664, x2804\n******Linux****** MindSpring Enterprises [email protected]\n\n\"Those who write the code make the rules.\" -- Jamie Zawinski\n\n", "msg_date": "Fri, 12 Feb 1999 01:00:15 -0500 (EST)", "msg_from": "Todd Graham Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer is fixed, and faster" }, { "msg_contents": "On Fri, 12 Feb 1999, Todd Graham Lewis wrote:\n\n> On Fri, 12 Feb 1999, Bruce Momjian wrote:\n> \n> > I have fixed the optimizer, and it is working properly again, and faster\n> > too.\n> \n> What about cheaper? 8^)\n\n\"You too can get your fixed and faster optimizer for *today only* at the\n*low low* price of...$9.95...not available in stores, only calling out\nexclusive, will last for the next 6 minutes, toll free number. Do *NOT*\nbe the last on your block to own one of these\"\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 12 Feb 1999 10:20:29 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer is fixed, and faster" }, { "msg_contents": "> On Fri, 12 Feb 1999, Todd Graham Lewis wrote:\n> \n> > On Fri, 12 Feb 1999, Bruce Momjian wrote:\n> > \n> > > I have fixed the optimizer, and it is working properly again, and faster\n> > > too.\n> > \n> > What about cheaper? 8^)\n> \n> \"You too can get your fixed and faster optimizer for *today only* at the\n> *low low* price of...$9.95...not available in stores, only calling out\n> exclusive, will last for the next 6 minutes, toll free number. Do *NOT*\n> be the last on your block to own one of these\"\n\nLet me also mention I spent almost 3 hours on the phone with Tom Lane\nhelping me on this. Thanks to Tom.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Feb 1999 11:28:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Optimizer is fixed, and faster" } ]
[ { "msg_contents": "> > > > Bruce, did you ever commit the temp table stuff? If so \n> what was the\n> > > > syntax?\n> > > > FYI using snapshot 2/10/1999.\n> > > > \t-DEJ\n> > > > \n> > > CREATE TEMP TABLE. See src/test/regress/sql/temp.sql.\n> > \n> > So, it's not in the snapshot... Anyone know why or am I just wrong?\n> \n> Beats me. Should be in there.\nNeither the syntax nor the regression sql file are there, so I know it's\nnot something as simple as an initdb.\nI'll download the snapshot from tomorrow morning, try it again, and let\nyou know what I find. Do we have a list of what features should\nalready be implemented? I know we haven't beta'ed yet but I thought I'd\ntest a few things.\n\t-DEJ\n", "msg_date": "Thu, 11 Feb 1999 17:10:15 -0600", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Temp tables" }, { "msg_contents": "> > > > > Bruce, did you ever commit the temp table stuff? If so \n> > what was the\n> > > > > syntax?\n> > > > > FYI using snapshot 2/10/1999.\n> > > > > \t-DEJ\n> > > > > \n> > > > CREATE TEMP TABLE. See src/test/regress/sql/temp.sql.\n> > > \n> > > So, it's not in the snapshot... Anyone know why or am I just wrong?\n> > \n> > Beats me. Should be in there.\n> Neither the syntax nor the regression sql file are there, so I know it's\n> not something as simple as an initdb.\n> I'll download the snapshot from tomorrow morning, try it again, and let\n> you know what I find. Do we have a list of what features should\n> already be implemented? I know we haven't beta'ed yet but I thought I'd\n> test a few things.\n> \t-DEJ\n> \n\nNo list yet.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Feb 1999 18:35:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Temp tables" } ]
[ { "msg_contents": "\nOn Thu, 11 Feb 1999, Tom Lane wrote:\n> I am not able to reproduce the problem on HPUX, using either current\n> sources or 6.4.2. Looks like it must be platform specific.\n\n Of course it is platform-specific. I reported the problem on\nglibc2-based linucies, but the same database works fine (and allows VACUUM\nANALYZE) on sparc-solaris.\n Don't know about libc5 linux - I have no one in hand.\n\n> Could you build the backend with -g and send a gdb backtrace from the\n> corefile produced when the crash occurs?\n\n I'll do it this Saturday.\n\nOleg.\n---- \n Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Fri, 12 Feb 1999 13:29:14 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux" } ]
[ { "msg_contents": "Hello again,\n\nThanks again to those who pointed me to the semaphore problem. I,\nunfortunately have another problem:\n\nSolaris7 on a Sparc20 running 6.4.2. Occasionally (once or twice a\nday) under a very light load, brain-dead child processes begin to\naccumulate in my system. If left unchecked, eventually the parent\nprocess runs out of resources and dies, orphaning all the lost\nprocesses. (Now that I have solved the semaphore error, it appears\nto be the backend limit of 64 processes.)\n\nHere is a snapshot of truss on some of the processes:\n# truss -p 5879\nsemop(259915776, 0xEFFFC560, 1) (sleeping...)\n# truss -p 5912\nsemop(259915776, 0xEFFFC190, 1) (sleeping...)\n# truss -p 5915\nsemop(259915776, 0xEFFFC190, 1) (sleeping...)\n# truss -p 5931\nsemop(259915776, 0xEFFFC280, 1) (sleeping...)\n# truss -p 5926\nsemop(259915776, 0xEFFFC280, 1) (sleeping...)\n\nThey all appear to be waiting on a semaphore operation which\napparently never happens. The number of stalled processes grows\nrapidly (it has gone from 12 to 21 while I wrote this e-mail).\n\nThe stalled processes all started between 6:57am PST and 7:18am PST,\nhere is what postmaster wrote to the log:\nFeb 12 06:56:46 constantinople POSTMASTER: FATAL: pq_putnchar:\nfputc() failed: errno=32\nFeb 12 06:57:42 constantinople POSTMASTER: NOTICE: Deadlock\ndetected -- See the lock(l) manual page for a possible cause.\nFeb 12 06:57:42 constantinople POSTMASTER: ERROR: WaitOnLock: error\non wakeup - Aborting this transaction\nFeb 12 06:57:42 constantinople POSTMASTER: NOTICE: Deadlock\ndetected -- See the lock(l) manual page for a possible cause.\nFeb 12 06:57:42 constantinople POSTMASTER: ERROR: WaitOnLock: error\non wakeup - Aborting this transaction\nFeb 12 07:02:18 constantinople POSTMASTER: FATAL: pq_putnchar:\nfputc() failed: errno=32\nFeb 12 07:02:19 constantinople last message repeated 2 times\n\nMost of the time, things just work, but it appears that once\nsomethins has gone awry, I experience a spiraling death.\n\nThoughts? Suggestions? Help? :)\n\nDwD\n--\nDaryl W. Dunbar\nhttp://www.com, Where the Web Begins!\nmailto:[email protected]\n\n", "msg_date": "Fri, 12 Feb 1999 10:39:36 -0500", "msg_from": "\"Daryl W. Dunbar\" <[email protected]>", "msg_from_op": true, "msg_subject": "More postmaster troubles" }, { "msg_contents": "> Solaris7 on a Sparc20 running 6.4.2. Occasionally (once or twice a\n> day) under a very light load, brain-dead child processes begin to\n> accumulate in my system. If left unchecked, eventually the parent\n> process runs out of resources and dies, orphaning all the lost\n> processes. (Now that I have solved the semaphore error, it appears\n> to be the backend limit of 64 processes.)\n\nHave you installed following patches? This solves the problem when #\nof backends reaches MaxBackendId. I'm not sure if your problem relates\nto this, though.\n\n-------------------------------- cut here ---------------------------\n*** postgresql-6.4.2/src/backend/postmaster/postmaster.c.orig\tSun Nov 29 10:52:32 1998\n--- postgresql-6.4.2/src/backend/postmaster/postmaster.c\tSat Jan 9 18:14:52 1999\n***************\n*** 238,243 ****\n--- 238,244 ----\n static long PostmasterRandom(void);\n static void RandomSalt(char *salt);\n static void SignalChildren(SIGNAL_ARGS);\n+ static int CountChildren(void);\n \n #ifdef CYR_RECODE\n void\t\tGetCharSetByHost(char *, int, char *);\n***************\n*** 754,764 ****\n \t\t\t\t * by the backend.\n \t\t\t\t */\n \n! 
\t\t\t\tif (BackendStartup(port) != STATUS_OK)\n! \t\t\t\t\tPacketSendError(&port->pktInfo,\n \t\t\t\t\t\t\t\t\t\"Backend startup failed\");\n! \t\t\t\telse\n! \t\t\t\t\tstatus = STATUS_ERROR;\n \t\t\t}\n \n \t\t\t/* Close the connection if required. */\n--- 755,771 ----\n \t\t\t\t * by the backend.\n \t\t\t\t */\n \n! if (CountChildren() < MaxBackendId) {\n! \t\t\t\t\tif (BackendStartup(port) != STATUS_OK)\n! \t\t\t\t\t\tPacketSendError(&port->pktInfo,\n \t\t\t\t\t\t\t\t\t\"Backend startup failed\");\n! \t\t\t\t\telse {\n! \t\t\t\t\t\tstatus = STATUS_ERROR;\n! \t\t\t\t\t}\n! \t\t\t\t} else {\n! \t\t\t\t\tPacketSendError(&port->pktInfo,\n! \t\t\t\t\t\"There are too many backends\");\n! \t\t\t\t}\n \t\t\t}\n \n \t\t\t/* Close the connection if required. */\n***************\n*** 1617,1620 ****\n--- 1624,1655 ----\n \t}\n \n \treturn random() ^ random_seed;\n+ }\n+ \n+ /*\n+ * Count up number of chidren processes.\n+ */\n+ static int\n+ CountChildren(void)\n+ {\n+ \tDlelem\t *curr,\n+ \t\t\t *next;\n+ \tBackend *bp;\n+ \tint\t\t\tmypid = getpid();\n+ \tint\tcnt = 0;\n+ \n+ \tcurr = DLGetHead(BackendList);\n+ \twhile (curr)\n+ \t{\n+ \t\tnext = DLGetSucc(curr);\n+ \t\tbp = (Backend *) DLE_VAL(curr);\n+ \n+ \t\tif (bp->pid != mypid)\n+ \t\t{\n+ \t\t\tcnt++;\n+ \t\t}\n+ \n+ \t\tcurr = next;\n+ \t}\n+ \treturn(cnt);\n }\n\n", "msg_date": "Sat, 13 Feb 1999 15:03:26 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More postmaster troubles " }, { "msg_contents": "Thank you Tatsousan. This patch will solve the dying process\nproblem when I reach MaxBackendId (which I increased from 64 to\n128). However, I do not know what is causing the spiraling death of\nthe processes in the first place. :(\n\nIs there some place I should be looking for other patches, besides\nthose listed on www.postgresql.org?\n\nThank you for your continued help.\n\nDwD\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf\n> Of Tatsuo Ishii\n> Sent: Saturday, February 13, 1999 1:03 AM\n> To: Daryl W. Dunbar\n> Cc: pgsql-hackers@postgreSQL. org\n> Subject: Re: [HACKERS] More postmaster troubles\n>\n>\n> > Solaris7 on a Sparc20 running 6.4.2. Occasionally\n> (once or twice a\n> > day) under a very light load, brain-dead child\n> processes begin to\n> > accumulate in my system. If left unchecked, eventually\n> the parent\n> > process runs out of resources and dies, orphaning all the lost\n> > processes. (Now that I have solved the semaphore\n> error, it appears\n> > to be the backend limit of 64 processes.)\n>\n> Have you installed following patches? This solves the\n> problem when #\n> of backends reaches MaxBackendId. I'm not sure if your\n> problem relates\n> to this, though.\n>\n> -------------------------------- cut here\n> ---------------------------\n> ***\n> postgresql-6.4.2/src/backend/postmaster/postmaster.c.orig\n> Sun Nov 29 10:52:32 1998\n> --- postgresql-6.4.2/src/backend/postmaster/postmaster.c\n> Sat Jan 9 18:14:52 1999\n> ***************\n> *** 238,243 ****\n> --- 238,244 ----\n> static long PostmasterRandom(void);\n> static void RandomSalt(char *salt);\n> static void SignalChildren(SIGNAL_ARGS);\n> + static int CountChildren(void);\n>\n> #ifdef CYR_RECODE\n> void\t\tGetCharSetByHost(char *, int, char *);\n> ***************\n> *** 754,764 ****\n> \t\t\t\t * by the backend.\n> \t\t\t\t */\n>\n> ! 
\t\t\t\tif (BackendStartup(port) !=\n> STATUS_OK)\n> !\n> PacketSendError(&port->pktInfo,\n>\n> \t\"Backend startup failed\");\n> ! \t\t\t\telse\n> ! \t\t\t\t\tstatus = STATUS_ERROR;\n> \t\t\t}\n>\n> \t\t\t/* Close the connection if required. */\n> --- 755,771 ----\n> \t\t\t\t * by the backend.\n> \t\t\t\t */\n>\n> ! if (CountChildren() <\n> MaxBackendId) {\n> ! \t\t\t\t\tif\n> (BackendStartup(port) != STATUS_OK)\n> !\n> PacketSendError(&port->pktInfo,\n>\n> \t\"Backend startup failed\");\n> ! \t\t\t\t\telse {\n> ! \t\t\t\t\t\tstatus =\n> STATUS_ERROR;\n> ! \t\t\t\t\t}\n> ! \t\t\t\t} else {\n> !\n> PacketSendError(&port->pktInfo,\n> ! \t\t\t\t\t\"There are too many\n> backends\");\n> ! \t\t\t\t}\n> \t\t\t}\n>\n> \t\t\t/* Close the connection if required. */\n> ***************\n> *** 1617,1620 ****\n> --- 1624,1655 ----\n> \t}\n>\n> \treturn random() ^ random_seed;\n> + }\n> +\n> + /*\n> + * Count up number of chidren processes.\n> + */\n> + static int\n> + CountChildren(void)\n> + {\n> + \tDlelem\t *curr,\n> + \t\t\t *next;\n> + \tBackend *bp;\n> + \tint\t\t\tmypid = getpid();\n> + \tint\tcnt = 0;\n> +\n> + \tcurr = DLGetHead(BackendList);\n> + \twhile (curr)\n> + \t{\n> + \t\tnext = DLGetSucc(curr);\n> + \t\tbp = (Backend *) DLE_VAL(curr);\n> +\n> + \t\tif (bp->pid != mypid)\n> + \t\t{\n> + \t\t\tcnt++;\n> + \t\t}\n> +\n> + \t\tcurr = next;\n> + \t}\n> + \treturn(cnt);\n> }\n>\n\n", "msg_date": "Sat, 13 Feb 1999 13:23:29 -0500", "msg_from": "\"Daryl W. Dunbar\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] More postmaster troubles " }, { "msg_contents": "\"Daryl W. Dunbar\" <[email protected]> writes:\n> Thank you Tatsousan. This patch will solve the dying process\n> problem when I reach MaxBackendId (which I increased from 64 to\n> 128). However, I do not know what is causing the spiraling death of\n> the processes in the first place. :(\n\nHmm. I have noticed at least one place in the code where there is an\nundocumented hard-wired dependency on MaxBackendId, to wit MAX_PROC_SEMS\nin include/storage/proc.h which is set at 128. Presumably it should be\nequal to MaxBackendId (and I intend to fix that soon). Evidently that\nparticular bug is not hurting you (yet) but perhaps there are similar\nerrors elsewhere that kick in sooner. Do you see the spiraling-death\nproblem if you run with MaxBackendId at its customary value of 64?\n\nThe log extract you posted before mentions \"fputc() failed: errno=32\"\nwhich suggests an unexpected client disconnect during a transaction.\nI suspect the backend that gets that disconnect is failing to clean up\nproperly before exiting, and is leaving one or more locks locked.\nWe don't have enough info yet to track down the cause, but I suggest\nwe could narrow it down some by seeing whether the problem goes away\nwith a lower MaxBackendId setting.\n\n(You might also want to work on making your clients more robust,\nbut I'd like to see if we can solve the backend bug first ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Feb 1999 15:22:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More postmaster troubles " }, { "msg_contents": "Tom,\n\nI have to date experienced the problem only with MaxBackendId set to\n64. Today I installed a version of the code with it set to 128\n(just picked that number out of luck, but would like to get it\nhigher). 
By the way, I had to tune the kernel to allow me to\nincrease MaxBackendId, this time in shared memory (SHMMAX).\n\nAs for the clients, they are web users via mod_perl/DBI/DBD:Pg. It\nis possible that the user is hitting the stop button right at a time\nwhich hangs the connection (backend), but I have been unable to\nreproduce that so far. That was my first thought on this problem.\nThe fact that it apparently spirals is disturbing, I highly doubt\nthere is a user out there hitting the stop key 64 times in a row. :)\n\nThanks for your help,\n\nDwD\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Saturday, February 13, 1999 3:23 PM\n> To: Daryl W. Dunbar\n> Cc: [email protected]\n> Subject: Re: [HACKERS] More postmaster troubles\n>\n>\n> \"Daryl W. Dunbar\" <[email protected]> writes:\n> > Thank you Tatsousan. This patch will solve the dying process\n> > problem when I reach MaxBackendId (which I increased from 64 to\n> > 128). However, I do not know what is causing the\n> spiraling death of\n> > the processes in the first place. :(\n>\n> Hmm. I have noticed at least one place in the code where\n> there is an\n> undocumented hard-wired dependency on MaxBackendId, to\n> wit MAX_PROC_SEMS\n> in include/storage/proc.h which is set at 128.\n> Presumably it should be\n> equal to MaxBackendId (and I intend to fix that soon).\n> Evidently that\n> particular bug is not hurting you (yet) but perhaps there\n> are similar\n> errors elsewhere that kick in sooner. Do you see the\n> spiraling-death\n> problem if you run with MaxBackendId at its customary value of 64?\n>\n> The log extract you posted before mentions \"fputc()\n> failed: errno=32\"\n> which suggests an unexpected client disconnect during a\n> transaction.\n> I suspect the backend that gets that disconnect is\n> failing to clean up\n> properly before exiting, and is leaving one or more locks locked.\n> We don't have enough info yet to track down the cause,\n> but I suggest\n> we could narrow it down some by seeing whether the\n> problem goes away\n> with a lower MaxBackendId setting.\n>\n> (You might also want to work on making your clients more robust,\n> but I'd like to see if we can solve the backend bug first ...)\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Sat, 13 Feb 1999 15:34:30 -0500", "msg_from": "\"Daryl W. Dunbar\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] More postmaster troubles " } ]
[ { "msg_contents": "Already asked this in the other lists so here.\n\nI need to store some polygons that are larger than 8K.\nI was reading in hackers archives talk about a solution\nto the 8K limit. Was anything done? If so, what do I\nneed to do to solve my problem?\n\nI don't subscribe to the hackers mailing list should I \nsubscribe if I ask a question here?\n\n\nRegards,\nKenneth R. Mort <[email protected]>\nTreeTop Research\nBrooklyn, NY, USA\n", "msg_date": "Fri, 12 Feb 1999 23:11:48 -0500", "msg_from": "\"Ken Mort\" <[email protected]>", "msg_from_op": true, "msg_subject": "8K block limit" } ]
[ { "msg_contents": "\nDoes anyone know what's going on here?\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n---------- Forwarded message ----------\nDate: Fri, 12 Feb 1999 18:59:00 -0800\nFrom: Jason Venner <[email protected]>\nTo: Peter T Mount <[email protected]>\nSubject: JDBC lo crashes etc\n\n\nI recompiled my 6.3.2 with cassert turned on, and using the 6.4.2 jdbc driver, I get the following\nafter inserting a bunch of images.\n\npostmaster.log: NOTICE: SIMarkEntryData: cache state reset\npostmaster.log: Failed Assertion(\"!(RelationNameCache->hctl->nkeys == 10):\", File: \"relcache.c\", Line: 1523)\npostmaster.log: !(RelationNameCache->hctl->nkeys == 10) (0) [Illegal seek]\n\n", "msg_date": "Sat, 13 Feb 1999 11:02:49 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "JDBC lo crashes etc (fwd)" }, { "msg_contents": "I'd also like to know - this is one of the errors I have seen using\nPostGres when using multiple clients, and why we've had to implement\na lock manager outside of PostGres to limit access to a single client\nat a time.\n\nThomas\n\nPeter T Mount wrote:\n> \n> Does anyone know what's going on here?\n> \n> --\n> Peter T Mount [email protected]\n> Main Homepage: http://www.retep.org.uk\n> PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n> Java PDF Generator: http://www.retep.org.uk/pdf\n> \n> ---------- Forwarded message ----------\n> Date: Fri, 12 Feb 1999 18:59:00 -0800\n> From: Jason Venner <[email protected]>\n> To: Peter T Mount <[email protected]>\n> Subject: JDBC lo crashes etc\n> \n> I recompiled my 6.3.2 with cassert turned on, and using the 6.4.2 jdbc driver, I get the following\n> after inserting a bunch of images.\n> \n> postmaster.log: NOTICE: SIMarkEntryData: cache state reset\n> postmaster.log: Failed Assertion(\"!(RelationNameCache->hctl->nkeys == 10):\", File: \"relcache.c\", Line: 1523)\n> postmaster.log: !(RelationNameCache->hctl->nkeys == 10) (0) [Illegal seek]\n\n-- \n------------------------------------------------------------\nThomas Reinke Tel: (416) 460-7021\nDirector of Technology Fax: (416) 598-2319\nE-Soft Inc. http://www.e-softinc.com\n", "msg_date": "Sun, 14 Feb 1999 22:15:08 -0500", "msg_from": "Thomas Reinke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JDBC lo crashes etc (fwd)" } ]
[ { "msg_contents": "\nOff topic. But hey, it's saturday morning.\n\nScrappy wrote...\n\n> On Fri, 12 Feb 1999, Todd Graham Lewis wrote:\n> \n> > On Fri, 12 Feb 1999, Bruce Momjian wrote:\n> > \n> > > I have fixed the optimizer, and it is working properly again, and faster\n> > > too.\n> > \n> > What about cheaper? 8^)\n> \n> \"You too can get your fixed and faster optimizer for *today only* at the\n> *low low* price of...$9.95...not available in stores, only calling out\n> exclusive, will last for the next 6 minutes, toll free number. Do *NOT*\n> be the last on your block to own one of these\"\n> \n\nOne 'zen of engineering' lesson I was taught was when many years ago a\nmentor of mine wrote on the board:\n\n\tGOOD\n\tFAST\n\tCHEAP\n\n pick any two\n\nPostgreSQL, it seems, may be an exception to this rule.\n\n-- cary\n\n\n", "msg_date": "Sat, 13 Feb 1999 09:17:30 -0500 (EST)", "msg_from": "\"Cary O'Brien\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Optimizer is fixed, and faster" } ]
[ { "msg_contents": "Is anyone else seeing failure of the \"rules\" regression test with\ncurrent CVS sources, or is it just me?\n\nLooking at the differences, I see that rules.sql uses getpgusername(),\nwhich means that it is certain to create a \"failure\" if run under any\nunusual user name. This is bad (and the fact that the committed version\nof rules.out was evidently made under the nonstandard name \"pgsql\"\ndoesn't help). I suggest removing that usage.\n\nThe other differences seem to be ones where the same tuples are returned\nbut not in the same order as is obtained on the system where the\nexpected-output file was made. I recall a similar complaint back in\nlate October 98, and I think the root cause now is the same as it was\nthen. To produce the \"shoelace\" view, Postgres is doing a merge join,\nwhich involves qsort()'ing the tuples of the base tables --- and for\nequal-keyed items qsort() can return the items in an\nimplementation-dependent order. So the regression test will succeed or\nfail depending on the vagaries of the local qsort().\n\nI suggest adding \"ORDER BY sl_name\", or some such, to each of the views\nin the rules test that is made from a join.\n\nBTW, it's possible that this system-dependency in the rules test was\npreviously masked by the optimizer bugs that Bruce has fixed recently;\nthat would explain why it wasn't seen before. I know I wasn't seeing\nthis difference until last week. But if the optimizer was previously\npicking a join method that didn't involve a sort, the problem would\nbe masked.\n\n\t\t\tregards, tom lane\n\n\n*** expected/rules.out\tTue Feb 9 17:44:57 1999\n--- results/rules.out\tSat Feb 13 14:31:56 1999\n***************\n*** 919,929 ****\n sl1 | 5|black | 80|cm | 80\n sl2 | 6|black | 100|cm | 100\n sl7 | 7|brown | 60|cm | 60\n- sl3 | 0|black | 35|inch | 88.9\n sl4 | 8|black | 40|inch | 101.6\n sl8 | 1|brown | 40|inch | 101.6\n- sl5 | 4|brown | 1|m | 100\n sl6 | 0|brown | 0.9|m | 90\n (8 rows)\n \n QUERY: SELECT * FROM shoe_ready WHERE total_avail >= 2;\n--- 919,929 ----\n sl1 | 5|black | 80|cm | 80\n sl2 | 6|black | 100|cm | 100\n sl7 | 7|brown | 60|cm | 60\n sl4 | 8|black | 40|inch | 101.6\n+ sl3 | 0|black | 35|inch | 88.9\n sl8 | 1|brown | 40|inch | 101.6\n sl6 | 0|brown | 0.9|m | 90\n+ sl5 | 4|brown | 1|m | 100\n (8 rows)\n \n QUERY: SELECT * FROM shoe_ready WHERE total_avail >= 2;\n***************\n*** 950,957 ****\n QUERY: UPDATE shoelace_data SET sl_avail = 6 WHERE sl_name = 'sl7';\n QUERY: SELECT * FROM shoelace_log;\n sl_name |sl_avail|log_who|log_when\n! ----------+--------+-------+--------\n! sl7 | 6|pgsql |epoch \n (1 row)\n \n QUERY: CREATE RULE shoelace_ins AS ON INSERT TO shoelace\n--- 950,957 ----\n QUERY: UPDATE shoelace_data SET sl_avail = 6 WHERE sl_name = 'sl7';\n QUERY: SELECT * FROM shoelace_log;\n sl_name |sl_avail|log_who |log_when\n! ----------+--------+--------+--------\n! sl7 | 6|postgres|epoch \n (1 row)\n \n QUERY: CREATE RULE shoelace_ins AS ON INSERT TO shoelace\n***************\n*** 997,1030 ****\n sl1 | 5|black | 80|cm | 80\n sl2 | 6|black | 100|cm | 100\n sl7 | 6|brown | 60|cm | 60\n- sl3 | 0|black | 35|inch | 88.9\n sl4 | 8|black | 40|inch | 101.6\n sl8 | 1|brown | 40|inch | 101.6\n! 
sl5 | 4|brown | 1|m | 100\n sl6 | 0|brown | 0.9|m | 90\n (8 rows)\n \n QUERY: insert into shoelace_ok select * from shoelace_arrive;\n QUERY: SELECT * FROM shoelace;\n sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm\n ----------+--------+----------+------+--------+---------\n- sl1 | 5|black | 80|cm | 80\n sl2 | 6|black | 100|cm | 100\n sl7 | 6|brown | 60|cm | 60\n sl4 | 8|black | 40|inch | 101.6\n sl3 | 10|black | 35|inch | 88.9\n- sl8 | 21|brown | 40|inch | 101.6\n sl5 | 4|brown | 1|m | 100\n sl6 | 20|brown | 0.9|m | 90\n (8 rows)\n \n QUERY: SELECT * FROM shoelace_log;\n sl_name |sl_avail|log_who|log_when\n! ----------+--------+-------+--------\n! sl7 | 6|pgsql |epoch \n! sl3 | 10|pgsql |epoch \n! sl6 | 20|pgsql |epoch \n! sl8 | 21|pgsql |epoch \n (4 rows)\n \n QUERY: CREATE VIEW shoelace_obsolete AS\n--- 997,1030 ----\n sl1 | 5|black | 80|cm | 80\n sl2 | 6|black | 100|cm | 100\n sl7 | 6|brown | 60|cm | 60\n sl4 | 8|black | 40|inch | 101.6\n sl8 | 1|brown | 40|inch | 101.6\n! sl3 | 0|black | 35|inch | 88.9\n sl6 | 0|brown | 0.9|m | 90\n+ sl5 | 4|brown | 1|m | 100\n (8 rows)\n \n QUERY: insert into shoelace_ok select * from shoelace_arrive;\n QUERY: SELECT * FROM shoelace;\n sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm\n ----------+--------+----------+------+--------+---------\n sl2 | 6|black | 100|cm | 100\n+ sl1 | 5|black | 80|cm | 80\n sl7 | 6|brown | 60|cm | 60\n+ sl8 | 21|brown | 40|inch | 101.6\n sl4 | 8|black | 40|inch | 101.6\n sl3 | 10|black | 35|inch | 88.9\n sl5 | 4|brown | 1|m | 100\n sl6 | 20|brown | 0.9|m | 90\n (8 rows)\n \n QUERY: SELECT * FROM shoelace_log;\n sl_name |sl_avail|log_who |log_when\n! ----------+--------+--------+--------\n! sl7 | 6|postgres|epoch \n! sl3 | 10|postgres|epoch \n! sl6 | 20|postgres|epoch \n! sl8 | 21|postgres|epoch \n (4 rows)\n \n QUERY: CREATE VIEW shoelace_obsolete AS\n***************\n*** 1053,1065 ****\n QUERY: SELECT * FROM shoelace;\n sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm\n ----------+--------+----------+------+--------+---------\n- sl1 | 5|black | 80|cm | 80\n sl2 | 6|black | 100|cm | 100\n sl7 | 6|brown | 60|cm | 60\n- sl4 | 8|black | 40|inch | 101.6\n sl3 | 10|black | 35|inch | 88.9\n! sl8 | 21|brown | 40|inch | 101.6\n sl10 | 1000|magenta | 40|inch | 101.6\n sl5 | 4|brown | 1|m | 100\n sl6 | 20|brown | 0.9|m | 90\n (9 rows)\n--- 1053,1065 ----\n QUERY: SELECT * FROM shoelace;\n sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm\n ----------+--------+----------+------+--------+---------\n sl2 | 6|black | 100|cm | 100\n+ sl1 | 5|black | 80|cm | 80\n sl7 | 6|brown | 60|cm | 60\n sl3 | 10|black | 35|inch | 88.9\n! sl4 | 8|black | 40|inch | 101.6\n sl10 | 1000|magenta | 40|inch | 101.6\n+ sl8 | 21|brown | 40|inch | 101.6\n sl5 | 4|brown | 1|m | 100\n sl6 | 20|brown | 0.9|m | 90\n (9 rows)\n\n----------------------\n\n", "msg_date": "Sat, 13 Feb 1999 15:03:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Failures in 'rules' regression test" }, { "msg_contents": "Tom Lane wrote:\n> \n> Is anyone else seeing failure of the \"rules\" regression test with\n> current CVS sources, or is it just me?\n\n\"me too\"\n\nThough my output is in a slightly different order again, ie., different\nsystem, so your qsort() theory seems good.\n\nBTW, the error messages seem to have changed (running NetBSD-current),\nso apart from rules, everything passes.\n\nfloat8 .. failed\ngeometry .. failed\nmisc .. failed\nrules .. 
failed\n\n*** expected/float8-NetBSD.out Sat Feb 6 19:53:55 1999\n--- results/float8.out Sun Feb 14 14:16:38 1999\n***************\n*** 209,217 ****\n (5 rows)\n \n QUERY: INSERT INTO FLOAT8_TBL(f1) VALUES ('10e400');\n! ERROR: Bad float8 input format '10e400'\n QUERY: INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e400');\n! ERROR: Bad float8 input format '-10e400'\n QUERY: INSERT INTO FLOAT8_TBL(f1) VALUES ('10e-400');\n QUERY: INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e-400');\n QUERY: DELETE FROM FLOAT8_TBL;\n--- 209,217 ----\n (5 rows)\n \n QUERY: INSERT INTO FLOAT8_TBL(f1) VALUES ('10e400');\n! ERROR: Input '10e400' is out of range for float8\n QUERY: INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e400');\n! ERROR: Input '-10e400' is out of range for float8\n QUERY: INSERT INTO FLOAT8_TBL(f1) VALUES ('10e-400');\n QUERY: INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e-400');\n QUERY: DELETE FROM FLOAT8_TBL;\n\n*** expected/geometry-NetBSD.out Sat Feb 6 19:53:55 1999\n--- results/geometry.out Sun Feb 14 14:16:40 1999\n***************\n*** 87,93 ****\n \n QUERY: SELECT '' AS count, p.f1, l.s, l.s # p.f1 AS intersection\n FROM LSEG_TBL l, POINT_TBL p;\n! ERROR: There is more than one possible operator '#' for types 'lseg' and 'point'\n You will have to retype this query using an explicit cast\n QUERY: SELECT '' AS thirty, p.f1, l.s, p.f1 ## l.s AS closest\n FROM LSEG_TBL l, POINT_TBL p;\n--- 87,93 ----\n \n QUERY: SELECT '' AS count, p.f1, l.s, l.s # p.f1 AS intersection\n FROM LSEG_TBL l, POINT_TBL p;\n! ERROR: Unable to identify an operator '#' for types 'lseg' and 'point'\n You will have to retype this query using an explicit cast\n QUERY: SELECT '' AS thirty, p.f1, l.s, p.f1 ## l.s AS closest\n FROM LSEG_TBL l, POINT_TBL p;\n\n*** expected/misc.out Sun Feb 14 14:16:25 1999\n--- results/misc.out Sun Feb 14 14:18:42 1999\n***************\n*** 6,19 ****\n SET stringu1 = reverse_name(onek.stringu1)\n WHERE onek.stringu1 = 'JBAAAA' and\n onek.stringu1 = tmp.stringu1;\n- NOTICE: Non-functional update, only first update is performed\n- NOTICE: Non-functional update, only first update is performed\n QUERY: UPDATE tmp\n SET stringu1 = reverse_name(onek2.stringu1)\n WHERE onek2.stringu1 = 'JCAAAA' and\n onek2.stringu1 = tmp.stringu1;\n- NOTICE: Non-functional update, only first update is performed\n- NOTICE: Non-functional update, only first update is performed\n QUERY: DROP TABLE tmp;\n QUERY: COPY onek TO '/home/prlw1/pgsql/src/test/regress/input/../results/onek.data';\n QUERY: DELETE FROM onek;\n--- 6,15 ----\n\n\nCheers,\n\nPatrick\n", "msg_date": "Sun, 14 Feb 1999 14:34:38 +0000 (GMT)", "msg_from": "\"Patrick Welche\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Failures in 'rules' regression test" }, { "msg_contents": "> \n> Is anyone else seeing failure of the \"rules\" regression test with\n> current CVS sources, or is it just me?\n\n Must have been me :-(\n\n\tI added some more tests recently - will take a look at it.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n", "msg_date": "Mon, 15 Feb 1999 12:27:40 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Failures in 'rules' regression test" } ]
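The ordering fix Tom suggests needs no new machinery; pinning the result order of the join views makes the expected output independent of the local qsort(), for example:

    SELECT * FROM shoelace ORDER BY sl_name;   -- stable across platforms

and likewise an ORDER BY for each view built from a join, plus dropping the getpgusername() dependence so the expected file no longer bakes in a particular user name.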
[ { "msg_contents": "I have changed comments like my-function-name-- to my_function_name.\n\nThis affects only comments.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Feb 1999 17:51:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Source code cleanup" }, { "msg_contents": "On Sat, Feb 13, 1999 at 05:51:04PM -0500, Bruce Momjian wrote:\n> I have changed comments like my-function-name-- to my_function_name.\n> \n> This affects only comments.\n\nI have no idea whether your changes affect ecpg as well. But someone did\nchange something on the ecpg subtree while a huge patch of mine is still\nwaiting to be applied. I have no idea whether it will patch in cleanly now.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Sun, 14 Feb 1999 11:08:17 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Source code cleanup" }, { "msg_contents": "> On Sat, Feb 13, 1999 at 05:51:04PM -0500, Bruce Momjian wrote:\n> > I have changed comments like my-function-name-- to my_function_name.\n> > \n> > This affects only comments.\n> \n> I have no idea whether your changes affect ecpg as well. But someone did\n> change something on the ecpg subtree while a huge patch of mine is still\n> waiting to be applied. I have no idea whether it will patch in cleanly now.\n\nIf you use cvs, you can do a cvs update and the changes will be merged\nin. If not, I will merge them in by hand.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Feb 1999 13:15:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Source code cleanup" }, { "msg_contents": "On Mon, Feb 15, 1999 at 01:15:47PM -0500, Bruce Momjian wrote:\n> If you use cvs, you can do a cvs update and the changes will be merged\n\nI have no idea how to do this. I use cvsup.\n\n> in. If not, I will merge them in by hand.\n\nNo, that's too much work. I can resend it, no big deal.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Mon, 15 Feb 1999 20:02:40 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Source code cleanup" }, { "msg_contents": "> On Mon, Feb 15, 1999 at 01:15:47PM -0500, Bruce Momjian wrote:\n> > If you use cvs, you can do a cvs update and the changes will be merged\n> \n> I have no idea how to do this. I use cvsup.\n\ncvsup can't do it, I think. It just overlays the changed file.\n\n> \n> > in. If not, I will merge them in by hand.\n> \n> No, that's too much work. I can resend it, no big deal.\n\nWere there a lot of changes affecting you? Sorry.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Feb 1999 14:20:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Source code cleanup" } ]
[ { "msg_contents": "Is there any documentation on database recovery. I am new to Postgres. I\nreally like what I see. I have written a membership database application\nfor a NPO in New Mexico. It is written in Access 97. I want to migrate to\na real database engine and I am strongly considering Postgres. I have\nseveral questions:\n\n1)\tIs transaction logging available, how does it work, and how do I set\nit up. I have been through most of the on-line documentation several time\n(available on the internet) and have not found anything that talks about\ntransaction logging.\n\n2)\tDatabase recovery. If I make a back up at 10:00am and the database\ngoes south at 1:00pm, can I restore back to 10:00am and automatically\nre-post any/all transactions that occurred between 10:00am and 1:00pm\nwithout requiring the users to re-enter all their data?\n\n3)\tIs any working on mirroring or shadowing? I would like to be able\nhave a backup database engine on a second back-up server get automatically\nupdated soon after an update is posted to the main server.\n\nThanks, Michael\n\n\t-----Original Message-----\n\tFrom:\tTom Lane [SMTP:[email protected]]\n\tSent:\tSaturday, February 13, 1999 1:23 PM\n\tTo:\tDaryl W. Dunbar\n\tCc:\[email protected]\n\tSubject:\tRe: [HACKERS] More postmaster troubles \n\n\t\"Daryl W. Dunbar\" <[email protected]> writes:\n\t> Thank you Tatsousan. This patch will solve the dying process\n\t> problem when I reach MaxBackendId (which I increased from 64 to\n\t> 128). However, I do not know what is causing the spiraling death\nof\n\t> the processes in the first place. :(\n\n\tHmm. I have noticed at least one place in the code where there is\nan\n\tundocumented hard-wired dependency on MaxBackendId, to wit\nMAX_PROC_SEMS\n\tin include/storage/proc.h which is set at 128. Presumably it should\nbe\n\tequal to MaxBackendId (and I intend to fix that soon). Evidently\nthat\n\tparticular bug is not hurting you (yet) but perhaps there are\nsimilar\n\terrors elsewhere that kick in sooner. Do you see the\nspiraling-death\n\tproblem if you run with MaxBackendId at its customary value of 64?\n\n\tThe log extract you posted before mentions \"fputc() failed:\nerrno=32\"\n\twhich suggests an unexpected client disconnect during a transaction.\n\tI suspect the backend that gets that disconnect is failing to clean\nup\n\tproperly before exiting, and is leaving one or more locks locked.\n\tWe don't have enough info yet to track down the cause, but I suggest\n\twe could narrow it down some by seeing whether the problem goes away\n\twith a lower MaxBackendId setting.\n\n\t(You might also want to work on making your clients more robust,\n\tbut I'd like to see if we can solve the backend bug first ...)\n\n\t\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Feb 1999 17:13:03 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] More postmaster troubles " } ]
[ { "msg_contents": "Is there any documentation on database recovery. I am new to Postgres. I\nreally like what I see. I have written a membership database application\nfor a NPO in New Mexico. It is written in Access 97. I want to migrate to\na real database engine and I am strongly considering Postgres. I have\nseveral questions:\n\n1)\tIs transaction logging available, how does it work, and how do I set\nit up. I have been through most of the on-line documentation several time\n(available on the internet) and have not found anything that talks about\ntransaction logging.\n\n2)\tDatabase recovery. If I make a back up at 10:00am and the database\ngoes south at 1:00pm, can I restore back to 10:00am and automatically\nre-post any/all transactions that occurred between 10:00am and 1:00pm\nwithout requiring the users to re-enter all their data?\n\n3)\tIs any working on mirroring or shadowing? I would like to be able\nhave a backup database engine on a second back-up server get automatically\nupdated soon after an update is posted to the main server.\n\nThanks, Michael\n\n", "msg_date": "Sat, 13 Feb 1999 18:11:34 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Transaction logging?" } ]
[ { "msg_contents": "Is there any documentation on database recovery. I am new to Postgres. I\nreally like what I see. I have written a membership database application\nfor a NPO in New Mexico. It is written in Access 97. I want to migrate to\na real database engine and I am strongly considering Postgres. I have\nseveral questions:\n1)\tIs transaction logging available, how does it work, and how do I set\nit up. I have been through most of the on-line documentation several time\n(available on the internet) and have not found anything that talks about\ntransaction logging.\n2)\tDatabase recovery. If I make a back up at 10:00am and the database\ngoes south at 1:00pm, can I restore back to 10:00am and automatically\nre-post any/all transactions that occurred between 10:00am and 1:00pm\nwithout requiring the users to re-enter all their data?\n3)\tIs any working on mirroring or shadowing? I would like to be able\nhave a backup database engine on a second back-up server get automatically\nupdated soon after an update is posted to the main server.\n\nThanks, Michael\n", "msg_date": "Sat, 13 Feb 1999 18:47:04 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Transaction logging?" }, { "msg_contents": "On Sat, 13 Feb 1999, Michael Davis wrote:\n\n> Is there any documentation on database recovery. I am new to Postgres. I\n> really like what I see. I have written a membership database application\n> for a NPO in New Mexico. It is written in Access 97. I want to migrate to\n> a real database engine and I am strongly considering Postgres. I have\n> several questions:\n> 1)\tIs transaction logging available, how does it work, and how do I set\n> it up. I have been through most of the on-line documentation several time\n> (available on the internet) and have not found anything that talks about\n> transaction logging.\n> 2)\tDatabase recovery. If I make a back up at 10:00am and the database\n> goes south at 1:00pm, can I restore back to 10:00am and automatically\n> re-post any/all transactions that occurred between 10:00am and 1:00pm\n> without requiring the users to re-enter all their data?\n> 3)\tIs any working on mirroring or shadowing? I would like to be able\n> have a backup database engine on a second back-up server get automatically\n> updated soon after an update is posted to the main server.\n> \n> Thanks, Michael\n> \n\nTransaction logging and database recovery have all received\nconsiderable discussion, but are not yet implemented. Furthermore,\nto my knowledge they are not included in the upcoming release of\nv 6.5.\n\nI am unaware of any efforts to implement database replication.\n\nMarc Zuckman\[email protected]\n\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n_ Visit The Home and Condo MarketPlace\t\t _\n_ http://www.ClassyAd.com\t\t\t _\n_\t\t\t\t\t\t\t _\n_ FREE basic property listings/advertisements and searches. _\n_\t\t\t\t\t\t\t _\n_ Try our premium, yet inexpensive services for a real\t _\n_ selling or buying edge!\t\t\t\t _\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n\n", "msg_date": "Sat, 13 Feb 1999 19:51:33 -0500 (EST)", "msg_from": "Marc Howard Zuckman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Transaction logging?" } ]
[ { "msg_contents": "I've applied some patches to the REL6_4 branch for relatively obscure\ndate/time fixes. These are (mostly) the same patches I had posted on\n/pub/patches/ after REL6_4 had been declared dead :(\n\nI also applied a patch to remove the equivalence of float8 and datetime\nwhich had lead to bizarre interpretations of float8 as datetime when\ncoercing to string types.\n\nThe current development tree has some more complete fixes for numeric\nconversions to and from strings.\n\n - Tom\n", "msg_date": "Sun, 14 Feb 1999 04:19:25 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Patches applied for v6.4.3" } ]
[ { "msg_contents": "Can someone comment on what Bushy plans do in the optimizer, and whether\nthe code has any value?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Feb 1999 13:35:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "bushy plans" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Can someone comment on what Bushy plans do in the optimizer, and whether\n> the code has any value?\n\nNo, currently. But please don't remove them. It would be nice\nto have bushy plans implemented.\n\nVadim\n", "msg_date": "Mon, 15 Feb 1999 09:18:25 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bushy plans" }, { "msg_contents": "> Can someone comment on what Bushy plans do in the optimizer, and whether\n> the code has any value?\n\nIf anyone wants to research this, the driving field appears to be\nin JoinInfo called 'inactive', soon to be called bushy_inactive.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Feb 1999 21:29:06 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] bushy plans" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > Can someone comment on what Bushy plans do in the optimizer, and whether\n> > the code has any value?\n> \n> No, currently. But please don't remove them. It would be nice\n> to have bushy plans implemented.\n\nPlease tell me what they are supposed to do. I can get it working, I\nthink. I will not remove it. I have ifdef'ed it, though. If you tell\nme what it is, I will check it to see if it works.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Feb 1999 21:36:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] bushy plans" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Can someone comment on what Bushy plans do in the optimizer, and whether\n> > the code has any value?\n> \n> If anyone wants to research this, the driving field appears to be\n> in JoinInfo called 'inactive', soon to be called bushy_inactive.\n\nNo - BushyPlanFlag.\n\nVadim\n", "msg_date": "Mon, 15 Feb 1999 09:36:24 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bushy plans" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > > Can someone comment on what Bushy plans do in the optimizer, and whether\n> > > the code has any value?\n> > \n> > If anyone wants to research this, the driving field appears to be\n> > in JoinInfo called 'inactive', soon to be called bushy_inactive.\n> \n> No - BushyPlanFlag.\n\nYes, that enables Bushy Plans, but the work of bushy plans seems to be\ndriven by that field.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Feb 1999 21:38:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] bushy plans" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > > Can someone comment on what Bushy plans do in the optimizer, and whether\n> > > the code has any value?\n> >\n> > No, currently. But please don't remove them. It would be nice\n> > to have bushy plans implemented.\n> \n> Please tell me what they are supposed to do. I can get it working, I\n> think. I will not remove it. I have ifdef'ed it, though. If you tell\n> me what it is, I will check it to see if it works.\n\nIt doesn't work (failed assertion).\n\nWell, currently both geqo and old optimizer produces\nleft-sided plans: inner relation of an join is always\n_base_ relation (not join relation). In bushy plans\nboth outer and inner relations may be join ones.\n~1.5 - 2 years ago I added right-sided plans:\nouter relation is base, inner relation may be join.\nSometimes right-sided plans are 30% faster than left-sided.\nBushy plans could be more faster...\nBTW, I broke execution of right-sided plans ~ 1 year ago\nwhile implementing subqueries (near materialization node)\nand still have no time to fix it (seems no one except me\nused them so I didn't worry about this -:)).\n\nVadim\n", "msg_date": "Mon, 15 Feb 1999 10:02:44 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bushy plans" }, { "msg_contents": "> > Please tell me what they are supposed to do. I can get it working, I\n> > think. I will not remove it. I have ifdef'ed it, though. If you tell\n> > me what it is, I will check it to see if it works.\n> \n> It doesn't work (failed assertion).\n> \n> Well, currently both geqo and old optimizer produces\n> left-sided plans: inner relation of an join is always\n> _base_ relation (not join relation). In bushy plans\n> both outer and inner relations may be join ones.\n> ~1.5 - 2 years ago I added right-sided plans:\n> outer relation is base, inner relation may be join.\n> Sometimes right-sided plans are 30% faster than left-sided.\n> Bushy plans could be more faster...\n> BTW, I broke execution of right-sided plans ~ 1 year ago\n> while implementing subqueries (near materialization node)\n> and still have no time to fix it (seems no one except me\n> used them so I didn't worry about this -:)).\n\nThis explaination helps greatly. This clears up what is happening in\nthe code. I will keep the bushy stuff, and see if I can get it working,\nthough if the problem is outside of the optimizer, I will have trouble.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Feb 1999 00:31:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] bushy plans" }, { "msg_contents": "\nOne more thing. The optimizer was in terrible shape. It is just like\nthe parser and rewrite system before someone went through those and\ncleaned them up and fixed the bugs.\n\nWe get few reports of optimizer problems, so I was unaware of just how\nbroken it was.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Feb 1999 00:57:06 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] bushy plans" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > BTW, I broke execution of right-sided plans ~ 1 year ago\n> > while implementing subqueries (near materialization node)\n> > and still have no time to fix it (seems no one except me\n> > used them so I didn't worry about this -:)).\n> \n> This explaination helps greatly. This clears up what is happening in\n> the code. I will keep the bushy stuff, and see if I can get it working,\n> though if the problem is outside of the optimizer, I will have trouble.\n ^^^^^^^^^^^^^^^^^^^^^^^^\nI'll help with executor if you'll fix bushy plans generation.\n\nVadim\n", "msg_date": "Mon, 15 Feb 1999 14:06:40 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bushy plans" } ]
[ { "msg_contents": "auth 1d0280b9 subscribe pgsql-hackers [email protected]\n\n\n", "msg_date": "Sun, 14 Feb 1999 18:48:47", "msg_from": "Oscar Cabello <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "\nHallo,\n\nI am having enormous troubles with user defined types in langugae c in\npostgres. Do you know what am I doing wrong? I am afraid there is a\nbug in postgres.\n\nLook at this:\n\nFile testdb.c:\n---------------------\n#include <stdio.h>\n#include <string.h>\n#include <malloc.h>\n\ntypedef struct _mytype {\n\tint a;\n\tint b;\n} mytype;\n\nmytype *\nmytype_in (char *str)\n{\n\tmytype *ret;\n\n\tret = malloc (sizeof(mytype));\n\tret->a = 1;\n\tret->b = 2;\n\treturn (ret);\n}\n\nchar *\nmytype_out (mytype *mt)\n{\n\treturn (strdup(\"9,9\"));\n}\n\n-----------------------------\n\nYou can see that this type is good for nothing.\n(Except for demonstrational purpose.)\n\nThis C-code I compile:\n gcc -fPIC -c testdb.c\n ld -G -Bdynamic -o testdb.so testdb.o\n\n\nAnd then run these queries:\n CREATE FUNCTION mytype_in (opaque)\n RETURNS mytype\n AS '/localpath/testdb.so'\n LANGUAGE 'c';\n\n CREATE FUNCTION mytype_out (opaque)\n RETURNS opaque\n AS '/localpath/testdb.so'\n LANGUAGE 'c';\n\n CREATE TYPE mytype (\n internallength = 8,\n input = mytype_in,\n output = mytype_out);\n\n CREATE TABLE pok (x mytype, txt varchar);\n\n insert into pok (x,txt) values ('Anything','Anything');\n\n------------\n\nYou will see that so far goes everything fine.\nAfter typing \"Select * from pok\" we get core dump (in earlier versions of\npsql) or an infinite loop in brand new versions.\n\nYou will also find that when the user defined type goes in the 'pok' table\nas the last one, everything will work.\n\n\nI am running postgres 6.4.2 on SunOS 5.5.1.\n\nAny ideas?\nYour help will be much appreciated.\n\nPetr Danecek\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Sun, 14 Feb 1999 20:59:35 +0100 (CET)", "msg_from": "Petr Danecek <[email protected]>", "msg_from_op": true, "msg_subject": "Bug in user defined types in postgres?!" } ]
[ { "msg_contents": "\nHallo,\n\nI am having enormous troubles with user defined types in langugae c in\npostgres. Do you know what am I doing wrong? I am afraid there is a\nbug in postgres.\n\nLook at this:\n\nFile testdb.c:\n---------------------\n#include <stdio.h>\n#include <string.h>\n#include <malloc.h>\n\ntypedef struct _mytype {\n\tint a;\n\tint b;\n} mytype;\n\nmytype *\nmytype_in (char *str)\n{\n\tmytype *ret;\n\n\tret = malloc (sizeof(mytype));\n\tret->a = 1;\n\tret->b = 2;\n\treturn (ret);\n}\n\nchar *\nmytype_out (mytype *mt)\n{\n\treturn (strdup(\"9,9\"));\n}\n\n-----------------------------\n\nYou can see that this type is good for nothing.\n(Except for demonstrational purpose.)\n\nThis C-code I compile:\n gcc -fPIC -c testdb.c\n ld -G -Bdynamic -o testdb.so testdb.o\n\n\nAnd then run these queries:\n CREATE FUNCTION mytype_in (opaque)\n RETURNS mytype\n AS '/localpath/testdb.so'\n LANGUAGE 'c';\n\n CREATE FUNCTION mytype_out (opaque)\n RETURNS opaque\n AS '/localpath/testdb.so'\n LANGUAGE 'c';\n\n CREATE TYPE mytype (\n internallength = 8,\n input = mytype_in,\n output = mytype_out);\n\n CREATE TABLE pok (x mytype, txt varchar);\n\n insert into pok (x,txt) values ('Anything','Anything');\n\n------------\n\nYou will see that so far goes everything fine.\nAfter typing \"Select * from pok\" we get core dump (in earlier versions of\npsql) or an infinite loop in brand new versions.\n\nYou will also find that when the user defined type goes in the 'pok' table\nas the last one, everything will work.\n\n\nI am running postgres 6.4.2 on SunOS 5.5.1.\n\nAny ideas?\nYour help will be much appreciated.\n\nPetr Danecek\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Sun, 14 Feb 1999 21:02:06 +0100 (CET)", "msg_from": "Petr Danecek <[email protected]>", "msg_from_op": true, "msg_subject": "Bug in user defined types in postgres?! (fwd)" }, { "msg_contents": "Petr Danecek <[email protected]> writes:\n\n> #include <stdio.h>\n> #include <string.h>\n> #include <malloc.h>\n\nYou can't use malloc() -- you have to use PostgreSQL's own palloc().\nSo you want to replace that \"#include <malloc.h>\" with:\n\n #include <postgres.h>\n #include <utils/palloc.h>\n\nSo, the actual allocation in mytype_in() must be changed:\n\n> mytype *\n> mytype_in (char *str)\n> {\n> \tmytype *ret;\n> \n> \tret = malloc (sizeof(mytype));\n\nHere, the call should be to palloc() instead of malloc(), thus:\n\n\tret = palloc (sizeof(mytype));\n\nThe reason for this is that PostgreSQL does its own memory management,\nfor efficiency reasons, and if you suddenly start calling malloc(),\nyou screw up its logic.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "15 Feb 1999 08:12:49 +0100", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug in user defined types in postgres?! (fwd)" } ]
[ { "msg_contents": "> I have fixed the optimizer, and it is working properly again, and faster\n> too.\n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nLooks good Bruce.\n\nHere are some explain results from the 6.4.2 release and the development tree.\n\nPostgres 6.4.2:\n---------------\nQUERY: EXPLAIN\nSELECT hosts.host,\n passwords.login,\n passwords.uid,\n groups.grp,\n passwords.gecos,\n passwords.home,\n passwords.shell\nFROM hosts,\n passwords,\n groups\nWHERE hosts.host_id = passwords.host_id AND\n groups.host_id = passwords.host_id AND\n groups.gid = passwords.gid;\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=30894.02 size=2358855 width=108)\n -> Nested Loop (cost=20459.89 size=278240 width=84)\n -> Index Scan using hosts_pkey on hosts (cost=13.90 size=198 width=16)\n -> Index Scan using passwords_pkey on passwords (cost=103.26 \nsize=154973 width=68)\n -> Seq Scan (cost=20459.89 size=0 width=0)\n -> Sort (cost=164.82 size=0 width=0) \n -> Seq Scan on groups (cost=164.82 size=3934 width=24)\n\nDevelopment Tree:\n-----------------\nQUERY: EXPLAIN\nSELECT hosts.host,\n passwords.login,\n passwords.uid,\n groups.grp,\n passwords.gecos,\n passwords.home,\n passwords.shell\nFROM hosts,\n passwords,\n groups\nWHERE hosts.host_id = passwords.host_id AND\n groups.host_id = passwords.host_id AND\n groups.gid = passwords.gid;\nNOTICE: QUERY PLAN:\n\nHash Join (cost=4309.91 size=40 width=108)\n -> Nested Loop (cost=4291.52 size=40 width=92)\n -> Seq Scan on groups (cost=160.82 size=3934 width=24)\n -> Index Scan using passwords_host_id_key on passwords (cost=1.05 \nsize=154973 width=68)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on hosts (cost=8.53 size=198 width=16)\n\n-Ryan\n", "msg_date": "Sun, 14 Feb 1999 16:18:02 -0700 (MST)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Optimizer is fixed, and faster" }, { "msg_contents": "\nThis is exactly what I need. My testing is very limited. I basically\ntest the functionality, but not real-world samples. I am still working.\nI will let everyone know when I am done, and you can throw any queries\nat it.\n\nWas there a speedup with the new optimizer? Was the new plan faster? \nThe new optimizer uses 'cost' much more reliably. 
I hope our cost\nestimates for various join types are accurate.\n\n\n> > I have fixed the optimizer, and it is working properly again, and faster\n> > too.\n> \n> Looks good Bruce.\n> \n> Here are some explain results from the 6.4.2 release and the development tree.\n> \n> Postgres 6.4.2:\n> ---------------\n> QUERY: EXPLAIN\n> SELECT hosts.host,\n> passwords.login,\n> passwords.uid,\n> groups.grp,\n> passwords.gecos,\n> passwords.home,\n> passwords.shell\n> FROM hosts,\n> passwords,\n> groups\n> WHERE hosts.host_id = passwords.host_id AND\n> groups.host_id = passwords.host_id AND\n> groups.gid = passwords.gid;\n> NOTICE: QUERY PLAN:\n> \n> Merge Join (cost=30894.02 size=2358855 width=108)\n> -> Nested Loop (cost=20459.89 size=278240 width=84)\n> -> Index Scan using hosts_pkey on hosts (cost=13.90 size=198 width=16)\n> -> Index Scan using passwords_pkey on passwords (cost=103.26 \n> size=154973 width=68)\n> -> Seq Scan (cost=20459.89 size=0 width=0)\n> -> Sort (cost=164.82 size=0 width=0) \n> -> Seq Scan on groups (cost=164.82 size=3934 width=24)\n> \n> Development Tree:\n> -----------------\n> QUERY: EXPLAIN\n> SELECT hosts.host,\n> passwords.login,\n> passwords.uid,\n> groups.grp,\n> passwords.gecos,\n> passwords.home,\n> passwords.shell\n> FROM hosts,\n> passwords,\n> groups\n> WHERE hosts.host_id = passwords.host_id AND\n> groups.host_id = passwords.host_id AND\n> groups.gid = passwords.gid;\n> NOTICE: QUERY PLAN:\n> \n> Hash Join (cost=4309.91 size=40 width=108)\n> -> Nested Loop (cost=4291.52 size=40 width=92)\n> -> Seq Scan on groups (cost=160.82 size=3934 width=24)\n> -> Index Scan using passwords_host_id_key on passwords (cost=1.05 \n> size=154973 width=68)\n> -> Hash (cost=0.00 size=0 width=0)\n> -> Seq Scan on hosts (cost=8.53 size=198 width=16)\n> \n> -Ryan\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Feb 1999 18:36:29 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer is fixed, and faster" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> This is exactly what I need. My testing is very limited. I basically\n> test the functionality, but not real-world samples. I am still working.\n> I will let everyone know when I am done, and you can throw any queries\n> at it.\n> \n> Was there a speedup with the new optimizer? Was the new plan faster?\n> The new optimizer uses 'cost' much more reliably. I hope our cost\n> estimates for various join types are accurate.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nNo. It's ok for nestloop only.\n\nVadim\n", "msg_date": "Mon, 15 Feb 1999 09:23:10 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer is fixed, and faster" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > This is exactly what I need. My testing is very limited. I basically\n> > test the functionality, but not real-world samples. I am still working.\n> > I will let everyone know when I am done, and you can throw any queries\n> > at it.\n> > \n> > Was there a speedup with the new optimizer? Was the new plan faster?\n> > The new optimizer uses 'cost' much more reliably. I hope our cost\n> > estimates for various join types are accurate.\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> No. It's ok for nestloop only.\n\nThat is bad. 
Can you tell someone how to compute those, so perhaps they\ncan give us accurate numbers.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Feb 1999 21:37:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer is fixed, and faster" } ]
[ { "msg_contents": "Hello hackers...\n\nI tried this with version 6.4.2 and the current development tree.\n\nryan=> create table test (test int8 PRIMARY KEY);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index test_pkey for table test\nERROR: Can't find a default operator class for type 20.\n\n\n\nI also tried this:\n\nryan=> create table test (i int8);\nCREATE\nryan=> create index test_pkey on test (i);\nERROR: Can't find a default operator class for type 20.\n\n\n\nand this:\n\nryan=> create index test_pkey on test using btree (i int8_ops);\nERROR: DefineIndex: int8_ops class not found\n\n\nFinally I tried to cast it to an int4_ops to see what happened:\n\nryan=> create unique index test_pkey on test using btree (i int4_ops);\nCREATE\n\n\n\nlooks good but...\n\nryan=> insert into test values (5);\nINSERT 1133758 1\n\nryan=> insert into test values (5);\nINSERT 1133759 1\n\nryan=> select * from test;\ni\n-\n5\n5\n(2 rows)\n\n\n\nDoesn't look quite right to me. If no-one else is working on this, maybe \nthis would be a good project for me to look into.\n\n- Ryan\n\n", "msg_date": "Sun, 14 Feb 1999 19:53:42 -0700 (MST)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Is the int8_ops being implimented?" }, { "msg_contents": "> ryan=> create table test (test int8 PRIMARY KEY);\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index test_pkey for table test\n> ERROR: Can't find a default operator class for type 20.\n> Doesn't look quite right to me. If no-one else is working on this, \n> maybe this would be a good project for me to look into.\n\nSure, go ahead. If you run into trouble or want suggestions, let me know\nsince I have this on my ToDo list anyway...\n\n - Thomas\n", "msg_date": "Wed, 17 Feb 1999 15:32:04 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Is the int8_ops being implimented?" } ]
[ { "msg_contents": "auth f70b79dc subscribe pgsql-hackers [email protected]\n\n\n", "msg_date": "Mon, 15 Feb 1999 09:35:24 +0530", "msg_from": "bhavesh <[email protected]>", "msg_from_op": true, "msg_subject": "Confirmation for subscribe pgsql-hackers" } ]
[ { "msg_contents": "\n\n\"Colin Price (EML)\" ha scritto:\n\n> > -----Original Message-----\n> > From: jose' soares [mailto:[email protected]]\n> > Sent: Friday, February 12, 1999 1:10 PM\n> > To: Colin Price (EML)\n> > Subject: Re: [GENERAL] pg_dump query about views\n> >\n> >\n> >\n> >\n> > \"Colin Price (EML)\" ha scritto:\n> >\n> > > Again, apologies if this is a duplication from the past but\n> > I can't it in\n> > > pgsql-questions :\n> > >\n> > > -------------------------------\n> > > In the reference section, it states there are problems with\n> > dumping views\n> > > and rules. A pg_dumpall/pg_dump stores the view as a table\n> > with a rule.\n> > > Therefore, when loaded back in, the view is now a table and\n> > not loaded into\n> > > pg_view.\n> > >\n> > > To change this, do I create a simple script to remove the\n> > 'CREATE TABLE' and\n> > > transform the 'CREATE RULE' into a create view statement>\n> > > ---------------------------------\n> > >\n> > > As always, thank you in advance,\n> > > Colin PRICE.\n> >\n> > Tables and views are the same thing for PostgreSQL but views\n> > have a rule called\n> > \"_RETtablename\"\n> > to fetch rows from tablename instead of view. AFAIK\n> > pg_dump/pg_dumpall should\n> > work well in v6.4.\n> >\n> > - Jose' -\n> ==========================================================================\n> Cheers for your response. I agree, pg_dump/pg_dumpall works fine.\n> It seems I was looking at this problem from the wrong direction.\n>\n> I thought this was a pg_dump problem.\n> I now believe this to be a view storage issue and was hoping you could\n> complete the following steps to confirm my findings. It should only take\n> you 2 minutes to cut and paste the code.\n>\n> I would be very grateful for your help on this matter.\n> Thank you in advance,\n> Colin PRICE\n>\n> ============================================================================\n> ==\n> - Object : To confirm that pg stores ambiguious fieldnames when creating\n> views\n>\n> 1.. Create table 1 and populate it\n>\n> DROP TABLE \"useraccount\";\n> CREATE TABLE \"useraccount\" (\n> \"id\" int4 NOT NULL,\n> \"login\" character varying(20) NOT NULL,\n> \"usertypeid\" int4 NOT NULL,\n> \"rowstatusid\" int2 DEFAULT 0 NOT NULL);\n>\n> INSERT INTO \"useraccount\" values (1, 'cprice', 2, 0);\n> INSERT INTO \"useraccount\" values (2, 'cprice2', 1, 0);\n> INSERT INTO \"useraccount\" values (3, 'cprice3', 1, 1);\n>\n> 2.. Create table 2 and populate it\n>\n> DROP TABLE \"usertype\";\n> CREATE TABLE \"usertype\" (\n> \"id\" int4 NOT NULL,\n> \"description\" character varying(255) NOT NULL,\n> \"rowstatusid\" int2 NOT NULL);\n> INSERT INTO \"usertype\" values (1, 'Standard user', 0);\n> INSERT INTO \"usertype\" values (2, 'Manager', 0);\n>\n> 3.. Create view :\n>\n> drop view v_usertype;\n> create view v_usertype as\n> select\n> usertype.description as usertypedescription,\n> useraccount.login as login\n> from usertype, useraccount\n> where usertype.id = useraccount.usertypeid\n> and useraccount.rowstatusid = 0;\n>\n> 4.. 
View the storage of the view.\n>\n> select * from pg_views where viewname like 'v_usertype';\n>\n> The output should be :\n> ===================================================\n> viewname |viewowner|definition\n> ----------+---------+----------\n> v_usertype|postgres |SELECT \"description\" AS \"usertypedescription\", \"login\"\n> FROM\n> \"usertype\", \"useraccount\" WHERE (\"id\" = \"usertypeid\") AND (\"rowstatusid\" =\n> '0':\n> :\"int4\");\n> (1 row)\n> ===================================================\n> Note the rowstatusid fieldname has now become ambiguous since it is present\n> within both tables. Therefore, when exported with pg_dump and re-loaded, the\n> table 'v_usertype' is created but the rule fails.\n>\n> I would be grateful if the above could be confirmed or I could be pointed in\n> the right direction.\n\nThis is a bug. Report it to hackers.\n\n--\n - Jose' -\n\nAnd behold, I tell you these things that ye may learn wisdom; that ye may\nlearn that when ye are in the service of your fellow beings ye are only\nin the service of your God. - Mosiah 2:17 -\n\n\n", "msg_date": "Mon, 15 Feb 1999 17:10:35 +0100", "msg_from": "\"jose' soares\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] pg_dump query about views" } ]
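Until the dumped definition preserves table qualification, one workaround is to avoid duplicate column names across the joined tables, so the unqualified definition stored in pg_views stays unambiguous. A sketch only; the renamed column is hypothetical:

    CREATE TABLE "usertype" (
     "id" int4 NOT NULL,
     "description" character varying(255) NOT NULL,
     "typerowstatusid" int2 NOT NULL);  -- renamed so it cannot collide with useraccount.rowstatusid

With no column name shared between the two tables, the rule recreated from a pg_dump script should no longer be ambiguous.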
[ { "msg_contents": "Hi,\n\nA newby question:\nI packed Cascading Stylesheets into a system of tables. i want to create\na style.css **file** out of the tables after the User updated some of\nthe values:\n\nTo understand what i mean better:\n1.) I created 3 tables including all possible attributes/values of css\n2.) I created 3 table including _some_ selected items of above (ok,\nredundancie, but very effective...)\n3.) i want to create a file out of 2.) like that...\n\ncreate rule on update of 2.)\n \"make me a textfile including all data from 2.)\" <<<<=== what i need\n\nIMPORTANT: it should be able that i can add some _letters_, which do not\nappear in the table (2.) (header, footer, \":\",\"{\",etc. as field\nseperator)\n\nany ideas/tips/hints \n\nThanks a lot\n\nMArtin Moederndorfer\n", "msg_date": "Mon, 15 Feb 1999 18:41:12 +0100", "msg_from": "Martin =?iso-8859-1?Q?M=F6derndorfer?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Creating textfile from postgres tables" } ]
[ { "msg_contents": "Already asked this in the other lists so here.\n\nI need to store some polygons that are larger than 8K.\nI was reading in hackers archives talk about a solution\nto the 8K limit. Was anything done? If so, what do I\nneed to do to solve my problem?\n\n\nRegards,\nKenneth R. Mort <[email protected]>\nTreeTop Research\nBrooklyn, NY, USA\n", "msg_date": "Mon, 15 Feb 1999 23:25:19 -0500", "msg_from": "\"Ken Mort\" <[email protected]>", "msg_from_op": true, "msg_subject": "8K block limit" }, { "msg_contents": "On Mon, 15 Feb 1999, Ken Mort wrote:\n\n> Already asked this in the other lists so here.\n> \n> I need to store some polygons that are larger than 8K.\n> I was reading in hackers archives talk about a solution\n> to the 8K limit. Was anything done? If so, what do I\n> need to do to solve my problem?\n\nThere is an option that can be set at compile time to set the block size\nfrom 8k to something like 32 or 64K (not sure which).\n\nNote: Changing the block size may have a performance hit however.\n\nAnother way is to break the polygons down into smaller pieces.\n\nPeter\n\n--\n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Tue, 16 Feb 1999 20:16:34 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8K block limit" }, { "msg_contents": "> On Mon, 15 Feb 1999, Ken Mort wrote:\n> \n> > Already asked this in the other lists so here.\n> > \n> > I need to store some polygons that are larger than 8K.\n> > I was reading in hackers archives talk about a solution\n> > to the 8K limit. Was anything done? If so, what do I\n> > need to do to solve my problem?\n> \n> There is an option that can be set at compile time to set the block size\n> from 8k to something like 32 or 64K (not sure which).\n\nI think it is 32k. (tuple offset in a block is limited to 15 bits)\n\n> Note: Changing the block size may have a performance hit however.\n\nWhy?\n---\nTatsuo Ishii\n", "msg_date": "Wed, 17 Feb 1999 22:59:04 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8K block limit " }, { "msg_contents": "On Wed, 17 Feb 1999, Tatsuo Ishii wrote:\n\n> > On Mon, 15 Feb 1999, Ken Mort wrote:\n> > \n> > > Already asked this in the other lists so here.\n> > > \n> > > I need to store some polygons that are larger than 8K.\n> > > I was reading in hackers archives talk about a solution\n> > > to the 8K limit. Was anything done? If so, what do I\n> > > need to do to solve my problem?\n> > \n> > There is an option that can be set at compile time to set the block size\n> > from 8k to something like 32 or 64K (not sure which).\n> \n> I think it is 32k. (tuple offset in a block is limited to 15 bits)\n> \n> > Note: Changing the block size may have a performance hit however.\n> \n> Why?\n\nI think some file systems are more optimised for 8K blocks. 
I may be\nthinking of the original reason for the 8k limit in the first place, but I\nremember there were discussions about this when the block size was altered.\n\nPeter\n\n-- \nPeter Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as being the\nofficial words of Maidstone Borough Council\n\n\n", "msg_date": "Wed, 17 Feb 1999 15:45:58 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8K block limit " }, { "msg_contents": "> \n> I think some file systems are more optimised for 8K blocks. I may be\n> thinking of the original reason for the 8k limit in the first place, but I\n> remember there were discussions about this when the block size was altered.\n\nYes, most UFS file systems use 8k blocks/2k fragments. It allows writing\na block in one i/o operation.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Feb 1999 11:57:50 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8K block limit" }, { "msg_contents": "> > I think some file systems are more optimised for 8K blocks. I may be\n> > thinking of the original reason for the 8k limit in the first \n> > place, but I remember there were discussions about this when the block\n> > size was altered.\n> \n> Yes, most UFS file systems use 8k blocks/2k fragments. It allows writing\n> a block in one i/o operation.\n\nThe max is 32k because of the aforementioned 15 bits available, but I'd\nbe a bit cautious of trying it. When I put this in, the highest I could\nget to work on AIX was 16k. Pushing it up to 32k caused major breakage\nin the system internals. Had to reboot the machine and fsck the file\nsystem. Some files were linked incorrectly, other files disappeared, etc,\na real mess.\n\nNot sure exactly what it corrupted, but I'd try the 32k limit on a non-\nproduction system first...\n\nDarren\n", "msg_date": "Wed, 17 Feb 1999 20:37:36 -0500", "msg_from": "\"Stupor Genius\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] 8K block limit" }, { "msg_contents": ">> > I think some file systems are more optimised for 8K blocks. I may be\n>> > thinking of the original reason for the 8k limit in the first \n>> > place, but I remember there were discussions about this when the block\n>> > size was altered.\n>> \n>> Yes, most UFS file systems use 8k blocks/2k fragments. It allows writing\n>> a block in one i/o operation.\n\nBut modern Unixes have read/write ahead i/o if it seems a sequential\naccess, don't they? I did some testing on my LinuxPPC box.\n\n0. create table t2(i int,c char(4000));\n1. time psql -c \"copy t2 from '/tmp/aaa'\" test\n (aaa has 5120 records and this will create 20MB table)\n2. time psql -c \"select count(*) from t2\" test\n3. 
total time of the regression test\n\no result of testing 1\n\n 8K: 0.02user 0.04system 3:26.20elapsed\n32K: 0.03user 0.06system 0:48.25elapsed\n\n 32K is 4 times faster than 8k!\n\no result of testing 2\n\n 8K: 0.02user 0.04system 6:00.31elapsed\n32K: 0.04user 0.02system 1:02.13elapsed\n\n 32K is nearly 6 times faster than 8k!\n\no result of testing 3\n\n 8K: 11.46user 9.51system 6:08.24\n32K: 11.34user 9.54system 7:35.35\n\n 32K is a little bit slower than 8K?\n\nMy thought:\n\nIn my test case the tuple size is relatively large, so by using\nordinary size tuples, we may get different results. And of course\ndifferent OSes may behave differently...\n\nAnother point is the access method. I only tested for seq scan. I\ndon't know for index scan.\n\nAdditional testing is welcome...\n\n>The max is 32k because of the aforementioned 15 bits available, but I'd\n>be a bit cautious of trying it. When I put this in, the highest I could\n>get to work on AIX was 16k. Pushing it up to 32k caused major breakage\n>in the system internals. Had to reboot the machine and fsck the file\n>system. Some files were linked incorrectly, other files disappeared, etc,\n>a real mess.\n>\n>Not sure exactly what it corrupted, but I'd try the 32k limit on a non-\n>production system first...\n\nI did the above on 6.4.2. What kind of version are you using? Or maybe\nit's a platform dependent problem?\n\nBTW, the biggest problem is there are some hard coded query length\nlimits somewhere (for example MAX_MESSAGE_LEN in libpq-int.h). Until\nthese get fixed, the 32K option is only useful for (possible) performance\nboosting.\n---\nTatsuo Ishii\n", "msg_date": "Thu, 18 Feb 1999 12:09:02 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8K block limit " }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> But modern Unixes have read/write ahead i/o if it seems a sequential\n> access, don't they? I did some testing on my LinuxPPC box.\n> \n> 0. create table t2(i int,c char(4000));\n> 1. time psql -c \"copy t2 from '/tmp/aaa'\" test\n> (aaa has 5120 records and this will create 20MB table)\n> 2. time psql -c \"select count(*) from t2\" test\n> 3. total time of the regression test\n> \n> o result of testing 1\n> \n> 8K: 0.02user 0.04system 3:26.20elapsed\n> 32K: 0.03user 0.06system 0:48.25elapsed\n> \n> 32K is 4 times faster than 8k!\n> \n> o result of testing 2\n> \n> 8K: 0.02user 0.04system 6:00.31elapsed\n> 32K: 0.04user 0.02system 1:02.13elapsed\n> \n> 32K is nearly 6 times faster than 8k!\n\nDid you use the same -B for 8K and 32K ?\nYou should use 4x buffers in 8K case!\n\nVadim\n", "msg_date": "Thu, 18 Feb 1999 10:17:01 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8K block limit" }, { "msg_contents": ">Tatsuo Ishii wrote:\n>> \n>> But modern Unixes have read/write ahead i/o if it seems a sequential\n>> access, don't they? I did some testing on my LinuxPPC box.\n>> \n>> 0. create table t2(i int,c char(4000));\n>> 1. time psql -c \"copy t2 from '/tmp/aaa'\" test\n>> (aaa has 5120 records and this will create 20MB table)\n>> 2. time psql -c \"select count(*) from t2\" test\n>> 3. 
total time of the regression test\n>> \n>> o result of testing 1\n>> \n>> 8K: 0.02user 0.04system 3:26.20elapsed\n>> 32K: 0.03user 0.06system 0:48.25elapsed\n>> \n>> 32K is 4 times faster than 8k!\n>> \n>> o result of testing 2\n>> \n>> 8K: 0.02user 0.04system 6:00.31elapsed\n>> 32K: 0.04user 0.02system 1:02.13elapsed\n>> \n>> 32K is neary 6 times faster than 8k!\n>\n>Did you use the same -B for 8K and 32K ?\n>You should use 4x buffers in 8K case!\n\nOk. This time I started postmaster as 'postmaster -S -i -B 256'.\n\ntest1:\n0.03user 0.02system 3:21.65elapsed\n\ntest2:\n0.01user 0.08system 5:30.94elapsed\n\na little bit faster, but no significant difference?\n--\nTatsuo Ishii\n", "msg_date": "Thu, 18 Feb 1999 12:33:51 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8K block limit " }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> >\n> >Did you use the same -B for 8K and 32K ?\n> >You should use 4x buffers in 8K case!\n> \n> Ok. This time I started postmaster as 'postmaster -S -i -B 256'.\n> \n> test1:\n> 0.03user 0.02system 3:21.65elapsed\n> \n> test2:\n> 0.01user 0.08system 5:30.94elapsed\n> \n> a little bit faster, but no significant difference?\n\nYes. So, 32K is sure value for a few simultaneous sessions.\n\nVadim\n", "msg_date": "Thu, 18 Feb 1999 11:42:31 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8K block limit" }, { "msg_contents": "> Additional testings are welcome...\n> \n> >The max is 32k because of the aforementioned 15 bits available, but I'd\n> >be a bit cautious of trying it. When I put this in, the highest I could\n> >get to work on AIX was 16k. Pushing it up to 32k caused major breakage\n> >in the system internals. Had to reboot the machine and fsck the file\n> >system. Some files were linked incorrectly, other files \n> disappeared, etc,\n> >a real mess.\n> >\n> >Not sure exactly what it corrupted, but I'd try the 32k limit on a non-\n> >production system first...\n> \n> I did above on 6.4.2. What kind of version are you using? Or maybe\n> platform dependent problem?\n\nMy platform at the time was AIX 4.1.4.0 and it was an definitely AIX\nthat broke, not postgres.\n\nGlad to hear it works at 32k on other systems though!\n\nDarren\n\n\n", "msg_date": "Thu, 18 Feb 1999 05:50:52 -0500", "msg_from": "\"Stupor Genius\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] 8K block limit " } ]
[ { "msg_contents": "\nThis is in v6.4 ...\n\nsurveys=> create table systems (\nsurveys-> operating_system text,\nsurveys-> count int4 );\nCREATE\nsurveys=> insert into systems \nsurveys-> select sys_type,count(sys_type)\nsurveys-> from op_sys\nsurveys-> where sys_type is not null\nsurveys-> group by sys_type;\nERROR: insert: more expressions than target columns\nsurveys=> \\d systems\n\nTable = systems\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| operating_system | text | var |\n| count | int4 | 4 |\n+----------------------------------+----------------------------------+-------+\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 16 Feb 1999 02:11:54 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Odd error message: insert into...select from ..." } ]
[ { "msg_contents": "\nsurveys=> explain select count(sys_type) as tot_sys_type,sys_type\nsurveys-> from op_sys\nsurveys-> where sys_type is not null \nsurveys-> and tot_sys_type > 100\nsurveys-> group by sys_type\nsurveys-> order by tot_sys_type desc;\nERROR: attribute 'tot_sys_type' not found\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 16 Feb 1999 02:14:30 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Might already be fixed...?" } ]
[ { "msg_contents": "Hi,\n\nThere are some VARCHAR built-in functions, but I can't use them...\n\nhygea=> \\df varchar\nresult |function |arguments |description\n-------+---------------+--------------+---------------------\nbool |varchareq |varchar varcha|equal\nbool |varcharge |varchar varcha|greater-than-or-equal\nbool |varchargt |varchar varcha|greater-than\nbool |varcharle |varchar varcha|less-than-or-equal\nbool |varcharlt |varchar varcha|less-than\nbool |varcharne |varchar varcha|not equal\nint4 |varcharcmp |varchar varcha|less-equal-greater\nint4 |varcharlen |varchar |character length\nint4 |varcharoctetlen|varchar |octet length\nvarchar|varchar |name |convert\nvarchar|varchar |varchar int4 |\n(11 rows)\n\nhygea=> select varchar('boo'::varchar,12);\nERROR: parser: parse error at or near \"'\"\nhygea=> select varchar('boo'::name);\nERROR: parser: parse error at or near \"'\"\n\n - Jose' -\n\n\n\nHi,\nThere are some VARCHAR built-in functions, but I can't use them...\nhygea=> \\df varchar\nresult |function       |arguments    \n|description\n-------+---------------+--------------+---------------------\nbool   |varchareq      |varchar\nvarcha|equal\nbool   |varcharge      |varchar\nvarcha|greater-than-or-equal\nbool   |varchargt      |varchar\nvarcha|greater-than\nbool   |varcharle      |varchar\nvarcha|less-than-or-equal\nbool   |varcharlt      |varchar\nvarcha|less-than\nbool   |varcharne      |varchar\nvarcha|not equal\nint4   |varcharcmp     |varchar varcha|less-equal-greater\nint4   |varcharlen     |varchar      \n|character length\nint4   |varcharoctetlen|varchar      \n|octet length\nvarchar|varchar        |name         \n|convert\nvarchar|varchar        |varchar\nint4  |\n(11 rows)\nhygea=> select varchar('boo'::varchar,12);\nERROR:  parser: parse error at or near \"'\"\nhygea=> select varchar('boo'::name);\nERROR:  parser: parse error at or near \"'\"\n                              \n- Jose' -", "msg_date": "Tue, 16 Feb 1999 10:31:41 +0100", "msg_from": "\"jose' soares\" <[email protected]>", "msg_from_op": true, "msg_subject": "varchar function" } ]
[ { "msg_contents": "Hi !!\n\nWhen I run initdb, an error comes like this:\n\n[postgres@oak postgres]$ initdb\ninitdb: using /usr/local/pgsql/lib/local1_template1.bki.source as input \nto create the template database.\ninitdb: using /usr/local/pgsql/lib/global1.bki.source as input to create \nthe global classes.\ninitdb: using /usr/local/pgsql/lib/pg_hba.conf.sample as the host-based \nauthentication control file.\n \nWe are initializing the database system with username postgres \n(uid=100).\nThis user will own all the files and must also own the server process.\n \nCreating Postgres database system directory /usr/local/pgsql/data\n \nCreating Postgres database system directory /usr/local/pgsql/data/base\n \ninitdb: creating template databasein \n/usr/local/pgsql/data/base/template1\nRunning: postgres -boot -C -F -D/usr/local/pgsql/data -Q template1\n syntax error 2305 : parse error\nCreating global classes in /base\nRunning: postgres -boot -C -F -D/usr/local/pgsql/data -Q template1\n \nAdding template1 database to pg_database...\nRunning: postgres -boot -C -F -D/usr/local/pgsql/data -Q template1 < \n/tmp/create\n.15280\nERROR: pg_atoi: error in \"template1\": can't parse \"template1\"\nERROR: pg_atoi: error in \"template1\": can't parse \"template1\"\ninitdb: could not log template database\ninitdb: cleaning up.\n[postgres@oak postgres]$\n\nPlease suggest what I have to run initdb successfully. I would really \nappriciate your help.\n\nThanks,\n\nRandy.\n\n______________________________________________________\nGet Your Private, Free Email at http://www.hotmail.com\n", "msg_date": "Tue, 16 Feb 1999 11:06:19 PST", "msg_from": "\"Randy Singh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help for running initdb" } ]
[ { "msg_contents": "A while ago a question was posted regarding an error message\n\"SIMarkEntryData: cache state reset\", and I don't believe any\nresponse ever came. I'm forwarding the original two notes on\nthis. Any information that can clear up this error message would\nbe greatly appreciated.\n\nThomas\n\nThomas Reinke wrote:\n> \n> I'd also like to know - this is one of the errors I have seen using\n> PostGres when using multiple clients, and why we've had to implement\n> a lock manager outside of PostGres to limit access to a single client\n> at a time.\n> \n> Thomas\n> \n> Peter T Mount wrote:\n> >\n> > Does anyone know what's going on here?\n> >\n> > --\n> > Peter T Mount [email protected]\n> > Main Homepage: http://www.retep.org.uk\n> > PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n> > Java PDF Generator: http://www.retep.org.uk/pdf\n> >\n> > ---------- Forwarded message ----------\n> > Date: Fri, 12 Feb 1999 18:59:00 -0800\n> > From: Jason Venner <[email protected]>\n> > To: Peter T Mount <[email protected]>\n> > Subject: JDBC lo crashes etc\n> >\n> > I recompiled my 6.3.2 with cassert turned on, and using the 6.4.2 jdbc driver, I get the following\n> > after inserting a bunch of images.\n> >\n> > postmaster.log: NOTICE: SIMarkEntryData: cache state reset\n> > postmaster.log: Failed Assertion(\"!(RelationNameCache->hctl->nkeys == 10):\", File: \"relcache.c\", Line: 1523)\n> > postmaster.log: !(RelationNameCache->hctl->nkeys == 10) (0) [Illegal seek]\n>\n", "msg_date": "Tue, 16 Feb 1999 10:12:44 -0500", "msg_from": "Thomas Reinke <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: [HACKERS] JDBC lo crashes etc (fwd)]" } ]
[ { "msg_contents": "Dear Friends,\n\tI am trying to compile postgresql to an ULTRA 10 machine operated\nby the Solaris system. I face this error in compilation that I cannot fix.\nPlease if you can help me, reply soon. Thank you for concern and looking\nforward to hearing from you.\n\nSincerely,\nMohamed Hefny\nGraduate Student and Research Assitant\nComputer Science Department\nThe American University in Cairo \n\n", "msg_date": "Wed, 17 Feb 1999 10:24:02 +0200 (EET)", "msg_from": "Mohamed Salah-Al-Din Hefny <[email protected]>", "msg_from_op": true, "msg_subject": "Problem in Compiling" } ]
[ { "msg_contents": "Dear Friends,\n I am trying to compile postgresql to an ULTRA 10 machine operated\nby the Solaris system. I face this error in compilation that I cannot fix. \nPlease if you can help me, reply soon. Thank you for concern and looking\nforward to hearing from you. \n\nSincerely, Mohamed Hefny Graduate Student and Research Assitant \nComputer Science Department \nThe American University in Cairo\n\nERROR:\n In file included from /usr/include/sys/stream.h:26,\n from /usr/include/netinet/in.h:38,\n from /usr/include/netdb.h:96,\n from bsql.c:2: /usr/include/sys/model.h:32: #error \"No\nDATAMODEL_NATIVE specified\" \n\n\n", "msg_date": "Wed, 17 Feb 1999 10:41:44 +0200 (EET)", "msg_from": "Mohamed Salah-Al-Din Hefny <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres Compilation Error" } ]
[ { "msg_contents": "Hi,\n\nI'm trying to create a varchar(float8) to cast float to varchar but I\ncan't create it.\nI can create bpchar(foat8) and text(float8) and it works well but\nvarchar(float8).\n\nEXAPLE:\n\ncreate table test(f float, n name);\nCREATE\ninsert into test values(1.23, current_user);\nINSERT 192042 1\nselect cast(f as text) from test;\ntext\n-------------------------\n2000-01-01 01:00:01.23+01 <----this is wrong then I create\ntext(float8)\n(1 row)\n\nselect cast(f as char) from test;\nERROR: No such function 'bpchar' with the specified attributes\nselect cast(f as varchar) from test;\nERROR: No such function 'varchar' with the specified attributes\ncreate function text(float8) returns text as\n'begin\n return $1;\nend;' language 'plpgsql';\nCREATE\ncreate function bpchar(float8) returns bpchar as\n'begin\n return $1;\nend;' language 'plpgsql';\nCREATE\ncreate function varchar(float8) returns varchar as\n'begin\n return $1;\nend;' language 'plpgsql';\nERROR: parser: parse error at or near \"varchar\" <---there's a parser\nerror.\nselect cast(f as text) from test;\ntext\n----\n1.23 <------and now it works\n(1 row)\n\nselect cast(f as char) from test;\nbpchar\n------\n 1.23\n(1 row)\n\n----I see there are some varchar built-in functions but I can't use them\nalso...\n\n\\df varchar\nresult |function |arguments |description\n-------+---------------+--------------+---------------------\nbool |varchareq |varchar varcha|equal\nbool |varcharge |varchar varcha|greater-than-or-equal\nbool |varchargt |varchar varcha|greater-than\nbool |varcharle |varchar varcha|less-than-or-equal\nbool |varcharlt |varchar varcha|less-than\nbool |varcharne |varchar varcha|not equal\nint4 |varcharcmp |varchar varcha|less-equal-greater\nint4 |varcharlen |varchar |character length\nint4 |varcharoctetlen|varchar |octet length\nvarchar|varchar |name |convert\nvarchar|varchar |varchar int4 |\n(11 rows)\n\nselect varchar(n) from test;\nERROR: parser: parse error at or near \"n\"\n\n\n--Any ideas ?\n\n-Jose'-\n\n\n\nHi,\nI'm trying to create a varchar(float8) to cast float to varchar\nbut I can't create it.\nI can create bpchar(foat8) and text(float8) and it works well but\nvarchar(float8).\nEXAPLE:\ncreate table test(f float, n name);\nCREATE\ninsert into test values(1.23, current_user);\nINSERT 192042 1\nselect cast(f as text) from test;\ntext\n-------------------------\n2000-01-01 01:00:01.23+01   <----this is wrong then\nI create text(float8)\n(1 row)\nselect cast(f as char) from test;\nERROR:  No such function 'bpchar' with the specified attributes\nselect cast(f as varchar) from test;\nERROR:  No such function 'varchar' with the specified attributes\ncreate function text(float8) returns text as\n'begin\n        return $1;\nend;' language 'plpgsql';\nCREATE\ncreate function bpchar(float8) returns bpchar as\n'begin\n        return $1;\nend;' language 'plpgsql';\nCREATE\ncreate function varchar(float8) returns varchar as\n'begin\n        return $1;\nend;' language 'plpgsql';\nERROR:  parser: parse error at or near \"varchar\"  <---there's\na parser error.\nselect cast(f as text) from test;\ntext\n----\n1.23    <------and now it works\n(1 row)\nselect cast(f as char) from test;\nbpchar\n------\n  1.23\n(1 row)\n----I see there are some varchar built-in functions but I can't\nuse them also...\n\\df varchar\nresult |function       |arguments    \n|description\n-------+---------------+--------------+---------------------\nbool   |varchareq      |varchar\nvarcha|equal\nbool   |varcharge      
|varchar\nvarcha|greater-than-or-equal\nbool   |varchargt      |varchar\nvarcha|greater-than\nbool   |varcharle      |varchar\nvarcha|less-than-or-equal\nbool   |varcharlt      |varchar\nvarcha|less-than\nbool   |varcharne      |varchar\nvarcha|not equal\nint4   |varcharcmp     |varchar varcha|less-equal-greater\nint4   |varcharlen     |varchar      \n|character length\nint4   |varcharoctetlen|varchar      \n|octet length\nvarchar|varchar        |name         \n|convert\nvarchar|varchar        |varchar\nint4  |\n(11 rows)\nselect varchar(n) from test;\nERROR:  parser: parse error at or near \"n\"\n \n--Any ideas ?\n-Jose'-", "msg_date": "Wed, 17 Feb 1999 13:00:50 +0100", "msg_from": "\"jose' soares\" <[email protected]>", "msg_from_op": true, "msg_subject": "varchar function" }, { "msg_contents": "> I'm trying to create a varchar(float8) to cast float to varchar but I\n> can't create it.\n> --Any ideas ?\n\n>From the current development tree (I'm pretty sure; may not have\ncommitted everything yet):\n\npostgres=> select varchar('123'::float8);\nERROR: parser: parse error at or near \"'\"\npostgres=> select varchar(float8 '123');\nERROR: parser: parse error at or near \"float8\"\npostgres=> select (float8 '123')::varchar;\n?column?\n--------\n 123\n(1 row)\n\nIt seems that there are some problems with calls to functions named\nvarchar().\n\n - Tom\n", "msg_date": "Thu, 18 Feb 1999 03:16:57 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar function" } ]
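While the parser trips over varchar in function-call position, two workarounds can be sketched; the quoted-identifier form is an assumption and has not been verified against this parser:

    -- 1. use cast syntax instead of function-call syntax:
    SELECT f::varchar FROM test;

    -- 2. quote the function name so it is not taken as the type keyword:
    CREATE FUNCTION "varchar"(float8) RETURNS varchar AS '
    begin
        return $1;
    end;' LANGUAGE 'plpgsql';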
[ { "msg_contents": "\nIs known what functions and data have to be exported (= some interface) from\nthe backend, so they can be used by dynamicly loaded modules?\n\n----------------------------------------------\nDaniel Horak\nnetwork and system administrator\ne-mail: [email protected]\nprivat e-mail: [email protected]\n----------------------------------------------\n", "msg_date": "Wed, 17 Feb 1999 14:19:25 -0000", "msg_from": "Horak Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "interface between backend and dynamicly loaded modules" } ]
[ { "msg_contents": "\tERROR: internal error: untrusted function not supported.\nWhat does it mean and how do I fix it?\n\nI'm trying to create a lower function to work with varchar ( I would\nguess this would fall in with type coercion, Tom?). \n\n", "msg_date": "Wed, 17 Feb 1999 11:20:19 -0600", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "What does this mean?" }, { "msg_contents": "> ERROR: internal error: untrusted function not supported.\n> What does it mean and how do I fix it?\n> I'm trying to create a lower function to work with varchar ( I would\n> guess this would fall in with type coercion, Tom?).\n\n>From the current development tree:\n\npostgres=> select lower('ABCD'::varchar);\nlower\n-----\nabcd\n(1 row)\n\nIt's been a while since I've run across that message. Do you want to\nsend me your code?\n\n - Tom\n", "msg_date": "Thu, 18 Feb 1999 03:24:14 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What does this mean?" } ]
[ { "msg_contents": "Hi all\n\nFirst, what ever came of the talk of a 6.4.3 and supporting a release\nback?\n\nA client of mine has 6.3 installed, and I need to know if I should advise\nthem to go to 6.4.2? or if 6.4.3 is about to come out? or how close is\n6.5? (I want to use some of the features not in 6.3.x)\n\n-------------------------------------\n\nOn another matter, version numbers. What would every one think of numbers\nwhere to 2nd number being odd or even denotes development or stable\nrespectively? like with the Linux kernel.\n\nNow would be a great time to move to it, as it has already been discused\nto use 6.4.x as the \"stable\" version, the current development version is\n6.5, so when it is ready it could be released as 6.6.0\n\nTanks and have a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner I'm excited about life! How about YOU!?\nProfessional Web Hosting and site design to include programming\nProudly powered by R H Linux 4.2, Apache 1.3.x, PHP 3.x, PostgreSQL 6.x\n-----------------------------------------------------------------------\nOnly if you know where you're going can you get there.\n\n", "msg_date": "Wed, 17 Feb 1999 13:46:42 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "6.4.3? and version numbers" }, { "msg_contents": "On Wed, 17 Feb 1999, Terry Mackintosh wrote:\n\n> Hi all\n> \n> First, what ever came of the talk of a 6.4.3 and supporting a release\n> back?\n> \n> A client of mine has 6.3 installed, and I need to know if I should advise\n> them to go to 6.4.2? or if 6.4.3 is about to come out? or how close is\n> 6.5? (I want to use some of the features not in 6.3.x)\n> \n> -------------------------------------\n> \n> On another matter, version numbers. What would every one think of numbers\n> where to 2nd number being odd or even denotes development or stable\n> respectively? like with the Linux kernel.\n\nNot in your wildest dreams...that has been asked over and over again ove\nthe past few years, and I veto it each and every time...we follow the *BSD\ndevelopment model, not the Linux one...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 17 Feb 1999 15:58:14 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.3? and version numbers" }, { "msg_contents": "On Wed, 17 Feb 1999, The Hermit Hacker wrote:\n\n> On Wed, 17 Feb 1999, Terry Mackintosh wrote:\n> \n> > Hi all\n> > \n> > First, what ever came of the talk of a 6.4.3 and supporting a release\n> > back?\n> > \n> > A client of mine has 6.3 installed, and I need to know if I should advise\n> > them to go to 6.4.2? or if 6.4.3 is about to come out? or how close is\n> > 6.5? (I want to use some of the features not in 6.3.x)\n> > \n> > -------------------------------------\n> > \n> > On another matter, version numbers. What would every one think of numbers\n> > where to 2nd number being odd or even denotes development or stable\n> > respectively? like with the Linux kernel.\n> \n> Not in your wildest dreams...that has been asked over and over again ove\n> the past few years, and I veto it each and every time...we follow the *BSD\n> development model, not the Linux one...\n\nEven though I'm in the Linux camp, I agree with Marc. 
I'd think it would\ncause a lot of confusion to us developers trying to keep track of more\nthan one source tree.\n\nEven my own development projects follow the PostgreSQL (aka BSD) model, as\nto my mind it's the easiest to maintain.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Wed, 17 Feb 1999 22:18:24 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.3? and version numbers" }, { "msg_contents": "Thus spake The Hermit Hacker\n> > On another matter, version numbers. What would everyone think of numbers\n> > where the 2nd number being odd or even denotes development or stable\n> > respectively? like with the Linux kernel.\n> \n> Not in your wildest dreams...that has been asked over and over again over\n> the past few years, and I veto it each and every time...we follow the *BSD\n> development model, not the Linux one...\n\nBesides, we never have anything but stable releases anyway, right? :-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 17 Feb 1999 22:59:32 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.3? and version numbers" } ]
[ { "msg_contents": "\n[I'm posting this to the hackers list, as I think this is something deep\nin the backend, and not JDBC - Peter]\n\nI've been talking to Jason Venner <[email protected]> over the last couple\nof days with an interesting problem.\n\nHe's got a small Java application that restores large objects from a\nbackup to a database. However, the backend seemed to segv at exactly the\nsame moment.\n\nThis occurs with both 6.3.x and 6.4.x (can't remember what revision).\n\nLast night, he sent me a copy of the app, and I ran it against a recent\n(last Saturday) cvs copy of 6.5, and the same thing happens.\n\nNow to good bit ;-)\n\nThe first problem is with pgdump. His restore app is in two parts, a shell\nscript and a Java application. The shell script creates two databases\n(edit and prod), restores them (from the output of pgdump), then calls\nJava to load the large objects, and repair the differing oid's.\n\nHowever, this fails when creating functions that have more than one sql\nstatement in them. He has some functions that insert into a table\ndepending on some arguments, then issue a select on the last arg which is\nthe functions result. However, pgdump doesn't end the select with a ; and\nthis causes the 6.5 backend to fail. Adding the ; fixes the problem.\n\nI don't know if it's a known problem, but may need someone to check.\n\nOk, that's the simple one. Now that harder two:\n\nWhen the Java app runs, it causes the backend to segv repeatedly at the\nsame point (after about the 60th large object). Now, I've checked his code\nand can't find anything obviously wrong, so I added some tracing to it,\nand discovered that when the application closes (either explicitly closing\nthe application, or upon an error), the backend outputs the following to\nstderr, and segv's:\n\npq_recvbuf: recv() failed, errno 2\npq_recvbuf: recv() failed, errno 0\n\nNow, each of these are for the two open connections, and each appears as\nsoon as it's respective connection closes. Remember the connections are to\ntwo different databases.\n\nRunning the backed with the -d2 flag, these expand to:\n\npq_recvbuf: recv() failed, errno 2\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 6731 exited with status 0\npq_recvbuf: recv() failed, errno 0\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 6730 exited with status 0\n\nThis is repeatable, and is not related at all to the large object being\nloaded. Reversing the order that the objects are loaded, causes it to fail\non a different object.\n\nNow, the first question: Does someone who knows the backend better than I\ndo know what could cause the recv() message to occur when disconnecting?\n\nNow the third problem. The last problem occurs outside of any\ntransactions. In JDBC, you use transactions by setting autocommit to\nfalse. Then, there are methods to commit or rollback the database.\n\nOk, now the problem. When he sets autocommit to false, the JDBC driver\nsends BEGIN to the backend. 
Ok so far, however, something then fails\nduring the first large object's load, and causes everything else to fail.\n\nI haven't looked into this one fully, but it's identical on all three\nmajor versions of the backend, which is a little surprising.\n\nNow the weird thing is that the same errors occur when the connections are\nclosed.\n\nI don't think it's a JDBC problem, as I can't reproduce it with any of my\ncode. Neither can I see anything wrong with Jason's code.\n\nAnyhow, this is what's kept me busy the last few evenings, and it's got\nme stumped.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Wed, 17 Feb 1999 21:59:06 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Continued problems with pgdump, Large Objects and crashing backends" }, { "msg_contents": "Peter T Mount <[email protected]> writes:\n> However, this fails when creating functions that have more than one sql\n> statement in them. He has some functions that insert into a table\n> depending on some arguments, then issue a select on the last arg which is\n> the function's result. However, pgdump doesn't end the select with a ; and\n> this causes the 6.5 backend to fail. Adding the ; fixes the problem.\n\nWhat does 'fail' mean exactly? Crash, or just reject the query?\nIt sounds like there is a pg_dump bug here (omitting a required\nsemicolon) but I don't understand whether there's also a backend bug.\n\n\n> Running the backend with the -d2 flag, these expand to:\n\n> pq_recvbuf: recv() failed, errno 2\n> proc_exit(0) [#0]\n> shmem_exit(0) [#0]\n> exit(0)\n> /usr/local/pgsql/bin/postmaster: reaping dead processes...\n> /usr/local/pgsql/bin/postmaster: CleanupProc: pid 6731 exited with status 0\n> pq_recvbuf: recv() failed, errno 0\n> proc_exit(0) [#0]\n> shmem_exit(0) [#0]\n> exit(0)\n> /usr/local/pgsql/bin/postmaster: reaping dead processes...\n> /usr/local/pgsql/bin/postmaster: CleanupProc: pid 6730 exited with status 0\n\nThis doesn't look like a segv trace to me --- if the backend was\ncoredumping then the postmaster should see a nonzero exit status.\n\nThe recv() complaints probably indicate that the client application\ndisconnected ungracefully (ie, without sending the 'X' terminate\nmessage). It's curious that they're not both alike.\nThat might be a red herring however --- right now pq_recvbuf doesn't\ndistinguish plain EOF from a true error, and if it's plain EOF then\nwhatever errno was last set to gets printed. Think I'll go fix that.\n\nBarring more evidence, all I see here is client disconnect, not a\nbackend failure. What's your basis for claiming a segv crash?\n\n\n> Ok, now the problem. When he sets autocommit to false, the JDBC driver\n> sends BEGIN to the backend. Ok so far, however, something then fails\n> during the first large object's load, and causes everything else to fail.\n\nThat's not a bug, it's a feature ... allegedly, anyway. Any error\ninside a transaction means the entire transaction is aborted. And\nthe backend will keep reminding you so until you cooperate by ending\nthe transaction. 
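For illustration, here is roughly what it looks like from psql (messages\nparaphrased from memory, so the exact wording may differ):\n\n\tplay=> begin;\n\tBEGIN\n\tplay=> select * from no_such_table;\n\tERROR:  no_such_table: Table does not exist.\n\tplay=> select 1;\n\tNOTICE:  (transaction aborted): queries ignored until END\n\t*ABORT STATE*\n\tplay=> end;\n\tEND\n\nEvery query after the error draws the same notice until the transaction\nis closed with END, ABORT or ROLLBACK.\n\n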
I don't like the behavior very much either, but\nit's operating as designed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Feb 1999 17:57:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Continued problems with pgdump,\n\tLarge Objects and crashing backends" }, { "msg_contents": "On Wed, 17 Feb 1999, Tom Lane wrote:\n\n> Peter T Mount <[email protected]> writes:\n> > However, this fails when creating functions that have more than one sql\n> > statement in them. He has some functions that insert into a table\n> > depending on some arguments, then issue a select on the last arg which is\n> > the functions result. However, pgdump doesn't end the select with a ; and\n> > this causes the 6.5 backend to fail. Adding the ; fixes the problem.\n> \n> What does 'fail' mean exactly? Crash, or just reject the query?\n> It sounds like there is a pg_dump bug here (omitting a required\n> semicolon) but I don't understand whether there's also a backend bug.\n\nI didn't say this was a backend bug, but was one thing I came across while\nlooking at the following problem.\n\n> > Running the backed with the -d2 flag, these expand to:\n> \n> > pq_recvbuf: recv() failed, errno 2\n> > proc_exit(0) [#0]\n> > shmem_exit(0) [#0]\n> > exit(0)\n> > /usr/local/pgsql/bin/postmaster: reaping dead processes...\n> > /usr/local/pgsql/bin/postmaster: CleanupProc: pid 6731 exited with status 0\n> > pq_recvbuf: recv() failed, errno 0\n> > proc_exit(0) [#0]\n> > shmem_exit(0) [#0]\n> > exit(0)\n> > /usr/local/pgsql/bin/postmaster: reaping dead processes...\n> > /usr/local/pgsql/bin/postmaster: CleanupProc: pid 6730 exited with status 0\n> \n> This doesn't look like a segv trace to me --- if the backend was\n> coredumping then the postmaster should see a nonzero exit status.\n> \n> The recv() complaints probably indicate that the client application\n> disconnected ungracefully (ie, without sending the 'X' terminate\n> message). It's curious that they're not both alike.\n> That might be a red herring however --- right now pq_recvbuf doesn't\n> distinguish plain EOF from a true error, and if it's plain EOF then\n> whatever errno was last set to gets printed. Think I'll go fix that.\n> \n> Barring more evidence, all I see here is client disconnect, not a\n> backend failure.\n\nHmmm, I've never seen the recv() problem before with any JDBC app, only\nthis one.\n\nPS: Currently the JDBC driver is still using the 6.3.x protocol. When 6.4\ncame out I didn't implement the CANCEL stuff, as I was concentrating on\ngetting more of the innards implemented.\n\nAnyhow, if the terminate message is a problem, I'll upgrade the protocol.\n\n> What's your basis for claiming a segv crash?\n\nI think the segv came from Jason (who's run it against 6.3.x and 6.4.x).\n\n> > Ok, now the problem. When he sets autocommit to false, the JDBC driver\n> > sends BEGIN to the backend. Ok so far, however, something then fails\n> > during the first large object's load, and causes everything else to fail.\n> \n> That's not a bug, it's a feature ... allegedly, anyway. Any error\n> inside a transaction means the entire transaction is aborted. And\n> the backend will keep reminding you so until you cooperate by ending\n> the transaction. I don't like the behavior very much either, but\n> it's operating as designed.\n\nI'm going to overhaul the autocommit(false) code. 
I suspect it's broken,\nbut I need to sit down and figure what is happening with this problem\nfirst.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Thu, 18 Feb 1999 07:18:02 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Continued problems with pgdump, Large Objects and\n\tcrashing backends" }, { "msg_contents": "Peter T Mount <[email protected]> writes:\n>> The recv() complaints probably indicate that the client application\n>> disconnected ungracefully (ie, without sending the 'X' terminate\n>> message). It's curious that they're not both alike.\n\n> Hmmm, I've never seen the recv() problem before with any JDBC app, only\n> this one.\n\nThat particular message is new in the 6.5 code (BTW, as of this morning\nit should say \"pq_recvbuf: unexpected EOF on client connection\").\n\nI was about to say that prior versions would also complain about an\nunexpected client disconnect, but actually it looks like 6.4.2 doesn't\n--- at least not in this low-level code. I'm not inclined to remove the\nmessage however. I think we want it there to help detect more serious\nproblems, like disconnect in the middle of a COPY operation.\n\n> PS: Currently the JDBC driver is still using the 6.3.x protocol. When 6.4\n> came out I didn't implement the CANCEL stuff, as I was concentrating on\n> getting more of the innards implemented.\n> Anyhow, if the terminate message is a problem, I'll upgrade the protocol.\n\nThe terminate message is defined in the old protocol too; it's not new\nfor 6.4. As for whether it's a \"problem\" not to send it, it's only \na problem if you don't like complaints in the postmaster log ;-).\nThe backend will close up shop just fine without it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Feb 1999 11:40:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Continued problems with pgdump,\n\tLarge Objects and crashing backends" }, { "msg_contents": "On Thu, 18 Feb 1999, Tom Lane wrote:\n\n> Peter T Mount <[email protected]> writes:\n> >> The recv() complaints probably indicate that the client application\n> >> disconnected ungracefully (ie, without sending the 'X' terminate\n> >> message). It's curious that they're not both alike.\n> \n> > Hmmm, I've never seen the recv() problem before with any JDBC app, only\n> > this one.\n> \n> That particular message is new in the 6.5 code (BTW, as of this morning\n> it should say \"pq_recvbuf: unexpected EOF on client connection\").\n> \n> I was about to say that prior versions would also complain about an\n> unexpected client disconnect, but actually it looks like 6.4.2 doesn't\n> --- at least not in this low-level code. I'm not inclined to remove the\n> message however. I think we want it there to help detect more serious\n> problems, like disconnect in the middle of a COPY operation.\n> \n> > PS: Currently the JDBC driver is still using the 6.3.x protocol. When 6.4\n> > came out I didn't implement the CANCEL stuff, as I was concentrating on\n> > getting more of the innards implemented.\n> > Anyhow, if the terminate message is a problem, I'll upgrade the protocol.\n> \n> The terminate message is defined in the old protocol too; it's not new\n> for 6.4. 
As for whether it's a \"problem\" not to send it, it's only \n> a problem if you don't like complaints in the postmaster log ;-).\n> The backend will close up shop just fine without it.\n\nLooks like something that's been missing since the beginning. Ok, I'll add\nthe message to it tomorrow, as I'm planning some cleanups this weekend.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Thu, 18 Feb 1999 19:23:01 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Continued problems with pgdump, Large Objects and\n\tcrashing backends" } ]
[ { "msg_contents": "I have fixed bushy plans, enabled with the postgres -b option. Here is\nproof from OPTIMIZER_DEBUG:\n\n\tlevels left: 7\n\t(9 8 7 6 ): size=1 width=16\n\t path list:\n\t Nestloop size=1 cost=0.000000\n\t MergeJoin size=1 cost=0.000000\n\t clauses=(x5.y = x6.y)\n\t sortouter=1 sortinner=1\n\t SeqScan(6) size=0 cost=0.000000\n\t SeqScan(7) size=0 cost=0.000000\n\t MergeJoin size=1 cost=0.000000\n\t clauses=(x7.y = x8.y)\n\t sortouter=1 sortinner=1\n\t SeqScan(8) size=0 cost=0.000000\n\t SeqScan(9) size=0 cost=0.000000\n\t \n\nThe regression tests pass with bushy plans enabled. I am not sure if\nthe executor is actually using a bushy plan, though.\n\nThe old bushy code was poor. It tried to do bushy plans by modifying\nthe joininfo nodes. I removed all that code, and just do the join\nsearch in make_rels_by_clause_joins(). This is much more logical, and\ndoes not require the joininfo setup/cleanup that the old code attempted.\n\nFrankly, I would like to enable bushy plans and right-handed plans by\ndefault. The optimizer is now fast enough that a 9-table join is almost\ninstantaneous, even with bushy plans. People are not sophisticated\nenough to know when to enable these options themselves. I am not sure I\nam either. I would like to enable both, fix whatever breaks, and\nprogrammatically enable the options when they make sense.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Feb 1999 19:47:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Bushy Plans fixed" }, { "msg_contents": "On Wed, 17 Feb 1999, Bruce Momjian wrote:\n\n> Frankly, I would like to enable bushy plans and right-handed plans by\n> default. The optimizer is now fast enough that a 9-table join is almost\n> instantaneous, even with bushy plans. People are not sophisticated\n> enough to know when to enable these options themselves. I am not sure I\n> am either. I would like to enable both, fix whatever breaks, and\n> programmatically enable the options when they make sense.\n\nSounds reasonable to me...I know, in my case, it isn't something I'd think\nto enable, and tend to be the type that uses btree's for indices all the\ntime cause I don't really understand why/where I'd use others...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 17 Feb 1999 21:58:42 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bushy Plans fixed" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I have fixed bushy plans, enabled with the postgres -b option. Here is\n> proof from OPTIMIZER_DEBUG:\n\nNice!!!\n\n> levels left: 7\n> (9 8 7 6 ): size=1 width=16\n> path list:\n> Nestloop size=1 cost=0.000000\n> MergeJoin size=1 cost=0.000000\n> clauses=(x5.y = x6.y)\n> sortouter=1 sortinner=1\n> SeqScan(6) size=0 cost=0.000000\n> SeqScan(7) size=0 cost=0.000000\n> MergeJoin size=1 cost=0.000000\n> clauses=(x7.y = x8.y)\n> sortouter=1 sortinner=1\n> SeqScan(8) size=0 cost=0.000000\n> SeqScan(9) size=0 cost=0.000000\n> \n> \n> The regression tests pass with bushy plans enabled. I am not sure if\n> the executor is actually using a bushy plan, though.\n\nSure, it does.\n\n> \n> The old bushy code was poor. 
It tried to do bushy plans by modifying\n> the joininfo nodes. I removed all that code, and just do the join\n> search in make_rels_by_clause_joins(). This is much more logical, and\n> does not require the joininfo setup/cleanup that the old code attempted.\n> \n> Frankly, I would like to enable bushy plans and right-handed plans by\n> default. The optimizer is now fast enough that a 9-table join is almost\n> instantaneous, even with bushy plans. People are not sophisticated\n> enough to know when to enable these options themselves. I am not sure I\n> am either. I would like to enable both, fix whatever breaks, and\n> programmatically enable the options when they make sense.\n\nWe don't need the right-sided plans code any more. \nI agree that we should enable bushes by default.\n\nVadim\n", "msg_date": "Thu, 18 Feb 1999 09:54:20 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bushy Plans fixed" }, { "msg_contents": "> > Frankly, I would like to enable bushy plans and right-handed plans by\n> > default. The optimizer is now fast enough that a 9-table join is almost\n> > instantaneous, even with bushy plans. People are not sophisticated\n> > enough to know when to enable these options themselves. I am not sure I\n> > am either. I would like to enable both, fix whatever breaks, and\n> > programmatically enable the options when they make sense.\n> \n> We don't need the right-sided plans code any more. \n> I agree that we should enable bushes by default.\n\nAre you saying right-hand plans are not useful if we have bushy plans?\nIf so, I will remove the right-hand code.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Feb 1999 22:36:25 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Bushy Plans fixed" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > Frankly, I would like to enable bushy plans and right-handed plans by\n> > > default. The optimizer is now fast enough that a 9-table join is almost\n> > > instantaneous, even with bushy plans. People are not sophisticated\n> > > enough to know when to enable these options themselves. I am not sure I\n> > > am either. 
I would like to enable both, fix whatever breaks, and\n> > > > programmatically enable the options when they make sense.\n> > >\n> > > We don't need the right-sided plans code any more.\n> > > I agree that we should enable bushes by default.\n> > \n> > Are you saying right-hand plans are not useful if we have bushy plans?\n> > If so, I will remove the right-hand code.\n> \n> I mean that bushes should be able to produce right-sided plans.\n> So - remove right-sided code.\n\nThe bushy code joined joined relations, not base relations. Currently,\nbase relations are always inner without right-hand plans. See\nmake_rels_by_clause_joins() and let me know what needs to be changed, or\nfeel free to modify it yourself. I will check your modifications.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 Feb 1999 23:43:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Bushy Plans fixed" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > I mean that bushes should be able to produce right-sided plans.\n> > So - remove right-sided code.\n> \n> The bushy code joined joined relations, not base relations. Currently,\n> base relations are always inner without right-hand plans. See\n> make_rels_by_clause_joins() and let me know what needs to be changed, or\n> feel free to modify it yourself. I will check your modifications.\n\nNo time -:(\nSo, leave right-sided plans as is and enable them and bushes\nby default.\n\nVadim\n", "msg_date": "Thu, 18 Feb 1999 11:57:32 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bushy Plans fixed" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > >\n> > > I mean that bushes should be able to produce right-sided plans.\n> > > So - remove right-sided code.\n> > \n> > The bushy code joined joined relations, not base relations. Currently,\n> > base relations are always inner without right-hand plans. See\n> > make_rels_by_clause_joins() and let me know what needs to be changed, or\n> > feel free to modify it yourself. I will check your modifications.\n> \n> No time -:(\n> So, leave right-sided plans as is and enable them and bushes\n> by default.\n\nI have removed right-sided plans in the bushy case, because the joins\nwill happen on their own when it processes the other joined rel.\n\nIn the joined rel/base rel join case, the code is not going to\nauto-handle this, is it, without right-handed plans?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Feb 1999 00:10:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Bushy Plans fixed" } ]
[ { "msg_contents": "Hi\n\nWell, everyone got so hung up on my side question that the main question\nwas not answered. So, a re-ask.\n\nWhat ever came of the talked-about 6.4.3? Will there be a 6.4.3? If so,\nabout when?\n\nHave a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner I'm excited about life! How about YOU!?\nProfessional Web Hosting and site design to include programming\nProudly powered by R H Linux 4.2, Apache 1.3.x, PHP 3.x, PostgreSQL 6.x\n-----------------------------------------------------------------------\nOnly if you know where you're going can you get there.\n\n", "msg_date": "Wed, 17 Feb 1999 22:26:42 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "6.4.3?" }, { "msg_contents": "On Wed, 17 Feb 1999, Terry Mackintosh wrote:\n\n> Hi\n> \n> Well, everyone got so hung up on my side question that the main question\n> was not answered. So, a re-ask.\n> \n> What ever came of the talked-about 6.4.3? Will there be a 6.4.3? If so,\n> about when?\n\nI believe that v6.4.3 is planned for packaging the day we put v6.5 into\nbeta, which is awaiting Vadim's word...if ppl are just waiting for me to\npackage it up, then...ack...sorry, let me know.\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 17 Feb 1999 23:58:34 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.3?" } ]
[ { "msg_contents": "This is all logical, all sequential access will be faster with bigger\nblocksize. It usually gets faster up to a blocksize of 256k.\nYou could have done the same test using dd.\nThe slowdown will be on random access using an index, when there is\ntoo much data to cache all data pages and needed index pages.\n1. The cache will not be as selective, since for one often needed\nrow the backend will cache 32k (plus at least 32k for each index\nneeded to find this row).\n2. The index lookup will use more CPU time, since one index page\nis larger.\n3. To cache one row with 3 indices you will need at least 128k memory\ncompared to 32k with an 8k pagesize.\n4. It will also increase the amount of disk space and cache needed\nfor the system tables (since they are all rather small)\n\nFor the default configuration a 4-8k page size seems to be a good\ntradeoff in other DBMS's. For PostgreSQL this might not be so,\nbecause of the lack of read ahead and write behind.\nRemember that the read ahead would actually need to read in bigger\nblocks (up to 256k) to perform best. The same is true for \nthe write behind.\n\nAndreas\n\n> Tatsuo Ishii wrote:\n> >> \n> >> But modern Unixes have read/write ahead i/o if it seems a sequential\n> >> access, don't they? I did some testing on my LinuxPPC box.\n> >> \n> >> 0. create table t2(i int,c char(4000));\n> >> 1. time psql -c \"copy t2 from '/tmp/aaa'\" test\n> >> (aaa has 5120 records and this will create 20MB table)\n> >> 2. time psql -c \"select count(*) from t2\" test\n> >> 3. total time of the regression test\n> >> \n> >> o result of testing 1\n> >> \n> >> 8K: 0.02user 0.04system 3:26.20elapsed\n> >> 32K: 0.03user 0.06system 0:48.25elapsed\n> >> \n> >> 32K is 4 times faster than 8k!\n> >> \n> >> o result of testing 2\n> >> \n> >> 8K: 0.02user 0.04system 6:00.31elapsed\n> >> 32K: 0.04user 0.02system 1:02.13elapsed\n> >> \n> >> 32K is nearly 6 times faster than 8k!\n> >\n> >Did you use the same -B for 8K and 32K ?\n> >You should use 4x buffers in 8K case!\n> \n> Ok. This time I started postmaster as 'postmaster -S -i -B 256'.\n> \n> test1:\n> 0.03user 0.02system 3:21.65elapsed\n> \n> test2:\n> 0.01user 0.08system 5:30.94elapsed\n> \n> a little bit faster, but no significant difference?\n> \n> \n> \n", "msg_date": "Thu, 18 Feb 1999 12:06:38 +0100", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] 8K block limit " } ]
[ { "msg_contents": "Postgres 6.4.2, gcc 2.8.1, hpux 10.20 and Sparc/Solaris 2.6\n(All debugging attempts were on the hpux machine, but the symptom also\noccurs on the Solaris box. Also tried the latest (feb 18) snapshot\non the hpux box, symptom still occurs.)\n\nI was installing the perl DBD::Pg module, but it fails the large object \ntest. To make sure it wasn't just an issue with perl (or the module),\nI compiled and ran src/test/examples/testlo, and it also fails.\n(testlo2 also has failed in the past, although I didn't use it for any\nof my current debugging attempts.)\n\n(In case there's any question, I created a database, and then created\na short text file called /tmp/gaga (ok, so I used the same file that \nthe perl module created for the perl test; you caught me) with one line of\ntext. Then I do:\n\n./testlo ronfoo /tmp/gaga /tmp/gaga1\n\nwhich fails complaining that there was an error reading the file (which\nactually is misleading-- the error actually is in writing to the new\nlarge object).\n\n(after you do this, you must drop the database and recreate it before you\nrun testlo again, otherwise you get errors about creating the xinv##### \n\"object\".)\n\nI've put a comment before the line that seems to be the \"offending\" line,\nI _don't_ know for sure what's wrong with it, just that the line before \nthe comment runs, and the line after the \"offender\" doesn't ever get \nexecuted.\n\nIs this a known problem with pg, or possibly a problem with gcc 2.8.1?\n(or something else entirely?)\n\n-ron\n\n/*-------------------------------------------------------------------------\n *\n * IDENTIFICATION\n *\t $Header: /usr/local/cvsroot/pgsql/src/backend/storage/large_object/inv_api.c,v 1.41.2.1 1998/12/13 05:08:19 momjian Exp $\n *\n *-------------------------------------------------------------------------\n\n[about line 1060]\n\n\t/*\n\t * Finally, copy the user's data buffer into the tuple. This violates\n\t * the tuple and class abstractions.\n\t */\n\n\tattptr = ((char *) ntup) + hoff;\n/* XXX this next line is where things \"just kind of stop\" */\n\t*((int32 *) attptr) = obj_desc->offset + nwrite - 1;\n \tattptr += sizeof(int32);\n \n [rest of file snipped]\n\n", "msg_date": "Thu, 18 Feb 1999 14:13:53 PST", "msg_from": "Ron Snyder <[email protected]>", "msg_from_op": true, "msg_subject": "large objects failing (hpux10.20 sparc/solaris 2.6, gcc 2.8.1)" }, { "msg_contents": "[snipped original message explaining how testlo fails for me on\nsparc/solaris 2.6 and hpux 10.20, gcc 2.8.1, postgres 6.4.2]\n\n> I made patches for 6.4.2 a week ago to fix problems of lobj reported\n> by another user. I'm not sure if his problem was solved or not, since\n> I got no reply from him. Anyway, with the patch, lotest.c runs fine on\n> my LinuxPPC box. More over, following commented out part of testlo.c\n> now passes without any problem (I guess these were commented out\n> becasue overwriting lobj did not work).\n> \n\n[patches snipped]\n\nTatsuo,\n I applied the patches to my 6.4.2 source tree (not the snapshot)-- \nthe patches applied cleanly, but my backend still goes into never never\nland at the line I mentioned before. 
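One guess, and it is only a guess: that statement stores an int32 through\nattptr, which is just ((char *) ntup) + hoff, so if hoff is not a multiple\nof 4 the store is misaligned -- and hppa and sparc both trap on misaligned\naccesses, unlike x86. A trivial standalone fragment shows the same kind\nof failure on these machines:\n\n\tchar\tbuf[8];\n\tint32\t*p = (int32 *) (buf + 1);\t/* misaligned address */\n\n\t*p = 42;\t/* bus error on strict-alignment CPUs */\n\n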
What version of gcc are you using?\nWould it be useful for me to post any additional info?\n\n-ron\n\n", "msg_date": "Fri, 19 Feb 1999 10:50:16 PST", "msg_from": "Ron Snyder <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] large objects failing (hpux10.20 sparc/solaris 2.6,\n\tgcc 2.8.1)" }, { "msg_contents": "> Postgres 6.4.2, gcc 2.8.1, hpux 10.20 and Sparc/Solaris 2.6\n> (All debugging attempts were on the hpux machine, but the symptom also\n> occurs on the Solaris box. Also tried the latest (feb 18) snapshot\n> on the hpux box, symptom still occurs.)\n> \n> I was installing the perl DBD::Pg module, but it fails the large object \n> test. To make sure it wasn't just an issue with perl (or the module),\n> I compiled and ran src/test/examples/testlo, and it also fails.\n> (testlo2 also has failed in the past, although I didn't use it for any\n> of my current debugging attempts.)\n> \n> (In case there's any question, I created a database, and then created\n> a short text file called /tmp/gaga (ok, so I used the same file that \n> the perl module created for the perl test; you caught me) with one line of\n> text. Then I do:\n> \n> ./testlo ronfoo /tmp/gaga /tmp/gaga1\n> \n> which fails complaining that there was an error reading the file (which\n> actually is misleading-- the error actually is in writing to the new\n> large object).\n> \n> (after you do this, you must drop the database and recreate it before you\n> run testlo again, otherwise you get errors about creating the xinv##### \n> \"object\".)\n\nI made patches for 6.4.2 a week ago to fix problems of lobj reported\nby another user. I'm not sure if his problem was solved or not, since\nI got no reply from him. Anyway, with the patch, lotest.c runs fine on\nmy LinuxPPC box. More over, following commented out part of testlo.c\nnow passes without any problem (I guess these were commented out\nbecasue overwriting lobj did not work).\n\n/*\n\tprintf(\"\\tas large object %d.\\n\", lobjOid);\n\n\tprintf(\"picking out bytes 1000-2000 of the large object\\n\");\n\tpickout(conn, lobjOid, 1000, 1000);\n\n\tprintf(\"overwriting bytes 1000-2000 of the large object with X's\\n\");\n\toverwrite(conn, lobjOid, 1000, 1000);\n*/\n\nTatsuo Ishii\n------------------------------ cut here -------------------------------\n*** postgresql-6.4.2/src/backend/storage/large_object/inv_api.c.orig\tSun Dec 13 14:08:19 1998\n--- postgresql-6.4.2/src/backend/storage/large_object/inv_api.c\tFri Feb 12 20:21:05 1999\n***************\n*** 545,555 ****\n \t\t\ttuplen = inv_wrnew(obj_desc, buf, nbytes - nwritten);\n \t\telse\n \t\t{\n! \t\t\tif (obj_desc->offset > obj_desc->highbyte)\n \t\t\t\ttuplen = inv_wrnew(obj_desc, buf, nbytes - nwritten);\n \t\t\telse\n \t\t\t\ttuplen = inv_wrold(obj_desc, buf, nbytes - nwritten, tuple, buffer);\n! \t\t\tReleaseBuffer(buffer);\n \t\t}\n \n \t\t/* move pointers past the amount we just wrote */\n--- 545,561 ----\n \t\t\ttuplen = inv_wrnew(obj_desc, buf, nbytes - nwritten);\n \t\telse\n \t\t{\n! \t\tif (obj_desc->offset > obj_desc->highbyte) {\n \t\t\t\ttuplen = inv_wrnew(obj_desc, buf, nbytes - nwritten);\n+ \t\t\t\tReleaseBuffer(buffer);\n+ \t\t\t}\n \t\t\telse\n \t\t\t\ttuplen = inv_wrold(obj_desc, buf, nbytes - nwritten, tuple, buffer);\n! \t\t\t/* inv_wrold() has already issued WriteBuffer()\n! \t\t\t which has decremented local reference counter\n! \t\t\t (LocalRefCount). So we should not call\n! \t\t\t ReleaseBuffer() here. -- Tatsuo 99/2/4\n! 
\t\t\tReleaseBuffer(buffer); */\n \t\t}\n \n \t\t/* move pointers past the amount we just wrote */\n***************\n*** 624,648 ****\n \t\t|| obj_desc->offset < obj_desc->lowbyte\n \t\t|| !ItemPointerIsValid(&(obj_desc->htid)))\n \t{\n \n \t\t/* initialize scan key if not done */\n \t\tif (obj_desc->iscan == (IndexScanDesc) NULL)\n \t\t{\n- \t\t\tScanKeyData skey;\n- \n \t\t\t/*\n \t\t\t * As scan index may be prematurely closed (on commit), we\n \t\t\t * must use object current offset (was 0) to reinitialize the\n \t\t\t * entry [ PA ].\n \t\t\t */\n- \t\t\tScanKeyEntryInitialize(&skey, 0x0, 1, F_INT4GE,\n- \t\t\t\t\t\t\t\t Int32GetDatum(obj_desc->offset));\n \t\t\tobj_desc->iscan =\n \t\t\t\tindex_beginscan(obj_desc->index_r,\n \t\t\t\t\t\t\t\t(bool) 0, (uint16) 1,\n \t\t\t\t\t\t\t\t&skey);\n \t\t}\n- \n \t\tdo\n \t\t{\n \t\t\tres = index_getnext(obj_desc->iscan, ForwardScanDirection);\n--- 630,655 ----\n \t\t|| obj_desc->offset < obj_desc->lowbyte\n \t\t|| !ItemPointerIsValid(&(obj_desc->htid)))\n \t{\n+ \t\tScanKeyData skey;\n+ \n+ \t\tScanKeyEntryInitialize(&skey, 0x0, 1, F_INT4GE,\n+ \t\t\t\t Int32GetDatum(obj_desc->offset));\n \n \t\t/* initialize scan key if not done */\n \t\tif (obj_desc->iscan == (IndexScanDesc) NULL)\n \t\t{\n \t\t\t/*\n \t\t\t * As scan index may be prematurely closed (on commit), we\n \t\t\t * must use object current offset (was 0) to reinitialize the\n \t\t\t * entry [ PA ].\n \t\t\t */\n \t\t\tobj_desc->iscan =\n \t\t\t\tindex_beginscan(obj_desc->index_r,\n \t\t\t\t\t\t\t\t(bool) 0, (uint16) 1,\n \t\t\t\t\t\t\t\t&skey);\n+ \t\t} else {\n+ \t\t\tindex_rescan(obj_desc->iscan, false, &skey);\n \t\t}\n \t\tdo\n \t\t{\n \t\t\tres = index_getnext(obj_desc->iscan, ForwardScanDirection);\n***************\n*** 666,672 ****\n \t\t\ttuple = heap_fetch(obj_desc->heap_r, SnapshotNow,\n \t\t\t\t\t\t\t &res->heap_iptr, buffer);\n \t\t\tpfree(res);\n! \t\t} while (tuple == (HeapTuple) NULL);\n \n \t\t/* remember this tid -- we may need it for later reads/writes */\n \t\tItemPointerCopy(&tuple->t_ctid, &obj_desc->htid);\n--- 673,679 ----\n \t\t\ttuple = heap_fetch(obj_desc->heap_r, SnapshotNow,\n \t\t\t\t\t\t\t &res->heap_iptr, buffer);\n \t\t\tpfree(res);\n! \t\t} while (!HeapTupleIsValid(tuple));\n \n \t\t/* remember this tid -- we may need it for later reads/writes */\n \t\tItemPointerCopy(&tuple->t_ctid, &obj_desc->htid);\n***************\n*** 675,680 ****\n--- 682,691 ----\n \t{\n \t\ttuple = heap_fetch(obj_desc->heap_r, SnapshotNow,\n \t\t\t\t\t\t &(obj_desc->htid), buffer);\n+ \t\tif (!HeapTupleIsValid(tuple)) {\n+ \t\t elog(ERROR,\n+ \t\t \"inv_fetchtup: heap_fetch failed\");\n+ \t\t}\n \t}\n \n \t/*\n***************\n*** 746,757 ****\n \n \tnblocks = RelationGetNumberOfBlocks(hr);\n \n! \tif (nblocks > 0)\n \t\tbuffer = ReadBuffer(hr, nblocks - 1);\n! \telse\n \t\tbuffer = ReadBuffer(hr, P_NEW);\n! \n! \tpage = BufferGetPage(buffer);\n \n \t/*\n \t * If the last page is too small to hold all the data, and it's too\n--- 757,771 ----\n \n \tnblocks = RelationGetNumberOfBlocks(hr);\n \n! \tif (nblocks > 0) {\n \t\tbuffer = ReadBuffer(hr, nblocks - 1);\n! \t\tpage = BufferGetPage(buffer);\n! \t}\n! \telse {\n \t\tbuffer = ReadBuffer(hr, P_NEW);\n! \t\tpage = BufferGetPage(buffer);\n! \t\tPageInit(page, BufferGetPageSize(buffer), 0);\n! \t}\n \n \t/*\n \t * If the last page is too small to hold all the data, and it's too\n***************\n*** 865,876 ****\n \n \t\tnblocks = RelationGetNumberOfBlocks(hr);\n \n! 
\t\tif (nblocks > 0)\n \t\t\tnewbuf = ReadBuffer(hr, nblocks - 1);\n! \t\telse\n \t\t\tnewbuf = ReadBuffer(hr, P_NEW);\n \n- \t\tnewpage = BufferGetPage(newbuf);\n \t\tfreespc = IFREESPC(newpage);\n \n \t\t/*\n--- 879,894 ----\n \n \t\tnblocks = RelationGetNumberOfBlocks(hr);\n \n! \t\tif (nblocks > 0) {\n \t\t\tnewbuf = ReadBuffer(hr, nblocks - 1);\n! \t\t\tnewpage = BufferGetPage(newbuf);\n! \t\t}\n! \t\telse {\n \t\t\tnewbuf = ReadBuffer(hr, P_NEW);\n+ \t\t\tnewpage = BufferGetPage(newbuf);\n+ \t\t\tPageInit(newpage, BufferGetPageSize(newbuf), 0);\n+ \t\t}\n \n \t\tfreespc = IFREESPC(newpage);\n \n \t\t/*\n***************\n*** 973,978 ****\n--- 991,999 ----\n \tWriteBuffer(buffer);\n \tif (newbuf != buffer)\n \t\tWriteBuffer(newbuf);\n+ \n+ \t/* Tuple id is no longer valid */\n+ \tItemPointerSetInvalid(&(obj_desc->htid));\n \n \t/* done */\n \treturn nwritten;\n", "msg_date": "Fri, 19 Feb 1999 23:13:29 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] large objects failing (hpux10.20 sparc/solaris 2.6,\n\tgcc 2.8.1)" }, { "msg_contents": "> Tatsuo,\n> I applied the patches to my 6.4.2 source tree (not the snapshot)-- \n> the patches applied cleanly, but my backend still goes into never never\n> land at the line I mentioned before. What version of gcc are you using?\n> Would it be useful for me to post any additional info?\n\nLet me try on Solaris2.6/sparc in my office first. Today is Saturday\nin Japan, so the testing will be the day after tomorrow. Is it ok for\nyou?\n\nBTW, gcc version I'm using on LinuxPPC is egcs-2.90.25 980302\n(egcs-1.0.2 prerelease).\n---\nTatsuo Ishii\n", "msg_date": "Sat, 20 Feb 1999 11:16:04 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] large objects failing (hpux10.20 sparc/solaris 2.6,\n\tgcc 2.8.1)" }, { "msg_contents": ">> I applied the patches to my 6.4.2 source tree (not the snapshot)-- \n>> the patches applied cleanly, but my backend still goes into never never\n>> land at the line I mentioned before. What version of gcc are you using?\n>> Would it be useful for me to post any additional info?\n>\n>Let me try on Solaris2.6/sparc in my office first. Today is Saturday\n>in Japan, so the testing will be the day after tomorrow. Is it ok for\n>you?\n\nOk. I found an align problem in lobj that might not appear other than\nSolaris/sparc. Please apply included patches to\nsrc/backend/storage/large_object/inv_api.c and try again. (These are\naddtions to the previous ones).\n\nHope this is the last bug:-)\n--\nTatsuo Ishii\n--------------------------------------------------------------------\n*** inv_api.c.orig2\tMon Feb 22 16:15:31 1999\n--- inv_api.c\tMon Feb 22 16:16:55 1999\n***************\n*** 1019,1028 ****\n \n \t/* compute tuple size -- no nulls */\n \thoff = offsetof(HeapTupleData, t_bits);\n \n \t/* add in olastbyte, varlena.vl_len, varlena.vl_dat */\n \ttupsize = hoff + (2 * sizeof(int32)) + nwrite;\n! \ttupsize = LONGALIGN(tupsize);\n \n \t/*\n \t * Allocate the tuple on the page, violating the page abstraction.\n--- 1019,1029 ----\n \n \t/* compute tuple size -- no nulls */\n \thoff = offsetof(HeapTupleData, t_bits);\n+ \thoff = DOUBLEALIGN(hoff);\n \n \t/* add in olastbyte, varlena.vl_len, varlena.vl_dat */\n \ttupsize = hoff + (2 * sizeof(int32)) + nwrite;\n! 
\ttupsize = DOUBLEALIGN(tupsize);\n \n \t/*\n \t * Allocate the tuple on the page, violating the page abstraction.\n", "msg_date": "Mon, 22 Feb 1999 16:30:12 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] large objects failing (hpux10.20 sparc/solaris 2.6,\n\tgcc 2.8.1)" }, { "msg_contents": "Applied to the main tree. I found the patch malformed, so I applied it\nby hand. Interesting you had to double-align.\n\n\n> >> I applied the patches to my 6.4.2 source tree (not the snapshot)-- \n> >> the patches applied cleanly, but my backend still goes into never never\n> >> land at the line I mentioned before. What version of gcc are you using?\n> >> Would it be useful for me to post any additional info?\n> >\n> >Let me try on Solaris2.6/sparc in my office first. Today is Saturday\n> >in Japan, so the testing will be the day after tomorrow. Is it ok for\n> >you?\n> \n> Ok. I found an align problem in lobj that might not appear other than\n> Solaris/sparc. Please apply included patches to\n> src/backend/storage/large_object/inv_api.c and try again. (These are\n> addtions to the previous ones).\n> \n> Hope this is the last bug:-)\n> --\n> Tatsuo Ishii\n> --------------------------------------------------------------------\n> *** inv_api.c.orig2\tMon Feb 22 16:15:31 1999\n> --- inv_api.c\tMon Feb 22 16:16:55 1999\n> ***************\n> *** 1019,1028 ****\n> \n> \t/* compute tuple size -- no nulls */\n> \thoff = offsetof(HeapTupleData, t_bits);\n> \n> \t/* add in olastbyte, varlena.vl_len, varlena.vl_dat */\n> \ttupsize = hoff + (2 * sizeof(int32)) + nwrite;\n> ! \ttupsize = LONGALIGN(tupsize);\n> \n> \t/*\n> \t * Allocate the tuple on the page, violating the page abstraction.\n> --- 1019,1029 ----\n> \n> \t/* compute tuple size -- no nulls */\n> \thoff = offsetof(HeapTupleData, t_bits);\n> + \thoff = DOUBLEALIGN(hoff);\n> \n> \t/* add in olastbyte, varlena.vl_len, varlena.vl_dat */\n> \ttupsize = hoff + (2 * sizeof(int32)) + nwrite;\n> ! \ttupsize = DOUBLEALIGN(tupsize);\n> \n> \t/*\n> \t * Allocate the tuple on the page, violating the page abstraction.\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Feb 1999 11:46:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] large objects failing (hpux10.20 sparc/solaris 2.6,\n\tgcc 2.8.1)" }, { "msg_contents": "> Ok. I found an align problem in lobj that might not appear other than\n> Solaris/sparc. Please apply included patches to\n> src/backend/storage/large_object/inv_api.c and try again. (These are\n> addtions to the previous ones).\n> \n> Hope this is the last bug:-)\n\nTatsuo-- I've been out for a couple of days, but I wanted to let you\nknow that this did indeed fix my problems.\n\nThanks!\n\n-ron\n", "msg_date": "Wed, 24 Feb 1999 9:34:22 PST", "msg_from": "Ron Snyder <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] large objects failing (hpux10.20 sparc/solaris 2.6,\n\tgcc 2.8.1)" }, { "msg_contents": ">> Ok. I found an align problem in lobj that might not appear other than\n>> Solaris/sparc. Please apply included patches to\n>> src/backend/storage/large_object/inv_api.c and try again. 
(These are\n>> addtions to the previous ones).\n>> \n>> Hope this is the last bug:-)\n>\n>Tatsuo-- I've been out for a couple of days, but I wanted to let you\n>know that this did indeed fix my problems.\n>\n>Thanks!\n\nYou are welcome!\n\n>To Bruce:\n>Thanks for taking care of my previous patches for current. If\n>included patch is ok, I will make one for current.\n\nNow I'm working on lobj in current tree (Currently lobj in 6.5 seems\nbroken).\n--\nTatsuo Ishii\n", "msg_date": "Thu, 25 Feb 1999 10:42:04 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] large objects failing (hpux10.20 sparc/solaris 2.6,\n\tgcc 2.8.1)" }, { "msg_contents": ">>To Bruce:\n>>Thanks for taking care of my previous patches for current. If\n>>included patch is ok, I will make one for current.\n>\n>Now I'm working on lobj in current tree (Currently lobj in 6.5 seems\n>broken).\n\nDone. \n\no overwriting an existing lobj now works\no 8KB garbage block always inserted problem is fixed\n\n--\nTatsuo Ishii\n--------------------------- cut here ---------------------------------\n*** pgsql/src/backend/storage/large_object/inv_api.c.orig\tWed Feb 24 12:45:24 1999\n--- pgsql/src/backend/storage/large_object/inv_api.c\tThu Feb 25 15:58:10 1999\n***************\n*** 72,78 ****\n *\t\tFor subsequent notes, [PA] is Pascal André <[email protected]>\n */\n \n! #define IFREESPC(p)\t\t(PageGetFreeSpace(p) - sizeof(HeapTupleData) - sizeof(struct varlena) - sizeof(int32))\n #define IMAXBLK\t\t\t8092\n #define IMINBLK\t\t\t512\n \n--- 72,81 ----\n *\t\tFor subsequent notes, [PA] is Pascal André <[email protected]>\n */\n \n! #define IFREESPC(p)\t\t(PageGetFreeSpace(p) - \\\n! \t\t\t\t DOUBLEALIGN(offsetof(HeapTupleHeaderData,t_bits)) - \\\n! \t\t\t\t DOUBLEALIGN(sizeof(struct varlena) + sizeof(int32)) - \\\n! \t\t\t\t sizeof(double))\n #define IMAXBLK\t\t\t8092\n #define IMINBLK\t\t\t512\n \n***************\n*** 623,646 ****\n \t\t|| obj_desc->offset < obj_desc->lowbyte\n \t\t|| !ItemPointerIsValid(&(obj_desc->htid)))\n \t{\n \n \t\t/* initialize scan key if not done */\n \t\tif (obj_desc->iscan == (IndexScanDesc) NULL)\n \t\t{\n- \t\t\tScanKeyData skey;\n- \n \t\t\t/*\n \t\t\t * As scan index may be prematurely closed (on commit), we\n \t\t\t * must use object current offset (was 0) to reinitialize the\n \t\t\t * entry [ PA ].\n \t\t\t */\n- \t\t\tScanKeyEntryInitialize(&skey, 0x0, 1, F_INT4GE,\n- \t\t\t\t\t\t\t\t Int32GetDatum(obj_desc->offset));\n \t\t\tobj_desc->iscan = index_beginscan(obj_desc->index_r,\n \t\t\t\t\t\t\t\t(bool) 0, (uint16) 1,\n \t\t\t\t\t\t\t\t&skey);\n! \t\t}\n! \n \t\tdo\n \t\t{\n \t\t\tres = index_getnext(obj_desc->iscan, ForwardScanDirection);\n--- 626,650 ----\n \t\t|| obj_desc->offset < obj_desc->lowbyte\n \t\t|| !ItemPointerIsValid(&(obj_desc->htid)))\n \t{\n+ \t\tScanKeyData skey;\n+ \n+ \t\tScanKeyEntryInitialize(&skey, 0x0, 1, F_INT4GE,\n+ \t\t\t\t Int32GetDatum(obj_desc->offset));\n \n \t\t/* initialize scan key if not done */\n \t\tif (obj_desc->iscan == (IndexScanDesc) NULL)\n \t\t{\n \t\t\t/*\n \t\t\t * As scan index may be prematurely closed (on commit), we\n \t\t\t * must use object current offset (was 0) to reinitialize the\n \t\t\t * entry [ PA ].\n \t\t\t */\n \t\t\tobj_desc->iscan = index_beginscan(obj_desc->index_r,\n \t\t\t\t\t\t\t\t(bool) 0, (uint16) 1,\n \t\t\t\t\t\t\t\t&skey);\n! \t\t} else {\n! \t\t\tindex_rescan(obj_desc->iscan, false, &skey);\n! 
\t}\n \t\tdo\n \t\t{\n \t\t\tres = index_getnext(obj_desc->iscan, ForwardScanDirection);\n***************\n*** 673,678 ****\n--- 677,685 ----\n \t{\n \t\ttuple->t_self = obj_desc->htid;\n \t\theap_fetch(obj_desc->heap_r, SnapshotNow, tuple, buffer);\n+ \t\tif (tuple->t_data == NULL) {\n+ \t\t\telog(ERROR, \"inv_fetchtup: heap_fetch failed\");\n+ \t\t}\n \t}\n \n \t/*\n***************\n*** 744,755 ****\n \n \tnblocks = RelationGetNumberOfBlocks(hr);\n \n! \tif (nblocks > 0)\n \t\tbuffer = ReadBuffer(hr, nblocks - 1);\n! \telse\n \t\tbuffer = ReadBuffer(hr, P_NEW);\n! \n! \tpage = BufferGetPage(buffer);\n \n \t/*\n \t * If the last page is too small to hold all the data, and it's too\n--- 751,765 ----\n \n \tnblocks = RelationGetNumberOfBlocks(hr);\n \n! \tif (nblocks > 0) {\n \t\tbuffer = ReadBuffer(hr, nblocks - 1);\n! \t\tpage = BufferGetPage(buffer);\n! \t}\n! \telse {\n \t\tbuffer = ReadBuffer(hr, P_NEW);\n! \t\tpage = BufferGetPage(buffer);\n! \t\tPageInit(page, BufferGetPageSize(buffer), 0);\n! \t}\n \n \t/*\n \t * If the last page is too small to hold all the data, and it's too\n***************\n*** 864,875 ****\n \n \t\tnblocks = RelationGetNumberOfBlocks(hr);\n \n! \t\tif (nblocks > 0)\n \t\t\tnewbuf = ReadBuffer(hr, nblocks - 1);\n! \t\telse\n \t\t\tnewbuf = ReadBuffer(hr, P_NEW);\n \n- \t\tnewpage = BufferGetPage(newbuf);\n \t\tfreespc = IFREESPC(newpage);\n \n \t\t/*\n--- 874,889 ----\n \n \t\tnblocks = RelationGetNumberOfBlocks(hr);\n \n! \t\tif (nblocks > 0) {\n \t\t\tnewbuf = ReadBuffer(hr, nblocks - 1);\n! \t\t\tnewpage = BufferGetPage(newbuf);\n! \t\t}\n! \t\telse {\n \t\t\tnewbuf = ReadBuffer(hr, P_NEW);\n+ \t\t\tnewpage = BufferGetPage(newbuf);\n+ \t\t\tPageInit(newpage, BufferGetPageSize(newbuf), 0);\n+ \t\t}\n \n \t\tfreespc = IFREESPC(newpage);\n \n \t\t/*\n***************\n*** 973,978 ****\n--- 987,995 ----\n \tWriteBuffer(buffer);\n \tif (newbuf != buffer)\n \t\tWriteBuffer(newbuf);\n+ \n+ \t/* Tuple id is no longer valid */\n+ \tItemPointerSetInvalid(&(obj_desc->htid));\n \n \t/* done */\n \treturn nwritten;\n", "msg_date": "Thu, 25 Feb 1999 16:33:00 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "[CURRENT] large object fix" }, { "msg_contents": "Applied.\n\n\n\n> >>To Bruce:\n> >>Thanks for taking care of my previous patches for current. If\n> >>included patch is ok, I will make one for current.\n> >\n> >Now I'm working on lobj in current tree(Currently lobj in 6.5 seems\n> >broken).\n> \n> Done. \n> \n> o overwriting an existing lobj now works\n> o 8KB garbage block always inserted problem is fixed\n> \n> --\n> Tatsuo Ishii\n> --------------------------- cut here ---------------------------------\n> *** pgsql/src/backend/storage/large_object/inv_api.c.orig\tWed Feb 24 12:45:24 1999\n> --- pgsql/src/backend/storage/large_object/inv_api.c\tThu Feb 25 15:58:10 1999\n> ***************\n> *** 72,78 ****\n> *\t\tFor subsequent notes, [PA] is Pascal Andr\u001b,Ai\u001b(B <[email protected]>\n> */\n> \n> ! #define IFREESPC(p)\t\t(PageGetFreeSpace(p) - sizeof(HeapTupleData) - sizeof(struct varlena) - sizeof(int32))\n> #define IMAXBLK\t\t\t8092\n> #define IMINBLK\t\t\t512\n> \n> --- 72,81 ----\n> *\t\tFor subsequent notes, [PA] is Pascal Andr\u001b,Ai\u001b(B <[email protected]>\n> */\n> \n> ! #define IFREESPC(p)\t\t(PageGetFreeSpace(p) - \\\n> ! \t\t\t\t DOUBLEALIGN(offsetof(HeapTupleHeaderData,t_bits)) - \\\n> ! \t\t\t\t DOUBLEALIGN(sizeof(struct varlena) + sizeof(int32)) - \\\n> ! 
\t\t\t\t sizeof(double))\n> #define IMAXBLK\t\t\t8092\n> #define IMINBLK\t\t\t512\n> \n> ***************\n> *** 623,646 ****\n> \t\t|| obj_desc->offset < obj_desc->lowbyte\n> \t\t|| !ItemPointerIsValid(&(obj_desc->htid)))\n> \t{\n> \n> \t\t/* initialize scan key if not done */\n> \t\tif (obj_desc->iscan == (IndexScanDesc) NULL)\n> \t\t{\n> - \t\t\tScanKeyData skey;\n> - \n> \t\t\t/*\n> \t\t\t * As scan index may be prematurely closed (on commit), we\n> \t\t\t * must use object current offset (was 0) to reinitialize the\n> \t\t\t * entry [ PA ].\n> \t\t\t */\n> - \t\t\tScanKeyEntryInitialize(&skey, 0x0, 1, F_INT4GE,\n> - \t\t\t\t\t\t\t\t Int32GetDatum(obj_desc->offset));\n> \t\t\tobj_desc->iscan = index_beginscan(obj_desc->index_r,\n> \t\t\t\t\t\t\t\t(bool) 0, (uint16) 1,\n> \t\t\t\t\t\t\t\t&skey);\n> ! \t\t}\n> ! \n> \t\tdo\n> \t\t{\n> \t\t\tres = index_getnext(obj_desc->iscan, ForwardScanDirection);\n> --- 626,650 ----\n> \t\t|| obj_desc->offset < obj_desc->lowbyte\n> \t\t|| !ItemPointerIsValid(&(obj_desc->htid)))\n> \t{\n> + \t\tScanKeyData skey;\n> + \n> + \t\tScanKeyEntryInitialize(&skey, 0x0, 1, F_INT4GE,\n> + \t\t\t\t Int32GetDatum(obj_desc->offset));\n> \n> \t\t/* initialize scan key if not done */\n> \t\tif (obj_desc->iscan == (IndexScanDesc) NULL)\n> \t\t{\n> \t\t\t/*\n> \t\t\t * As scan index may be prematurely closed (on commit), we\n> \t\t\t * must use object current offset (was 0) to reinitialize the\n> \t\t\t * entry [ PA ].\n> \t\t\t */\n> \t\t\tobj_desc->iscan = index_beginscan(obj_desc->index_r,\n> \t\t\t\t\t\t\t\t(bool) 0, (uint16) 1,\n> \t\t\t\t\t\t\t\t&skey);\n> ! \t\t} else {\n> ! \t\t\tindex_rescan(obj_desc->iscan, false, &skey);\n> ! \t}\n> \t\tdo\n> \t\t{\n> \t\t\tres = index_getnext(obj_desc->iscan, ForwardScanDirection);\n> ***************\n> *** 673,678 ****\n> --- 677,685 ----\n> \t{\n> \t\ttuple->t_self = obj_desc->htid;\n> \t\theap_fetch(obj_desc->heap_r, SnapshotNow, tuple, buffer);\n> + \t\tif (tuple->t_data == NULL) {\n> + \t\t\telog(ERROR, \"inv_fetchtup: heap_fetch failed\");\n> + \t\t}\n> \t}\n> \n> \t/*\n> ***************\n> *** 744,755 ****\n> \n> \tnblocks = RelationGetNumberOfBlocks(hr);\n> \n> ! \tif (nblocks > 0)\n> \t\tbuffer = ReadBuffer(hr, nblocks - 1);\n> ! \telse\n> \t\tbuffer = ReadBuffer(hr, P_NEW);\n> ! \n> ! \tpage = BufferGetPage(buffer);\n> \n> \t/*\n> \t * If the last page is too small to hold all the data, and it's too\n> --- 751,765 ----\n> \n> \tnblocks = RelationGetNumberOfBlocks(hr);\n> \n> ! \tif (nblocks > 0) {\n> \t\tbuffer = ReadBuffer(hr, nblocks - 1);\n> ! \t\tpage = BufferGetPage(buffer);\n> ! \t}\n> ! \telse {\n> \t\tbuffer = ReadBuffer(hr, P_NEW);\n> ! \t\tpage = BufferGetPage(buffer);\n> ! \t\tPageInit(page, BufferGetPageSize(buffer), 0);\n> ! \t}\n> \n> \t/*\n> \t * If the last page is too small to hold all the data, and it's too\n> ***************\n> *** 864,875 ****\n> \n> \t\tnblocks = RelationGetNumberOfBlocks(hr);\n> \n> ! \t\tif (nblocks > 0)\n> \t\t\tnewbuf = ReadBuffer(hr, nblocks - 1);\n> ! \t\telse\n> \t\t\tnewbuf = ReadBuffer(hr, P_NEW);\n> \n> - \t\tnewpage = BufferGetPage(newbuf);\n> \t\tfreespc = IFREESPC(newpage);\n> \n> \t\t/*\n> --- 874,889 ----\n> \n> \t\tnblocks = RelationGetNumberOfBlocks(hr);\n> \n> ! \t\tif (nblocks > 0) {\n> \t\t\tnewbuf = ReadBuffer(hr, nblocks - 1);\n> ! \t\t\tnewpage = BufferGetPage(newbuf);\n> ! \t\t}\n> ! 
\t\telse {\n> \t\t\tnewbuf = ReadBuffer(hr, P_NEW);\n> + \t\t\tnewpage = BufferGetPage(newbuf);\n> + \t\t\tPageInit(newpage, BufferGetPageSize(newbuf), 0);\n> + \t\t}\n> \n> \t\tfreespc = IFREESPC(newpage);\n> \n> \t\t/*\n> ***************\n> *** 973,978 ****\n> --- 987,995 ----\n> \tWriteBuffer(buffer);\n> \tif (newbuf != buffer)\n> \t\tWriteBuffer(newbuf);\n> + \n> + \t/* Tuple id is no longer valid */\n> + \tItemPointerSetInvalid(&(obj_desc->htid));\n> \n> \t/* done */\n> \treturn nwritten;\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Mar 1999 11:07:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [CURRENT] large object fix" } ]
[ { "msg_contents": "Does anyone know what the maximum number of items there can be in a list\nfor the IN predicate? I have a maintenance script that extracts\ninformation from one dB, builds a list for the IN predicate, connects to\na different dB, and runs a query based on the list generated. In one\ninstance the list that was built had ~450 items, which\ncrashed the server.\n\nIs it a matter of the size of the list in characters and not items in\nthe list? The whole list, including the quotes around the items in the\nlist, commas, etc. was ~5k characters.\n\nThanks\n\nDennis\n\n---------------------------------------------------------------------------\n", "msg_date": "Thu, 18 Feb 1999 09:25:24 -0500", "msg_from": "Dennis Roesler <[email protected]>", "msg_from_op": true, "msg_subject": "list limit for IN predicate?" } ]
[ { "msg_contents": "hi, hackers!\nIf someone can, please help me!\n\nI accidentally deleted some vital data from a table, and now I'm\nwondering if it is possible to \"undelete\" at least some of the deleted\nrecords before they are finally swept out by vacuum. If yes, how do I do it?\n\nTIA, \nAleksey.\n\n\n\n", "msg_date": "Thu, 18 Feb 1999 19:32:07 +0200 (IST)", "msg_from": "Postgres DBA <[email protected]>", "msg_from_op": true, "msg_subject": "UnDelete?" }, { "msg_contents": "Postgres DBA wrote:\n> \n> hi, hackers!\n> If someone can, please help me!\n> \n> I accidentally deleted some vital data from a table, and now I'm\n> wondering if it is possible to \"undelete\" at least some of the deleted\n> records before they are finally swept out by vacuum. If yes, how do I do it?\n\nThis used to be possible on older versions of Postgres; you only had to \ngive the time qualification of the period for which the data was\nvalid.\nFor some reason (performance ?) this feature was removed ;(\n\nAnd I'm pretty sure that the data is still inside the tables, so some \ncreative work with a hex editor inside a _copy_ of the table would be \nable to get at it. \nSo as a first thing I advise you to make a copy of your database \nbackend directory, at least of the tables that contain your data.\n(the db directory is the one you gave to postmaster at startup +\ndatabase name,\nthe datafiles for tables are named exactly the same as the tables)\n\nNext, get the programmers docs (and/or read code ;), and start digging.\n\nIf you come up with some tools for salvaging lost data, share them :)\n\n---------------\nHannu\n", "msg_date": "Thu, 18 Feb 1999 20:36:20 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] UnDelete?" } ]
[ { "msg_contents": "The in-list problem is a query optimizer classic. Ingres turns in-lists into\nsequences of or'ed \"=\" predicates (e.g. a = value1 or a = value2 or ...).\nQuery optimizers usually represent where clauses and other expressions as\ntree structures and then use recursive calls to analyze their contents. A\nlarge in-list predicate spawns a very deep tree structure (because of the\ntransformation to or'ed \"=\"s) and the recursive analysis of this tree\nresults in many calls being piled onto the C call stack. By the time you add\nup the stack frame requirements for each of these recursive calls, a large\nenough in-list can exceed the size of the call stack being used by the query\ncompilation. Since 1.2 (or maybe 2.0) Ingres has incorporated a technique\nwhich detects the stack overflow and just fails the query. Before then, the\nstack overflow was permitted to happen and arbitrary server memory\noverlaying took place. This could lead to failure of other user threads and\neven server failure.\n\nThe number of entries in the in-list is the critical factor in whether the\nstack overflow happens or not (not the size of the constant values in the\nin-list entries). Tests I ran last July showed that the default stack size\n(64K?) allows roughly 150 entries in an in-list in pre-2.0 releases of\nIngres. 2.0 changes reduced this to a little under 100, but a fix produced\nin the October/November 1998 timeframe increased the threshold up to over\n200 (for 2.0, only). \n\nIf this is a chronic problem, the C stacksize should be increased using CBF.\nUnfortunately, this affects all user threads running on the server (meaning\nmore memory is allocated for everyone). So if this causes you concern and if\nthis is truly a scheduleable maintenance query, you could bring up a server\nwith the increased stacksize, run the queries, then restart the server with\nthe smaller stacksize.\n\nDoug.\n\nDoug Inkster\nComputer Associates Intl.,\n2580 Innes Road,\nGloucester, Ont. Canada\n\nphone: (613)837-1236\nemail: [email protected]\n or [email protected]\n\n> -----Original Message-----\n> From:\tKeith Jamieson [SMTP:[email protected]]\n> Sent:\tThursday, February 18, 1999 11:07 AM\n> To:\[email protected]\n> Subject:\tRe: list limit for IN predicate?\n> \n> I encountered a similar problem a few years ago. I was building up a query\n> using the In clause, but as soon as we got more than 255 list items, the\n> query failed.\n> \n> Dennis Roesler wrote:\n> \n> > Does anyone know what the maximum number of items there can be in a list\n> > for the IN predicate. I have a maintenance script that extracts\n> > information from one dB, builds a list for the IN predicate, connects to\n> > a different dB, and runs a query based on the list generated. In one\n> > instance the list that was built had ~450 items in the list, which\n> > crashed the server.\n> >\n> > Is it a matter of the size of the list in characters and not items in\n> > the list? The whole list, including the quotes around the items in the\n> > list, commas, etc. was ~5k characters.\n> >\n> > Thanks\n> >\n> > Dennis\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "18 Feb 1999 12:39:26 -0600", "msg_from": "[email protected] (Inkster, Douglas)", "msg_from_op": true, "msg_subject": "RE: list limit for IN predicate?" } ]
[ { "msg_contents": "I was wondering if there is a way in CVS\nto tell it to kill my local copy of a file\nif there is a problem with merge? I didn't\nchange or patch these files, but every once\nand a while something like this happens:\n\n-------------------------------------------------\nP src/backend/parser/analyze.c\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/gram.c,v\nretrieving revision 2.71\nretrieving revision 2.73\nMerging differences between 2.71 and 2.73 into gram.c\nrcsmerge: warning: conflicts during merge\ncvs server: conflicts found in src/backend/parser/gram.c\nC src/backend/parser/gram.c\n----------------------------------------------------------------\n\nany suggestions woudl be way cool.\n\nThank you\nClarkEvans\[email protected]\n", "msg_date": "Thu, 18 Feb 1999 19:24:05 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": true, "msg_subject": "CVS overwrite on merge fail?" }, { "msg_contents": "Delete that gram.c and do another cvs update. The other way would be to\nedit gram.c and looking for <<<<<< and >>>>>>. Deleting gram.c and make\nanother cvs update is faster.\n\n-Egon\n\nOn Thu, 18 Feb 1999, Clark Evans wrote:\n\n> I was wondering if there is a way in CVS\n> to tell it to kill my local copy of a file\n> if there is a problem with merge? I didn't\n> change or patch these files, but every once\n> and a while something like this happens:\n> \n> -------------------------------------------------\n> P src/backend/parser/analyze.c\n> RCS file: /usr/local/cvsroot/pgsql/src/backend/parser/gram.c,v\n> retrieving revision 2.71\n> retrieving revision 2.73\n> Merging differences between 2.71 and 2.73 into gram.c\n> rcsmerge: warning: conflicts during merge\n> cvs server: conflicts found in src/backend/parser/gram.c\n> C src/backend/parser/gram.c\n> ----------------------------------------------------------------\n> \n> any suggestions woudl be way cool.\n> \n> Thank you\n> ClarkEvans\n> [email protected]\n> \n> \n\n", "msg_date": "Thu, 18 Feb 1999 20:36:13 +0100 (MET)", "msg_from": "Egon Schmid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS overwrite on merge fail?" }, { "msg_contents": "> I was wondering if there is a way in CVS\n> to tell it to kill my local copy of a file\n> if there is a problem with merge? I didn't\n> change or patch these files, but every once\n> and a while something like this happens:\n> \n> -------------------------------------------------\n> P src/backend/parser/analyze.c\n> RCS file: /usr/local/cvsroot/pgsql/src/backend/parser/gram.c,v\n> retrieving revision 2.71\n> retrieving revision 2.73\n> Merging differences between 2.71 and 2.73 into gram.c\n> rcsmerge: warning: conflicts during merge\n> cvs server: conflicts found in src/backend/parser/gram.c\n> C src/backend/parser/gram.c\n> ----------------------------------------------------------------\n> \n> any suggestions woudl be way cool.\n> \n> Thank you\n> ClarkEvans\n> [email protected]\n> \n> \n\nJust delete the file and re-update. I use this to remove changes I have\nmade to my local copy:\n\n:\n[ ! -d pgsql ] && echo \"You must be at the CVS top of the pgsql tree\" 1>&2 && exit 1\npn pgcvs -qn update pgsql | grep '^M ' | cut -d' ' -f2 | xargs rm\npgupdate\n\nBasically, it removes files with M, and reupdates.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Feb 1999 14:50:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS overwrite on merge fail?" } ]
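
For anyone resolving by hand instead of re-fetching, the conflict sections Egon refers to can be located with something like this (illustrative; the path is the one from the log above, and the markers are the standard rcsmerge ones — seven characters wide, not six):

    grep -n '<<<<<<<' src/backend/parser/gram.c

Each conflict runs from a <<<<<<< line to the matching >>>>>>> line, with a ======= line separating your local text (above) from the repository's text (below); keep one side and delete all three marker lines.
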
[ { "msg_contents": "I periodically re-fresh my local copies of both the\ncurrent snapshot and the release snapshot from the CVS tree.\n\nSuprizingly, recently (last month or so?) a few files were \nchanged in the REL_6_4 tree that caused it to fail to compile. \nJust to make sure that I wasn't being an idiot, I deleted\nmy local copy and did a complete refresh using:\n\ncvs -z3 -d :pserver:[email protected]:/usr/local/cvsroot \\\n co -r REL6_4 -P pgsql >> cvsinit.out 2>> cvsinit.out &\n \nThen I did a configure with no options, followed by make.\n\nHere is the tail of the compile log:\n\n----------------------------------------------------------\nlex.yy.c:820: warning: no previous prototype for `yylex'\nlex.yy.c: In function `yylex':\nlex.yy.c:822: warning: `yy_cp' might be used uninitialized in this function\nlex.yy.c:822: warning: `yy_bp' might be used uninitialized in this function\nscan.l: At top level:\nscan.l:426: warning: no previous prototype for `yyerror'\nlex.yy.c:2174: warning: `yy_flex_realloc' defined but not used\ngcc -I../../include -I../../backend -O2 -Wall -Wmissing-prototypes -I.. -Wno-error -c scansup.c -o scansup.o\nld -r -o SUBSYS.o analyze.o gram.o keywords.o parser.o parse_agg.o parse_clause.o parse_expr.o parse_func.o parse_node.o parse_oper.o parse_relation.o parse_type.o parse_coerce.o parse_target.o scan.o scansup.o\nmake[2]: Leaving directory `/usr/local/src/pgsql/RELEASE/src/backend/parser'\nmake -C port all \nmake[2]: Entering directory `/usr/local/src/pgsql/RELEASE/src/backend/port'\ngcc -I../../include -I../../backend -O2 -Wall -Wmissing-prototypes -I.. -c dynloader.c -o dynloader.o\nld -r -o SUBSYS.o dynloader.o \nmake[2]: Leaving directory `/usr/local/src/pgsql/RELEASE/src/backend/port'\nmake -C postmaster all \nmake[2]: Entering directory `/usr/local/src/pgsql/RELEASE/src/backend/postmaster'\ngcc -I../../include -I../../backend -O2 -Wall -Wmissing-prototypes -I.. -c postmaster.c -o postmaster.o\npostmaster.c: In function `initMasks':\npostmaster.c:802: Invalid `asm' statement:\npostmaster.c:802: fixed or forbidden register 2 (cx) was spilled for class CREG.\nmake[2]: *** [postmaster.o] Error 1\nmake[2]: Leaving directory `/usr/local/src/pgsql/RELEASE/src/backend/postmaster'\nmake[1]: *** [postmaster.dir] Error 2\nmake[1]: Leaving directory `/usr/local/src/pgsql/RELEASE/src/backend'\nmake: *** [all] Error 2\n-----------------------------------------------------------------------------\n\nI'm using RH 5.2 stock on a Pentium Pro.\n\nBTW, is there any file that contains the current version number?\nThis would be slick.\n\n\nThanks!\n\nClark\[email protected]\n", "msg_date": "Thu, 18 Feb 1999 20:07:50 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": true, "msg_subject": "6.4 Build Error from CVS snapshot" }, { "msg_contents": "> I'm using RH 5.2 stock on a Pentium Pro.\n> \n> BTW, is there any file that contains the current version number?\n> This would be slick.\n\nYes. \"SELECT pg_version();\" supplies it. Also see\nsrc/tools/RELEASE_CHANGES for all the places.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Feb 1999 15:32:58 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4 Build Error from CVS snapshot" } ]
[ { "msg_contents": "After the previous error, I was thinking that I built\nit from the wrong account... so I changed accounts,\ndid a 'make clean' followed by a 'make'.\n\nFollowing is a different error that occurs\nwhen I did this. This different error is the\nsame error that I was getting *before* I blew\naway the RELEASE directory and re-freshed it\nfrom the CVS server.\n\nHope it helps. Is this something that I'm \ndoing wrong? The current tree builds just great.\n\nBest,\n\nClark\n\n-------- Original Message --------\nSubject: Hmm\nDate: Thu, 18 Feb 1999 15:24:58 -0500\nFrom: PosgreSQL Backend Database <[email protected]>\nTo: [email protected]\n\nmake[3]: Entering directory `/usr/local/src/pgsql/RELEASE/src/backend/utils/sort'\ngcc -I../../../include -I../../../backend -O2 -Wall -Wmissing-prototypes -I../.. -c lselect.c -o lselect.o\ngcc -I../../../include -I../../../backend -O2 -Wall -Wmissing-prototypes -I../.. -c psort.c -o psort.o\nld -r -o SUBSYS.o lselect.o psort.o\nmake[3]: Leaving directory `/usr/local/src/pgsql/RELEASE/src/backend/utils/sort'\nmake[3]: Entering directory `/usr/local/src/pgsql/RELEASE/src/backend/utils/time'\ngcc -I../../../include -I../../../backend -O2 -Wall -Wmissing-prototypes -I../.. -c tqual.c -o tqual.o\nld -r -o SUBSYS.o tqual.o\nmake[3]: Leaving directory `/usr/local/src/pgsql/RELEASE/src/backend/utils/time'\ngcc -I../../include -I../../backend -O2 -Wall -Wmissing-prototypes -I.. -c fmgrtab.c -o fmgrtab.o\nmake[2]: *** No rule to make target `adt/SUBSYS.o', needed by `SUBSYS.o'. Stop.\nmake[2]: Leaving directory `/usr/local/src/pgsql/RELEASE/src/backend/utils'\nmake[1]: *** [utils.dir] Error 2\nmake[1]: Leaving directory `/usr/local/src/pgsql/RELEASE/src/backend'\nmake: *** [all] Error 2\n", "msg_date": "Thu, 18 Feb 1999 20:28:43 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": true, "msg_subject": "6.4 Build Error from CVS snapshot" }, { "msg_contents": "\nOn 18-Feb-99 Clark Evans wrote:\n> After the previous error, I was thinking that I built\n> it from the wrong account... so I changed accounts,\n> did a 'make clean' followed by a 'make'.\n> \n> Following is a different error that occurs\n> when I did this. This different error is the\n> same error that I was getting *before* I blew\n> away the RELEASE directory and re-freshed it\n> from the CVS server.\n> \n> Hope it helps. Is this something that I'm \n> doing wrong? The current tree builds just great.\n\n> make[2]: *** No rule to make target `adt/SUBSYS.o', needed by `SUBSYS.o'. Stop.\n\nI've had this before and I believe the solution was: \n\n$ rm adt/SUBSYS.o\n$ gmake\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Thu, 18 Feb 1999 15:58:56 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] 6.4 Build Error from CVS snapshot" } ]
[ { "msg_contents": "I have just checked in code changes (no doc updates yet :-() that\naddress our recent discussions about how many backend processes\ncan be used. Specifically:\n\nconfigure takes a --with-maxbackends=N switch that sets the hard limit\non the maximum number of backends per postmaster. (It's still a hard\nlimit because several arrays are sized by MAXBACKENDS. I didn't think\nit was worth trying to change that.) The default is still 64.\n\nThe postmaster can be started with a \"-N backends\" switch that sets\na smaller limit on the number of backends for this postmaster.\nThe only cost of having a large MAXBACKENDS constant is a few dozen\nbytes of shared memory per array slot, so if you want you can configure\nMAXBACKENDS pretty large and then set the effective limit with -N at\npostmaster startup.\n\nWhen the postmaster is started, it will immediately acquire enough\nsemaphores to support min(MAXBACKENDS, -N) backend processes.\nIf your kernel sema parameters are too low to allow that, you get an\nimmediate failure, rather than failure under peak load. The postmaster\njust refuses to start up, with a log message like this:\n\tIpcSemaphoreCreate: semget failed (No space left on device)\n\t\tkey=5440026, num=16, permission=600\n(Right at this instant, it looks like it fails to release whatever\nsemas it did acquire. Ugh. Think I can fix that though.)\n\nI have verified that I can start more than 64 backends after suitable\nconfiguration, but I am not in a position to check that things work\nsmoothly with a really large number of backends. I found one parameter\n(MAX_PROC_SEMS) that was hard-wired at 128 rather than set equal to\nMaxBackendIds, so I am a little worried that there might be others.\nIf anyone has the time and interest to push the envelope with a few\nhundred backends, please report back!\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Feb 1999 01:58:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Max backend limits cleaned up" } ]
[ { "msg_contents": "Hi,\n\nif I create a table like this:\nCREATE TABLE test (\n id decimal(3) primary key,\n name varchar(32));\n\nhow can I ask postgres which is the primary key from table test?\nI hope to get something like \"id is the primary key from table test.\".\nAny ideas\nMarc\n-- \n-----------------------------------------------------------\nMarc Grimme - An der Muehle 1 - 85716 Unterschleissheim\nFon: +49 89 37 48 81 22 - +49 89 37 48 92 7 - 3/0 - \nmail2: [email protected], [email protected]\n-----------------------------------------------------------\nThe UNIX Guru's View of Sex:\n# unzip ; strip ; touch ; finger ; mount ; fsck ; more ; yes ; umount ;\nsleep\n", "msg_date": "Fri, 19 Feb 1999 14:22:34 +0100", "msg_from": "Marc Grimme <[email protected]>", "msg_from_op": true, "msg_subject": "SQL-Query 2 get primary key" }, { "msg_contents": "You may query pg_indexes as in:\n\nselect * from pg_indexes where tablename = 'test';\n\ntablename|indexname|indexdef\n---------+---------+-----------------------------------------------------------------------\n\ntest |test_pkey|CREATE UNIQUE INDEX \"test_pkey\" ON \"test\" USING btree\n(\"id\" \"int4_ops\")\n(1 row)\n\n\n\nMarc Grimme ha scritto:\n\n> Hi,\n>\n> if I create a table like this:\n> CREATE TABLE test (\n> id decimal(3) primary key,\n> name varchar(32));\n>\n> how can I ask postgres which is the primary key from table test?\n> I hope to get something like \"id is the primary key from table test.\".\n> Any ideas\n> Marc\n> --\n> -----------------------------------------------------------\n> Marc Grimme - An der Muehle 1 - 85716 Unterschleissheim\n> Fon: +49 89 37 48 81 22 - +49 89 37 48 92 7 - 3/0 -\n> mail2: [email protected], [email protected]\n> -----------------------------------------------------------\n> The UNIX Guru's View of Sex:\n> # unzip ; strip ; touch ; finger ; mount ; fsck ; more ; yes ; umount ;\n> sleep\n\n--\n - Jose' -\n\nAnd behold, I tell you these things that ye may learn wisdom; that ye may\nlearn that when ye are in the service of your fellow beings ye are only\nin the service of your God. - Mosiah 2:17 -\n\n\n\nYou may query pg_indexes as in:\nselect * from pg_indexes where tablename = 'test';\ntablename|indexname|indexdef\n---------+---------+-----------------------------------------------------------------------\ntest     |test_pkey|CREATE UNIQUE INDEX \"test_pkey\"\nON \"test\" USING btree (\"id\" \"int4_ops\")\n(1 row)\n \n \nMarc Grimme ha scritto:\nHi,\nif I create a table like this:\nCREATE TABLE test (\n   id decimal(3) primary key,\n   name varchar(32));\nhow can I ask postgres which is the primary key from table test?\nI hope to get something like \"id is the primary key from table test.\".\nAny ideas\nMarc\n--\n-----------------------------------------------------------\nMarc Grimme - An der Muehle 1 - 85716 Unterschleissheim\nFon: +49 89 37 48 81 22 - +49 89 37 48 92 7 - 3/0 -\nmail2: [email protected], [email protected]\n-----------------------------------------------------------\nThe UNIX Guru's View of Sex:\n# unzip ; strip ; touch ; finger ; mount ; fsck ; more ; yes ; umount\n;\nsleep\n--\n                              \n- Jose' -\nAnd behold, I tell you these things that ye may learn wisdom; that ye\nmay\nlearn that when ye are in the service of your fellow beings ye are\nonly\nin the service of your God.               
\n- Mosiah 2:17 -", "msg_date": "Fri, 19 Feb 1999 17:32:14 +0100", "msg_from": "\"jose' soares\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] SQL-Query 2 get primary key" }, { "msg_contents": "Hi,\n\njose' soares wrote:\n> \n> You may query pg_indexes as in:\n> \n> select * from pg_indexes where tablename = 'test';\n> \n> tablename|indexname|indexdef\n> ---------+---------+-----------------------------------------------------------------------\n> \n> test |test_pkey|CREATE UNIQUE INDEX \"test_pkey\" ON \"test\" USING\n> btree (\"id\" \"int4_ops\")\n> (1 row)\n> \n> \n> \nI read about the pg_indexes table and stuff but I am interested in the\n\"columnname\" of the primary key in the specified table.\nIsn�t this feature very important?\nCheers,\nMarc\n-- \n-----------------------------------------------------------\nMarc Grimme - An der Muehle 1 - 85716 Unterschleissheim\nFon: +49 89 37 48 81 22 - +49 89 37 48 92 7 - 3/0 - \nmail2: [email protected], [email protected]\n-----------------------------------------------------------\nThe UNIX Guru's View of Sex:\n# unzip ; strip ; touch ; finger ; mount ; fsck ; more ; yes ; umount ;\nsleep\n", "msg_date": "Fri, 19 Feb 1999 18:09:53 +0100", "msg_from": "Marc Grimme <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] SQL-Query 2 get primary key" }, { "msg_contents": "Thus spake Marc Grimme\n> if I create a table like this:\n> CREATE TABLE test (\n> id decimal(3) primary key,\n> name varchar(32));\n> \n> how can I ask postgres which is the primary key from table test?\n\nSELECT pg_class.relname, pg_attribute.attname\n FROM pg_class, pg_attribute, pg_index\n WHERE pg_class.oid = pg_attribute.attrelid AND\n pg_class.oid = pg_index.indrelid AND\n pg_index.indkey[0] = pg_attribute.attnum AND\n pg_index.indisprimary = 't';\n\nThat lists all the primary keys in your database. Add a \"WHERE pg_class\n= 'test'\" clause to get the specific table.\n\nNote that this makes the assumption that only one field can be in the\nprimary key (no complex primary keys) but I don't think there will\never be more than one the way we declare it now.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n\n", "msg_date": "Fri, 19 Feb 1999 23:07:22 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] SQL-Query 2 get primary key" }, { "msg_contents": "\"D'Arcy J.M. Cain\" wrote:\n> \n> Thus spake Marc Grimme\n> > if I create a table like this:\n> > CREATE TABLE test (\n> > id decimal(3) primary key,\n> > name varchar(32));\n> >\n> > how can I ask postgres which is the primary key from table test?\n> \n> SELECT pg_class.relname, pg_attribute.attname\n> FROM pg_class, pg_attribute, pg_index\n> WHERE pg_class.oid = pg_attribute.attrelid AND\n> pg_class.oid = pg_index.indrelid AND\n> pg_index.indkey[0] = pg_attribute.attnum AND\n> pg_index.indisprimary = 't';\n\nShould it work in 6.4.0 ?\n\nIt gives an empty table for me ;(\n \n> That lists all the primary keys in your database. 
Add a \"WHERE pg_class\n> = 'test'\" clause to get the specific table.\n\nYou probably mean \"pg_class.relname = 'test'\" ?\n\n> Note that this makes the assumption that only one field can be in the\n> primary key (no complex primary keys) but I don't think there will\n> ever be more than one the way we declare it now.\n \nActually you can declare multi_field PK as \n(Bruce: this probably should be added to \\h create table):\n\nhannu=> create table test(\nhannu-> id1 int,\nhannu-> id2 int,\nhannu-> meat text,\nhannu-> primary key (id1,id2)\nhannu-> );\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index test_pkey\nfor table test\nCREATE\n\n-------------------------\nHannu\n", "msg_date": "Sat, 20 Feb 1999 14:39:44 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] SQL-Query 2 get primary key" }, { "msg_contents": "> > if I create a table like this:\n> > CREATE TABLE test (\n> > id decimal(3) primary key,\n> > name varchar(32));\n<snip>\n> Note that this makes the assumption that only one field can be in the\n> primary key (no complex primary keys) but I don't think there will\n> ever be more than one the way we declare it now.\n\nfyi, the following syntax is allowed:\n\n CREATE TABLE test (\n id decimal(3),\n name varchar(32),\n primary key(id));\n\nand multiple columns can be declared as primary keys.\n\n - Tom\n", "msg_date": "Sat, 20 Feb 1999 16:17:19 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] SQL-Query 2 get primary key" } ]
[ { "msg_contents": "There is now insert triggers or rules for table cwdomains.\nError appear after installing plpgsql into existing database\n\ninsert into cwdomains(name, who) values ('woland.wplus.net', 'dms');\nERROR: fmgr_info: Cache lookup for language %d failed 21505\n\n-- \nDmitry Samersoff\n DM\\S, [email protected], AIM: Samersoff\n http://devnull.wplus.net\n\n", "msg_date": "Fri, 19 Feb 1999 17:17:16 +0300 (MSK)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": true, "msg_subject": "What does it means?" } ]
[ { "msg_contents": "\nCan someone please take a minute to look at this?\n\nI've gzip'd and moved his errorlog to\nftp.postgresql.org:/pub/debugging...one thing that appears to be\nlacking...what version of PostgreSQL are you using?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n---------- Forwarded message ----------\nDate: Thu, 18 Feb 1999 18:23:25 -0500\nFrom: Daryl W. Dunbar <[email protected]>\nTo: The Hermit Hacker <[email protected]>\nSubject: RE: Interested?\n\nThanks Marc, We exchanged an e-mail or two last week, along with\nTatsuo Ishii and Tom Lane. You suggested I truss the process.\n\nAnyway, periodically, the backends spiral out of control with hung\nup children until I hit MaxBackendID (which I compiled in to be\n128). Initially, I was running out of semaphores on Solaris 7 and\nchanged /etc/system to add these lines:\nset shmsys:shminfo_shmmax=16777216\nset shmsys:shminfo_shmmin=1\nset shmsys:shminfo_shmmni=128\nset shmsys:shminfo_shmseg=51\n*\nset semsys:seminfo_semmap=128\nset semsys:seminfo_semmni=128\nset semsys:seminfo_semmns=8192\nset semsys:seminfo_semmnu=8192\nset semsys:seminfo_semmsl=64\nset semsys:seminfo_semopm=32\nset semsys:seminfo_semume=32\n\nI increased shared memory so I could start more backends...\n\nOK, so now, everything is running fine and boom, the backends start\nto hang on semop, eventually reaching MaxBackendID and refusing\nconnections.\nAttached is a log file from a hang up today. Debug is set to 3.\nAll times are PST. I have carved out a bunch of normal operation\nfrom the beginning (about 21,000 lines) and redundant 'too many\nbackends' (about 1,000 lines, while I was eating lunch :) signified\nby {SNIP SNIP}. I pick the log back up with the birth of pid 2828\nand left several 'normal' cycles in until...\n\nYou can see that process 2840 is the first child to hang. It was\nstarted at 11:39:23 and did not die until sent a 15 by the parent at\n14:12:16. All of the hung processes fall between 2840 and 3454.\n\nSorry the file is so big. Here are some 'keys' you can use:\nStartup is the first line (obviously).\nYou can find child startup by looking for [2840] (pid in brackets)\nYou can find child exits by looking for '2480 exited'\nYou can find where I send the kill signal by looking for 'pmdie 15'\n\nI think that's a good start. :)\n\nDon't hesitate to contact me if I can shed any more light. I'm wide\nopen to ideas at the moment. I'm in EST, but tend to work until\n10-11 at night, so e-mail anytime.\n\nThanks,\n\nDwD\n\n> -----Original Message-----\n> From: The Hermit Hacker [mailto:[email protected]]\n> Sent: Thursday, February 18, 1999 5:36 PM\n> To: Daryl W. Dunbar\n> Subject: Re: Interested?\n>\n>\n>\n> Hi Daryl...\n>\n> \tI'm not the strongest at internal code, so may not\n> be of any help\n> at all. I just went through my -hackers email, and can't\n> seem to find\n> anything from you in there. Can you tell me what your\n> problem is, as well\n> as version of PostgreSQL you are using, and we'll see\n> what we can do?\n>\n> Marc\n>\n> On Thu, 18 Feb 1999, Daryl W. Dunbar wrote:\n>\n> > Marc,\n> >\n> > I know that you put considerable volunteer time into\n> PostgreSQL. If\n> > I am not too bold in asking, and you are comfortable\n> with it, I am\n> > prepared to compensate you for your time if you can assist me in\n> > tracking down this rather nasty bug I have been\n> e-mailing Hackers\n> > about. 
Please let me know if you are interested and if\n> so, at what\n> > rate.\n> >\n> > We are in the process of launching a pretty exciting site and a\n> > database in a integral part of it. I really want to\n> use PostgreSQL,\n> > but can not take it into production on Solaris with this problem\n> > going on. I'm in the process of installing a test site\n> on Linux to\n> > see if the problem exists there, but I expect it is limited to\n> > Solaris.\n> >\n> > I anxiously await your response.\n> >\n> > Thanks,\n> >\n> > DwD\n> >\n> > --\n> > Daryl W. Dunbar\n> > VP of Engineering/Chief Technology Officer\n> > http://www.com, Where the Web Begins!\n> > mailto:[email protected]\n> >\n> >\n>\n> Marc G. Fournier\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org\n>\n\n", "msg_date": "Fri, 19 Feb 1999 13:45:56 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Major bug, possible, with Solaris 7?" }, { "msg_contents": "Oh, sorry. 6.4.2 with a backend patch to prevent the parent death\nin the event of MaxBackendID being reached.\n\nI know it is in semop() because I did a truss on the child\nprocesses. From a small sample, it looks like they may all be\ntrying to operate on the same semaphore. I'm recompiling with\nthe -g flag to gain more insight...\n\nDwD\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of The Hermit\n> Hacker\n> Sent: Friday, February 19, 1999 12:46 PM\n> To: [email protected]\n> Cc: Daryl W. Dunbar\n> Subject: [HACKERS] Major bug, possible, with Solaris 7?\n>\n>\n>\n> Can someone please take a minute to look at this?\n>\n> I've gzip'd and moved his errorlog to\n> ftp.postgresql.org:/pub/debugging...one thing that appears to be\n> lacking...what version of PostgreSQL are you using?\n>\n> Marc G. Fournier\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org\n>\n> ---------- Forwarded message ----------\n> Date: Thu, 18 Feb 1999 18:23:25 -0500\n> From: Daryl W. Dunbar <[email protected]>\n> To: The Hermit Hacker <[email protected]>\n> Subject: RE: Interested?\n>\n> Thanks Marc, We exchanged an e-mail or two last week, along with\n> Tatsuo Ishii and Tom Lane. You suggested I truss the process.\n>\n> Anyway, periodically, the backends spiral out of control with hung\n> up children until I hit MaxBackendID (which I compiled in to be\n> 128). Initially, I was running out of semaphores on Solaris 7 and\n> changed /etc/system to add these lines:\n> set shmsys:shminfo_shmmax=16777216\n> set shmsys:shminfo_shmmin=1\n> set shmsys:shminfo_shmmni=128\n> set shmsys:shminfo_shmseg=51\n> *\n> set semsys:seminfo_semmap=128\n> set semsys:seminfo_semmni=128\n> set semsys:seminfo_semmns=8192\n> set semsys:seminfo_semmnu=8192\n> set semsys:seminfo_semmsl=64\n> set semsys:seminfo_semopm=32\n> set semsys:seminfo_semume=32\n>\n> I increased shared memory so I could start more backends...\n>\n> OK, so now, everything is running fine and boom, the\n> backends start\n> to hang on semop, eventually reaching MaxBackendID and refusing\n> connections.\n> Attached is a log file from a hang up today. Debug is set to 3.\n> All times are PST. I have carved out a bunch of normal operation\n> from the beginning (about 21,000 lines) and redundant 'too many\n> backends' (about 1,000 lines, while I was eating lunch :)\n> signified\n> by {SNIP SNIP}. 
I pick the log back up with the birth of pid 2828\n> and left several 'normal' cycles in until...\n>\n> You can see that process 2840 is the first child to hang. It was\n> started at 11:39:23 and did not die until sent a 15 by\n> the parent at\n> 14:12:16. All of the hung processes fall between 2840 and 3454.\n>\n> Sorry the file is so big. Here are some 'keys' you can use:\n> Startup is the first line (obviously).\n> You can find child startup by looking for [2840] (pid in brackets)\n> You can find child exits by looking for '2480 exited'\n> You can find where I send the kill signal by looking for\n> 'pmdie 15'\n>\n> I think that's a good start. :)\n>\n> Don't hesitate to contact me if I can shed any more\n> light. I'm wide\n> open to ideas at the moment. I'm in EST, but tend to work until\n> 10-11 at night, so e-mail anytime.\n>\n> Thanks,\n>\n> DwD\n>\n> > -----Original Message-----\n> > From: The Hermit Hacker [mailto:[email protected]]\n> > Sent: Thursday, February 18, 1999 5:36 PM\n> > To: Daryl W. Dunbar\n> > Subject: Re: Interested?\n> >\n> >\n> >\n> > Hi Daryl...\n> >\n> > \tI'm not the strongest at internal code, so may not\n> > be of any help\n> > at all. I just went through my -hackers email, and can't\n> > seem to find\n> > anything from you in there. Can you tell me what your\n> > problem is, as well\n> > as version of PostgreSQL you are using, and we'll see\n> > what we can do?\n> >\n> > Marc\n> >\n> > On Thu, 18 Feb 1999, Daryl W. Dunbar wrote:\n> >\n> > > Marc,\n> > >\n> > > I know that you put considerable volunteer time into\n> > PostgreSQL. If\n> > > I am not too bold in asking, and you are comfortable\n> > with it, I am\n> > > prepared to compensate you for your time if you can\n> assist me in\n> > > tracking down this rather nasty bug I have been\n> > e-mailing Hackers\n> > > about. Please let me know if you are interested and if\n> > so, at what\n> > > rate.\n> > >\n> > > We are in the process of launching a pretty exciting\n> site and a\n> > > database in a integral part of it. I really want to\n> > use PostgreSQL,\n> > > but can not take it into production on Solaris with\n> this problem\n> > > going on. I'm in the process of installing a test site\n> > on Linux to\n> > > see if the problem exists there, but I expect it is limited to\n> > > Solaris.\n> > >\n> > > I anxiously await your response.\n> > >\n> > > Thanks,\n> > >\n> > > DwD\n> > >\n> > > --\n> > > Daryl W. Dunbar\n> > > VP of Engineering/Chief Technology Officer\n> > > http://www.com, Where the Web Begins!\n> > > mailto:[email protected]\n> > >\n> > >\n> >\n> > Marc G. Fournier\n> > Systems Administrator @ hub.org\n> > primary: [email protected] secondary:\n> > scrappy@{freebsd|postgresql}.org\n> >\n>\n>\n\n", "msg_date": "Fri, 19 Feb 1999 17:38:14 -0500", "msg_from": "\"Daryl W. Dunbar\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Major bug, possible, with Solaris 7?" }, { "msg_contents": "On Fri, 19 Feb 1999, Daryl W. Dunbar wrote:\n\n> Oh, sorry. 6.4.2 with a backend patch to prevent the parent death\n> in the event of MaxBackendID being reached.\n> \n> I know it is in semop() because I did a truss on the child\n> processes. From a small sample, it looks like they may all be\n> trying to operate on the same semaphore. I'm recompiling with\n> the -g flag to gain more insight...\n\nI'm just curious, but is this being used production yet? If not, would\nyou be willing to try out the current snapshot, which is soon to become\n6.5-BETA? 
If this apparent bug still exists there, I think its sufficient\na bug to prevent v6.5 coming out until this is fixed :( then again,\nsomething this reproducible will most likely hold up v6.4.3 from being\nreleased also, so if we are planning a v6.4.3 (I thought we were), we'll\nhave to get this fixed in the 6.4 line also.\n\nActually, with that in mind, I'm putting together a very quick tar ball of\nwhat v6.4.3 is looking like so far. this is *not* a release, but I'd like\nto see if this problem exists in the most current STABLE tree or not...I\nknow there has been quite a few fixes put into it...\n\nCheck in about a half hour or so, under the 'test' directory of\nftp.postgresql.org .. should be there then...\n\n\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of The Hermit\n> > Hacker\n> > Sent: Friday, February 19, 1999 12:46 PM\n> > To: [email protected]\n> > Cc: Daryl W. Dunbar\n> > Subject: [HACKERS] Major bug, possible, with Solaris 7?\n> >\n> >\n> >\n> > Can someone please take a minute to look at this?\n> >\n> > I've gzip'd and moved his errorlog to\n> > ftp.postgresql.org:/pub/debugging...one thing that appears to be\n> > lacking...what version of PostgreSQL are you using?\n> >\n> > Marc G. Fournier\n> > Systems Administrator @ hub.org\n> > primary: [email protected] secondary:\n> > scrappy@{freebsd|postgresql}.org\n> >\n> > ---------- Forwarded message ----------\n> > Date: Thu, 18 Feb 1999 18:23:25 -0500\n> > From: Daryl W. Dunbar <[email protected]>\n> > To: The Hermit Hacker <[email protected]>\n> > Subject: RE: Interested?\n> >\n> > Thanks Marc, We exchanged an e-mail or two last week, along with\n> > Tatsuo Ishii and Tom Lane. You suggested I truss the process.\n> >\n> > Anyway, periodically, the backends spiral out of control with hung\n> > up children until I hit MaxBackendID (which I compiled in to be\n> > 128). Initially, I was running out of semaphores on Solaris 7 and\n> > changed /etc/system to add these lines:\n> > set shmsys:shminfo_shmmax=16777216\n> > set shmsys:shminfo_shmmin=1\n> > set shmsys:shminfo_shmmni=128\n> > set shmsys:shminfo_shmseg=51\n> > *\n> > set semsys:seminfo_semmap=128\n> > set semsys:seminfo_semmni=128\n> > set semsys:seminfo_semmns=8192\n> > set semsys:seminfo_semmnu=8192\n> > set semsys:seminfo_semmsl=64\n> > set semsys:seminfo_semopm=32\n> > set semsys:seminfo_semume=32\n> >\n> > I increased shared memory so I could start more backends...\n> >\n> > OK, so now, everything is running fine and boom, the\n> > backends start\n> > to hang on semop, eventually reaching MaxBackendID and refusing\n> > connections.\n> > Attached is a log file from a hang up today. Debug is set to 3.\n> > All times are PST. I have carved out a bunch of normal operation\n> > from the beginning (about 21,000 lines) and redundant 'too many\n> > backends' (about 1,000 lines, while I was eating lunch :)\n> > signified\n> > by {SNIP SNIP}. I pick the log back up with the birth of pid 2828\n> > and left several 'normal' cycles in until...\n> >\n> > You can see that process 2840 is the first child to hang. It was\n> > started at 11:39:23 and did not die until sent a 15 by\n> > the parent at\n> > 14:12:16. All of the hung processes fall between 2840 and 3454.\n> >\n> > Sorry the file is so big. 
Here are some 'keys' you can use:\n> > Startup is the first line (obviously).\n> > You can find child startup by looking for [2840] (pid in brackets)\n> > You can find child exits by looking for '2480 exited'\n> > You can find where I send the kill signal by looking for\n> > 'pmdie 15'\n> >\n> > I think that's a good start. :)\n> >\n> > Don't hesitate to contact me if I can shed any more\n> > light. I'm wide\n> > open to ideas at the moment. I'm in EST, but tend to work until\n> > 10-11 at night, so e-mail anytime.\n> >\n> > Thanks,\n> >\n> > DwD\n> >\n> > > -----Original Message-----\n> > > From: The Hermit Hacker [mailto:[email protected]]\n> > > Sent: Thursday, February 18, 1999 5:36 PM\n> > > To: Daryl W. Dunbar\n> > > Subject: Re: Interested?\n> > >\n> > >\n> > >\n> > > Hi Daryl...\n> > >\n> > > \tI'm not the strongest at internal code, so may not\n> > > be of any help\n> > > at all. I just went through my -hackers email, and can't\n> > > seem to find\n> > > anything from you in there. Can you tell me what your\n> > > problem is, as well\n> > > as version of PostgreSQL you are using, and we'll see\n> > > what we can do?\n> > >\n> > > Marc\n> > >\n> > > On Thu, 18 Feb 1999, Daryl W. Dunbar wrote:\n> > >\n> > > > Marc,\n> > > >\n> > > > I know that you put considerable volunteer time into\n> > > PostgreSQL. If\n> > > > I am not too bold in asking, and you are comfortable\n> > > with it, I am\n> > > > prepared to compensate you for your time if you can\n> > assist me in\n> > > > tracking down this rather nasty bug I have been\n> > > e-mailing Hackers\n> > > > about. Please let me know if you are interested and if\n> > > so, at what\n> > > > rate.\n> > > >\n> > > > We are in the process of launching a pretty exciting\n> > site and a\n> > > > database in a integral part of it. I really want to\n> > > use PostgreSQL,\n> > > > but can not take it into production on Solaris with\n> > this problem\n> > > > going on. I'm in the process of installing a test site\n> > > on Linux to\n> > > > see if the problem exists there, but I expect it is limited to\n> > > > Solaris.\n> > > >\n> > > > I anxiously await your response.\n> > > >\n> > > > Thanks,\n> > > >\n> > > > DwD\n> > > >\n> > > > --\n> > > > Daryl W. Dunbar\n> > > > VP of Engineering/Chief Technology Officer\n> > > > http://www.com, Where the Web Begins!\n> > > > mailto:[email protected]\n> > > >\n> > > >\n> > >\n> > > Marc G. Fournier\n> > > Systems Administrator @ hub.org\n> > > primary: [email protected] secondary:\n> > > scrappy@{freebsd|postgresql}.org\n> > >\n> >\n> >\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 19 Feb 1999 23:39:15 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Major bug, possible, with Solaris 7?" }, { "msg_contents": "At this point, I willing to try anything. I'm in production (live\nsite), but we have not announced the site. What that means is that\nI have the weekend to debug/fix/decide what to do. I'll take\nwhatever version you suggest and load it.\n\nDwD\n\n> -----Original Message-----\n> From: The Hermit Hacker [mailto:[email protected]]\n> Sent: Friday, February 19, 1999 10:39 PM\n> To: Daryl W. Dunbar\n> Cc: [email protected]\n> Subject: RE: [HACKERS] Major bug, possible, with Solaris 7?\n>\n>\n> On Fri, 19 Feb 1999, Daryl W. Dunbar wrote:\n>\n> > Oh, sorry. 
6.4.2 with a backend patch to prevent the\n> parent death\n> > in the event of MaxBackendID being reached.\n> >\n> > I know it is in semop() because I did a truss on the child\n> > processes. From a small sample, it looks like they may all be\n> > trying to operate on the same semaphore. I'm recompiling with\n> > the -g flag to gain more insight...\n>\n> I'm just curious, but is this being used production yet?\n> If not, would\n> you be willing to try out the current snapshot, which is\n> soon to become\n> 6.5-BETA? If this apparent bug still exists there, I\n> think its sufficient\n> a bug to prevent v6.5 coming out until this is fixed\n\n> then again,\n> something this reproducible will most likely hold up\n> v6.4.3 from being\n> released also, so if we are planning a v6.4.3 (I thought\n> we were), we'll\n> have to get this fixed in the 6.4 line also.\n>\n> Actually, with that in mind, I'm putting together a very\n> quick tar ball of\n> what v6.4.3 is looking like so far. this is *not* a\n> release, but I'd like\n> to see if this problem exists in the most current STABLE\n> tree or not...I\n> know there has been quite a few fixes put into it...\n>\n> Check in about a half hour or so, under the 'test' directory of\n> ftp.postgresql.org .. should be there then...\n>\n>\n> > > -----Original Message-----\n> > > From: [email protected]\n> > > [mailto:[email protected]]On Behalf\n> Of The Hermit\n> > > Hacker\n> > > Sent: Friday, February 19, 1999 12:46 PM\n> > > To: [email protected]\n> > > Cc: Daryl W. Dunbar\n> > > Subject: [HACKERS] Major bug, possible, with Solaris 7?\n> > >\n> > >\n> > >\n> > > Can someone please take a minute to look at this?\n> > >\n> > > I've gzip'd and moved his errorlog to\n> > > ftp.postgresql.org:/pub/debugging...one thing that\n> appears to be\n> > > lacking...what version of PostgreSQL are you using?\n> > >\n> > > Marc G. Fournier\n> > > Systems Administrator @ hub.org\n> > > primary: [email protected] secondary:\n> > > scrappy@{freebsd|postgresql}.org\n> > >\n> > > ---------- Forwarded message ----------\n> > > Date: Thu, 18 Feb 1999 18:23:25 -0500\n> > > From: Daryl W. Dunbar <[email protected]>\n> > > To: The Hermit Hacker <[email protected]>\n> > > Subject: RE: Interested?\n> > >\n> > > Thanks Marc, We exchanged an e-mail or two last\n> week, along with\n> > > Tatsuo Ishii and Tom Lane. You suggested I truss the process.\n> > >\n> > > Anyway, periodically, the backends spiral out of\n> control with hung\n> > > up children until I hit MaxBackendID (which I\n> compiled in to be\n> > > 128). Initially, I was running out of semaphores on\n> Solaris 7 and\n> > > changed /etc/system to add these lines:\n> > > set shmsys:shminfo_shmmax=16777216\n> > > set shmsys:shminfo_shmmin=1\n> > > set shmsys:shminfo_shmmni=128\n> > > set shmsys:shminfo_shmseg=51\n> > > *\n> > > set semsys:seminfo_semmap=128\n> > > set semsys:seminfo_semmni=128\n> > > set semsys:seminfo_semmns=8192\n> > > set semsys:seminfo_semmnu=8192\n> > > set semsys:seminfo_semmsl=64\n> > > set semsys:seminfo_semopm=32\n> > > set semsys:seminfo_semume=32\n> > >\n> > > I increased shared memory so I could start more backends...\n> > >\n> > > OK, so now, everything is running fine and boom, the\n> > > backends start\n> > > to hang on semop, eventually reaching MaxBackendID\n> and refusing\n> > > connections.\n> > > Attached is a log file from a hang up today. Debug\n> is set to 3.\n> > > All times are PST. 
I have carved out a bunch of\n> normal operation\n> > > from the beginning (about 21,000 lines) and redundant\n> 'too many\n> > > backends' (about 1,000 lines, while I was eating lunch :)\n> > > signified\n> > > by {SNIP SNIP}. I pick the log back up with the\n> birth of pid 2828\n> > > and left several 'normal' cycles in until...\n> > >\n> > > You can see that process 2840 is the first child to\n> hang. It was\n> > > started at 11:39:23 and did not die until sent a 15 by\n> > > the parent at\n> > > 14:12:16. All of the hung processes fall between\n> 2840 and 3454.\n> > >\n> > > Sorry the file is so big. Here are some 'keys' you can use:\n> > > Startup is the first line (obviously).\n> > > You can find child startup by looking for [2840] (pid\n> in brackets)\n> > > You can find child exits by looking for '2480 exited'\n> > > You can find where I send the kill signal by looking for\n> > > 'pmdie 15'\n> > >\n> > > I think that's a good start. :)\n> > >\n> > > Don't hesitate to contact me if I can shed any more\n> > > light. I'm wide\n> > > open to ideas at the moment. I'm in EST, but tend to\n> work until\n> > > 10-11 at night, so e-mail anytime.\n> > >\n> > > Thanks,\n> > >\n> > > DwD\n> > >\n> > > > -----Original Message-----\n> > > > From: The Hermit Hacker [mailto:[email protected]]\n> > > > Sent: Thursday, February 18, 1999 5:36 PM\n> > > > To: Daryl W. Dunbar\n> > > > Subject: Re: Interested?\n> > > >\n> > > >\n> > > >\n> > > > Hi Daryl...\n> > > >\n> > > > \tI'm not the strongest at internal code, so may not\n> > > > be of any help\n> > > > at all. I just went through my -hackers email, and can't\n> > > > seem to find\n> > > > anything from you in there. Can you tell me what your\n> > > > problem is, as well\n> > > > as version of PostgreSQL you are using, and we'll see\n> > > > what we can do?\n> > > >\n> > > > Marc\n> > > >\n> > > > On Thu, 18 Feb 1999, Daryl W. Dunbar wrote:\n> > > >\n> > > > > Marc,\n> > > > >\n> > > > > I know that you put considerable volunteer time into\n> > > > PostgreSQL. If\n> > > > > I am not too bold in asking, and you are comfortable\n> > > > with it, I am\n> > > > > prepared to compensate you for your time if you can\n> > > assist me in\n> > > > > tracking down this rather nasty bug I have been\n> > > > e-mailing Hackers\n> > > > > about. Please let me know if you are interested and if\n> > > > so, at what\n> > > > > rate.\n> > > > >\n> > > > > We are in the process of launching a pretty exciting\n> > > site and a\n> > > > > database in a integral part of it. I really want to\n> > > > use PostgreSQL,\n> > > > > but can not take it into production on Solaris with\n> > > this problem\n> > > > > going on. I'm in the process of installing a test site\n> > > > on Linux to\n> > > > > see if the problem exists there, but I expect it\n> is limited to\n> > > > > Solaris.\n> > > > >\n> > > > > I anxiously await your response.\n> > > > >\n> > > > > Thanks,\n> > > > >\n> > > > > DwD\n> > > > >\n> > > > > --\n> > > > > Daryl W. Dunbar\n> > > > > VP of Engineering/Chief Technology Officer\n> > > > > http://www.com, Where the Web Begins!\n> > > > > mailto:[email protected]\n> > > > >\n> > > > >\n> > > >\n> > > > Marc G. Fournier\n> > > > Systems Administrator @ hub.org\n> > > > primary: [email protected] secondary:\n> > > > scrappy@{freebsd|postgresql}.org\n> > > >\n> > >\n> > >\n> >\n>\n> Marc G. 
Fournier\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org\n>\n\n", "msg_date": "Fri, 19 Feb 1999 22:50:13 -0500", "msg_from": "\"Daryl W. Dunbar\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Major bug, possible, with Solaris 7?" }, { "msg_contents": "On Fri, 19 Feb 1999, Daryl W. Dunbar wrote:\n\n> At this point, I willing to try anything. I'm in production (live\n> site), but we have not announced the site. What that means is that\n> I have the weekend to debug/fix/decide what to do. I'll take\n> whatever version you suggest and load it.\n\nApologies for the delay...there is a copy of postgresql-6.4.3beta.tar.gz\navailable in the test directory...try that and please report back here...\n\n\n> \n> DwD\n> \n> > -----Original Message-----\n> > From: The Hermit Hacker [mailto:[email protected]]\n> > Sent: Friday, February 19, 1999 10:39 PM\n> > To: Daryl W. Dunbar\n> > Cc: [email protected]\n> > Subject: RE: [HACKERS] Major bug, possible, with Solaris 7?\n> >\n> >\n> > On Fri, 19 Feb 1999, Daryl W. Dunbar wrote:\n> >\n> > > Oh, sorry. 6.4.2 with a backend patch to prevent the\n> > parent death\n> > > in the event of MaxBackendID being reached.\n> > >\n> > > I know it is in semop() because I did a truss on the child\n> > > processes. From a small sample, it looks like they may all be\n> > > trying to operate on the same semaphore. I'm recompiling with\n> > > the -g flag to gain more insight...\n> >\n> > I'm just curious, but is this being used production yet?\n> > If not, would\n> > you be willing to try out the current snapshot, which is\n> > soon to become\n> > 6.5-BETA? If this apparent bug still exists there, I\n> > think its sufficient\n> > a bug to prevent v6.5 coming out until this is fixed\n> \n> > then again,\n> > something this reproducible will most likely hold up\n> > v6.4.3 from being\n> > released also, so if we are planning a v6.4.3 (I thought\n> > we were), we'll\n> > have to get this fixed in the 6.4 line also.\n> >\n> > Actually, with that in mind, I'm putting together a very\n> > quick tar ball of\n> > what v6.4.3 is looking like so far. this is *not* a\n> > release, but I'd like\n> > to see if this problem exists in the most current STABLE\n> > tree or not...I\n> > know there has been quite a few fixes put into it...\n> >\n> > Check in about a half hour or so, under the 'test' directory of\n> > ftp.postgresql.org .. should be there then...\n> >\n> >\n> > > > -----Original Message-----\n> > > > From: [email protected]\n> > > > [mailto:[email protected]]On Behalf\n> > Of The Hermit\n> > > > Hacker\n> > > > Sent: Friday, February 19, 1999 12:46 PM\n> > > > To: [email protected]\n> > > > Cc: Daryl W. Dunbar\n> > > > Subject: [HACKERS] Major bug, possible, with Solaris 7?\n> > > >\n> > > >\n> > > >\n> > > > Can someone please take a minute to look at this?\n> > > >\n> > > > I've gzip'd and moved his errorlog to\n> > > > ftp.postgresql.org:/pub/debugging...one thing that\n> > appears to be\n> > > > lacking...what version of PostgreSQL are you using?\n> > > >\n> > > > Marc G. Fournier\n> > > > Systems Administrator @ hub.org\n> > > > primary: [email protected] secondary:\n> > > > scrappy@{freebsd|postgresql}.org\n> > > >\n> > > > ---------- Forwarded message ----------\n> > > > Date: Thu, 18 Feb 1999 18:23:25 -0500\n> > > > From: Daryl W. 
Dunbar <[email protected]>\n> > > > To: The Hermit Hacker <[email protected]>\n> > > > Subject: RE: Interested?\n> > > >\n> > > > Thanks Marc, We exchanged an e-mail or two last\n> > week, along with\n> > > > Tatsuo Ishii and Tom Lane. You suggested I truss the process.\n> > > >\n> > > > Anyway, periodically, the backends spiral out of\n> > control with hung\n> > > > up children until I hit MaxBackendID (which I\n> > compiled in to be\n> > > > 128). Initially, I was running out of semaphores on\n> > Solaris 7 and\n> > > > changed /etc/system to add these lines:\n> > > > set shmsys:shminfo_shmmax=16777216\n> > > > set shmsys:shminfo_shmmin=1\n> > > > set shmsys:shminfo_shmmni=128\n> > > > set shmsys:shminfo_shmseg=51\n> > > > *\n> > > > set semsys:seminfo_semmap=128\n> > > > set semsys:seminfo_semmni=128\n> > > > set semsys:seminfo_semmns=8192\n> > > > set semsys:seminfo_semmnu=8192\n> > > > set semsys:seminfo_semmsl=64\n> > > > set semsys:seminfo_semopm=32\n> > > > set semsys:seminfo_semume=32\n> > > >\n> > > > I increased shared memory so I could start more backends...\n> > > >\n> > > > OK, so now, everything is running fine and boom, the\n> > > > backends start\n> > > > to hang on semop, eventually reaching MaxBackendID\n> > and refusing\n> > > > connections.\n> > > > Attached is a log file from a hang up today. Debug\n> > is set to 3.\n> > > > All times are PST. I have carved out a bunch of\n> > normal operation\n> > > > from the beginning (about 21,000 lines) and redundant\n> > 'too many\n> > > > backends' (about 1,000 lines, while I was eating lunch :)\n> > > > signified\n> > > > by {SNIP SNIP}. I pick the log back up with the\n> > birth of pid 2828\n> > > > and left several 'normal' cycles in until...\n> > > >\n> > > > You can see that process 2840 is the first child to\n> > hang. It was\n> > > > started at 11:39:23 and did not die until sent a 15 by\n> > > > the parent at\n> > > > 14:12:16. All of the hung processes fall between\n> > 2840 and 3454.\n> > > >\n> > > > Sorry the file is so big. Here are some 'keys' you can use:\n> > > > Startup is the first line (obviously).\n> > > > You can find child startup by looking for [2840] (pid\n> > in brackets)\n> > > > You can find child exits by looking for '2480 exited'\n> > > > You can find where I send the kill signal by looking for\n> > > > 'pmdie 15'\n> > > >\n> > > > I think that's a good start. :)\n> > > >\n> > > > Don't hesitate to contact me if I can shed any more\n> > > > light. I'm wide\n> > > > open to ideas at the moment. I'm in EST, but tend to\n> > work until\n> > > > 10-11 at night, so e-mail anytime.\n> > > >\n> > > > Thanks,\n> > > >\n> > > > DwD\n> > > >\n> > > > > -----Original Message-----\n> > > > > From: The Hermit Hacker [mailto:[email protected]]\n> > > > > Sent: Thursday, February 18, 1999 5:36 PM\n> > > > > To: Daryl W. Dunbar\n> > > > > Subject: Re: Interested?\n> > > > >\n> > > > >\n> > > > >\n> > > > > Hi Daryl...\n> > > > >\n> > > > > \tI'm not the strongest at internal code, so may not\n> > > > > be of any help\n> > > > > at all. I just went through my -hackers email, and can't\n> > > > > seem to find\n> > > > > anything from you in there. Can you tell me what your\n> > > > > problem is, as well\n> > > > > as version of PostgreSQL you are using, and we'll see\n> > > > > what we can do?\n> > > > >\n> > > > > Marc\n> > > > >\n> > > > > On Thu, 18 Feb 1999, Daryl W. Dunbar wrote:\n> > > > >\n> > > > > > Marc,\n> > > > > >\n> > > > > > I know that you put considerable volunteer time into\n> > > > > PostgreSQL. 
If\n> > > > > > I am not too bold in asking, and you are comfortable\n> > > > > with it, I am\n> > > > > > prepared to compensate you for your time if you can\n> > > > assist me in\n> > > > > > tracking down this rather nasty bug I have been\n> > > > > e-mailing Hackers\n> > > > > > about. Please let me know if you are interested and if\n> > > > > so, at what\n> > > > > > rate.\n> > > > > >\n> > > > > > We are in the process of launching a pretty exciting\n> > > > site and a\n> > > > > > database in a integral part of it. I really want to\n> > > > > use PostgreSQL,\n> > > > > > but can not take it into production on Solaris with\n> > > > this problem\n> > > > > > going on. I'm in the process of installing a test site\n> > > > > on Linux to\n> > > > > > see if the problem exists there, but I expect it\n> > is limited to\n> > > > > > Solaris.\n> > > > > >\n> > > > > > I anxiously await your response.\n> > > > > >\n> > > > > > Thanks,\n> > > > > >\n> > > > > > DwD\n> > > > > >\n> > > > > > --\n> > > > > > Daryl W. Dunbar\n> > > > > > VP of Engineering/Chief Technology Officer\n> > > > > > http://www.com, Where the Web Begins!\n> > > > > > mailto:[email protected]\n> > > > > >\n> > > > > >\n> > > > >\n> > > > > Marc G. Fournier\n> > > > > Systems Administrator @ hub.org\n> > > > > primary: [email protected] secondary:\n> > > > > scrappy@{freebsd|postgresql}.org\n> > > > >\n> > > >\n> > > >\n> > >\n> >\n> > Marc G. Fournier\n> > Systems Administrator @ hub.org\n> > primary: [email protected] secondary:\n> > scrappy@{freebsd|postgresql}.org\n> >\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 20 Feb 1999 00:48:00 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Major bug, possible, with Solaris 7?" }, { "msg_contents": "OK. I'm running 6.4.3beta (after patching the code to compile -\npatches attached). Now we wait to see if it breaks again...\n\nDwD\n\n\n> -----Original Message-----\n> From: The Hermit Hacker [mailto:[email protected]]\n> Sent: Friday, February 19, 1999 11:48 PM\n> To: Daryl W. Dunbar\n> Cc: [email protected]\n> Subject: RE: [HACKERS] Major bug, possible, with Solaris 7?\n>\n>\n> On Fri, 19 Feb 1999, Daryl W. Dunbar wrote:\n>\n> > At this point, I willing to try anything. I'm in\n> production (live\n> > site), but we have not announced the site. What that\n> means is that\n> > I have the weekend to debug/fix/decide what to do. I'll take\n> > whatever version you suggest and load it.\n>\n> Apologies for the delay...there is a copy of\n> postgresql-6.4.3beta.tar.gz\n> available in the test directory...try that and please\n> report back here...\n>\n>\n> >\n> > DwD\n> >\n> > > -----Original Message-----\n> > > From: The Hermit Hacker [mailto:[email protected]]\n> > > Sent: Friday, February 19, 1999 10:39 PM\n> > > To: Daryl W. Dunbar\n> > > Cc: [email protected]\n> > > Subject: RE: [HACKERS] Major bug, possible, with Solaris 7?\n> > >\n> > >\n> > > On Fri, 19 Feb 1999, Daryl W. Dunbar wrote:\n> > >\n> > > > Oh, sorry. 6.4.2 with a backend patch to prevent the\n> > > parent death\n> > > > in the event of MaxBackendID being reached.\n> > > >\n> > > > I know it is in semop() because I did a truss on the child\n> > > > processes. From a small sample, it looks like they\n> may all be\n> > > > trying to operate on the same semaphore. 
I'm\n> recompiling with\n> > > > the -g flag to gain more insight...\n> > >\n> > > I'm just curious, but is this being used production yet?\n> > > If not, would\n> > > you be willing to try out the current snapshot, which is\n> > > soon to become\n> > > 6.5-BETA? If this apparent bug still exists there, I\n> > > think its sufficient\n> > > a bug to prevent v6.5 coming out until this is fixed\n> >\n> > > then again,\n> > > something this reproducible will most likely hold up\n> > > v6.4.3 from being\n> > > released also, so if we are planning a v6.4.3 (I thought\n> > > we were), we'll\n> > > have to get this fixed in the 6.4 line also.\n> > >\n> > > Actually, with that in mind, I'm putting together a very\n> > > quick tar ball of\n> > > what v6.4.3 is looking like so far. this is *not* a\n> > > release, but I'd like\n> > > to see if this problem exists in the most current STABLE\n> > > tree or not...I\n> > > know there has been quite a few fixes put into it...\n> > >\n> > > Check in about a half hour or so, under the 'test'\n> directory of\n> > > ftp.postgresql.org .. should be there then...\n> > >\n> > >\n> > > > > -----Original Message-----\n> > > > > From: [email protected]\n> > > > > [mailto:[email protected]]On Behalf\n> > > Of The Hermit\n> > > > > Hacker\n> > > > > Sent: Friday, February 19, 1999 12:46 PM\n> > > > > To: [email protected]\n> > > > > Cc: Daryl W. Dunbar\n> > > > > Subject: [HACKERS] Major bug, possible, with Solaris 7?\n> > > > >\n> > > > >\n> > > > >\n> > > > > Can someone please take a minute to look at this?\n> > > > >\n> > > > > I've gzip'd and moved his errorlog to\n> > > > > ftp.postgresql.org:/pub/debugging...one thing that\n> > > appears to be\n> > > > > lacking...what version of PostgreSQL are you using?\n> > > > >\n> > > > > Marc G. Fournier\n> > > > > Systems Administrator @ hub.org\n> > > > > primary: [email protected] secondary:\n> > > > > scrappy@{freebsd|postgresql}.org\n> > > > >\n> > > > > ---------- Forwarded message ----------\n> > > > > Date: Thu, 18 Feb 1999 18:23:25 -0500\n> > > > > From: Daryl W. Dunbar <[email protected]>\n> > > > > To: The Hermit Hacker <[email protected]>\n> > > > > Subject: RE: Interested?\n> > > > >\n> > > > > Thanks Marc, We exchanged an e-mail or two last\n> > > week, along with\n> > > > > Tatsuo Ishii and Tom Lane. You suggested I truss\n> the process.\n> > > > >\n> > > > > Anyway, periodically, the backends spiral out of\n> > > control with hung\n> > > > > up children until I hit MaxBackendID (which I\n> > > compiled in to be\n> > > > > 128). Initially, I was running out of semaphores on\n> > > Solaris 7 and\n> > > > > changed /etc/system to add these lines:\n> > > > > set shmsys:shminfo_shmmax=16777216\n> > > > > set shmsys:shminfo_shmmin=1\n> > > > > set shmsys:shminfo_shmmni=128\n> > > > > set shmsys:shminfo_shmseg=51\n> > > > > *\n> > > > > set semsys:seminfo_semmap=128\n> > > > > set semsys:seminfo_semmni=128\n> > > > > set semsys:seminfo_semmns=8192\n> > > > > set semsys:seminfo_semmnu=8192\n> > > > > set semsys:seminfo_semmsl=64\n> > > > > set semsys:seminfo_semopm=32\n> > > > > set semsys:seminfo_semume=32\n> > > > >\n> > > > > I increased shared memory so I could start more\n> backends...\n> > > > >\n> > > > > OK, so now, everything is running fine and boom, the\n> > > > > backends start\n> > > > > to hang on semop, eventually reaching MaxBackendID\n> > > and refusing\n> > > > > connections.\n> > > > > Attached is a log file from a hang up today. Debug\n> > > is set to 3.\n> > > > > All times are PST. 
I have carved out a bunch of\n> > > normal operation\n> > > > > from the beginning (about 21,000 lines) and redundant\n> > > 'too many\n> > > > > backends' (about 1,000 lines, while I was eating lunch :)\n> > > > > signified\n> > > > > by {SNIP SNIP}. I pick the log back up with the\n> > > birth of pid 2828\n> > > > > and left several 'normal' cycles in until...\n> > > > >\n> > > > > You can see that process 2840 is the first child to\n> > > hang. It was\n> > > > > started at 11:39:23 and did not die until sent a 15 by\n> > > > > the parent at\n> > > > > 14:12:16. All of the hung processes fall between\n> > > 2840 and 3454.\n> > > > >\n> > > > > Sorry the file is so big. Here are some 'keys'\n> you can use:\n> > > > > Startup is the first line (obviously).\n> > > > > You can find child startup by looking for [2840] (pid\n> > > in brackets)\n> > > > > You can find child exits by looking for '2480 exited'\n> > > > > You can find where I send the kill signal by looking for\n> > > > > 'pmdie 15'\n> > > > >\n> > > > > I think that's a good start. :)\n> > > > >\n> > > > > Don't hesitate to contact me if I can shed any more\n> > > > > light. I'm wide\n> > > > > open to ideas at the moment. I'm in EST, but tend to\n> > > work until\n> > > > > 10-11 at night, so e-mail anytime.\n> > > > >\n> > > > > Thanks,\n> > > > >\n> > > > > DwD\n> > > > >\n> > > > > > -----Original Message-----\n> > > > > > From: The Hermit Hacker [mailto:[email protected]]\n> > > > > > Sent: Thursday, February 18, 1999 5:36 PM\n> > > > > > To: Daryl W. Dunbar\n> > > > > > Subject: Re: Interested?\n> > > > > >\n> > > > > >\n> > > > > >\n> > > > > > Hi Daryl...\n> > > > > >\n> > > > > > \tI'm not the strongest at internal code, so may not\n> > > > > > be of any help\n> > > > > > at all. I just went through my -hackers email,\n> and can't\n> > > > > > seem to find\n> > > > > > anything from you in there. Can you tell me what your\n> > > > > > problem is, as well\n> > > > > > as version of PostgreSQL you are using, and we'll see\n> > > > > > what we can do?\n> > > > > >\n> > > > > > Marc\n> > > > > >\n> > > > > > On Thu, 18 Feb 1999, Daryl W. Dunbar wrote:\n> > > > > >\n> > > > > > > Marc,\n> > > > > > >\n> > > > > > > I know that you put considerable volunteer time into\n> > > > > > PostgreSQL. If\n> > > > > > > I am not too bold in asking, and you are comfortable\n> > > > > > with it, I am\n> > > > > > > prepared to compensate you for your time if you can\n> > > > > assist me in\n> > > > > > > tracking down this rather nasty bug I have been\n> > > > > > e-mailing Hackers\n> > > > > > > about. Please let me know if you are\n> interested and if\n> > > > > > so, at what\n> > > > > > > rate.\n> > > > > > >\n> > > > > > > We are in the process of launching a pretty exciting\n> > > > > site and a\n> > > > > > > database in a integral part of it. I really want to\n> > > > > > use PostgreSQL,\n> > > > > > > but can not take it into production on Solaris with\n> > > > > this problem\n> > > > > > > going on. I'm in the process of installing a\n> test site\n> > > > > > on Linux to\n> > > > > > > see if the problem exists there, but I expect it\n> > > is limited to\n> > > > > > > Solaris.\n> > > > > > >\n> > > > > > > I anxiously await your response.\n> > > > > > >\n> > > > > > > Thanks,\n> > > > > > >\n> > > > > > > DwD\n> > > > > > >\n> > > > > > > --\n> > > > > > > Daryl W. 
Dunbar\n> > > > > > > VP of Engineering/Chief Technology Officer\n> > > > > > > http://www.com, Where the Web Begins!\n> > > > > > > mailto:[email protected]\n> > > > > > >\n> > > > > > >\n> > > > > >\n> > > > > > Marc G. Fournier\n> > > > > > Systems Administrator @ hub.org\n> > > > > > primary: [email protected] secondary:\n> > > > > > scrappy@{freebsd|postgresql}.org\n> > > > > >\n> > > > >\n> > > > >\n> > > >\n> > >\n> > > Marc G. Fournier\n> > > Systems Administrator @ hub.org\n> > > primary: [email protected] secondary:\n> > > scrappy@{freebsd|postgresql}.org\n> > >\n> >\n>\n> Marc G. Fournier\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org\n>", "msg_date": "Sat, 20 Feb 1999 11:26:12 -0500", "msg_from": "\"Daryl W. Dunbar\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Major bug, possible, with Solaris 7?" }, { "msg_contents": "Problem still exists in 6.4.3.\n\nI am wondering, since gdb can not give me any information on the\nlocation of my hang (I get lots of ??'s) and all I can see is\nsemsys(), am I spinning in a system library? Does anyone have\naccess to the Solaris7 patches? I see one kernel patch out there,\nbut I can not access the description, nor download the patch,\nbecause it is not considered in the recommended or security list.\nI'm talking to my rep on this on Monday!\n\nFor reference, I can provide a syslog and truss of the 6.4.3\nfailure, but I expect it looks just about like the 6.4.2 one.\n\nThanks,\n\nDwD\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of\n> Daryl W. Dunbar\n> Sent: Saturday, February 20, 1999 11:26 AM\n> To: The Hermit Hacker\n> Cc: [email protected]\n> Subject: RE: [HACKERS] Major bug, possible, with Solaris 7?\n>\n>\n> OK. I'm running 6.4.3beta (after patching the code to compile -\n> patches attached). Now we wait to see if it breaks again...\n>\n> DwD\n>\n>\n> > -----Original Message-----\n> > From: The Hermit Hacker [mailto:[email protected]]\n> > Sent: Friday, February 19, 1999 11:48 PM\n> > To: Daryl W. Dunbar\n> > Cc: [email protected]\n> > Subject: RE: [HACKERS] Major bug, possible, with Solaris 7?\n> >\n> >\n> > On Fri, 19 Feb 1999, Daryl W. Dunbar wrote:\n> >\n> > > At this point, I willing to try anything. I'm in\n> > production (live\n> > > site), but we have not announced the site. What that\n> > means is that\n> > > I have the weekend to debug/fix/decide what to do. I'll take\n> > > whatever version you suggest and load it.\n> >\n> > Apologies for the delay...there is a copy of\n> > postgresql-6.4.3beta.tar.gz\n> > available in the test directory...try that and please\n> > report back here...\n> >\n> >\n> > >\n> > > DwD\n> > >\n> > > > -----Original Message-----\n> > > > From: The Hermit Hacker [mailto:[email protected]]\n> > > > Sent: Friday, February 19, 1999 10:39 PM\n> > > > To: Daryl W. Dunbar\n> > > > Cc: [email protected]\n> > > > Subject: RE: [HACKERS] Major bug, possible, with Solaris 7?\n> > > >\n> > > >\n> > > > On Fri, 19 Feb 1999, Daryl W. Dunbar wrote:\n> > > >\n> > > > > Oh, sorry. 6.4.2 with a backend patch to prevent the\n> > > > parent death\n> > > > > in the event of MaxBackendID being reached.\n> > > > >\n> > > > > I know it is in semop() because I did a truss on the child\n> > > > > processes. From a small sample, it looks like they\n> > may all be\n> > > > > trying to operate on the same semaphore. 
I'm\n> > recompiling with\n> > > > > the -g flag to gain more insight...\n> > > >\n> > > > I'm just curious, but is this being used production yet?\n> > > > If not, would\n> > > > you be willing to try out the current snapshot, which is\n> > > > soon to become\n> > > > 6.5-BETA? If this apparent bug still exists there, I\n> > > > think its sufficient\n> > > > a bug to prevent v6.5 coming out until this is fixed\n> > >\n> > > > then again,\n> > > > something this reproducible will most likely hold up\n> > > > v6.4.3 from being\n> > > > released also, so if we are planning a v6.4.3 (I thought\n> > > > we were), we'll\n> > > > have to get this fixed in the 6.4 line also.\n> > > >\n> > > > Actually, with that in mind, I'm putting together a very\n> > > > quick tar ball of\n> > > > what v6.4.3 is looking like so far. this is *not* a\n> > > > release, but I'd like\n> > > > to see if this problem exists in the most current STABLE\n> > > > tree or not...I\n> > > > know there has been quite a few fixes put into it...\n> > > >\n> > > > Check in about a half hour or so, under the 'test'\n> > directory of\n> > > > ftp.postgresql.org .. should be there then...\n> > > >\n> > > >\n> > > > > > -----Original Message-----\n> > > > > > From: [email protected]\n> > > > > > [mailto:[email protected]]On Behalf\n> > > > Of The Hermit\n> > > > > > Hacker\n> > > > > > Sent: Friday, February 19, 1999 12:46 PM\n> > > > > > To: [email protected]\n> > > > > > Cc: Daryl W. Dunbar\n> > > > > > Subject: [HACKERS] Major bug, possible, with Solaris 7?\n> > > > > >\n> > > > > >\n> > > > > >\n> > > > > > Can someone please take a minute to look at this?\n> > > > > >\n> > > > > > I've gzip'd and moved his errorlog to\n> > > > > > ftp.postgresql.org:/pub/debugging...one thing that\n> > > > appears to be\n> > > > > > lacking...what version of PostgreSQL are you using?\n> > > > > >\n> > > > > > Marc G. Fournier\n> > > > > > Systems Administrator @ hub.org\n> > > > > > primary: [email protected] secondary:\n> > > > > > scrappy@{freebsd|postgresql}.org\n> > > > > >\n> > > > > > ---------- Forwarded message ----------\n> > > > > > Date: Thu, 18 Feb 1999 18:23:25 -0500\n> > > > > > From: Daryl W. Dunbar <[email protected]>\n> > > > > > To: The Hermit Hacker <[email protected]>\n> > > > > > Subject: RE: Interested?\n> > > > > >\n> > > > > > Thanks Marc, We exchanged an e-mail or two last\n> > > > week, along with\n> > > > > > Tatsuo Ishii and Tom Lane. You suggested I truss\n> > the process.\n> > > > > >\n> > > > > > Anyway, periodically, the backends spiral out of\n> > > > control with hung\n> > > > > > up children until I hit MaxBackendID (which I\n> > > > compiled in to be\n> > > > > > 128). 
Initially, I was running out of semaphores on\n> > > > Solaris 7 and\n> > > > > > changed /etc/system to add these lines:\n> > > > > > set shmsys:shminfo_shmmax=16777216\n> > > > > > set shmsys:shminfo_shmmin=1\n> > > > > > set shmsys:shminfo_shmmni=128\n> > > > > > set shmsys:shminfo_shmseg=51\n> > > > > > *\n> > > > > > set semsys:seminfo_semmap=128\n> > > > > > set semsys:seminfo_semmni=128\n> > > > > > set semsys:seminfo_semmns=8192\n> > > > > > set semsys:seminfo_semmnu=8192\n> > > > > > set semsys:seminfo_semmsl=64\n> > > > > > set semsys:seminfo_semopm=32\n> > > > > > set semsys:seminfo_semume=32\n> > > > > >\n> > > > > > I increased shared memory so I could start more\n> > backends...\n> > > > > >\n> > > > > > OK, so now, everything is running fine and boom, the\n> > > > > > backends start\n> > > > > > to hang on semop, eventually reaching MaxBackendID\n> > > > and refusing\n> > > > > > connections.\n> > > > > > Attached is a log file from a hang up today. Debug\n> > > > is set to 3.\n> > > > > > All times are PST. I have carved out a bunch of\n> > > > normal operation\n> > > > > > from the beginning (about 21,000 lines) and redundant\n> > > > 'too many\n> > > > > > backends' (about 1,000 lines, while I was\n> eating lunch :)\n> > > > > > signified\n> > > > > > by {SNIP SNIP}. I pick the log back up with the\n> > > > birth of pid 2828\n> > > > > > and left several 'normal' cycles in until...\n> > > > > >\n> > > > > > You can see that process 2840 is the first child to\n> > > > hang. It was\n> > > > > > started at 11:39:23 and did not die until sent a 15 by\n> > > > > > the parent at\n> > > > > > 14:12:16. All of the hung processes fall between\n> > > > 2840 and 3454.\n> > > > > >\n> > > > > > Sorry the file is so big. Here are some 'keys'\n> > you can use:\n> > > > > > Startup is the first line (obviously).\n> > > > > > You can find child startup by looking for [2840] (pid\n> > > > in brackets)\n> > > > > > You can find child exits by looking for '2480 exited'\n> > > > > > You can find where I send the kill signal by looking for\n> > > > > > 'pmdie 15'\n> > > > > >\n> > > > > > I think that's a good start. :)\n> > > > > >\n> > > > > > Don't hesitate to contact me if I can shed any more\n> > > > > > light. I'm wide\n> > > > > > open to ideas at the moment. I'm in EST, but tend to\n> > > > work until\n> > > > > > 10-11 at night, so e-mail anytime.\n> > > > > >\n> > > > > > Thanks,\n> > > > > >\n> > > > > > DwD\n> > > > > >\n> > > > > > > -----Original Message-----\n> > > > > > > From: The Hermit Hacker [mailto:[email protected]]\n> > > > > > > Sent: Thursday, February 18, 1999 5:36 PM\n> > > > > > > To: Daryl W. Dunbar\n> > > > > > > Subject: Re: Interested?\n> > > > > > >\n> > > > > > >\n> > > > > > >\n> > > > > > > Hi Daryl...\n> > > > > > >\n> > > > > > > \tI'm not the strongest at internal code, so may not\n> > > > > > > be of any help\n> > > > > > > at all. I just went through my -hackers email,\n> > and can't\n> > > > > > > seem to find\n> > > > > > > anything from you in there. Can you tell me what your\n> > > > > > > problem is, as well\n> > > > > > > as version of PostgreSQL you are using, and we'll see\n> > > > > > > what we can do?\n> > > > > > >\n> > > > > > > Marc\n> > > > > > >\n> > > > > > > On Thu, 18 Feb 1999, Daryl W. Dunbar wrote:\n> > > > > > >\n> > > > > > > > Marc,\n> > > > > > > >\n> > > > > > > > I know that you put considerable volunteer time into\n> > > > > > > PostgreSQL. 
If\n> > > > > > > > I am not too bold in asking, and you are comfortable\n> > > > > > > with it, I am\n> > > > > > > > prepared to compensate you for your time if you can\n> > > > > > assist me in\n> > > > > > > > tracking down this rather nasty bug I have been\n> > > > > > > e-mailing Hackers\n> > > > > > > > about. Please let me know if you are\n> > interested and if\n> > > > > > > so, at what\n> > > > > > > > rate.\n> > > > > > > >\n> > > > > > > > We are in the process of launching a pretty exciting\n> > > > > > site and a\n> > > > > > > > database in a integral part of it. I really want to\n> > > > > > > use PostgreSQL,\n> > > > > > > > but can not take it into production on Solaris with\n> > > > > > this problem\n> > > > > > > > going on. I'm in the process of installing a\n> > test site\n> > > > > > > on Linux to\n> > > > > > > > see if the problem exists there, but I expect it\n> > > > is limited to\n> > > > > > > > Solaris.\n> > > > > > > >\n> > > > > > > > I anxiously await your response.\n> > > > > > > >\n> > > > > > > > Thanks,\n> > > > > > > >\n> > > > > > > > DwD\n> > > > > > > >\n> > > > > > > > --\n> > > > > > > > Daryl W. Dunbar\n> > > > > > > > VP of Engineering/Chief Technology Officer\n> > > > > > > > http://www.com, Where the Web Begins!\n> > > > > > > > mailto:[email protected]\n> > > > > > > >\n> > > > > > > >\n> > > > > > >\n> > > > > > > Marc G. Fournier\n> > > > > > > Systems Administrator @ hub.org\n> > > > > > > primary: [email protected] secondary:\n> > > > > > > scrappy@{freebsd|postgresql}.org\n> > > > > > >\n> > > > > >\n> > > > > >\n> > > > >\n> > > >\n> > > > Marc G. Fournier\n> > > > Systems Administrator @ hub.org\n> > > > primary: [email protected] secondary:\n> > > > scrappy@{freebsd|postgresql}.org\n> > > >\n> > >\n> >\n> > Marc G. Fournier\n> > Systems Administrator @ hub.org\n> > primary: [email protected] secondary:\n> > scrappy@{freebsd|postgresql}.org\n> >\n>\n\n", "msg_date": "Sat, 20 Feb 1999 14:22:48 -0500", "msg_from": "\"Daryl W. Dunbar\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Major bug, possible, with Solaris 7?" }, { "msg_contents": "\"Daryl W. Dunbar\" <[email protected]> writes:\n> Problem still exists in 6.4.3.\n\nI figured it probably would :-(.\n\nAs far as I can tell from your truss trace, the processes are going\nto sleep via semop() and never being awoken. There's not much more\nthat we can find out at the kernel level, since the kernel can't tell\n*why* a backend thinks it needs to go to sleep. Assuming that\nTEST_AND_SET is defined in your compilation, the backends only use\none semaphore apiece and all blocking/awakening is done via the same\nsemaphore. We need to know what lock-manager condition is causing\neach backend to decide to block and why the lock is not getting\nreleased.\n\nI was hoping that a gdb backtrace would tell us more --- it's bad that\nyou can't get any info that way. On my system (HPUX) gdb has a problem\nwith debugging shared libraries in a process that you attach to, as\nopposed to starting fresh under gdb. I dunno if Solaris is similar, but\nit might be worth building your -g version of the backend with no shared\nlibraries, everything linked statically (-static option, I think, when\nlinking the postgres binary). If your system doesn't have a static\nversion of libc then this won't help.\n\nBut probably the first thing to try at this point is adding a bunch of\ndebugging printouts. 
If you compile with -DLOCK_MGR_DEBUG (see\nsrc/backend/storage/lmgr/lock.c) and turn on the trace-locks option then\nyou'll get a bunch more log output that should tell us something useful\nabout why the processes are deciding to block.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Feb 1999 16:48:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Major bug, possible, with Solaris 7? " } ]
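Since the truss output discussed in this thread only shows backends parked inside semop(), it can also help to inspect the semaphore set from outside the server. The following stand-alone diagnostic is a sketch, not part of PostgreSQL; the key and the per-set count are assumptions to adjust for your installation (ipcs -s lists the live values, and the key/num pair printed in postmaster error messages, e.g. key=5432017, num=16, gives the same information).

```c
/*
 * semwatch.c -- dump the state of an existing SysV semaphore set so you
 * can see which slots have sleepers.  Stand-alone diagnostic sketch only,
 * not part of PostgreSQL.  Adjust key and count for your installation.
 *
 * Build: cc -o semwatch semwatch.c
 * Usage: ./semwatch 5432017 16
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int
main(int argc, char **argv)
{
    key_t       key;
    int         nsems, semid, i;

    if (argc != 3)
    {
        fprintf(stderr, "usage: %s key nsems\n", argv[0]);
        return 1;
    }
    key = (key_t) atol(argv[1]);
    nsems = atoi(argv[2]);

    /* flags of 0 mean "attach to an existing set, don't create" */
    semid = semget(key, nsems, 0);
    if (semid < 0)
    {
        perror("semget");
        return 1;
    }
    for (i = 0; i < nsems; i++)
    {
        /*
         * GETVAL: current counter value.  GETNCNT: processes sleeping in
         * semop() waiting for the value to rise -- a nonzero count here
         * marks a semaphore somebody is stuck on.  GETPID: pid of the
         * last process that operated on this slot.
         */
        printf("sem %2d: val=%d ncnt=%d lastpid=%d\n", i,
               semctl(semid, i, GETVAL),
               semctl(semid, i, GETNCNT),
               semctl(semid, i, GETPID));
    }
    return 0;
}
```

A slot showing val=0 with a nonzero ncnt is one that processes are sleeping on; matching lastpid against the pids of the hung backends in the log narrows down which semaphore they are all waiting for.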
[ { "msg_contents": "\nI have a record in table cust with the username of joblo and it's\nalready lower case. This is from a cvsup a couple of weeks old.\n\n\n\nclassifieds=> select count(*) from cust where username = lower('joblo');\ncount\n-----\n 0\n(1 row)\n\nclassifieds=> select count(*) from cust where username = 'joblo';\ncount\n-----\n 1\n(1 row)\n\n\nDoesn't seem to matter if I use lower on username, 'joblo' or both. And\nthere's only the one record in the table.\n\nDid something break or did I forget how to use lower()?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Fri, 19 Feb 1999 13:31:51 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "lower() broken?" }, { "msg_contents": "> I have a record in table cust with the username of joblo and it's\n> already lower case. This is from a cvsup a couple of weeks old.\n> Doesn't seem to matter if I use lower on username, 'joblo' or both. \n> And there's only the one record in the table.\n> Did something break or did I forget how to use lower()?\n\nNot sure. You *did* forget to tell us what data type is used for column\n\"username\".\n\n - Tom\n", "msg_date": "Sat, 20 Feb 1999 02:45:07 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] lower() broken?" }, { "msg_contents": "On Sat, 20 Feb 1999, Thomas G. Lockhart wrote:\n\n> > I have a record in table cust with the username of joblo and it's\n> > already lower case. This is from a cvsup a couple of weeks old.\n> > Doesn't seem to matter if I use lower on username, 'joblo' or both. \n> > And there's only the one record in the table.\n> > Did something break or did I forget how to use lower()?\n> \n> Not sure. You *did* forget to tell us what data type is used for column\n> \"username\".\n\nOops! Yeah, I guess lower wouldn't work so well if it was a numeric\nfield. Anyway, username is a char(8).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 19 Feb 1999 22:43:57 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] lower() broken?" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\n>>>>> \"Vince\" == Vince Vielhaber <[email protected]> writes:\n\n Vince> Oops! Yeah, I guess lower wouldn't work so well if it was\n Vince> a numeric field. Anyway, username is a char(8).\n\nIt won't be equal to 'joblo', it will be equal to 'joblo '. You may\nwant to consider using varchar(8).\n\nroland\n- -- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. 
Roberts, PhD Custom Software Solutions\[email protected] 101 West 15th St #4NN\[email protected] New York, NY 10011\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.2\nComment: Processed by Mailcrypt 3.4, an Emacs/PGP interface\n\niQCVAwUBNs4+/eoW38lmvDvNAQHyYwQAm5l6iiHIzHmpZ+9hYUe+FX81TeKLG7Tm\nkoqbU3zxCVHVRcWID7PH7EjnHhPYga19ctNyE8Y0nVsKpzc9DadACfBdYexUy+Qc\nTdS9WiDzFyO0eOg4BrjV67ZWBtTwIxOYng9NSZHlUOgNx9HLggmIH0Tnfl2vyU8H\nEAaq/zlq6c8=\n=Lizm\n-----END PGP SIGNATURE-----\n", "msg_date": "19 Feb 1999 23:50:07 -0500", "msg_from": "Roland Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] lower() broken?" }, { "msg_contents": "On 19 Feb 1999, Roland Roberts wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> \n> >>>>> \"Vince\" == Vince Vielhaber <[email protected]> writes:\n> \n> Vince> Oops! Yeah, I guess lower wouldn't work so well if it was\n> Vince> a numeric field. Anyway, username is a char(8).\n> \n> It won't be equal to 'joblo', it will be equal to 'joblo '. You may\n> want to consider using varchar(8).\n\nDamn. That's the one thing that never even occurred to me!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 20 Feb 1999 00:02:08 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] lower() broken?" }, { "msg_contents": "> > Vince> Oops! Yeah, I guess lower wouldn't work so well if it was\n> > Vince> a numeric field. Anyway, username is a char(8).\n> > It won't be equal to 'joblo', it will be equal to 'joblo '. You \n> > may want to consider using varchar(8).\n> Damn. That's the one thing that never even occurred to me!\n\nI don't remember what my old Ingres system did for comparisons of char\nagainst other string types; does every system (or the SQL standard)\nconsider the trailing blanks significant, or should they be implicitly\nignored in comparisons?\n\nbtw, if you don't want to redefine the column, then try\n\n where trim(trailing from username) = lower('joblo');\n\nbut that will be a slower query since \"username\" must be trimmed before\ncomparison.\n\n - Tom\n", "msg_date": "Sat, 20 Feb 1999 06:17:05 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] lower() broken?" }, { "msg_contents": "\n\nVince Vielhaber wrote:\n\n> I have a record in table cust with the username of joblo and it's\n> already lower case. This is from a cvsup a couple of weeks old.\n>\n> classifieds=> select count(*) from cust where username = lower('joblo');\n> count\n> -----\n> 0\n> (1 row)\n>\n> classifieds=> select count(*) from cust where username = 'joblo';\n> count\n> -----\n> 1\n> (1 row)\n>\n> Doesn't seem to matter if I use lower on username, 'joblo' or both. 
And\n> there's only the one record in the table.\n>\n> Did something break or did I forget how to use lower()?\n>\n> Vince.\n\nI suppose you defined username as char() like...\n\nprova=> create table test(username char(10));\nCREATE\nprova=> insert into test values ('joblo');\nINSERT 207732 1\nprova=> select count(*) from test where username = lower('joblo');\ncount\n-----\n 0\n(1 row)\n\n\nprova=> select count(*) from test where trim(username) = lower('joblo');\ncount\n-----\n 1\n(1 row)\n\nprova=> select count(*) from test where username = 'joblo';\ncount\n-----\n 1\n(1 row)\n\nprova=> select count(*) from test where username = lower('joblo ');\ncount\n-----\n 1\n(1 row)\n\nThe lower function \"trims\" the trailing spaces, which is why the comparison fails.\n\nbecause 'joblo ' != 'joblo'\n\nI think this is a bug.\n\n - Jose' -\n\n\n", "msg_date": "Mon, 22 Feb 1999 14:44:34 +0100", "msg_from": "\"jose' soares\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] lower() broken?" } ]
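On Thomas's standards question above: SQL92 compares CHAR values under a PAD SPACE collation by notionally blank-extending the shorter operand, so trailing blanks are not significant. A minimal sketch of that rule in C follows; padded_strcmp is a hypothetical helper for illustration only, not the backend's bpchar comparison code.

```c
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical helper, for illustration only: compare two strings the
 * way SQL92 compares CHAR values under a PAD SPACE collation.  The
 * shorter operand is treated as if blank-padded to the length of the
 * longer one, so trailing blanks can never make two values unequal.
 */
static int
padded_strcmp(const char *a, const char *b)
{
    size_t      la = strlen(a);
    size_t      lb = strlen(b);
    size_t      lmin = (la < lb) ? la : lb;
    int         cmp = memcmp(a, b, lmin);
    /* 'rest' is the unmatched tail of the longer operand */
    const char *rest = (la < lb) ? b + lmin : a + lmin;
    /* sign to return when the longer operand sorts high */
    int         sign = (la < lb) ? -1 : 1;

    if (cmp != 0)
        return cmp;
    /* the tail competes against implicit trailing blanks */
    for (; *rest; rest++)
        if (*rest != ' ')
            return (*rest > ' ') ? sign : -sign;
    return 0;                   /* equal once blank-padded */
}

int
main(void)
{
    printf("%d\n", padded_strcmp("joblo", "joblo   "));    /* prints 0 */
    printf("%d\n", padded_strcmp("joblo", "joblob"));      /* negative */
    return 0;
}
```

Under this rule 'joblo' and a blank-padded char(8) 'joblo' compare equal, which is the behavior the queries in this thread were expecting; whether every comparison path in the backend applies it is the open question here.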
[ { "msg_contents": "I have greatly updated the optimizer/README file, if anyone is\ninterested in undestanding how the optimizer works. Any suggestions\nwelcomed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Feb 1999 14:28:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "New optimizer README" } ]
[ { "msg_contents": "\"D'Arcy J.M. Cain\" wrote:\n> \n> Can someone suggest a\n> SELECT statement the would return something like this for this table?\n> \n> relname|attname\n> -------+-------\n> y |id1,id2\n\nFist:\n\n I have been wanting to write aggregate functions SET() and MULTISET()\n that produce an array of base type like SUM() produces a single value.\n\n But AFAIK the C function interface is not capable of returning it ?\n\n I hope I have misread something.\n\n BTW is there any possibility do define (in system tables) a function \n that returns an array of its argument type ?\n\n I know that currently COUNT() is defined to return int and take any \n type, but it would be terribly nice to have a mechanism that makes \n the above possible.\n\n> Or even this.\n> \n> relname|attname\n> -------+-------\n> y |id1\n> y |id2\n> \n> Of course, best would be a new command to give this list. Perhaps it can\n> be added to the pg_indexes table. Heck, creation time would be a perfect\n> time to store it.\n\nSecond:\n\n Is there a (prefgerrably single) spot where I could place a trigger\nthat can \n store the source for a CREATE statement?\n\n I know that Oracle stores all DDL source (sometimes in one big text\nfield, \n sometimes each row in its own record), and uses it in recompiling\nstuff.\n\n As it is not currently the case with PostgreSQL, it would be nice to do\nit\n using some existing mechanism.\n\n\nAnd finally :\n \n what is the state of ALTER TABLE statements for \n adding/[dis|en]abling/dropping constraints ?\n\n Will it be in 6.5 ?\n\n Or will the only way to make a column nullable defining a new column ?\n\n------------------------\nHannu\n", "msg_date": "Sat, 20 Feb 1999 18:09:44 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "A few questions (was: SQL-Query 2 get primary key)" } ]
[ { "msg_contents": "\nPlease forgive me if this is not the place to post this. I'm not a member\nof the list but todays CVS did not compile\n\nThe print.c line 226\n\n List pathkey = lfirst(i));\n\nmight be\n\n List *pathkey = lfirst(i);\n\nI really don't know if this is correct or not but it at least compiled for\nme after the change.\n\nIf this is not the proper place to submit CVS problems please let me know\nwhere to direct such problems. Thanks.\n\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\nJames Thompson 138 Cardwell Hall Manhattan, Ks 66506 785-532-0561 \nKansas State University Department of Mathematics\n->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<\n\n\n", "msg_date": "Sat, 20 Feb 1999 15:41:17 -0600 (CST)", "msg_from": "James Thompson <[email protected]>", "msg_from_op": true, "msg_subject": "Bug in src/backend/nodes/print.c" } ]
[ { "msg_contents": "I did a little more work on the configurable-max-backends patch:\n\n1. initdb didn't work because I had broken bootstrap mode :-(.\nFixed.\n\n2. I separated the hard maximum limit on the number of backends\n(MAXBACKENDS, used to size a couple of arrays) from the default\nsoft limit (now DEF_MAXBACKENDS).\n\n3. The only cost of enlarging MAXBACKENDS is about 32 bytes per\narray slot, so I set MAXBACKENDS at 1024 while leaving the\ndefault DEF_MAXBACKENDS at 64. (I more than bought back the 32K\nby reducing the number of allocated spinlocks to what we were\nactually using, anyway.)\n\n4. Upshot: default limit on number of backends is 64, same as it\never was, but you can set it as high as 1024 without a recompile.\nJust start the postmaster with desired -N switch. (Of course you\nmight need to reconfigure your kernel first ;-).)\n\n5. Allocation of semaphores and shared memory is now based on\n-N switch value (default or specified) rather than the MAXBACKENDS\nconstant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Feb 1999 22:47:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Max backend limits cleaned up" }, { "msg_contents": "> 5. Allocation of semaphores and shared memory is now based on\n> -N switch value (default or specified) rather than the MAXBACKENDS\n> constant.\n\nsgml and man documenation updates, right? Or should I do it?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 Feb 1999 22:59:12 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Max backend limits cleaned up" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> 5. Allocation of semaphores and shared memory is now based on\n>> -N switch value (default or specified) rather than the MAXBACKENDS\n>> constant.\n\n> sgml and man documenation updates, right? Or should I do it?\n\nI put something into the docs last night in the places where configure\nand postmaster switches are described.\n\nI am thinking, though, that we also ought to have FAQ entries under\nheadings like:\n\n\tI get \"IpcSemaphoreCreate: semget failed (No space left on device)\"\n\twhen I try to start the postmaster\n\n\tI get 'Sorry, too many clients already' when trying to connect\n\nIf you like, I'll try to write up a first cut at these.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Feb 1999 13:43:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Max backend limits cleaned up " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> 5. Allocation of semaphores and shared memory is now based on\n> >> -N switch value (default or specified) rather than the MAXBACKENDS\n> >> constant.\n> \n> > sgml and man documenation updates, right? 
Or should I do it?\n\nI put something into the docs last night in the places where configure\nand postmaster switches are described.\n\nI am thinking, though, that we also ought to have FAQ entries under\nheadings like:\n\n\tI get \"IpcSemaphoreCreate: semget failed (No space left on device)\"\n\twhen I try to start the postmaster\n\n\tI get 'Sorry, too many clients already' when trying to connect\n\nIf you like, I'll try to write up a first cut at these.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Feb 1999 13:43:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Max backend limits cleaned up " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> 5. Allocation of semaphores and shared memory is now based on\n> >> -N switch value (default or specified) rather than the MAXBACKENDS\n> >> constant.\n> \n> > sgml and man documentation updates, right? Or should I do it?\n> \n> I put something into the docs last night in the places where configure\n> and postmaster switches are described.\n> \n> I am thinking, though, that we also ought to have FAQ entries under\n> headings like:\n> \n> \tI get \"IpcSemaphoreCreate: semget failed (No space left on device)\"\n> \twhen I try to start the postmaster\n> \n> \tI get 'Sorry, too many clients already' when trying to connect\n> \n> If you like, I'll try to write up a first cut at these.\n\nSure.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 21 Feb 1999 14:27:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Max backend limits cleaned up" }, { "msg_contents": "Tom, I am getting:\n\n\n(1) cat /u/pg/server.log\nIpcSemaphoreCreate: semget failed (No space left on device) key=5432017,\nnum=16, permission=600\n\nThis is without any special switches. Do I need to do anything?\n\n> Bruce Momjian <[email protected]> writes:\n> >> 5. Allocation of semaphores and shared memory is now based on\n> >> -N switch value (default or specified) rather than the MAXBACKENDS\n> >> constant.\n> \n> > sgml and man documentation updates, right? Or should I do it?\n> \n> I put something into the docs last night in the places where configure\n> and postmaster switches are described.\n> \n> I am thinking, though, that we also ought to have FAQ entries under\n> headings like:\n> \n> \tI get \"IpcSemaphoreCreate: semget failed (No space left on device)\"\n> \twhen I try to start the postmaster\n> \n> \tI get 'Sorry, too many clients already' when trying to connect\n> \n> If you like, I'll try to write up a first cut at these.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 21 Feb 1999 17:06:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Max backend limits cleaned up" }, { "msg_contents": "I got it working by adding a -N 32 to the postmaster startup. Looks\nlike my site BSD/OS can't start 64 backends. Some of my configuration\nis wrong. Perhaps we need 32 as the default.\n\n\n> Bruce Momjian <[email protected]> writes:\n> >> 5. Allocation of semaphores and shared memory is now based on\n> >> -N switch value (default or specified) rather than the MAXBACKENDS\n> >> constant.\n> \n> > sgml and man documentation updates, right? Or should I do it?\n> \n> I put something into the docs last night in the places where configure\n> and postmaster switches are described.\n> \n> I am thinking, though, that we also ought to have FAQ entries under\n> headings like:\n> \n> \tI get \"IpcSemaphoreCreate: semget failed (No space left on device)\"\n> \twhen I try to start the postmaster\n> \n> \tI get 'Sorry, too many clients already' when trying to connect\n> \n> If you like, I'll try to write up a first cut at these.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 21 Feb 1999 17:24:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Max backend limits cleaned up" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I am getting:\n> IpcSemaphoreCreate: semget failed (No space left on device) key=5432017,\n> num=16, permission=600\n> [ later ]\n> I got it working by adding a -N 32 to the postmaster startup. Looks\n> like my site BSD/OS can't start 64 backends. Some of my configuration\n> is wrong. Perhaps we need 32 as the default.\n\nYeah, I was thinking about that myself. I left the default -N setting\nat 64 on the theory that people who had gone to the trouble of making\nsure they had proper kernel configurations should not get surprised by\nv6.5 suddenly reducing the default number-of-backends limit.\n\nOn the other hand, we have reason to believe that a lot of systems are\nnot configured to allow Postgres to grab 64 semaphores, so if we don't\nreduce the default -N value we will almost certainly see a lot of gripes\njust like the above when people move to 6.5. (I think -N 32 would work\nas a default on minimally-configured systems, but cannot prove it.)\n\nI haven't got a real strong feeling either way. Opinions?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Feb 1999 10:10:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Max backend limits cleaned up " }, { "msg_contents": "Having recently experienced a similar problem with semaphores and\nkernel size, I can say it is an issue. I feel that documentation\nwill clear it up either way. Either you lower the default backend\nlimit, and document how to raise it along with the associated kernel\nvariables, or leave it alone and document the appropriate steps to\ntuning the kernel to accommodate it and how to lower it if you don't\nwant to tune the kernel.\n\nDwD\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Monday, February 22, 1999 10:10 AM\n> To: Bruce Momjian\n> Cc: [email protected]\n> Subject: Re: [HACKERS] Re: Max backend limits cleaned up\n>\n>\n> Bruce Momjian <[email protected]> writes:\n> > I am getting:\n> > IpcSemaphoreCreate: semget failed (No space left on\n> device) key=5432017,\n> > num=16, permission=600\n> > [ later ]\n> > I got it working by adding a -N 32 to the postmaster\n> startup. Looks\n> > like my site BSD/OS can't start 64 backends. Some of\n> my configuration\n> > is wrong. Perhaps we need 32 as the default.\n>\n> Yeah, I was thinking about that myself. I left the\n> default -N setting\n> at 64 on the theory that people who had gone to the\n> trouble of making\n> sure they had proper kernel configurations should not get\n> surprised by\n> v6.5 suddenly reducing the default number-of-backends limit.\n>\n> On the other hand, we have reason to believe that a lot\n> of systems are\n> not configured to allow Postgres to grab 64 semaphores,\n> so if we don't\n> reduce the default -N value we will almost certainly see\n> a lot of gripes\n> just like the above when people move to 6.5. (I think -N\n> 32 would work\n> as a default on minimally-configured systems, but cannot\n> prove it.)\n>\n> I haven't got a real strong feeling either way. Opinions?\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Mon, 22 Feb 1999 11:07:31 -0500", "msg_from": "\"Daryl W. 
Dunbar\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re: Max backend limits cleaned up " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I am getting:\n> > IpcSemaphoreCreate: semget failed (No space left on device) key=5432017,\n> > num=16, permission=600\n> > [ later ]\n> > I got it working by adding a -N 32 to the postmaster startup. Looks\n> > like my site BSD/OS can't start 64 backends. Some of my configuration\n> > is wrong. Perhaps we need 32 as the default.\n> \n> Yeah, I was thinking about that myself. I left the default -N setting\n> at 64 on the theory that people who had gone to the trouble of making\n> sure they had proper kernel configurations should not get surprised by\n> v6.5 suddenly reducing the default number-of-backends limit.\n\nThe default was always 32, right, so they should not be surprised if they\ndon't up the limit to 64. Now, if they modify that config.h value, they\nmay be surprised. Is that the problem?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Feb 1999 11:55:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Max backend limits cleaned up" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Having recently experienced a similar problem with semaphores and\n> kernel size, I can say it is an issue. I feel that documentation\n> will clear it up either way. Either you lower the default backend\n> limit, and document how to raise it along with the associated kernel\n> variables, or leave it alone and document the appropriate steps to\n> tuning the kernel to accommodate it and how to lower it if you don't\n> want to tune the kernel.\n> \n\nThe issue for me is that novices who are never going to hit the 32-user\nlimit should be able to install PostgreSQL with no major changes,\nincluding not even a postmaster flag to lower the limit.\n\nIf you need more than 32 connections, you will be able to modify the\nkernel.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Feb 1999 11:57:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Max backend limits cleaned up" }, { "msg_contents": "The default to-date has been 64. The reason you don't see much\ntrouble with it is twofold, 1) Linux has a huge default for\nsemaphores and shared memory, 2) The old memory model allocated\nsemaphores in blocks of 16 up to MaxBackendId (which was hard coded\nto 64). I did not run into trouble on untuned Solaris until\npostmaster tried to start the 49th backend (semaphores 49-64 when my\nkernel defaulted to 60).\n\nThe new model allocates semaphores and shared memory at startup,\nassuring you won't experience midstream troubles like I did. It\ndoes, however, allocate to the max or -N setting, which most users\nwill probably never reach.\n\nI'd vote for 32 as the default.\n\nDwD\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Monday, February 22, 1999 11:57 AM\n> To: Daryl W. 
Dunbar\n> Cc: [email protected]; [email protected]\n> Subject: Re: [HACKERS] Re: Max backend limits cleaned up\n>\n>\n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > Having recently experienced a similar problem with\n> semaphores and\n> > kernel size, I can say it is an issue. I feel that\n> documentation\n> > will clear it up either way. Either you lower the\n> default backend\n> > limit, and document how to raise it along with the\n> associated kernel\n> > variables, or leave it alone and document the\n> appropriate steps to\n> > tuning the kernel to accommodate it and how to lower it\n> if you don't\n> > want to tune the kernel.\n> >\n>\n> The issue for me is that novices who are never going to\n> hit the 32-user\n> limit should be able to install PostgreSQL with no major changes,\n> including not even a postmaster flag to lower the limit.\n>\n> If you need more than 32 connections, you will be able to\n> modify the\n> kernel.\n>\n> --\n> Bruce Momjian |\n> http://www.op.net/~candle\n> [email protected]\n> | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill,\n> Pennsylvania 19026\n>\n\n", "msg_date": "Mon, 22 Feb 1999 12:13:10 -0500", "msg_from": "\"Daryl W. Dunbar\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re: Max backend limits cleaned up" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Yeah, I was thinking about that myself. I left the default -N setting\n>> at 64 on the theory that people who had gone to the trouble of making\n>> sure they had proper kernel configurations should not get surprised by\n>> v6.5 suddenly reducing the default number-of-backends limit.\n\n> The default was always 32, right, so they should not be surprised if they\n> don't up the limit to 64.\n\nNo, the default MaxBackendId was 64 (unless that's been changed\nrecently?). A stock installation of 6.4 will support 64 backends\nassuming that you have adequate kernel settings for that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Feb 1999 12:51:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Max backend limits cleaned up " }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> The default to-date has been 64. The reason you don't see much\n> trouble with it is twofold, 1) Linux has a huge default for\n> semaphores and shared memory, 2) The old memory model allocated\n> semaphores in blocks of 16 up to MaxBackendId (which was hard coded\n> to 64). I did not run into trouble on untuned Solaris until\n> postmaster tried to start the 49th backend (semaphores 49-64 when my\n> kernel defaulted to 60).\n> \n> The new model allocates semaphores and shared memory at startup,\n> assuring you won't experience midstream troubles like I did. It\n> does, however, allocate to the max or -N setting, which most users\n> will probably never reach.\n> \n> I'd vote for 32 as the default.\n\nI understand now. The stuff is preallocated.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Feb 1999 12:54:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Max backend limits cleaned up" }, { "msg_contents": ">> I am thinking, though, that we also ought to have FAQ entries under\n>> headings like:\n>> I get \"IpcSemaphoreCreate: semget failed (No space left on device)\"\n>> when I try to start the postmaster\n>> I get 'Sorry, too many clients already' when trying to connect\n>> \n>> If you like, I'll try to write up a first cut at these.\n\n> Sure.\n\nAttached are some proposed diffs against the copy of the FAQ that's\ncurrently in CVS ... not sure if that is the master copy or not.\n\n\t\t\tregards, tom lane\n\n*** FAQ~\tSat Mar 6 17:10:05 1999\n--- FAQ\tSat Mar 6 18:00:22 1999\n***************\n*** 59,66 ****\n 2.10) All my servers crash under concurrent table access. Why?\n 2.11) How do I tune the database engine for better performance?\n 2.12) What debugging features are available in PostgreSQL?\n! 2.13) How do I enable more than 64 concurrent backends?\n! 2.14) What non-unix ports are available?\n \n Operational questions\n \n--- 59,67 ----\n 2.10) All my servers crash under concurrent table access. Why?\n 2.11) How do I tune the database engine for better performance?\n 2.12) What debugging features are available in PostgreSQL?\n! 2.13) When I try to start the postmaster, I get IpcSemaphoreCreate errors.\n! 2.14) I get 'Sorry, too many clients' when trying to connect.\n! 2.15) What non-unix ports are available?\n \n Operational questions\n \n***************\n*** 384,391 ****\n You either do not have shared memory configured properly in kernel or\n you need to enlarge the shared memory available in the kernel. The\n exact amount you need depends on your architecture and how many\n! buffers you configure postmaster to run with. For most systems, with\n! default buffer sizes, you need a minimum of ~760K.\n \n 2.7) I have changed a source file, but a recompile does not see the change?\n \n--- 385,393 ----\n You either do not have shared memory configured properly in kernel or\n you need to enlarge the shared memory available in the kernel. The\n exact amount you need depends on your architecture and how many\n! buffers and backend processes you configure postmaster to run with.\n! For most systems, with default numbers of buffers and processes, you\n! need a minimum of ~1MB.\n \n 2.7) I have changed a source file, but a recompile does not see the change?\n \n***************\n*** 420,426 ****\n If you are doing a lot of inserts, consider doing them in a large\n batch using the copy command. This is much faster than single\n individual inserts. Second, statements not in a begin work/commit\n! transaction block are considered to be their in their own transaction.\n Consider performing several statements in a single transaction block.\n This reduces the transaction overhead. Also consider dropping and\n recreating indices when making large data changes.\n--- 422,428 ----\n If you are doing a lot of inserts, consider doing them in a large\n batch using the copy command. This is much faster than single\n individual inserts. Second, statements not in a begin work/commit\n! transaction block are considered to be in their own transaction.\n Consider performing several statements in a single transaction block.\n This reduces the transaction overhead. 
Also consider dropping and\n recreating indices when making large data changes.\n***************\n*** 482,494 ****\n pgsql/data/base/dbname directory. The client profile file will be put\n in the current directory.\n \n! 2.13) How do I enable more than 64 concurrent backends?\n! \n! Edit include/storage/sinvaladt.h, and change the value of\n! MaxBackendId. In the future, we plan to make this a configurable\n! prameter.\n \n! 2.14) What non-unix ports are available?\n \n It is possible to compile the libpq C library, psql, and other\n interfaces and binaries to run on MS Windows platforms. In this case,\n--- 484,530 ----\n pgsql/data/base/dbname directory. The client profile file will be put\n in the current directory.\n \n! 2.13) When I try to start the postmaster, I get IpcSemaphoreCreate errors.\n! \n! If the error message is \"IpcSemaphoreCreate: semget failed (No space left\n! on device)\" then your kernel is not configured with enough semaphores.\n! Postgres needs one semaphore per potential backend process. A temporary\n! solution is to start the postmaster with a smaller limit on the number of\n! backend processes (use -N with a parameter less than its default, 32).\n! A more permanent solution is to increase your kernel's SEMMNS and SEMMNI\n! parameters.\n! \n! If the error message is something else, you might not have semaphore\n! support configured in your kernel at all.\n! \n! 2.14) I get 'Sorry, too many clients' when trying to connect.\n! \n! You need to increase the postmaster's limit on how many concurrent backend\n! processes it can start.\n! \n! In Postgres 6.5, the default limit is 32 processes. You can increase it\n! by restarting the postmaster with a suitable -N value. With the default\n! configuration you can set -N as large as 1024; if you need more, you'll\n! need to increase MAXBACKENDS in include/config.h and rebuild Postgres.\n! You can set the default value of -N at configuration time, if you like,\n! using configure's --with-maxbackends switch.\n! \n! Note that if you make -N larger than 32, you should consider increasing\n! -B beyond its default of 64. For large numbers of backend processes,\n! you are also likely to find that you need to increase various Unix kernel\n! configuration parameters. Things to check include the maximum size of\n! shared memory blocks (SHMMAX), the maximum number of semaphores (SEMMNS and\n! SEMMNI), the maximum number of processes (NPROC), the maximum number of\n! processes per user (MAXUPRC), and the maximum number of open files (NFILE\n! and NINODE). The main reason that Postgres has a limit on the number of\n! allowed backend processes is so that you can ensure that your system\n! won't run out of resources.\n! \n! In Postgres versions prior to 6.5, the maximum number of backends was\n! 64, and changing it required a rebuild after altering the MaxBackendId\n! constant in include/storage/sinvaladt.h.\n \n! 2.15) What non-unix ports are available?\n \n It is possible to compile the libpq C library, psql, and other\n interfaces and binaries to run on MS Windows platforms. 
In this case,\n", "msg_date": "Sat, 06 Mar 1999 18:06:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Max backend limits cleaned up " }, { "msg_contents": "> >> I am thinking, though, that we also ought to have FAQ entries under\n> >> headings like:\n> >> I get \"IpcSemaphoreCreate: semget failed (No space left on device)\"\n> >> when I try to start the postmaster\n> >> I get 'Sorry, too many clients already' when trying to connect\n> >> \n> >> If you like, I'll try to write up a first cut at these.\n> \n> > Sure.\n> \n> Attached are some proposed diffs against the copy of the FAQ that's\n> currently in CVS ... not sure if that is the master copy or not.\n\nApplied. The main file is kept as HTML for the web site, while I\nconvert it to ASCII for the distribution. Let me know how it looks. I\ndid a little html cleanup. You can see it on the web site.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 7 Mar 1999 06:51:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Max backend limits cleaned up" } ]
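A quick way to see how much semaphore headroom a kernel actually grants, in the spirit of the FAQ text above, is to grab throwaway sets until semget() refuses. This is an illustrative stand-alone probe under stated assumptions (16 semaphores per set, mirroring the num=16 in the logged failure), not part of the Postgres sources:

```c
/*
 * semprobe.c -- grab throwaway semaphore sets until the kernel refuses,
 * to gauge the SEMMNS/SEMMNI headroom discussed above.  Illustrative
 * stand-alone sketch, not part of the Postgres sources.
 *
 * Build: cc -o semprobe semprobe.c
 */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

#define SEMS_PER_SET    16
#define MAX_SETS        128

int
main(void)
{
    int         ids[MAX_SETS];
    int         nsets;

    for (nsets = 0; nsets < MAX_SETS; nsets++)
    {
        /* IPC_PRIVATE sets are throwaway; they never collide with a key */
        ids[nsets] = semget(IPC_PRIVATE, SEMS_PER_SET, IPC_CREAT | 0600);
        if (ids[nsets] < 0)
        {
            printf("semget for set #%d failed: %s\n",
                   nsets + 1, strerror(errno));
            break;
        }
    }
    printf("kernel granted %d sets of %d semaphores (%d total)\n",
           nsets, SEMS_PER_SET, nsets * SEMS_PER_SET);

    /* release everything we grabbed */
    while (--nsets >= 0)
        semctl(ids[nsets], 0, IPC_RMID);
    return 0;
}
```

A kernel that fails this probe early will also fail the postmaster's startup allocation for a comparable number of backends, which points the finger at SEMMNS/SEMMNI rather than at anything in Postgres.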
[ { "msg_contents": "It used to be that Postgres' shared memory was sized on the basis of\nthe hard-wired MaxBackendId constant. I have altered things so that\nit is sized on the basis of the actual -N switch given to the postmaster\nat postmaster start time. This makes it a lot easier to stress the\nalgorithm ;-), and what I find is that it ain't too robust.\n\nIn particular, using current cvs sources try to start the postmaster\nwith \"-N 1\" (only one backend allowed). The backend can be started\nall right, but as soon as you try to do much of anything, it falls over:\n\n$ startpg.debug -N 1\n$ psql regression\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: regression\n\nregression=> \\d\nNOTICE: ShmemAlloc: out of memory\npqReadData() -- backend closed the channel unexpectedly.\n\n\nI conclude from this that the model of shared memory usage embodied\nin LockShmemSize() (in src/backend/storage/lmgr/lock.c) isn't very\naccurate: at small N it's not allocating enough memory.\n\nDoes anyone understand the data structures that are allocated in\nshared memory well enough to fix LockShmemSize() properly?\nOr should I just kluge it, say by making LockShmemSize() work from\nsomething like MAX(maxBackends,10) ?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Feb 1999 13:58:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Anyone understand shared-memory space usage?" }, { "msg_contents": "Does the bottom of the backend flowchart help?\n\n\n> It used to be that Postgres' shared memory was sized on the basis of\n> the hard-wired MaxBackendId constant. I have altered things so that\n> it is sized on the basis of the actual -N switch given to the postmaster\n> at postmaster start time. This makes it a lot easier to stress the\n> algorithm ;-), and what I find is that it ain't too robust.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 21 Feb 1999 14:28:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone understand shared-memory space usage?" }, { "msg_contents": "I would look in:\n\n\tCreateSharedMemoryAndSemaphores(IPCKey key, int maxBackends)\n\t{\n\t...\n\t size = BufferShmemSize() + LockShmemSize(maxBackends);\n\nLockShmemSize looks like a terrible mess, but my assumption is that the\nproblem is in there.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 21 Feb 1999 16:18:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone understand shared-memory space usage?" }, { "msg_contents": "I wrote:\n> Does anyone understand the data structures that are allocated in\n> shared memory well enough to fix LockShmemSize() properly?\n\nNo one volunteered, so I dug into the code and think I have it fixed\nnow. 
Leastwise you can run the regression tests even at -N 1 (but\nyou have to put a \"sleep\" into regress.sh --- it seems that when you\nquit psql, it takes a second or two before the postmaster will accept\nanother connection. Should backend shutdown take that long??)\n\nIt turned out that there were really, really serious problems both in\nshared-memory space estimation and in dynahash.c itself. I'm simply\namazed we have not seen more bug reports traceable to running out\nof shared memory and/or hashtable errors. Some lowlights:\n\n* One out of every thirty records allocated in a hashtable was simply\nbeing wasted, because the allocator failed to include it in the table's\nfreelist.\n\n* The routine for expanding a hashtable's top-level directory could\nnever have worked; I conclude that it's never been executed. (At\ndefault settings it would not be called until the table has exceeded\n64K entries, so I can believe we've never seen it run...)\n\n* I think the routine for deleting a hashtable is also broken, because\nit individually frees records that it did not allocate individually.\nI don't understand why this isn't making the memory management stuff\ncoredump. Maybe we never free a hashtable?\n\n* Setup of fixed-directory hashtables (ShmemInitHash) was sadly broken;\nit's really incredible that it worked at all, because it was (a)\nmisestimating the size of the space it needed to allocate and then\n(b) miscalculating where the directory should be within that space.\nAs near as I can tell, we have been running with hashtable directories\nsitting in space not actually allocated to them. Compared to this,\nthe fact that the routine also forgot to tell dynahash.c what size\ndirectory it had made hardly matters.\n\n* Several places were estimating the sizes of hashtables using code\nthat was not quite right (and assumed far more than it should've\nabout the inner structure of hashtables anyway). Also, having\n(mis)calculated the sizes of the major tables in shared memory,\nwe were requesting a total shared memory block exactly equal to\ntheir sum, with no allowance for smaller data structures (like the\nshmem index table) nor any safety factor for estimation error.\n\n\nI would like someone to check my work; if the code was really as\nbroken as I think it was, we should have been seeing more problems\nthan we were. See my changes committed last night in\n\tsrc/include/utils/hsearch.h\n\tsrc/backend/utils/hash/dynahash.c\n\tsrc/backend/storage/ipc/shmem.c\n\tsrc/backend/storage/ipc/ipci.c\n\tsrc/backend/storage/buffer/buf_init.c\n\tsrc/backend/storage/lmgr/lock.c\n\tsrc/backend/storage/smgr/mm.c\n\n\t\t\tregards, tom lane\n\nPS: I am now wondering whether Daryl Dunbar's problems might not be\ndue to the shared-memory hash table for locks getting larger than other\npeople have seen it get. Because of the errors in ShmemInitHash, I\nwould not be at all surprised to see the system fall over once that\ntable exceeds 256 entries (or some small multiple thereof).\n", "msg_date": "Mon, 22 Feb 1999 12:40:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Anyone understand shared-memory space usage? 
" }, { "msg_contents": "I wrote:\n> I would like someone to check my work; if the code was really as\n> broken as I think it was, we should have been seeing more problems\n> than we were.\n\nI spent an hour tracing through startup of 6.4.x, and I now understand\nwhy the thing doesn't crash despite the horrible bugs in ShmemInitHash.\nRead on, if you have a strong stomach.\n\nFirst off, ShmemInitHash allocates too small a chunk of space for\nthe hash header + directory (because it computes the size of the\ndirectory as log2(max_size) *bytes* not longwords). Then, it computes\nthe wrong address for the directory --- the expression\n\tinfoP->dir = (long *) (location + sizeof(HHDR));\nlooks good until you remember that location is a pointer to long not\na pointer to char. Upshot: the address computed for \"dir\" is typically\n168 bytes past the end of the space actually allocated for it.\n\nWhy is this not fatal? Well, the very next ShmemAlloc call is always\nto create the first \"segment\" of the hashtable; this is always for 1024\nbytes, so the dir pointer is no longer pointing to nowhere. It is in\nfact pointing at the 42'nd entry of its own first segment. (HHGTTG fans\ncan find deep significance in this.) In other words entry 42 of the\nhash segment points back at the segment itself.\n\nWhen you work through the logic in dynahash.c, you discover that the\nupshot of this is that (a) the segment appears to be the first item on\nits own 42'nd hash-bucket chain, and (b) the 0'th and 42'nd hash-bucket\nchains are therefore the same list, or more accurately the 0'th chain is\nthe cdr of the 42'nd chain since it doesn't appear to contain the\nsegment itself.\n\nAs long as no searched-for hash key with a hash value of 0 or 42\nhappens to match whatever the first few words of the segment are,\nthings pretty much work. The only way you'd really notice is that\nhash_seq() will report some of the hashtable records twice, and will\nalso report one completely bogus \"record\" that is the hash segment.\nOur uses of hash_seq() are apparently robust enough not to be bothered.\n\nThings don't go to hell in a handbasket until and unless the hashtable\nis expanded past 256 entries. At that point another segment is allocated\nand its pointer is stored in slot 43 of the old segment, causing all the\ntable entries that were in hashbucket 43 to instantly disappear from\nview --- they can't be found by searching the table anymore. Also,\nhashchain 43 now appears to be the same as hashchain 256 (the first \nof the new segment), but that's not going to bother anyone any worse\nthan the first duplicated chain did.\n\nI think it's entirely likely that this set of bugs can account for flaky\nbehavior seen in installations with more than 256 shared-memory buffers\n(postmaster -B > 256), more than 256 simultaneously held locks (have no\nidea how to translate that into user terms), or more than 256 concurrent\nbackends. I'm still wondering whether that might describe Daryl\nDunbar's problem with locks not getting released, for example.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Feb 1999 19:49:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Anyone understand shared-memory space usage? 
" }, { "msg_contents": "> I think it's entirely likely that this set of bugs can account for flaky\n> behavior seen in installations with more than 256 shared-memory buffers\n> (postmaster -B > 256), more than 256 simultaneously held locks (have no\n> idea how to translate that into user terms), or more than 256 concurrent\n> backends. I'm still wondering whether that might describe Daryl\n> Dunbar's problem with locks not getting released, for example.\n\nPeople have reported sloness/bugs with hash index lookups. Does this\nrelate to that?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Feb 1999 22:57:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Anyone understand shared-memory space usage?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I think it's entirely likely that this set of bugs can account for flaky\n>> behavior seen in installations with more than 256 shared-memory buffers\n>> (postmaster -B > 256), more than 256 simultaneously held locks (have no\n>> idea how to translate that into user terms), or more than 256 concurrent\n>> backends. I'm still wondering whether that might describe Daryl\n>> Dunbar's problem with locks not getting released, for example.\n\n> People have reported sloness/bugs with hash index lookups. Does this\n> relate to that?\n\nIt looks like the routines in src/backend/access/hash/ don't use the\ncode in src/backend/utils/hash/ at all, so my guess is that whatever\nbugs might lurk in hash indexes are unrelated.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Feb 1999 10:05:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Anyone understand shared-memory space usage? " } ]
[ { "msg_contents": "I have updated the code contributors page on the web site. I have added\nMichael Meskes, Tom Lane, and Jan Wieck, and removed some of the\nprevious entries.\n\nI wanted to pair down the web page list to just the people who regularly\ncontribute major pieces of code.\n\nWe still have the HISTORY file on the 'support' page, which lists\nvarious contributors for each release. \n\nIf I have missed people or the text needs to be improved, let me know.\n\n--\n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 21 Feb 1999 15:50:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Update contributors page" }, { "msg_contents": "Thus spake Bruce Momjian\n> I have updated the code contributors page on the web site. I have added\n> Michael Meskes, Tom Lane, and Jan Wieck, and removed some of the\n> previous entries.\n> \n> I wanted to pair down the web page list to just the people who regularly\n> contribute major pieces of code.\n\nI'm not sure what you consider \"major\" but I wonder why it needs to be\npared down at all. After all, to be honest, who really looks at that\npage but the contributors and their family? To me that page is more\nof an ego boost than anything else. Isn't that part of the reason we\ncontribute anyway?\n\nI say we leave in everyone who has added a comma to the project. It\ndoesn't cost anything and it's nice to acknowledge everyone's help,\nno matter how small their contribution.\n\nI see I'm still in the list (first thanks to alphabeticity :-) so\nobviously I am not arguing for my own benefit here. I just happen\nto think that lots of little things can contribute to a big project\nlike this and we should encourage any contributions we can.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sun, 21 Feb 1999 17:28:14 -0500 (EST)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Update contributors page" } ]
[ { "msg_contents": "Hi all,\n\nThe inet regression test has been failed on my LinuxPPC. While\ninvestigating the reason, I found a code that doesn't work on\nLinuxPPC. From network_broadcast() in utils/adt/network.c:\n\nint\taddr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n\nHere ip_bits() returns from (unsigned char)0 to 32. My question is:\nwhat is the correct result of (0xffffffff >> ip_bits())?\n\n1. 0x0\n2. 0xffffffff (actually does nothing)\n\nLinuxPPC is 1. FreeBSD and Solaris are 2. network_broadcast() seems to\nexpect 2. My guess is shifting over 32bit against a 32bit integer is\nnot permitted and the result is platform depedent. If this would true,\nit could be said that network_broadcast() has a portabilty\nproblem. Comments?\n---\nTatsuo Ishii\n", "msg_date": "Mon, 22 Feb 1999 11:54:39 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "inet data type regression test fails" }, { "msg_contents": "> The inet regression test has been failed on my LinuxPPC. While\n> investigating the reason, I found a code that doesn't work on\n> LinuxPPC. From network_broadcast() in utils/adt/network.c:\n> \n> int\taddr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n> \n> Here ip_bits() returns from (unsigned char)0 to 32. My question is:\n> what is the correct result of (0xffffffff >> ip_bits())?\n\nI should have said that:\n\nwhat is the correct result of (0xffffffff >> ip_bits()) if ip_bits() == 32?\n\n> 1. 0x0\n> 2. 0xffffffff (actually does nothing)\n> \n> LinuxPPC is 1. FreeBSD and Solaris are 2. network_broadcast() seems to\n> expect 2. My guess is shifting over 32bit against a 32bit integer is\n> not permitted and the result is platform depedent. If this would true,\n> it could be said that network_broadcast() has a portabilty\n> problem. Comments?\n> ---\n> Tatsuo Ishii\n> \n\n", "msg_date": "Mon, 22 Feb 1999 23:09:08 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] inet data type regression test fails " }, { "msg_contents": "> what is the correct result of\n> (0xffffffff >> ip_bits()) if ip_bits() == 32?\n> > 1. 0x0\n> > 2. 0xffffffff (actually does nothing)\n\nIn both cases, it does something. I haven't looked it up, but I suspect\nthat this is an implementation-defined result, since you are seeing the\nresults of right-shifting the sign bit *or* the high bit downward. On\nsome systems it does not propagate, and on others it does.\n\nHave you tried coercing 0xffffffff to be a signed char? The better\nsolution is probably to mask the result before comparing, or handling\nshifts greater than 31 as a special case. For example,\n\n /* It's an IP V4 address: */\n int addr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n\nbecomes\n\n /* It's an IP V4 address: */\n int addr = htonl(ntohl(ip_v4addr(ip));\n if (ip_bits(ip) < sizeof(addr))\n addr |= (0xffffffff >> ip_bits(ip)));\n\nor something like that...\n\n - Tom\n", "msg_date": "Tue, 23 Feb 1999 02:49:26 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inet data type regression test fails" }, { "msg_contents": ">> what is the correct result of\n>> (0xffffffff >> ip_bits()) if ip_bits() == 32?\n>> > 1. 0x0\n>> > 2. 0xffffffff (actually does nothing)\n>\n>In both cases, it does something. 
I haven't looked it up, but I suspect\n>that this is an implementation-defined result, since you are seeing the\n>results of right-shifting the sign bit *or* the high bit downward. On\n>some systems it does not propagate, and on others it does.\n>\n>Have you tried coercing 0xffffffff to be a signed char? The better\n>solution is probably to mask the result before comparing, or handling\n>shifts greater than 31 as a special case. For example,\n>\n> /* It's an IP V4 address: */\n> int addr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n>\n>becomes\n>\n> /* It's an IP V4 address: */\n> int addr = htonl(ntohl(ip_v4addr(ip));\n> if (ip_bits(ip) < sizeof(addr))\n> addr |= (0xffffffff >> ip_bits(ip)));\n>\n>or something like that...\n\nThank you for the advice. I concluded that current inet code has a\nportability problem. Included patches should be applied to both\ncurrent and 6.4 tree. I have tested on LinuxPPC, FreeBSD and Solaris\n2.6. Now the inet regression tests on these platforms are all happy.\n---\nTatsuo Ishii\n------------------------------------------------------------------------\n*** pgsql/src/backend/utils/adt/network.c.orig\tFri Jan 1 13:17:13 1999\n--- pgsql/src/backend/utils/adt/network.c\tTue Feb 23 21:31:41 1999\n***************\n*** 356,362 ****\n \tif (ip_family(ip) == AF_INET)\n \t{\n \t\t/* It's an IP V4 address: */\n! \t\tint\taddr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n \n \t\tif (inet_net_ntop(AF_INET, &addr, 32, tmp, sizeof(tmp)) == NULL)\n \t\t{\n--- 356,367 ----\n \tif (ip_family(ip) == AF_INET)\n \t{\n \t\t/* It's an IP V4 address: */\n! \t\tint addr;\n! \t\tunsigned long mask = 0xffffffff;\n! \n! \t\tif (ip_bits(ip) < 32)\n! \t\t\tmask >>= ip_bits(ip);\n! \t\taddr = htonl(ntohl(ip_v4addr(ip)) | mask);\n \n \t\tif (inet_net_ntop(AF_INET, &addr, 32, tmp, sizeof(tmp)) == NULL)\n \t\t{\n", "msg_date": "Wed, 24 Feb 1999 12:02:57 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] inet data type regression test fails " }, { "msg_contents": "Applied.\n\n\n> >> what is the correct result of\n> >> (0xffffffff >> ip_bits()) if ip_bits() == 32?\n> >> > 1. 0x0\n> >> > 2. 0xffffffff (actually does nothing)\n> >\n> >In both cases, it does something. I haven't looked it up, but I suspect\n> >that this is an implementation-defined result, since you are seeing the\n> >results of right-shifting the sign bit *or* the high bit downward. On\n> >some systems it does not propagate, and on others it does.\n> >\n> >Have you tried coercing 0xffffffff to be a signed char? The better\n> >solution is probably to mask the result before comparing, or handling\n> >shifts greater than 31 as a special case. For example,\n> >\n> > /* It's an IP V4 address: */\n> > int addr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n> >\n> >becomes\n> >\n> > /* It's an IP V4 address: */\n> > int addr = htonl(ntohl(ip_v4addr(ip));\n> > if (ip_bits(ip) < sizeof(addr))\n> > addr |= (0xffffffff >> ip_bits(ip)));\n> >\n> >or something like that...\n> \n> Thank you for the advice. I concluded that current inet code has a\n> portability problem. Included patches should be applied to both\n> current and 6.4 tree. I have tested on LinuxPPC, FreeBSD and Solaris\n> 2.6. 
Now the inet regression tests on these platforms are all happy.\n> ---\n> Tatsuo Ishii\n> ------------------------------------------------------------------------\n> *** pgsql/src/backend/utils/adt/network.c.orig\tFri Jan 1 13:17:13 1999\n> --- pgsql/src/backend/utils/adt/network.c\tTue Feb 23 21:31:41 1999\n> ***************\n> *** 356,362 ****\n> \tif (ip_family(ip) == AF_INET)\n> \t{\n> \t\t/* It's an IP V4 address: */\n> ! \t\tint\taddr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n> \n> \t\tif (inet_net_ntop(AF_INET, &addr, 32, tmp, sizeof(tmp)) == NULL)\n> \t\t{\n> --- 356,367 ----\n> \tif (ip_family(ip) == AF_INET)\n> \t{\n> \t\t/* It's an IP V4 address: */\n> ! \t\tint addr;\n> ! \t\tunsigned long mask = 0xffffffff;\n> ! \n> ! \t\tif (ip_bits(ip) < 32)\n> ! \t\t\tmask >>= ip_bits(ip);\n> ! \t\taddr = htonl(ntohl(ip_v4addr(ip)) | mask);\n> \n> \t\tif (inet_net_ntop(AF_INET, &addr, 32, tmp, sizeof(tmp)) == NULL)\n> \t\t{\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Feb 1999 22:16:24 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inet data type regression test fails" }, { "msg_contents": "> Hi all,\n> \n> The inet regression test has been failed on my LinuxPPC. While\n> investigating the reason, I found a code that doesn't work on\n> LinuxPPC. From network_broadcast() in utils/adt/network.c:\n> \n> int\taddr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n> \n> Here ip_bits() returns from (unsigned char)0 to 32. My question is:\n> what is the correct result of (0xffffffff >> ip_bits())?\n> \n> 1. 0x0\n> 2. 0xffffffff (actually does nothing)\n> \n> LinuxPPC is 1. FreeBSD and Solaris are 2. network_broadcast() seems to\n> expect 2. My guess is shifting over 32bit against a 32bit integer is\n> not permitted and the result is platform depedent. If this would true,\n> it could be said that network_broadcast() has a portabilty\n> problem. Comments?\n\nIf 0xffffff is unsigned, it should allow the right shift. When you say\n1 or 2, how do you get those values?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Mar 1999 10:25:25 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inet data type regression test fails" }, { "msg_contents": "> > The inet regression test has been failed on my LinuxPPC. While\n> > investigating the reason, I found a code that doesn't work on\n> > LinuxPPC. From network_broadcast() in utils/adt/network.c:\n> > \n> > int\taddr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n> > \n> > Here ip_bits() returns from (unsigned char)0 to 32. My question is:\n> > what is the correct result of (0xffffffff >> ip_bits())?\n> > \n> > 1. 0x0\n> > 2. 0xffffffff (actually does nothing)\n> > \n> > LinuxPPC is 1. FreeBSD and Solaris are 2. network_broadcast() seems to\n> > expect 2. My guess is shifting over 32bit against a 32bit integer is\n> > not permitted and the result is platform depedent. If this would true,\n> > it could be said that network_broadcast() has a portabilty\n> > problem. 
Comments?\n> \n> If 0xffffff is unsigned, it should allow the right shift. \n\nNo. it does not depend on if 0xffffffff is signed or not. Suppose a\nis signed and b is unsigned. In \"a >> b\", before doing an actual\nshifting operation, a is \"upgraded\" to unsigned by the compiler.\n\n>When you say\n> 1 or 2, how do you get those values?\n\nYou could observe the \"32 bit shift efect\" I mentioned in the previous\nmail by running following small program.\n\nmain()\n{\n unsigned char c;\n for (c = 0;c <=32;c++) {\n printf(\"shift: %d result: 0x%08x\\n\",c,0xffffffff >> c);\n }\n}\n---\nTatsuo Ishii\n", "msg_date": "Tue, 16 Mar 1999 10:35:04 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] inet data type regression test fails " }, { "msg_contents": "\nCan someone comment on this one? Is it fixed?\n\n\n> Hi all,\n> \n> The inet regression test has been failed on my LinuxPPC. While\n> investigating the reason, I found a code that doesn't work on\n> LinuxPPC. From network_broadcast() in utils/adt/network.c:\n> \n> int\taddr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n> \n> Here ip_bits() returns from (unsigned char)0 to 32. My question is:\n> what is the correct result of (0xffffffff >> ip_bits())?\n> \n> 1. 0x0\n> 2. 0xffffffff (actually does nothing)\n> \n> LinuxPPC is 1. FreeBSD and Solaris are 2. network_broadcast() seems to\n> expect 2. My guess is shifting over 32bit against a 32bit integer is\n> not permitted and the result is platform depedent. If this would true,\n> it could be said that network_broadcast() has a portabilty\n> problem. Comments?\n> ---\n> Tatsuo Ishii\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 May 1999 10:56:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inet data type regression test fails" }, { "msg_contents": "> Can someone comment on this one? Is it fixed?\n> > The inet regression test has been failed on my LinuxPPC. \n> > My guess is shifting over 32bit against a 32bit integer is\n> > not permitted and the result is platform depedent.\n\nYes, it is fixed. You applied the patches :)\n\nbackend/utils/adt/network.c:\nrevision 1.6\ndate: 1999/02/24 03:17:05; author: momjian; state: Exp; lines: +7\n-2\nThank you for the advice. I concluded that current inet code has a\nportability problem. Included patches should be applied to both\ncurrent and 6.4 tree. I have tested on LinuxPPC, FreeBSD and Solaris\n2.6. Now the inet regression tests on these platforms are all happy.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 09 May 1999 15:24:13 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inet data type regression test fails" }, { "msg_contents": "On Sun, 9 May 1999, Bruce Momjian wrote:\n\n> > int\taddr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n\nThere needs to be a UL on the end of that constant. Otherwise it depends\non whether or not the compiler chooses to make it signed or unsigned. Not\nonly that, but shifting by >=32 is undefined... 
Intel chipsets will go mod\n32 and change 32 to 0.\n\nTaral\n\n", "msg_date": "Sun, 9 May 1999 15:51:10 -0500 (CDT)", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inet data type regression test fails" }, { "msg_contents": "> On Sun, 9 May 1999, Bruce Momjian wrote:\n> \n> > > int\taddr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n> \n> There needs to be a UL on the end of that constant. Otherwise it depends\n> on whether or not the compiler chooses to make it signed or unsigned. Not\n> only that, but shifting by >=32 is undefined... Intel chipsets will go mod\n> 32 and change 32 to 0.\n> \n\nAnyone want to supply a patch?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 14:28:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inet data type regression test fails" }, { "msg_contents": "> > > > int\taddr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n> > \n> > There needs to be a UL on the end of that constant. Otherwise it depends\n> > on whether or not the compiler chooses to make it signed or unsigned. Not\n> > only that, but shifting by >=32 is undefined... Intel chipsets will go mod\n> > 32 and change 32 to 0.\n> > \n> \n> Anyone want to supply a patch?\n\nThis has been already fixed. Now it looks like:\n\n\t\tunsigned long mask = 0xffffffff;\n\n\t\tif (ip_bits(ip) < 32)\n\t\t\tmask >>= ip_bits(ip);\n\t\taddr = htonl(ntohl(ip_v4addr(ip)) | mask);\n---\nTatsuo Ishii\n", "msg_date": "Tue, 11 May 1999 09:54:25 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] inet data type regression test fails " }, { "msg_contents": "> > > > > int\taddr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n> > > \n> > > There needs to be a UL on the end of that constant. Otherwise it depends\n> > > on whether or not the compiler chooses to make it signed or unsigned. Not\n> > > only that, but shifting by >=32 is undefined... Intel chipsets will go mod\n> > > 32 and change 32 to 0.\n> > > \n> > \n> > Anyone want to supply a patch?\n> \n> This has been already fixed. Now it looks like:\n> \n> \t\tunsigned long mask = 0xffffffff;\n> \n> \t\tif (ip_bits(ip) < 32)\n> \t\t\tmask >>= ip_bits(ip);\n> \t\taddr = htonl(ntohl(ip_v4addr(ip)) | mask);\n\nOh. Very nice.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 21:05:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inet data type regression test fails" }, { "msg_contents": "On Tue, 11 May 1999, Tatsuo Ishii wrote:\n\n> \t\tunsigned long mask = 0xffffffff;\n> \n> \t\tif (ip_bits(ip) < 32)\n> \t\t\tmask >>= ip_bits(ip);\n> \t\taddr = htonl(ntohl(ip_v4addr(ip)) | mask);\n\nThat's wrong too. 
There needs to be:\n\nelse\n\tmask = 0;\n\nTaral\n\n", "msg_date": "Mon, 10 May 1999 20:05:28 -0500 (CDT)", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inet data type regression test fails " }, { "msg_contents": ">On Tue, 11 May 1999, Tatsuo Ishii wrote:\n>\n>> \t\tunsigned long mask = 0xffffffff;\n>> \n>> \t\tif (ip_bits(ip) < 32)\n>> \t\t\tmask >>= ip_bits(ip);\n>> \t\taddr = htonl(ntohl(ip_v4addr(ip)) | mask);\n>\n>That's wrong too. There needs to be:\n>\n>else\n>\tmask = 0;\n>\n>Taral\n\nNo. it is expected addr == 0xffffffff if ip_bits() returns >= 32. This \nis how the function (network_broadcast()) is made.\nSee included posting.\n\n>From: Tatsuo Ishii <[email protected]>\n>To: [email protected]\n>Subject: [HACKERS] inet data type regression test fails\n>Date: Mon, 22 Feb 1999 11:54:39 +0900\n>\n>Hi all,\n>\n>The inet regression test has been failed on my LinuxPPC. While\n>investigating the reason, I found a code that doesn't work on\n>LinuxPPC. From network_broadcast() in utils/adt/network.c:\n>\n>int\taddr = htonl(ntohl(ip_v4addr(ip)) | (0xffffffff >> ip_bits(ip)));\n>\n>Here ip_bits() returns from (unsigned char)0 to 32. My question is:\n>what is the correct result of (0xffffffff >> ip_bits())?\n>\n>1. 0x0\n>2. 0xffffffff (actually does nothing)\n>\n>LinuxPPC is 1. FreeBSD and Solaris are 2. network_broadcast() seems to\n>expect 2. My guess is shifting over 32bit against a 32bit integer is\n>not permitted and the result is platform depedent. If this would true,\n>it could be said that network_broadcast() has a portabilty\n>problem. Comments?\n>---\n>Tatsuo Ishii\n>\n>\n", "msg_date": "Tue, 11 May 1999 10:22:16 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] inet data type regression test fails " }, { "msg_contents": "On Tue, 11 May 1999, Tatsuo Ishii wrote:\n\n> >On Tue, 11 May 1999, Tatsuo Ishii wrote:\n> >\n> >> \t\tunsigned long mask = 0xffffffff;\n> >> \n> >> \t\tif (ip_bits(ip) < 32)\n> >> \t\t\tmask >>= ip_bits(ip);\n> >> \t\taddr = htonl(ntohl(ip_v4addr(ip)) | mask);\n\n> No. it is expected addr == 0xffffffff if ip_bits() returns >= 32. This \n> is how the function (network_broadcast()) is made.\n> See included posting.\n\nip_bits(ip) = 0 => mask = 0xffffffff\nip_bits(ip) = 31 => mask = 1\nip_bits(ip) = 32 => mask = 0xffffffff\n\nYou sure?\n\nTaral\n\n", "msg_date": "Mon, 10 May 1999 20:48:54 -0500 (CDT)", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inet data type regression test fails " }, { "msg_contents": ">> >> \t\tunsigned long mask = 0xffffffff;\n>> >> \n>> >> \t\tif (ip_bits(ip) < 32)\n>> >> \t\t\tmask >>= ip_bits(ip);\n>> >> \t\taddr = htonl(ntohl(ip_v4addr(ip)) | mask);\n>\n>> No. it is expected addr == 0xffffffff if ip_bits() returns >= 32. This \n>> is how the function (network_broadcast()) is made.\n>> See included posting.\n>\n>ip_bits(ip) = 0 => mask = 0xffffffff\n>ip_bits(ip) = 31 => mask = 1\n>ip_bits(ip) = 32 => mask = 0xffffffff\n>\n>You sure?\n\nYes. That's exactly what I expected.\n---\nTatsuo Ishii\n\n", "msg_date": "Tue, 11 May 1999 19:13:59 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] inet data type regression test fails " } ]
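The fix this thread converged on guards the shift instead of relying on what a 32-bit shift by 32 happens to do on a given platform. A standalone sketch of that guarded mask, in the spirit of Tatsuo's test program (the function name here is invented; the committed code is the network.c patch quoted earlier in the thread):

    #include <stdio.h>

    /* shifting a 32-bit value by >= 32 is undefined in C, so never let
     * bits >= 32 reach the shift; the mask then stays all-ones, which is
     * the behavior network_broadcast() expects for ip_bits() == 32 */
    static unsigned long
    v4mask(unsigned char bits)
    {
        unsigned long mask = 0xffffffff;

        if (bits < 32)
            mask >>= bits;
        return mask;
    }

    int
    main(void)
    {
        unsigned char bits;

        for (bits = 0; bits <= 32; bits++)
            printf("bits: %2d mask: 0x%08lx\n", bits, v4mask(bits));
        return 0;
    }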
[ { "msg_contents": "> Modified Files:\n> rules.out\n> Log Message:\n> change pgsql to postgres\n\nI have some patches which will change rules.sql to avoid using\ngetpgusername() altogether, since that returns an installation-dependent\nresult.\n\nWill apply sometime soon, along with some other changes laying a bit of\ngroundwork for outer joins so you can start on the planner/optimizer\nparts :)\n\n - Tom\n", "msg_date": "Mon, 22 Feb 1999 02:58:46 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [COMMITTERS] 'pgsql/src/test/regress/expected rules.out'" }, { "msg_contents": "> > Modified Files:\n> > rules.out\n> > Log Message:\n> > change pgsql to postgres\n> \n> I have some patches which will change rules.sql to avoid using\n> getpgusername() altogether, since that returns an installation-dependent\n> result.\n> \n> Will apply sometime soon, along with some other changes laying a bit of\n> groundwork for outer joins so you can start on the planner/optimizer\n> parts :)\n\nThose will be a synch now that I understand the optimizer. In fact, I\nthink it all will happen in the executor.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 21 Feb 1999 22:04:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] 'pgsql/src/test/regress/expected rules.out'" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Will apply ... some other changes laying a bit of\n> > groundwork for outer joins so you can start on the planner/optimizer\n> > parts :)\n> Those will be a synch now that I understand the optimizer. In fact, I\n> think it all will happen in the executor.\n\nI've modified executor/nodeMergeJoin.c to walk a left/right/both outer\njoin, but didn't fill in the part which actually creates the result\ntuple (which will be the current left- or right-side tuple plus nulls\nfor filler). I hope this is up your alley :)\n\nSo far, I'm not certain what to pass to the planner. The syntax leads me\nto pass a select structure from gram.y with a \"JoinExpr\" structure in\nthe \"fromClause\" list. I need to expand that with a combination of\ncolumn names and qualifications, but at the time I see the JoinExpr I\ndon't have access to the top query structure itself. So I may just keep\na modestly transformed JoinExpr to expand later or to pass to the\nplanner.\n\nbtw, the EXCEPT/INTERSECT stuff from Stefan has some ugliness in gram.y\nwhich needs to be fixed (the shift/reduce conflict is not acceptable for\nour release version) and some of that code clearly needs to move to\nanalyze.c or some other module.\n\n - Tom\n", "msg_date": "Mon, 22 Feb 1999 06:56:36 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: start on outer join" }, { "msg_contents": ">\n> > Modified Files:\n> > rules.out\n> > Log Message:\n> > change pgsql to postgres\n>\n> I have some patches which will change rules.sql to avoid using\n> getpgusername() altogether, since that returns an installation-dependent\n> result.\n>\n> Will apply sometime soon, along with some other changes laying a bit of\n> groundwork for outer joins so you can start on the planner/optimizer\n> parts :)\n\n Highly appreciated!\n\n I know it was my fault and that it would have been my job to\n fix it. 
But since I've installed glibc-2 (libc6) and\n gcc-2.8.1, many regressions fail due to floating point and\n error message diff's.\n\n Thanks.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 22 Feb 1999 11:01:23 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [COMMITTERS] 'pgsql/src/test/regress/expected\n\trules.out'" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > > Will apply ... some other changes laying a bit of\n> > > groundwork for outer joins so you can start on the planner/optimizer\n> > > parts :)\n> > Those will be a synch now that I understand the optimizer. In fact, I\n> > think it all will happen in the executor.\n> \n> I've modified executor/nodeMergeJoin.c to walk a left/right/both outer\n> join, but didn't fill in the part which actually creates the result\n> tuple (which will be the current left- or right-side tuple plus nulls\n> for filler). I hope this is up your alley :)\n\nNested loop and hash have to be done too.\n\n> \n> So far, I'm not certain what to pass to the planner. The syntax leads me\n> to pass a select structure from gram.y with a \"JoinExpr\" structure in\n> the \"fromClause\" list. I need to expand that with a combination of\n> column names and qualifications, but at the time I see the JoinExpr I\n> don't have access to the top query structure itself. So I may just keep\n> a modestly transformed JoinExpr to expand later or to pass to the\n> planner.\n\nCan we just set a flag in the RangeTblEntry to indicate if it is an\nOUTER join?\n\n> btw, the EXCEPT/INTERSECT stuff from Stefan has some ugliness in gram.y\n> which needs to be fixed (the shift/reduce conflict is not acceptable for\n> our release version) and some of that code clearly needs to move to\n> analyze.c or some other module.\n\nYes. I agree. Got Vadim's stuff merged into Stephan's code. I think a\nreview of the actual patch is the only solution. It is in the patches list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Feb 1999 11:37:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: start on outer join" }, { "msg_contents": "> I know it was my fault and that it would have been my job to\n> fix it. But since I've installed glibc-2 (libc6) and\n> gcc-2.8.1, many regressions fail due to floating point and\n> error message diff's.\n\nYeah, that seems to be a problem (or at least an annoyance). I've got\negcs-2.91.57 installed, but for running regression tests I go back to\ngcc-2.7.2.1 to get the rounding behavior back as it used to be.\n\n - Tom\n", "msg_date": "Tue, 23 Feb 1999 02:58:44 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [COMMITTERS] 'pgsql/src/test/regress/expected\n\trules.out'" } ]
[ { "msg_contents": "I have completed most of the cleanups I want to do with the optimizer.\nIt is much improved.\n\nIt passes the regression tests.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Feb 1999 00:34:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "optimizer cleanup" }, { "msg_contents": "> I have completed most of the cleanups I want to do with the optimizer.\n> It is much improved.\n> It passes the regression tests.\n\nAre you seeing misc.sql pass in the regression test? I've got a tree\nwhich is a few days old, but four warnings of the form:\n\n NOTICE: Non-functional update, only first update is performed\n\nhave disappeared. I'm worried that there might be some query analysis\nwhich is no longer happening...\n\n - Tom\n", "msg_date": "Mon, 22 Feb 1999 07:05:54 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] optimizer cleanup" }, { "msg_contents": "> > I have completed most of the cleanups I want to do with the optimizer.\n> > It is much improved.\n> > It passes the regression tests.\n> \n> Are you seeing misc.sql pass in the regression test? I've got a tree\n> which is a few days old, but four warnings of the form:\n> \n> NOTICE: Non-functional update, only first update is performed\n> \n> have disappeared. I'm worried that there might be some query analysis\n> which is no longer happening...\n\nmisc fails here with those missing warnings, but the warnings have not\nbeen here in a long time. My regression test logs go back to Jan 29,\nand at that time the warning lines where missing too.\n\nIt is an UPDATE with a join that caused the old warnings.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Feb 1999 11:42:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] optimizer cleanup" } ]
[ { "msg_contents": "I have added a List manipulation section to the developers FAQ on the\nweb site.\n\nDevelopers, please keep in mind that lcons adds to the front of the\nlist, so it is quicker than lappend, if the order of the List is not\nimportant.\n\nI am going to go through tomorrow and see which lappends I can convert\nto lcons.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Feb 1999 01:28:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "lappend vs. lcons" } ]
[ { "msg_contents": "Check out http://www.bitkeeper.com/ -- it's about time someone wrote something\nbetter than sccs/cvs/rcs.\n\nTaral\n", "msg_date": "Mon, 22 Feb 1999 00:32:07 -0600", "msg_from": "Taral <[email protected]>", "msg_from_op": true, "msg_subject": "New source control system" } ]
[ { "msg_contents": "Dear Friends,\n\tI am working testing a new database buffer management algorithm. I feel\nlost among the lot of directories and files. Could you please guide me\nto the directories and files that I should look for to start my\nexperiments. I am interested in the buffer manager code and the place\nwhere compile analysis for the application can be done. Thank you for\nconcern and looking forward to hearing from you.\nSincerely,\nMohamed Hefny\nGraduate Student and Research Assitant\nThe Computer Science Department\nThe American University in Cairo\n", "msg_date": "Mon, 22 Feb 1999 11:26:37 +0200", "msg_from": "Mohamed Hefny <[email protected]>", "msg_from_op": true, "msg_subject": "Playing with postgres" } ]
[ { "msg_contents": "I'm currently developing a postgres DB backend to my current project and have\nthe following problem. If I execute a query I need to know the table the\nreturning fields belong to. For example :\n\nSELECT x.y, z.y from x, y where x.key = z.key\n\nThe problem with the above is that I get back 2 fields called 'y' and have no\nway of knowing from which tables they've come from. The query is arbitrary so I\ncannot assume table order in the SQL statement.\n\nWhat information from the 'C' or 'C++' APIs can help ?\n\nIs there is a way of identifying the originating tables ?\n\nThanks\n Philip \n--------------------------------------------------------------------------\nPhilip Shiels E-Mail:[email protected] JRC Ispra, Italy, TP270\nGIST:http://gist.jrc.it CEO:http://www.ceo.org GEM:http://gem.jrc.it\n", "msg_date": "Mon, 22 Feb 1999 11:36:59 +0100", "msg_from": "Philip Shiels <[email protected]>", "msg_from_op": true, "msg_subject": "Tables names from query" } ]
[ { "msg_contents": "Hello!\n\n I am trying to execute query:\nSELECT city_id, COUNT(DISTINCT pos_id)\n ...\nGROUP BY city_id ;\n\n but got the error:\nERROR: parser: parse error at or near \"distinct\"\n\n It is me who do not understand SQL or postgres does not implement it?\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Mon, 22 Feb 1999 16:22:31 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "SELECT COUNT(DISTINCT)" }, { "msg_contents": "I was in need of that exact feature awhile back. As far as I can tell it\nisn't supported. I ended up using a complicated NOT EXISTS query instead.\nI've been meaning to hack this, but I haven't gotten a chance. \n\n-Andy\n\n\nOn Mon, 22 Feb 1999, Oleg Broytmann wrote:\n\n> Hello!\n> \n> I am trying to execute query:\n> SELECT city_id, COUNT(DISTINCT pos_id)\n> ...\n> GROUP BY city_id ;\n> \n> but got the error:\n> ERROR: parser: parse error at or near \"distinct\"\n> \n> It is me who do not understand SQL or postgres does not implement it?\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n> \n> \n\n", "msg_date": "Mon, 22 Feb 1999 08:46:09 -0500 (EST)", "msg_from": "Andy Selle <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SELECT COUNT(DISTINCT)" }, { "msg_contents": "> I am trying to execute query:\n> SELECT city_id, COUNT(DISTINCT pos_id)\n> ...\n> GROUP BY city_id ;\n> It is me who do not understand SQL or postgres does not implement it?\n\nAs Andy points out, it is not (yet) implemented. afaik no one is working\non it currently.\n\n - Tom\n", "msg_date": "Tue, 23 Feb 1999 03:13:07 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SELECT COUNT(DISTINCT)" } ]
[ { "msg_contents": "Hi!\n\n I ran a query:\nSELECT p.subsec_id, p.pos_id, cn.pos_id\n FROM central cn, shops sh, districts d, positions p\n WHERE cn.shop_id = sh.shop_id AND sh.distr_id = d.distr_id\n AND d.city_id = 1 ;\n\nand got a huge list, where, are, e.g:\nsubsec_id|pos_id|pos_id\n---------+------+------\n 1| 1| 1\n 1| 1| 1\n 1| 1| 1\n 1| 1| 1\n [skipped]\n 1| 2| 1\n 1| 2| 2\n 1| 2| 2\n 1| 2| 2\n\nand so on.\n\n I modified the query to exclude rows:\n\nSELECT p.subsec_id, p.pos_id, cn.pos_id\n FROM central cn, shops sh, districts d, positions p\n WHERE cn.shop_id = sh.shop_id AND sh.distr_id = d.distr_id\n AND d.city_id = 1 AND cn.pos_id = p.pos_id ;\n\nbut got 0 rows as a result. I expected:\nsubsec_id|pos_id|pos_id\n---------+------+------\n 1| 1| 1\n [skipped]\n 1| 2| 2\n\n Is it a bug?\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Mon, 22 Feb 1999 18:24:31 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with complex join" } ]
[ { "msg_contents": "Since there appears to be a one to one relationship between backend\nprocesses and connected users, what options are there for shops that have\nmore than 64 users?\n\n\t-----Original Message-----\n\tFrom:\tTom Lane [SMTP:[email protected]]\n\tSent:\tMonday, February 22, 1999 8:10 AM\n\tTo:\tBruce Momjian\n\tCc:\[email protected]\n\tSubject:\tRe: [HACKERS] Re: Max backend limits cleaned up \n\n\tBruce Momjian <[email protected]> writes:\n\t> I am getting:\n\t> IpcSemaphoreCreate: semget failed (No space left on device)\nkey=5432017,\n\t> num=16, permission=600\n\t> [ later ]\n\t> I got it working by adding a -N 32 to the postmaster startup.\nLooks\n\t> like my site BSD/OS can't start 64 backends. Some of my\nconfiguration\n\t> is wrong. Perhaps we need 32 as the default.\n\n\tYeah, I was thinking about that myself. I left the default -N\nsetting\n\tat 64 on the theory that people who had gone to the trouble of\nmaking\n\tsure they had proper kernel configurations should not get surprised\nby\n\tv6.5 suddenly reducing the default number-of-backends limit.\n\n\tOn the other hand, we have reason to believe that a lot of systems\nare\n\tnot configured to allow Postgres to grab 64 semaphores, so if we\ndon't\n\treduce the default -N value we will almost certainly see a lot of\ngripes\n\tjust like the above when people move to 6.5. (I think -N 32 would\nwork\n\tas a default on minimally-configured systems, but cannot prove it.)\n\n\tI haven't got a real strong feeling either way. Opinions?\n\n\t\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Feb 1999 10:04:35 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: Max backend limits cleaned up " } ]
[ { "msg_contents": "I updated the cvs. Forgot to update the web site. Done now. You\nshould see the updated list now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Feb 1999 16:28:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Updated developers list" }, { "msg_contents": "I sorry to post this message to the Hackers list, but I can't seem to \nsubscribe to any of the other lists. \nAn email like\n\nTo: [email protected]\nSubject: subscribe\nsubscribe pgsql-sql\n\ndoesn't have any affect.\n\nAnyway, does anyone know when \"alter table tn drop column attr type\"\nwill be implemented?\nThanks,\nRich.\n\n", "msg_date": "Mon, 22 Feb 1999 18:31:47 -0800 (PST)", "msg_from": "RHS Linux User <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated developers list" }, { "msg_contents": "> I sorry to post this message to the Hackers list, but I can't seem to \n> subscribe to any of the other lists. \n> An email like\n> \n> To: [email protected]\n> Subject: subscribe\n> subscribe pgsql-sql\n> \n> doesn't have any affect.\n> \n> Anyway, does anyone know when \"alter table tn drop column attr type\"\n> will be implemented?\n\nWorkaround in FAQ.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Feb 1999 22:01:27 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Updated developers list" }, { "msg_contents": "\nOn Mon, 22 Feb 1999, Bruce Momjian wrote:\n\n> > I sorry to post this message to the Hackers list, but I can't seem to \n> > subscribe to any of the other lists. \n> > An email like\n> > \n> > To: [email protected]\n> > Subject: subscribe\n> > subscribe pgsql-sql\n> > \n> > doesn't have any affect.\n> > \n> > Anyway, does anyone know when \"alter table tn drop column attr type\"\n> > will be implemented?\n> \n> Workaround in FAQ.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n\nSorry for the stupid question. :-)\n\nAnyway, I've been using postgresql on usedcars.com for 3-4 months now \n(1-2 queries/sec, with RH Linux 5, using jdbc driver), and it works much \nbetter than anything else \nI've tried. I've had my share of small to med. problems, but as long as I \ndon't abuse it in my Java code, it almost always works like a champ. Good \njob guys. \nI'm probably interested enough now to start contributing. (Probably to the \njdbc driver) \nRich.\n\nPS, The other subscribes DID work (read above), I was just too \nimpatient. :-) \n\n", "msg_date": "Wed, 24 Feb 1999 00:18:36 -0800 (PST)", "msg_from": "RHS Linux User <[email protected]>", "msg_from_op": false, "msg_subject": "Praise" } ]
[ { "msg_contents": "I have again updated it to show List manipulation examples, so people\ncan better understand the code.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Feb 1999 17:16:35 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Developers FAQ" } ]
[ { "msg_contents": "Hello hackers...\n\nEnclosed below I have a patch to allow a btree index on the int8 type. \n\nI would like some feedback on what the hash function for the int8 hash function \nin the ./backend/access/hash/hashfunc.c should return.\n\nAlso, could someone (maybe Tomas Lockhart?) look-over the patch and make sure \nthe system table entries are correct? I've tried to research them as much as I \ncould, but some of them are still not clear to me.\n\nThanks,\n-Ryan\n\nP.S. I claimed the following OID's for this implimentation:\n\t754 and 842 (for the btree index)\n\t949 for the hash index (not totally implimented yet.)\n\nI got these by using the ./include/catalog/unused_oids script. I hope they were \nnot being resererved for something els4.\n\n\n*** ./backend/access/hash/hashfunc.c.orig\tMon Feb 22 15:30:41 1999\n--- ./backend/access/hash/hashfunc.c\tMon Feb 22 15:31:32 1999\n***************\n*** 32,37 ****\n--- 32,49 ----\n \treturn ~key;\n }\n \n+ /*\n+ * I'm not sure how to impliment this hash function\n+ * -Ryan (2/22/1999)\n+ */\n+ #ifdef NOT_USED\n+ uint32\n+ hashint8(uint64 *key)\n+ {\n+ \treturn ~((uint32)key);\n+ }\n+ #endif /* NOT_USED */\n+ \n /* Hash function from Chris Torek. */\n uint32\n hashfloat4(float32 keyp)\n*** ./backend/access/nbtree/nbtcompare.c.orig\tMon Feb 22 14:14:56 1999\n--- ./backend/access/nbtree/nbtcompare.c\tMon Feb 22 15:02:41 1999\n***************\n*** 40,45 ****\n--- 40,56 ----\n }\n \n int32\n+ btint8cmp(int64 *a, int64 *b)\n+ {\n+ \tif (*a > *b)\n+ \t\treturn 1;\n+ \telse if (*a == *b)\n+ \t\treturn 0;\n+ \telse\n+ \t\treturn -1;\n+ }\n+ \n+ int32\n btint24cmp(int16 a, int32 b)\n {\n \treturn ((int32) a) - b;\n*** ./include/catalog/pg_amop.h.orig\tMon Feb 22 14:14:37 1999\n--- ./include/catalog/pg_amop.h\tMon Feb 22 15:42:14 1999\n***************\n*** 168,173 ****\n--- 168,183 ----\n DATA(insert OID = 0 ( 403 426 521 5 btreesel btreenpage ));\n \n /*\n+ *\tnbtree int8_ops\n+ */\n+ \n+ DATA(insert OID = 0 ( 403 754 412 1 btreesel btreenpage ));\n+ DATA(insert OID = 0 ( 403 754 414 2 btreesel btreenpage ));\n+ DATA(insert OID = 0 ( 403 754 410 3 btreesel btreenpage ));\n+ DATA(insert OID = 0 ( 403 754 415 4 btreesel btreenpage ));\n+ DATA(insert OID = 0 ( 403 754 413 5 btreesel btreenpage ));\n+ \n+ /*\n *\tnbtree oid_ops\n */\n \n***************\n*** 338,343 ****\n--- 348,364 ----\n DATA(insert OID = 0 ( 405\t423 670 1 hashsel hashnpage ));\n /* int4_ops */\n DATA(insert OID = 0 ( 405\t426 96 1 hashsel hashnpage ));\n+ \n+ /*\n+ * Add this when I figure out the int8 hash function.\n+ * -Ryan (2/22/1999)\n+ */\n+ \n+ #ifdef NOT_USED\n+ /* int8_ops */\n+ /* DATA(insert OID = 0 ( 405\t426 96 1 hashsel hashnpage )); */\n+ #endif\n+ \n /* oid_ops */\n DATA(insert OID = 0 ( 405\t427 607 1 hashsel hashnpage ));\n /* oid8_ops */\n*** ./include/catalog/pg_amproc.h.orig\tMon Feb 22 14:14:27 1999\n--- ./include/catalog/pg_amproc.h\tMon Feb 22 14:57:54 1999\n***************\n*** 92,97 ****\n--- 92,98 ----\n DATA(insert OID = 0 (403 435 404 1));\n DATA(insert OID = 0 (403 436 948 1));\n DATA(insert OID = 0 (403 437 828 1));\n+ DATA(insert OID = 0 (403 754 842 1));\n DATA(insert OID = 0 (403 1076 1078 1));\n DATA(insert OID = 0 (403 1077 1079 1));\n DATA(insert OID = 0 (403 1114 1092 1));\n*** ./include/catalog/pg_opclass.h.orig\tMon Feb 22 14:13:53 1999\n--- ./include/catalog/pg_opclass.h\tMon Feb 22 14:26:33 1999\n***************\n*** 93,98 ****\n--- 93,100 ----\n DESCR(\"\");\n DATA(insert OID = 714 (\tcircle_ops\t\t718 ));\n DESCR(\"\");\n+ 
DATA(insert OID = 754 (\tint8_ops\t\t 20 ));\n+ DESCR(\"\");\n DATA(insert OID = 1076 (\tbpchar_ops\t 1042 ));\n DESCR(\"\");\n DATA(insert OID = 1077 (\tvarchar_ops 1043 ));\nNo differences encountered\n*** ./include/catalog/pg_proc.h.orig\tMon Feb 22 14:14:16 1999\n--- ./include/catalog/pg_proc.h\tMon Feb 22 15:41:57 1999\n***************\n*** 735,740 ****\n--- 735,742 ----\n DESCR(\"btree less-equal-greater\");\n DATA(insert OID = 351 ( btint4cmp\t\t PGUID 11 f t f 2 f 23 \"23 23\" \n100 0 0 100 foo bar ));\n DESCR(\"btree less-equal-greater\");\n+ DATA(insert OID = 842 ( btint8cmp\t\t PGUID 11 f t f 2 f 23 \"20 20\" \n100 0 0 100 foo bar ));\n+ DESCR(\"btree less-equal-greater\");\n DATA(insert OID = 352 ( btint42cmp\t\t PGUID 11 f t f 2 f 23 \"23 21\" \n100 0 0 100 foo bar ));\n DESCR(\"btree less-equal-greater\");\n DATA(insert OID = 353 ( btint24cmp\t\t PGUID 11 f t f 2 f 23 \"21 23\" \n100 0 0 100 foo bar ));\n***************\n*** 821,826 ****\n--- 823,838 ----\n DESCR(\"hash\");\n DATA(insert OID = 450 ( hashint4\t\t PGUID 11 f t f 2 f 23 \"23 23\" \n100 0 0 100 foo bar ));\n DESCR(\"hash\");\n+ \n+ /*\n+ * Add this when I figure out the int8 hash function.\n+ * -Ryan (2/22/1999)\n+ */\n+ #ifdef NOT_USED\n+ /* DATA(insert OID = 949 ( hashint8\t\t PGUID 11 f t f 2 f 23 \"20 20\" \n100 0 0 100 foo bar )); */\n+ /* DESCR(\"hash\"); */\n+ #endif /* NOT_USED */\n+ \n DATA(insert OID = 451 ( hashfloat4\t\t PGUID 11 f t f 2 f 23 \"700 \n700\" 100 0 0 100\tfoo bar ));\n DESCR(\"hash\");\n DATA(insert OID = 452 ( hashfloat8\t\t PGUID 11 f t f 2 f 23 \"701 \n701\" 100 0 0 100\tfoo bar ));\nNo differences encountered\n*** ./include/utils/builtins.h.orig\tMon Feb 22 15:05:19 1999\n--- ./include/utils/builtins.h\tMon Feb 22 15:06:17 1999\n***************\n*** 163,168 ****\n--- 163,169 ----\n */\n extern int32 btint2cmp(int16 a, int16 b);\n extern int32 btint4cmp(int32 a, int32 b);\n+ extern int32 btint8cmp(int64 *a, int64 *b);\n extern int32 btint24cmp(int16 a, int32 b);\n extern int32 btint42cmp(int32 a, int16 b);\n extern int32 btfloat4cmp(float32 a, float32 b);\n", "msg_date": "Mon, 22 Feb 1999 16:18:48 -0700 (MST)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "discussion on proposed int8_ops patch" }, { "msg_contents": "> Enclosed below I have a patch to allow a btree index on the int8 type.\n> I would like some feedback on what the hash function for the int8 hash \n> function in the ./backend/access/hash/hashfunc.c should return.\n> Also, could someone (maybe Tomas Lockhart?) look-over the patch and \n> make sure the system table entries are correct?\n\nI've got the patches and have applied them (with a bit of fix-up) to my\ncurrent source tree. I would like to look at them in more detail before\ncommitting them to the source tree, but I'm sure you've gotten most of\nthe important stuff.\n\nistm that the int8 hash function can look just like the int4 hash\nfunction, coercing the int8 input down to int4 first. afaik this isn't a\nproblem, in that int8->int4 overflows are not signaled. I've enabled\nthis hash strategy in your code.\n\n> P.S. I claimed the following OID's for this implimentation:\n> 754 and 842 (for the btree index)\n> 949 for the hash index (not totally implimented yet.)\n> I got these by using the ./include/catalog/unused_oids script.\n\nThose should be fine, and that was the right way to choose them.\n\nSorry that I'm out of town until next week, but I should be able to\nfinish things then. 
Thanks for the patches.\n\n - Tom\n", "msg_date": "Tue, 23 Feb 1999 18:04:34 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] discussion on proposed int8_ops patch" }, { "msg_contents": "Applied, though there was some wrapping of the e-mail I had to clean up.\n\nYour hash code looks fine, so I enabled it by removing the ifdef's. You\ncould XOR the top int4 bytes with the bottom int4 bytes, but I doubt\nthere is a portable way to do that, so you are better off just leaving\nit as is, where it looks at only the lower int32 bytes.\n\nIf you copied how the other entries pointed to other entries, your code\nwill be fine.\n\n> Hello hackers...\n> \n> Enclosed below I have a patch to allow a btree index on the int8 type. \n> \n> I would like some feedback on what the hash function for the int8 hash function \n> in the ./backend/access/hash/hashfunc.c should return.\n> \n> Also, could someone (maybe Tomas Lockhart?) look-over the patch and make sure \n> the system table entries are correct? I've tried to research them as much as I \n> could, but some of them are still not clear to me.\n> \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Mar 1999 00:10:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] discussion on proposed int8_ops patch" }, { "msg_contents": "> Applied, though there was some wrapping of the e-mail I had to clean \n> up.\n> Your hash code looks fine, so I enabled it by removing the ifdef's. \n> > Enclosed below I have a patch to allow a btree index on the int8 \n> > type.\n> > I would like some feedback on what the hash function for the int8 \n> > hash function in the ./backend/access/hash/hashfunc.c should return.\n> > Also, could someone (maybe Tomas Lockhart?) look-over the patch and \n> > make sure the system table entries are correct? I've tried to \n> > research them as much as I could, but some of them are still not \n> > clear to me.\n\n*argh* I had responded to Ryan and the list that there were problems\nwith the patch and that I would fix it up and then apply to the tree.\nSo don't expect this stuff to work as-is, and now I'll have to figure\nout what else has changed :(\n\nMan, I go away for two weeks and look at what happens ;)\n\n - Tom\n", "msg_date": "Sun, 21 Mar 1999 15:00:28 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] discussion on proposed int8_ops patch" }, { "msg_contents": "> > Applied, though there was some wrapping of the e-mail I had to clean \n> > up.\n> > Your hash code looks fine, so I enabled it by removing the ifdef's. \n> > > Enclosed below I have a patch to allow a btree index on the int8 \n> > > type.\n> > > I would like some feedback on what the hash function for the int8 \n> > > hash function in the ./backend/access/hash/hashfunc.c should return.\n> > > Also, could someone (maybe Tomas Lockhart?) look-over the patch and \n> > > make sure the system table entries are correct? 
I've tried to \n> > > research them as much as I could, but some of them are still not \n> > > clear to me.\n> \n> *argh* I had responded to Ryan and the list that there were problems\n> with the patch and that I would fix it up and then apply to the tree.\n> So don't expect this stuff to work as-is, and now I'll have to figure\n> out what else has changed :(\n\nSorry. I don't remember seeing your comments.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 21 Mar 1999 13:51:38 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] discussion on proposed int8_ops patch" } ]
[ { "msg_contents": "Hello all,\n\nAFAIC the relation between objects is not copied correctly \nby copyObject() (i.e the same pointers to an object are copied \nto different pointers by copyObject()). \n\nI think it makes copyObject() unreliable.\nI have some bug-reports due to this cause.(see attached file)\n\nWe should patch one by one ? \n\nThere is a way to maintain the list of (old,new) pairs during \ncopyObject() operations.\nWe could copyObject() correctly with this mechanism,though \nthere may be the problem of performance.\n\nComment ?\n\nHiroshi Inoue\[email protected]", "msg_date": "Tue, 23 Feb 1999 09:13:05 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "copyObject() ?" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> AFAIC the relation between objects is not copied correctly \n> by copyObject() (i.e the same pointers to an object are copied \n> to different pointers by copyObject()). \n\nTrue, but it seems irrelevant to me --- as Jan Wieck was just pointing\nout, no code should ever depend on pointer-equality in parse trees or\nplan trees anyway.\n\n> There is a way to maintain the list of (old,new) pairs during \n> copyObject() operations.\n\nI think we'd be better off fixing any places that mistakenly assume\npointer compare is sufficient. You didn't say which version you were\ntesting, but we know there are a few bugs like that in the current\nCVS sources because of collateral damage from the EXCEPT/INTERSECT\npatch. I believe the plan is to either fix them or back out the patch\nbefore 6.5.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Feb 1999 10:16:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] copyObject() ? " }, { "msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > AFAIC the relation between objects is not copied correctly \n> > by copyObject() (i.e the same pointers to an object are copied \n> > to different pointers by copyObject()). \n> \n> True, but it seems irrelevant to me --- as Jan Wieck was just pointing\n> out, no code should ever depend on pointer-equality in parse trees or\n> plan trees anyway.\n> \n> > There is a way to maintain the list of (old,new) pairs during \n> > copyObject() operations.\n> \n> I think we'd be better off fixing any places that mistakenly assume\n> pointer compare is sufficient. You didn't say which version you were\n> testing, but we know there are a few bugs like that in the current\n> CVS sources because of collateral damage from the EXCEPT/INTERSECT\n> patch. I believe the plan is to either fix them or back out the patch\n> before 6.5.\n\nYes, I removed a pointer comparison in the optimizer. It now uses\nequal(). Someone needs to go over EXCEPT/INTERSECT code and identify\nintroduced problems or we are going to be chasing these introduced bugs\nfor months. Anyone volunteering?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Feb 1999 10:28:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] copyObject() ?" 
}, { "msg_contents": "Hello all,\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Wednesday, February 24, 1999 12:16 AM\n> To: Hiroshi Inoue\n> Cc: pgsql-hackers\n> Subject: Re: [HACKERS] copyObject() ? \n> \n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > AFAIC the relation between objects is not copied correctly \n> > by copyObject() (i.e the same pointers to an object are copied \n> > to different pointers by copyObject()). \n> \n> True, but it seems irrelevant to me --- as Jan Wieck was just pointing\n> out, no code should ever depend on pointer-equality in parse trees or\n> plan trees anyway.\n>\n\nIf multiple references are not necessary,why we don't allocate diffrent \nobjects which have equal contents from the start ?\n\nIt seems very difficult to prevent developpers from using the following \nfact implicitly.\n\n\tThe same pointers always point the equal contents.\n\t\t\t ^^^^^^^^\n\nDifferent pointers (as copyObject() currently generates) which have \nequal contents may have different contents some time.\nIsn't it a significant differnce ?\n\n> > There is a way to maintain the list of (old,new) pairs during \n> > copyObject() operations.\n> \n> I think we'd be better off fixing any places that mistakenly assume\n> pointer compare is sufficient. You didn't say which version you were\n> testing, \n\nMy environment is v6.4.2.\nOK,I would test my cases again after the release of 6.5-BETA(v6.4.3?).\n \nTIA\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 25 Feb 1999 10:57:50 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] copyObject() ? " } ]
[ { "msg_contents": "此封郵件來自 莊芳昇 先生.\nThis mail from Chuang Fang-sheng\n\[email protected]\n\[email protected]", "msg_date": "Tue, 23 Feb 1999 08:16:28 +0800", "msg_from": "\"Chuang Fang-sheng\" <[email protected]>", "msg_from_op": true, "msg_subject": "subscribe" } ]
[ { "msg_contents": "Hi Adriaan\n\tI've recently compiled postgresql 6.4.2 of a dec alpha (running DU4.0D)\nI set the follow environment variables\nsetenv CC cc\nsetenv CFLAGS \"-O4 -std\"\nsetenv CPPFLAGS \"-I/usr/local/include\"\nsetenv LDFLAGS \"-L/usr/local/lib\"\nsetenv LIBS \"\"\n\ncreated a Makefile.custom in the source directory with contents\nCUSTOM_COPT= -O4 -std\n\nAdd the following to Makefile.shlib (you may hae to change a few things here)\nifeq ($(PORTNAME), alpha)\n install-shlib-dep := install-shlib\n shlib := lib$(NAME)$(DLSUFFIX)\n LDFLAGS_SL := -shared -msym -s -rpath /usr/local/pgsql/lib -check_registry \n/usr/shlib/so_locations -check_registry /usr/local/lib/so_location \n-update_registry /usr/local/lib/so_locations\nendif\n\n\nand ran \n./configure --with-template=alpha\n\n\nIf you want the plpgsql stuff then go into src/pl/plpgsql/src and relink \nlibplpgsql.so without the \"-L../../../interfaces/libpq -lpq\"\n\nThe reression tests in src/test/regress mostly work. Below is a summary of \nthis with my own annotation as to why some of the failure were ok. (I'll also \nattach the regression.diff file if anyone want to know)\n\nboolean .. \t\tok\nchar .. \t\tok\nname .. \t\tok\nvarchar .. \t\tok\ntext .. \t\tok\nstrings .. \t\tok\nint2 .. \t\tfailed\tok\t(diff error message)\nint4 .. \t\tfailed\tok\t(diff error message)\nint8 .. \t\tfailed \tfailed\t(failed big time:)\noid .. \t\tok\nfloat4 .. \t\tok\nfloat8 .. \t\tfailed \tok\t(diff error message)\nnumerology .. \t\tok\npoint .. \t\tok\nlseg .. \t\tok\nbox .. \t\tok\npath .. \t\tok\npolygon .. \t\tok\ncircle .. \t\tok\ngeometry .. \t\tfailed \tok\t(precision is last digits in some results)\ntimespan .. \t\tok\ndatetime .. \t\tok\nreltime .. \t\tok\nabstime .. \t\tfailed\tok\t(diff timezone to expected) \ntinterval .. \t\tfailed\tok\t(diff timezone to expected) \nhorology .. \t\tfailed\tok\t(diff timezone to expected) \ninet .. \t\tfailed\tfailed\ncomments .. \t\tok\nopr_sanity .. \t\tfailed\tok \t(added soundex module before test)\ncreate_function_1 .. \tok\ncreate_type .. \tok\ncreate_table .. \tok\ncreate_function_2 .. \tok\nconstraints .. \tok\ntriggers .. \t\tok\ncopy .. \t\tok\ncreate_misc .. \tok\ncreate_aggregate .. \tok\ncreate_operator .. \tok\ncreate_view .. \tok\ncreate_index .. \tok\nsanity_check .. \tok\nerrors .. \t\tok\nselect .. \t\tok\nselect_into .. \tok\nselect_distinct .. \tok\nselect_distinct_on .. \tok\nselect_implicit .. \tok\nselect_having .. \tok\nsubselect .. \t\tok\nunion .. \t\tok\naggregates .. \t\tok\ntransactions .. \tok\nrandom .. \t\tfailed\tok \t(this always fails)\nportals .. \t\tok\nmisc .. \t\tok\narrays .. \t\tok\nbtree_index .. \tok\nhash_index .. \t\tok\nselect_views .. \tok\nalter_table .. \t \tok\nportals_p2 .. \t\tok\nrules .. \t\tok\ninstall_plpgsql .. \tfailed\tok\t(added plpgsql module before test)\nplpgsql .. \t\tok\n\n\nThe only problem I have with 6.4.2 is getting kerberos 4 authentification \nworking and getting the perl DBI module to link with the kerberized libpq.so.\n\nAnyway I hope this helps\n\n\n\n\n +-----------------+------------------------------------------+\n | _ ^ _ | Dr. Rodney McDuff |\n | |\\ /|\\ /| | Network Development, ITS |\n | \\ | / | The University of Queensland |\n | \\ | / | St. Lucia, Brisbane |\n | \\|/ | Queensland, Australia. 4072. 
|\n |<-------+------->| TELEPHONE: +61 7 3365 8220 |\n | /|\\ | FACSIMILE: +61 7 3365 4477 |\n | / | \\ | EMAIL: [email protected] |\n | / | \\ | |\n | |/ \\|/ \\| | Ex ignorantia ad sapientiam |\n | - v - | Ex luce ad tenebras |\n +-----------------+------------------------------------------+", "msg_date": "Tue, 23 Feb 1999 10:42:27 +1000", "msg_from": "Rodney McDuff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Postgres Future: Postgres on Digital Alpha" }, { "msg_contents": "I assume we have this in 6.5 tree, right?\n\n> Hi Adriaan\n> \tI've recently compiled postgresql 6.4.2 of a dec alpha (running DU4.0D)\n> I set the follow environment variables\n> setenv CC cc\n> setenv CFLAGS \"-O4 -std\"\n> setenv CPPFLAGS \"-I/usr/local/include\"\n> setenv LDFLAGS \"-L/usr/local/lib\"\n> setenv LIBS \"\"\n> \n> created a Makefile.custom in the source directory with contents\n> CUSTOM_COPT= -O4 -std\n> \n> Add the following to Makefile.shlib (you may hae to change a few things here)\n> ifeq ($(PORTNAME), alpha)\n> install-shlib-dep := install-shlib\n> shlib := lib$(NAME)$(DLSUFFIX)\n> LDFLAGS_SL := -shared -msym -s -rpath /usr/local/pgsql/lib -check_registry \n> /usr/shlib/so_locations -check_registry /usr/local/lib/so_location \n> -update_registry /usr/local/lib/so_locations\n> endif\n> \n> \n> and ran \n> ./configure --with-template=alpha\n> \n> \n> If you want the plpgsql stuff then go into src/pl/plpgsql/src and relink \n> libplpgsql.so without the \"-L../../../interfaces/libpq -lpq\"\n> \n> The reression tests in src/test/regress mostly work. Below is a summary of \n> this with my own annotation as to why some of the failure were ok. (I'll also \n> attach the regression.diff file if anyone want to know)\n> \n> boolean .. \t\tok\n> char .. \t\tok\n> name .. \t\tok\n> varchar .. \t\tok\n> text .. \t\tok\n> strings .. \t\tok\n> int2 .. \t\tfailed\tok\t(diff error message)\n> int4 .. \t\tfailed\tok\t(diff error message)\n> int8 .. \t\tfailed \tfailed\t(failed big time:)\n> oid .. \t\tok\n> float4 .. \t\tok\n> float8 .. \t\tfailed \tok\t(diff error message)\n> numerology .. \t\tok\n> point .. \t\tok\n> lseg .. \t\tok\n> box .. \t\tok\n> path .. \t\tok\n> polygon .. \t\tok\n> circle .. \t\tok\n> geometry .. \t\tfailed \tok\t(precision is last digits in some results)\n> timespan .. \t\tok\n> datetime .. \t\tok\n> reltime .. \t\tok\n> abstime .. \t\tfailed\tok\t(diff timezone to expected) \n> tinterval .. \t\tfailed\tok\t(diff timezone to expected) \n> horology .. \t\tfailed\tok\t(diff timezone to expected) \n> inet .. \t\tfailed\tfailed\n> comments .. \t\tok\n> opr_sanity .. \t\tfailed\tok \t(added soundex module before test)\n> create_function_1 .. \tok\n> create_type .. \tok\n> create_table .. \tok\n> create_function_2 .. \tok\n> constraints .. \tok\n> triggers .. \t\tok\n> copy .. \t\tok\n> create_misc .. \tok\n> create_aggregate .. \tok\n> create_operator .. \tok\n> create_view .. \tok\n> create_index .. \tok\n> sanity_check .. \tok\n> errors .. \t\tok\n> select .. \t\tok\n> select_into .. \tok\n> select_distinct .. \tok\n> select_distinct_on .. \tok\n> select_implicit .. \tok\n> select_having .. \tok\n> subselect .. \t\tok\n> union .. \t\tok\n> aggregates .. \t\tok\n> transactions .. \tok\n> random .. \t\tfailed\tok \t(this always fails)\n> portals .. \t\tok\n> misc .. \t\tok\n> arrays .. \t\tok\n> btree_index .. \tok\n> hash_index .. \t\tok\n> select_views .. \tok\n> alter_table .. \t \tok\n> portals_p2 .. \t\tok\n> rules .. \t\tok\n> install_plpgsql .. 
\tfailed\tok\t(added plpgsql module before test)\n> plpgsql .. \t\tok\n> \n> \n> The only problem I have with 6.4.2 is getting kerberos 4 authentification \n> working and getting the perl DBI module to link with the kerberized libpq.so.\n> \n> Anyway I hope this helps\n> \nContent-Description: regression.diffs.gz\n\n[Attachment, skipping...]\n\n> \n> +-----------------+------------------------------------------+\n> | _ ^ _ | Dr. Rodney McDuff |\n> | |\\ /|\\ /| | Network Development, ITS |\n> | \\ | / | The University of Queensland |\n> | \\ | / | St. Lucia, Brisbane |\n> | \\|/ | Queensland, Australia. 4072. |\n> |<-------+------->| TELEPHONE: +61 7 3365 8220 |\n> | /|\\ | FACSIMILE: +61 7 3365 4477 |\n> | / | \\ | EMAIL: [email protected] |\n> | / | \\ | |\n> | |/ \\|/ \\| | Ex ignorantia ad sapientiam |\n> | - v - | Ex luce ad tenebras |\n> +-----------------+------------------------------------------+\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Feb 1999 22:55:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres Future: Postgres on Digital Alpha" } ]
[ { "msg_contents": "I've just committed a bunch of (mostly) small patches which fix up some\nerror messages and introduce some non-functional initial code for outer\njoins.\n\nThe timing on this is a bit non-optimal, since I'm leaving town tomorrow\nthrough the weekend, but I've tested all patches on the regression\ntests. Except for a couple of patches which did not apply cleanly, but\nthose seemed to be straight-forward fixes which I did manually.\n\nBon appetit :)\n\n - Tom\n", "msg_date": "Tue, 23 Feb 1999 08:14:34 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Error messages, outer joins, etc" } ]
[ { "msg_contents": "This question is directed at the developers of Postgres.\nHow open are you to changing the protocol between the backend and the client ?\n\nI posted a question regarding finding the tables to which fields (from a query)\nbelong. The problem I have is that the only information I have is that the query\nis a SELECT. The query is completely arbitrary and I have no idea what the\nselect contains (and cannot change it).\nIs it possible to change the protocol between the backend/client to include the\nthe table names (or some unique value allowing me get to the table name) ? (for\ndata that has come directly from the DB and a blank table name for functions\nthat are not directly represented in a DB table).\nPerhaps an option upon connection to use the new protocol (so not all the other\nclients that currently work with the existing protocol break).\n\nI am, depending on how much effort this takes, willing to perform the\ndevelopment myself, can you measure the effort ?\n\nThanks\n\tPhilip\n--------------------------------------------------------------------------\nPhilip Shiels E-Mail:[email protected] JRC Ispra, Italy, TP270\nGIST:http://gist.jrc.it GEM:http://gem.jrc.it\nTutorial:http://gist.jrc.it:8080\n", "msg_date": "Tue, 23 Feb 1999 16:18:36 +0100", "msg_from": "Philip Shiels <[email protected]>", "msg_from_op": true, "msg_subject": "Alterations to backend/client protocol" }, { "msg_contents": "Philip Shiels <[email protected]> writes:\n> Is it possible to change the protocol between the backend/client to\n> include the the table names (or some unique value allowing me get to\n> the table name) ?\n\nA protocol upgrade is certainly possible --- I caused one to happen\nmyself for 6.4. However it incurs a certain amount of pain all around,\nsince new clients won't talk to old servers. I think there'd have to\nbe some discussion and hopefully a consensus about whether the proposed\nnew features are worth the trouble.\n\nBTW, changing the backend's rules for making default column labels would\nbe a way to provide the same info without needing a protocol upgrade.\nIt might break some application-level client logic, however. Offhand\nI'm not sure which way would give fewer headaches. But people have\ncomplained for a long time that the current default labels aren't\ninformative enough, so I think you could probably sell them on a more\nuseful labeling scheme even if it did break a few old clients.\n\n> I am, depending on how much effort this takes, willing to perform the\n> development myself, can you measure the effort ?\n\nAny changes needed in the protocol (see the protocol chapter in the\ndeveloper's guide) and libpq would be trivial enough. I do not know\nwhether it is practical to get the information you want inside the\nbackend, however --- in particular, for queries involving joins,\nI think that the data effectively comes from a \"temporary table\" that\nis the joined relation. Can you identify the ancestry of columns in\nthat temp table? I dunno.\n\nMy guess is that it'd be less work and less impact on other Postgres\nusers to modify the queries you send in the first place...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Feb 1999 10:36:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Alterations to backend/client protocol " } ]
[ { "msg_contents": "uname -a\nSunOS vlad 5.7 Generic_106541-01 sun4u sparc SUNW,Ultra-5_10\ngcc -v\nReading specs from\n/opt/gnu/lib/gcc-lib/sparc-sun-solaris2.7/egcs-2.91.60/specs\ngcc version egcs-2.91.60 19981201 (egcs-1.1.1 release)\n\n\nI just compiled the snapshot using this command to configure pgsql:\n\nconfigure --prefix=/opt/pgsql \\\n --with-template=solaris_sparc_gcc \\\n --with-tcl \\\n --with-perl \\\n --with-tclconfig=/opt/tcl/lib \\\n --with-includes=/opt/tcl/include\n\nAll compiles fine, but when I try to run the postmaster I get the\nfollowing:\n\nvlad: postmaster -i\nIpcMemoryCreate: shmget failed (Invalid argument) key=5432001,\nsize=1137426, permission=600\nFATAL 1: ShmemCreate: cannot create region\n\n\nThought it might help with the development.\nThanks.\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n\n", "msg_date": "Tue, 23 Feb 1999 09:32:40 -0600", "msg_from": "Brian P Millett <[email protected]>", "msg_from_op": true, "msg_subject": "postmaster fails with 2-23 snapshot" } ]
[ { "msg_contents": "uname -a\nSunOS vlad 5.7 Generic_106541-01 sun4u sparc SUNW,Ultra-5_10\ngcc -v\nReading specs from\n/opt/gnu/lib/gcc-lib/sparc-sun-solaris2.7/egcs-2.91.60/specs\ngcc version egcs-2.91.60 19981201 (egcs-1.1.1 release)\n\n\nI just compiled the snapshot using this command to configure pgsql:\n\nconfigure --prefix=/opt/pgsql \\\n --with-template=solaris_sparc_gcc \\\n --with-tcl \\\n --with-perl \\\n --with-tclconfig=/opt/tcl/lib \\\n --with-includes=/opt/tcl/include\n\nAll compiles fine, but when I try to run the postmaster I get the\nfollowing:\n\nvlad: postmaster -i\nIpcMemoryCreate: shmget failed (Invalid argument) key=5432001,\nsize=1137426, permission=600\nFATAL 1: ShmemCreate: cannot create region\n\n\nThought it might help with the development.\nThanks.\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n\n", "msg_date": "Tue, 23 Feb 1999 09:40:36 -0600", "msg_from": "Brian P Millett <[email protected]>", "msg_from_op": true, "msg_subject": "postmaster failure with 2-23 snapshot" }, { "msg_contents": "> vlad: postmaster -i\n> IpcMemoryCreate: shmget failed (Invalid argument) key=5432001,\n> size=1137426, permission=600\n\nI think shmget returns that error code when the requested size is\nlarger than the system limit on shared memory block size. Check\nyour kernel parameters (SHMMAX and friends).\n\nYou might find that starting the postmaster with a smaller value\nof -N is an easier answer than reconfiguring your kernel.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Feb 1999 19:03:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot " }, { "msg_contents": "Here is what I added to my /etc/system on Solaris 7:\n\nset shmsys:shminfo_shmmax=16777216\nset shmsys:shminfo_shmmin=1\nset shmsys:shminfo_shmmni=128\nset shmsys:shminfo_shmseg=51\n*\nset semsys:seminfo_semmap=128\nset semsys:seminfo_semmni=128\nset semsys:seminfo_semmns=8192\nset semsys:seminfo_semmnu=8192\nset semsys:seminfo_semmsl=64\nset semsys:seminfo_semopm=32\nset semsys:seminfo_semume=32\n\nOf course, this is way more than you need to run 64 backends, this\nwill accommodate thousands of semaphores, but not much more than 128\nbackends due to the shared memory needs... You might want to run a\nsysdef to see the defaults first and then pick your tunables.\n\nDwD\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Tuesday, February 23, 1999 7:04 PM\n> To: Brian P Millett\n> Cc: pgsql-hackers\n> Subject: Re: [HACKERS] postmaster failure with 2-23 snapshot\n>\n>\n> > vlad: postmaster -i\n> > IpcMemoryCreate: shmget failed (Invalid argument) key=5432001,\n> > size=1137426, permission=600\n>\n> I think shmget returns that error code when the requested size is\n> larger than the system limit on shared memory block size. Check\n> your kernel parameters (SHMMAX and friends).\n>\n> You might find that starting the postmaster with a smaller value\n> of -N is an easier answer than reconfiguring your kernel.\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Tue, 23 Feb 1999 20:39:09 -0500", "msg_from": "\"Daryl W. Dunbar\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] postmaster failure with 2-23 snapshot " }, { "msg_contents": "\"Daryl W. 
Dunbar\" wrote:\n\n> Here is what I added to my /etc/system on Solaris 7:\n>\n> set shmsys:shminfo_shmmax=16777216\n> set shmsys:shminfo_shmmin=1\n> set shmsys:shminfo_shmmni=128\n> set shmsys:shminfo_shmseg=51\n> *\n> set semsys:seminfo_semmap=128\n> set semsys:seminfo_semmni=128\n> set semsys:seminfo_semmns=8192\n> set semsys:seminfo_semmnu=8192\n> set semsys:seminfo_semmsl=64\n> set semsys:seminfo_semopm=32\n> set semsys:seminfo_semume=32\n\nThanks for the quick reply, Yes I looked at the /etc/system & I did have\n\nset semsys:seminfo_semmap=128\nset semsys:seminfo_semmni=128\nset semsys:seminfo_semmns=8192\nset semsys:seminfo_semmnu=8192\nset semsys:seminfo_semmsl=64\nset semsys:seminfo_semopm=32\nset semsys:seminfo_semume=32\n\nBUT I didn't have\nset shmsys:shminfo_shmmax=16777216\nset shmsys:shminfo_shmmin=1\nset shmsys:shminfo_shmseg=51\n\n\nThat was it.\n\nThanks!\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n\n", "msg_date": "Wed, 24 Feb 1999 08:12:36 -0600", "msg_from": "Brian P Millett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot" }, { "msg_contents": "Brian P Millett <[email protected]> writes:\n> BUT I didn't have\n> set shmsys:shminfo_shmmax=16777216\n> That was it.\n\nI'll bet the default value of SHMMAX on your kernel is 1MB.\nYou said 6.4.x worked for you, right?\n\nA stock version of 6.4.x creates a shared memory segment of about\n830K if you don't alter the default -B setting. Thanks to some\nchanges I made recently in the memory space estimation stuff,\nthe current CVS sources will try to make a shm segment of about\n1100K with the default -B and -N settings.\n\nIf 1MB is a popular SHMMAX default, it might be a good idea to\ntrim down the safety margin a little bit so we come out short of\n1MB at the default settings ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Feb 1999 10:33:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> A stock version of 6.4.x creates a shared memory segment of about\n> 830K if you don't alter the default -B setting. Thanks to some\n> changes I made recently in the memory space estimation stuff,\n> the current CVS sources will try to make a shm segment of about\n> 1100K with the default -B and -N settings.\n\nHave there also been changes to the semaphore usage over the last 10\ndays? A February 15th snapshot is fine on my systems, as long as I\napply the patches that appeared here yesterday to get Kerberos going,\nbut after 'cvs update' yesterday (February 23rd), the postmaster is\nrefusing to start, claiming that semget() failed to allocate a block\nof 16 semaphores. The default maximum here is 60 semaphores, so I\nguess it must have allocated at least 44 of them before the failure.\n\nThis is under NetBSD/i386-1.3I.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "25 Feb 1999 07:52:11 +0100", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot" }, { "msg_contents": "I wrote:\n\n> but after 'cvs update' yesterday (February 23rd), the postmaster is\n> refusing to start, claiming that semget() failed to allocate a block\n> of 16 semaphores. 
The default maximum here is 60 semaphores, so I\n> guess it must have allocated at least 44 of them before the failure.\n\nLooking more closely into it, the postmaster is trying to allocate 64\nsemaphores in four groups of 16, so I built a new kernel with a higher\nlimit, and it's now OK.\n\nThis is as it should be, I hope? It's not a case of something being\nmisconfigured now, using semaphores instead of some other facility?\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "25 Feb 1999 11:51:55 +0100", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot" }, { "msg_contents": "Tom Ivar Helbekkmo <[email protected]> writes:\n> Looking more closely into it, the postmaster is trying to allocate 64\n> semaphores in four groups of 16, so I built a new kernel with a higher\n> limit, and it's now OK.\n> This is as it should be, I hope? It's not a case of something being\n> misconfigured now, using semaphores instead of some other facility?\n\nYes, this is an intentional change --- I guess you haven't been reading\nthe hackers list very closely. The postmaster is now set up to grab\nall the semaphores Postgres could need (for the specified number of\nbackend processes) immediately at postmaster startup. Failing then\nfor lack of semaphores seems a better idea than failing under load\nwhen you try to start the N+1'st client, which is what used to happen.\n\nThere has been some discussion of reducing the default number-of-\nbackends limit to 32 so that a stock installation is less likely\nto run out of semaphores.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Feb 1999 09:28:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot " }, { "msg_contents": "On Thu, 25 Feb 1999, Tom Lane wrote:\n\n> Tom Ivar Helbekkmo <[email protected]> writes:\n> > Looking more closely into it, the postmaster is trying to allocate 64\n> > semaphores in four groups of 16, so I built a new kernel with a higher\n> > limit, and it's now OK.\n> > This is as it should be, I hope? It's not a case of something being\n> > misconfigured now, using semaphores instead of some other facility?\n> \n> Yes, this is an intentional change --- I guess you haven't been reading\n> the hackers list very closely. The postmaster is now set up to grab\n> all the semaphores Postgres could need (for the specified number of\n> backend processes) immediately at postmaster startup. Failing then\n> for lack of semaphores seems a better idea than failing under load\n> when you try to start the N+1'st client, which is what used to happen.\n> \n> There has been some discussion of reducing the default number-of-\n> backends limit to 32 so that a stock installation is less likely\n> to run out of semaphores.\n\nIs there any way (sysctl?) of determining the max # of semaphores\nconfigured into a system?\n\nI just looked at a sys/sysconfig.h under Solaris, and it appears they have\nan \"undocumented function\" that does this...but I can't seem to find\nanything right off...\n\nFor that matter, being able to do a configure check to see if semaphores\nare even compiled into the system or not (ala FreeBSD) might be nice\ntoo...\n\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 25 Feb 1999 11:14:39 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot " }, { "msg_contents": "The Hermit Hacker wrote:\n\n> On Thu, 25 Feb 1999, Tom Lane wrote:\n>\n> Is there any way (sysctl?) of determining the max # of semaphores\n> configured into a system?\n\nMark, I don't know if this is what you want, but with solaris, I can see what is\nthe current setup using \"sysdef\". At the end of the output, I have this:\n\n*\n* IPC Semaphores\n*\n 128 entries in semaphore map (SEMMAP)\n 128 semaphore identifiers (SEMMNI)\n 8192 semaphores in system (SEMMNS)\n 8192 undo structures in system (SEMMNU)\n 64 max semaphores per id (SEMMSL)\n 32 max operations per semop call (SEMOPM)\n 32 max undo entries per process (SEMUME)\n 32767 semaphore maximum value (SEMVMX)\n 16384 adjust on exit max value (SEMAEM)\n*\n* IPC Shared Memory\n*\n 16777216 max shared memory segment size (SHMMAX)\n 1 min shared memory segment size (SHMMIN)\n 100 shared memory identifiers (SHMMNI)\n 51 max attached shm segments per process (SHMSEG)\n*\n* Time Sharing Scheduler Tunables\n*\n60 maximum time sharing user priority (TSMAXUPRI)\nSYS system class name (SYS_NAME)\n\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n\n", "msg_date": "Thu, 25 Feb 1999 10:50:23 -0600", "msg_from": "Brian P Millett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Thu, 25 Feb 1999, Tom Lane wrote:\n> \n> > Yes, this is an intentional change --- I guess you haven't been reading\n> > the hackers list very closely. The postmaster is now set up to grab\n> > all the semaphores Postgres could need (for the specified number of\n> > backend processes) immediately at postmaster startup. Failing then\n> > for lack of semaphores seems a better idea than failing under load\n> > when you try to start the N+1'st client, which is what used to happen.\n> >\n> > There has been some discussion of reducing the default number-of-\n> > backends limit to 32 so that a stock installation is less likely\n> > to run out of semaphores.\n> \n> Is there any way (sysctl?) of determining the max # of semaphores\n> configured into a system?\n> \n\n<snip comment on undocumented solaris call>\n\nPerhaps on startup the postmaster can allocate the max number of\nsemaphores, thus preserving the 'fail now, not later' behavior, but then\nrelease all but a smaller block? (say, 16)? Kind of an amalgam of the\nnew and old allocation strategies. that way, other programs that\npotentially need a large number of sems can have them, if postgresql\nisn't using them right now, without needing a reconfigured kernel. \n\nHmm, that does open a window for failure if there are sufficient sems at\nstartup, but not latter, when the high load kicks in. Perhaps a\nconfiguration flag, for \"greedy semaphore allocation?\" This lets the\nDBA decide what should fail under the high load, scarce sems condition. \nIf the db is mission critical, it's worth reconfiguring, and letting it\nhave all the sems. 
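\nIn code, the probe-then-release idea is only a few lines (a hypothetical sketch, not what the postmaster does today -- and real code would allocate in sets of 16 the way the backend does, since SEMMSL usually caps the size of a single set):\n\n#include <sys/types.h>\n#include <sys/ipc.h>\n#include <sys/sem.h>\n\n/* return 0 if the kernel can supply nsems semaphores, -1 if not */\nint\nprobe_semaphores(int nsems)\n{\n\tint\tsemid;\n\n\t/* grab the worst case up front, so we fail now ... */\n\tsemid = semget(IPC_PRIVATE, nsems, IPC_CREAT | 0600);\n\tif (semid < 0)\n\t\treturn -1;\n\n\t/* ... then hand them straight back for other software to use */\n\tsemctl(semid, 0, IPC_RMID);\n\treturn 0;\n}\n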
Even if \"non-greedy\", still do the test, and fail if\nthere's not enough potential sems for a max num of backends, though\n(don't plan the timebomb, basically).\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Thu, 25 Feb 1999 10:51:50 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot" }, { "msg_contents": "> Tom Ivar Helbekkmo <[email protected]> writes:\n> > Looking more closely into it, the postmaster is trying to allocate 64\n> > semaphores in four groups of 16, so I built a new kernel with a higher\n> > limit, and it's now OK.\n> > This is as it should be, I hope? It's not a case of something being\n> > misconfigured now, using semaphores instead of some other facility?\n> \n> Yes, this is an intentional change --- I guess you haven't been reading\n> the hackers list very closely. The postmaster is now set up to grab\n> all the semaphores Postgres could need (for the specified number of\n> backend processes) immediately at postmaster startup. Failing then\n> for lack of semaphores seems a better idea than failing under load\n> when you try to start the N+1'st client, which is what used to happen.\n> \n> There has been some discussion of reducing the default number-of-\n> backends limit to 32 so that a stock installation is less likely\n> to run out of semaphores.\n\nTom, better lower that limit soon. People are having trouble with the\nsnapshots.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Feb 1999 12:36:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot" }, { "msg_contents": "> Is there any way (sysctl?) of determining the max # of semaphores\n> configured into a system?\n> \n> I just looked at a sys/sysconfig.h under Solaris, and it appears they have\n> an \"undocumented function\" that does this...but I can't seem to find\n> anything right off...\n> \n> For that matter, being able to do a configure check to see if semaphores\n> are even compiled into the system or not (ala FreeBSD) might be nice\n> too...\n\nNone of the commercial db's do that, so I assume there is no portable\nway. We will lower the limit so it will pass most/all kernels, and help\npeople who need to up it. Perhaps an FAQ for kernels.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Feb 1999 12:42:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot" }, { "msg_contents": "On Thu, 25 Feb 1999, Bruce Momjian wrote:\n\n> > Is there any way (sysctl?) 
of determining the max # of semaphores\n> > configured into a system?\n> > \n> > I just looked at a sys/sysconfig.h under Solaris, and it appears they have\n> > an \"undocumented function\" that does this...but I can't seem to find\n> > anything right off...\n> > \n> > For that matter, being able to do a configure check to see if semaphores\n> > are even compiled into the system or not (ala FreeBSD) might be nice\n> > too...\n> \n> None of the commercial db's do that, so I assume there is no portable\n> way. We will lower the limit so it will pass most/all kernels, and help\n> people who need to up it. Perhaps an FAQ for kernels.\n\nNone of the commercial db's use configure and source code :) \n\nEven if its a header file that we can check for a default setting?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 25 Feb 1999 15:08:51 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> None of the commercial db's use configure and source code :) \n>\n> Even if its a header file that we can check for a default setting?\n\nAFAIK there's no *portable* way of finding out what the kernel's\nconfiguration parameters are --- it's possible to find out, on most\nflavors of Unix, but the place to look differs from platform to\nplatform.\n\nI think our best bet is just to trim Postgres' default settings enough\nso that an unmodified installation will run on most platforms. People\nwho really want more backends or more buffers will have had to learn how\nto adjust their kernel params anyway...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Feb 1999 18:30:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n\n> Is there any way (sysctl?) 
of determining the max # of semaphores\n> configured into a system?\n\nOn NetBSD (default configuration; I had to change it for PostgreSQL):\n\nathene:tih> ipcs -S\nseminfo:\n semmap: 30 (# of entries in semaphore map)\n semmni: 10 (# of semaphore identifiers)\n semmns: 60 (# of semaphores in system)\n semmnu: 30 (# of undo structures in system)\n semmsl: 60 (max # of semaphores per id)\n semopm: 100 (max # of operations per semop call)\n semume: 10 (max # of undo entries per process)\n semusz: 100 (size in bytes of undo structure)\n semvmx: 32767 (semaphore maximum value)\n semaem: 16384 (adjust on exit max value)\n\nathene:tih> ipcs -Q\nmsginfo:\n msgmax: 16384 (max characters in a message)\n msgmni: 40 (# of message queues)\n msgmnb: 2048 (max characters in a message queue)\n msgtql: 40 (max # of messages in system)\n msgssz: 8 (size of a message segment)\n msgseg: 2048 (# of message segments in system)\n\nathene:tih> ipcs -M\nshminfo:\n shmmax: 4194304 (max shared memory segment size)\n shmmin: 1 (min shared memory segment size)\n shmmni: 128 (max number of shared memory identifiers)\n shmseg: 32 (max shared memory segments per process)\n shmall: 1024 (max amount of shared memory in pages)\n\n> For that matter, being able to do a configure check to see if\n> semaphores are even compiled into the system or not (ala FreeBSD)\n> might be nice too...\n\nAgain, on NetBSD:\n\nathene:tih> sysctl -a | grep sysv\nkern.sysvmsg = 1\nkern.sysvsem = 1\nkern.sysvshm = 1\n\n-tih\n-- \nPopularity is the hallmark of mediocrity.  --Niles Crane, \"Frasier\"\n", "msg_date": "26 Feb 1999 08:23:05 +0100", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot" }, { "msg_contents": "> None of the commercial db's use configure and source code :) \n> \n> Even if its a header file that we can check for a default setting?\n\nIf it is possible, let's do it.  I just suspect it is not.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 Mar 1999 21:36:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster failure with 2-23 snapshot]" } ]
[ { "msg_contents": "What would it take to have transaction logging added to Postgres. I am a\nc/c++ programmer and will consider contributing to the Postgres development\neffort. I really like everything I see and read about Postgres. As a\nresult, I am strongly considering Postgres as the database engine for my\nMembership database application. My customer is willing to invest in a\ncommercial database, but most of the commercial databases I have briefly\nlooked at fall a little short in one way or another. I have several\nconcerns/needs that I am willing to implement and/or support:\n\n\t- Outer join support in views\n\n\t- Transaction logging\n\n\t- Some form of mirroring, shadowing, or replication\n\n\t- The current locking mechanism is of some concern. I need to make\nsure that one user can read a record and then a second can read and update\nthat same record.\n\n\t- If the first user attempts to update that record, what happens?\n\nI know some of these requests are currently being worked, it would be\nhelpful to get some idea of when these items are expected to be released.\n\nThanks, Michael\n\n\t-----Original Message-----\n\tFrom:\[email protected] [SMTP:[email protected]]\n\tSent:\tTuesday, February 23, 1999 6:08 AM\n\tTo:\[email protected]\n\tSubject:\tRe: [GENERAL] Transaction logging\n\n\n\t\tHi !\n\n\tPeter T Mount <[email protected]> writes:\n\t> > Has anyone implemented transaction logging in Postgres? Any\nsuggestions on\n\t> > how to easily implement transaction logging? Storing the log\nfile in a text\n\t> > file seems best but I am not sure out to open and write to a\ntext file from\n\t> > a trigger. I would also be nice to post this transaction log\nagainst a back\n\t> > up server.\n\n\t> Just a quick thought, but how about using syslog? That can be used\nto post\n\t> queries to a remote server, and it can be told to store the\n\"postgres\"\n\t> stuff to a seperate file on that server.\n\n\t> Just an idea...\n\n\t\tWhy not, but I think it's a bad idea. Syslog is used to log\n\tevents coming from the system. It stores every connection to the\n\tsystem, and any event which can affect the system (such as power\n\tshutdown).\n\n\t\tThe transaction logging is a different taste of log : it\nmust\n\tstore every transaction made to the database, and in case of\ndeletion\n\tof records, or change to data, it must save the old values. So it\n\tgenerates a lot of traffic, and is closely dependant of the database\n\tsystem.\n\n\t\tSyslog is not strong enough to deal with so much data, and\nthe \n\tuse of an external process to get the transaction logging would\n\tgenerate too much traffic (the cost in time would be too high). The\n\tlogging facility would, I think, be made by the database itself.\n\n\t\tAnything else : the logging facility is used to recover the\n\tdatabase after a crash (mainly). This kind of log _must_ be easy to\n\tuse in case of crash. Syslog is very well when you won't to know\nwhat\n\tappend, but not to deal with the problem. Don't forget that Syslog\nadd \n\tsome data to the events we send to him (such as the sender and the\n\tdate of the message). These data, in case of recovery by transaction\n\tlogging mechanism, are noise, which get the recovery (a little bit)\n\tharder.\n\n\t\tI don't think that we could get a logging facility with the\n\tuse of triggers. I think it would be better to hack the postgres\n\tbackend, and supersedes the access to SQL primitives (like insert or\n\tupdate). 
It would be a little harder to implement, but faster and\n\ttotally transparent to the user.\n\n\t\tregards.\n\n\t-- \n\t ___\n\t{~._.~} Stephane - DUST - Dupille\n\t ( Y ) You were dust and you shall turn into dust\n\t()~*~() email : [email protected]\n\t(_)-(_)\n", "msg_date": "Tue, 23 Feb 1999 12:17:15 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [GENERAL] Transaction logging" }, { "msg_contents": "> What would it take to have transaction logging added to Postgres?  I am a\n> c/c++ programmer and will consider contributing to the Postgres development\n> effort.  I really like everything I see and read about Postgres.  As a\n> result, I am strongly considering Postgres as the database engine for my\n> Membership database application.  My customer is willing to invest in a\n> commercial database, but most of the commercial databases I have briefly\n> looked at fall a little short in one way or another.  I have several\n> concerns/needs that I am willing to implement and/or support:\n> \n> \t- Outer join support in views\n\nIn the works.  Perhaps for 6.5, probably not.\n\n> \n> \t- Transaction logging\n> \n> \t- Some form of mirroring, shadowing, or replication\n> \n> \t- The current locking mechanism is of some concern.  I need to make\n> sure that one user can read a record and then a second can read and update\n> that same record.\n\nMVCC locking in 6.5.  Will do what you need.\n> \n> \t- If the first user attempts to update that record, what happens?\n\nHard to explain.  Will wait or update a copy while reads use an older\ncopy of the row.\n\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Feb 1999 22:02:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Transaction logging" }, { "msg_contents": "On Tue, 23 Feb 1999, Michael Davis wrote:\n\n> What would it take to have transaction logging added to Postgres?  I am a\n> c/c++ programmer and will consider contributing to the Postgres development\n> effort.  I really like everything I see and read about Postgres.  As a\n> result, I am strongly considering Postgres as the database engine for my\n> Membership database application.  My customer is willing to invest in a\n> commercial database, but most of the commercial databases I have briefly\n> looked at fall a little short in one way or another.  I have several\n> concerns/needs that I am willing to implement and/or support:\n> \n> \t- Outer join support in views\n> \n> \t- Transaction logging\n> \n> \t- Some form of mirroring, shadowing, or replication\n\nFor this purpose, people might be interested in reading the following document:\n\n\tDistributed Relational Database Architecture\n\thttp://www.opengroup.org/publications/catalog/c812.htm\n\n\tABSTRACT: This Technical Standard is one of three volumes\n\tdocumenting the Distributed Relational Database Architecture\n\tSpecification. This volume describes the connectivity between\n\trelational database managers that enables applications programs\n\tto access distributed relational data. 
It describes the\n\tnecessary connection between an application and a relational\n\tdatabase management system in a distributed environment; the\n\tresponsibilities of the participants and when flows should occur;\n\tand the formats and protocols required for distributed database\n\tmanagement system processing. It does not describe an API for\n\tdistributed database management system processing.\n\nThey have PDF downloadable.\n\nIf people do decide to start working on db mirroring, this might be a good\ndoc to read, if nothing else than just to get a better understanding of the\nissues involved.\n\n--\nTodd Graham Lewis 32�49'N,83�36'W (800) 719-4664, x2804\n******Linux****** MindSpring Enterprises [email protected]\n\n\"A pint of sweat will save a gallon of blood.\" -- George S. Patton\n\n", "msg_date": "Wed, 24 Feb 1999 10:09:35 -0500 (EST)", "msg_from": "Todd Graham Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [GENERAL] Transaction logging" }, { "msg_contents": "Michael Davis wrote:\n\n>\n> What would it take to have transaction logging added to Postgres. I am a\n> c/c++ programmer and will consider contributing to the Postgres development\n> effort. I really like everything I see and read about Postgres. As a\n\n I spent some time on transaction logging since it's a feature\n I'm missing too. There are mainly two different transaction\n log mechanisms out.\n\n 1. Log queries sent to the backend.\n\n 2. Log images of inserted/updated rows and row ID's of\n deleted ones.\n\n The query level logging will write less information if\n queries usually affect a large number of rows. Unfortunately\n the extensibility of Postgres work's against this approach.\n There could be any number of user written functions who's\n results aren't reproduceable during recovery. And core\n features of Postgres itself would introduce the same problem.\n Have a sequence which is used to create default values for\n multiple tables, so that one ID is unique across them. Now\n two backends insert (with INSERT ... SELECT) concurrently\n into different tables using the same sequence. It's a\n classic race condition and it depends on context switching\n and page faults which backend will get which sequence\n numbers. You cannot foresee and you cannot reproduce, except\n you hook into the sequence generator and log this too. Later\n when recovering, another hook into the sequence generator\n must reproduce the logged results on the per\n backend/transaction/command base, and the same must be done\n for each function that usually returns unreproduceable\n results (anything dealing with time, pid's, etc.).\n\n As said, this must also cover user functions. So at least\n there must be a general log API that provides such a\n functionality for user written functions.\n\n The image logging approach also has problems. First, the only\n thing given to the heap access methods to outdate a tuple on\n update/delete is the current tuple ID (information that tells\n which tuple in which block is meant). So you need to save\n the database files in binary format, because during the\n actually existing dump/restore this could change and the\n logged CTID's would hit the wrong tuples.\n\n Second, you must remember in the log which transaction ID\n these informations came from and later if the transaction\n committed or not, so the recovery can set this commit/abort\n information in pg_log too. pg_log is a shared system file and\n the transaction ID's are unique only for one server. 
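\n    To make that concrete, an image log record would have to carry\n    at least something like this (field names hypothetical):\n\n    typedef struct XactLogRecord\n    {\n        TransactionId   xid;        /* which transaction made the change */\n        Oid             relid;      /* which relation was touched */\n        ItemPointerData ctid;       /* block/line number of the tuple */\n        char            action;     /* insert, update or delete */\n        uint32          imagelen;   /* length of the tuple image following */\n    } XactLogRecord;\n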
Using\n this information for online replication of a single database\n to another Postgres installation will not work.\n\n Third, there are still some shared system catalogs across all\n databases (pg_database, pg_group, pg_log!!!, pg_shadow and\n pg_variable). Due to that it would be impossible (or at least\n very, very tricky) to restore/recover (maybe point in time)\n one single database. If you destroy one database and restore\n it from the binary backup, these shared catalogs cannot be\n restored too, so they're out of sync with the backup time.\n How should the recovery now hit the right things (which\n probably must not be there at all)?.\n\n All this is really a mess. I think the architecture of\n Postgres will only allow something on query level with some\n general API for things that must reproduce the same result\n during recovery. For example time(). Inside the backend,\n time() should never be called directly. Instead another\n function is to be called that log's during normal operation\n which time get's returned by this particular function call\n and if the backend is in recovery mode, returns the value\n from the log.\n\n And again, this all means trouble. Usually, most queries sent\n to the database don't change any data because they are\n SELECT's. It would dramatically blow up the log amount if you\n log ALL queries instead of only those that modify things. But\n when the query begins, you don't know this, because a SELECT\n might call a function that uses SPI to UPDATE something else.\n So the decision if the query must be logged or not can only\n be made when the query is done (by having some global\n variable where the heap access methods set a flag that\n something got written). Now you have to log function call's\n like time() even if the query will not modify any single row\n in the database because the query is a\n\n SELECT 'now'::datetime - updtime FROM ...\n\n Doing this on a table with thousands of rows will definitely\n waste much logging space and slowdown the whole thing by\n unnecessary logging.\n\n Maybe it's a compromise if at each query start the actual\n time and other such information is remembered by the backend,\n all time() calls return this remembered value instead of the\n real one (wouldn't be bad anyway IMHO), and this information\n is logged only if the query is to be logged.\n\n Finally I think I must have missed some more problems, but\n aren't these enough already to frustrate you :-?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 5 Mar 1999 19:23:51 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [GENERAL] Transaction logging" }, { "msg_contents": "\nAdded to TODO:\n\n\t* Transaction log, so re-do log can be on a separate disk by\n\t logging SQL queries, or before/after row images\n\n\n\n> Michael Davis wrote:\n> \n> >\n> > What would it take to have transaction logging added to Postgres. I am a\n> > c/c++ programmer and will consider contributing to the Postgres development\n> > effort. I really like everything I see and read about Postgres. As a\n> \n> I spent some time on transaction logging since it's a feature\n> I'm missing too. There are mainly two different transaction\n> log mechanisms out.\n> \n> 1. 
Log queries sent to the backend.\n> \n> 2. Log images of inserted/updated rows and row ID's of\n> deleted ones.\n> \n> The query level logging will write less information if\n> queries usually affect a large number of rows. Unfortunately\n> the extensibility of Postgres work's against this approach.\n> There could be any number of user written functions who's\n> results aren't reproduceable during recovery. And core\n> features of Postgres itself would introduce the same problem.\n> Have a sequence which is used to create default values for\n> multiple tables, so that one ID is unique across them. Now\n> two backends insert (with INSERT ... SELECT) concurrently\n> into different tables using the same sequence. It's a\n> classic race condition and it depends on context switching\n> and page faults which backend will get which sequence\n> numbers. You cannot foresee and you cannot reproduce, except\n> you hook into the sequence generator and log this too. Later\n> when recovering, another hook into the sequence generator\n> must reproduce the logged results on the per\n> backend/transaction/command base, and the same must be done\n> for each function that usually returns unreproduceable\n> results (anything dealing with time, pid's, etc.).\n> \n> As said, this must also cover user functions. So at least\n> there must be a general log API that provides such a\n> functionality for user written functions.\n> \n> The image logging approach also has problems. First, the only\n> thing given to the heap access methods to outdate a tuple on\n> update/delete is the current tuple ID (information that tells\n> which tuple in which block is meant). So you need to save\n> the database files in binary format, because during the\n> actually existing dump/restore this could change and the\n> logged CTID's would hit the wrong tuples.\n> \n> Second, you must remember in the log which transaction ID\n> these informations came from and later if the transaction\n> committed or not, so the recovery can set this commit/abort\n> information in pg_log too. pg_log is a shared system file and\n> the transaction ID's are unique only for one server. Using\n> this information for online replication of a single database\n> to another Postgres installation will not work.\n> \n> Third, there are still some shared system catalogs across all\n> databases (pg_database, pg_group, pg_log!!!, pg_shadow and\n> pg_variable). Due to that it would be impossible (or at least\n> very, very tricky) to restore/recover (maybe point in time)\n> one single database. If you destroy one database and restore\n> it from the binary backup, these shared catalogs cannot be\n> restored too, so they're out of sync with the backup time.\n> How should the recovery now hit the right things (which\n> probably must not be there at all)?.\n> \n> All this is really a mess. I think the architecture of\n> Postgres will only allow something on query level with some\n> general API for things that must reproduce the same result\n> during recovery. For example time(). Inside the backend,\n> time() should never be called directly. Instead another\n> function is to be called that log's during normal operation\n> which time get's returned by this particular function call\n> and if the backend is in recovery mode, returns the value\n> from the log.\n> \n> And again, this all means trouble. Usually, most queries sent\n> to the database don't change any data because they are\n> SELECT's. 
It would dramatically blow up the log amount if you\n> log ALL queries instead of only those that modify things. But\n> when the query begins, you don't know this, because a SELECT\n> might call a function that uses SPI to UPDATE something else.\n> So the decision if the query must be logged or not can only\n> be made when the query is done (by having some global\n> variable where the heap access methods set a flag that\n> something got written). Now you have to log function calls\n> like time() even if the query will not modify any single row\n> in the database because the query is a\n> \n> SELECT 'now'::datetime - updtime FROM ...\n> \n> Doing this on a table with thousands of rows will definitely\n> waste much logging space and slow down the whole thing by\n> unnecessary logging.\n> \n> Maybe it's a compromise if at each query start the actual\n> time and other such information is remembered by the backend,\n> all time() calls return this remembered value instead of the\n> real one (wouldn't be bad anyway IMHO), and this information\n> is logged only if the query is to be logged.\n> \n> Finally I think I must have missed some more problems, but\n> aren't these enough already to frustrate you :-?\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #======================================== [email protected] (Jan Wieck) #\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Jul 1999 22:17:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [GENERAL] Transaction logging" } ]
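To make Wieck's point about unreproducible functions concrete, here is a minimal SQL sketch (the table worklog and the function touch_row() are hypothetical, not from the thread; only the 'now'::datetime expression is his):

    -- A pure read: replaying the query text changes no data, yet under
    -- query-level logging each 'now' evaluation would still need a logged
    -- substitute so that a later replay sees the same timestamps.
    SELECT 'now'::datetime - updtime FROM worklog;

    -- A "read" that secretly writes: a function called from a SELECT can
    -- use SPI to UPDATE something else, so the log/no-log decision is
    -- only known after the query has finished.
    SELECT touch_row(id) FROM worklog;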
[ { "msg_contents": "select '12/13/1901 18:00:00 CST'::datetime;\n?column? \n----------------------------\nFri Dec 13 18:00:00 1901 CST\n(1 row)\n\nselect '12/13/1901 17:59:59 CST'::datetime;\n?column? \n------------------------\nFri Dec 13 23:59:59 1901\n(1 row)\n\n\nWhy???\n", "msg_date": "Tue, 23 Feb 1999 17:57:03 -0600", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "quick Date question" }, { "msg_contents": "> select '12/13/1901 18:00:00 CST'::datetime;\n> ----------------------------\n> Fri Dec 13 18:00:00 1901 CST\n> select '12/13/1901 17:59:59 CST'::datetime;\n> ------------------------\n> Fri Dec 13 23:59:59 1901\n> Why???\n\nUnix time databases do not have support for dates earlier than 1903 (?)\nsince Unix system time does not go back farther than that. Also, it\nwould be inappropriate (imho) to adopt the current conventions for\ntimezone since those conventions did not exist back then.\n\nSo date/time outside of the Unix system time range is displayed as UTC.\nYou had forced a different time zone in your input, but the output is\nthe same time in the universal time zone. If you do not specify a time\nzone then the output matches the input exactly.\n\n - Tom\n", "msg_date": "Mon, 01 Mar 1999 16:38:27 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] quick Date question" } ]
[ { "msg_contents": "Your getting me excited about 6.5. Is there a projected release date for\n6.5? Is there any information on transaction logging? Is there anything I\ncan do to help? I am curious about these items because they will make my\nlife much easier in the upcoming months as I migrate my application to\nPostgres. Working around these could be very difficulty or near impossible.\n\n\t-----Original Message-----\n\tFrom:\tBruce Momjian [SMTP:[email protected]]\n\tSent:\tTuesday, February 23, 1999 8:02 PM\n\tTo:\tMichael Davis\n\tCc:\[email protected]; [email protected]\n\tSubject:\tRe: [GENERAL] Transaction logging\n\n\t> What would it take to have transaction logging added to Postgres.\nI am a\n\t> c/c++ programmer and will consider contributing to the Postgres\ndevelopment\n\t> effort. I really like everything I see and read about Postgres.\nAs a\n\t> result, I am strongly considering Postgres as the database engine\nfor my\n\t> Membership database application. My customer is willing to invest\nin a\n\t> commercial database, but most of the commercial databases I have\nbriefly\n\t> looked at fall a little short in one way or another. I have\nseveral\n\t> concerns/needs that I am willing to implement and/or support:\n\t> \n\t> \t- Outer join support in views\n\n\tIn the works. Perhaps for 6.5, probably not.\n\n\t> \n\t> \t- Transaction logging\n\t> \n\t> \t- Some form of mirroring, shadowing, or replication\n\t> \n\t> \t- The current locking mechanism is of some concern. I need\nto make\n\t> sure that one user can read a record and then a second can read\nand update\n\t> that same record.\n\n\tMVCC locking in 6.5. Will do what you need.\n\t> \n\t> \t- If the first user attempts to update that record, what\nhappens?\n\n\tHard to explain. Will wait or update a copy while read's use an\nolder\n\tcopy fo the row.\n\n\n\t-- \n\t Bruce Momjian | http://www.op.net/~candle\n\t [email protected] | (610) 853-3000\n\t + If your life is a hard drive, | 830 Blythe Avenue\n\t + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Tue, 23 Feb 1999 22:31:44 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Transaction logging" }, { "msg_contents": "> Your getting me excited about 6.5. Is there a projected release date for\n> 6.5? Is there any information on transaction logging? Is there anything I\n> can do to help? I am curious about these items because they will make my\n> life much easier in the upcoming months as I migrate my application to\n> Postgres. Working around these could be very difficulty or near impossible.\n\nWe are waiting for the MVCC/locking stuff to be finished. Everything\nelse is mostly ready. We were planning for Feb 1, but we must wait.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Feb 1999 23:43:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Transaction logging" }, { "msg_contents": ">\n> Your getting me excited about 6.5. Is there a projected release date for\n> 6.5? Is there any information on transaction logging? Is there anything I\n> can do to help? I am curious about these items because they will make my\n> life much easier in the upcoming months as I migrate my application to\n> Postgres. 
Working around these could be very difficult or near impossible.\n\n I've spent some time thinking about transaction logging. The\n first idea was to log queries in some way. But I had to\n drop that approach because there are functions (and users\n could have written their own ones too) that don't return\n the same result when the database later gets rolled forward\n (e.g. anything handling dates/times). And OTOH an\n application could SELECT something from the database that\n maybe got created by a sequence, and then use this value in\n another INSERT. But when recovering the database, it isn't\n guaranteed that all the data will get the same sequence values\n again (race conditions in concurrent queries). How should the\n transaction log now know that this one constant value in the\n query must be substituted by another value to ensure\n referential integrity? Absolutely impossible.\n\n So the only way I see is to use some sort of image logging\n from inside the heap access methods. Would be much more\n tricky to dump and recover.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 24 Feb 1999 15:31:51 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Transaction logging" } ]
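A minimal sketch of the sequence race Wieck describes (the schema below is hypothetical): two backends drawing from one shared sequence interleave unpredictably, so replaying the statement text later can hand out different IDs and break references between the tables:

    CREATE SEQUENCE global_id;
    CREATE TABLE orders   (id int4, descr text);
    CREATE TABLE invoices (id int4, descr text);
    -- Run concurrently from two different backends; which backend draws
    -- which numbers depends on context switches and page faults, exactly
    -- as described above, so a textual replay cannot reproduce them.
    INSERT INTO orders   SELECT nextval('global_id'), 'entered by backend A';
    INSERT INTO invoices SELECT nextval('global_id'), 'entered by backend B';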
[ { "msg_contents": "I saw this in the Ingres list. It describes exactly how we process \n\n\tSELECT * FROM tab WHERE col IN (1,2,3,4)\n\nand describes a problem we could have if the list is large.\n\n---------------------------------------------------------------------------\n\n", "msg_date": "Tue, 23 Feb 1999 23:46:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "IN list and OR processing" } ]
[ { "msg_contents": "Hi\n\tI've been having also sorts of fun trying to get kerberos 4 authentification \nworking with postgresql-6.4.2 and KTH-KRB Ebones (http://www.pdc.kth.se/kth-kr\nb) on a dec alpha running DU 4.0D using the native compiler. The following \npatch does the trick.\n\nThe rationale behind this is as follows. The KTH-KRB code header files defines \nlots of lengths like INST_SZ,REALM_SZ and KRB_SENDAUTH_VLEN. It also has a \nhabit of doing things like\n\n\tchararray[LENGTH] = '\\0'\n\nto ensure null terminated strings. In my instance this just happens to blat \nthe kerberos principal instance string leading to error like\n\n\tpg_krb4_recvauth: kerberos error: Can't decode authenticator (krb_rd_req)\n\nThe application code that comes with KTH-KRB uses \"KRB_SENDAUTH_VLEN + 1\" and \nsometimes uses \"INST_SZ + 1\" so it seems safest to put that 1 char buffer in \nthe appropriate place.\n\n\n\n*** postgresql-6.4.2/src/backend/libpq/auth.c.orig Wed Feb 24 12:14:55 \n1999\n--- postgresql-6.4.2/src/backend/libpq/auth.c Wed Feb 24 14:03:46 1999\n***************\n*** 77,86 ****\n {\n long krbopts = 0; /* one-way authentication */\n KTEXT_ST clttkt;\n! char instance[INST_SZ];\n AUTH_DAT auth_data;\n Key_schedule key_sched;\n! char version[KRB_SENDAUTH_VLEN];\n int status;\n \n strcpy(instance, \"*\"); /* don't care, but arg gets expanded\n--- 77,86 ----\n {\n long krbopts = 0; /* one-way authentication */\n KTEXT_ST clttkt;\n! char instance[INST_SZ + 1]; \n AUTH_DAT auth_data;\n Key_schedule key_sched;\n! char version[KRB_SENDAUTH_VLEN + 1];\n int status;\n \n strcpy(instance, \"*\"); /* don't care, but arg gets expanded\n*** postgresql-6.4.2/src/interfaces/libpq/fe-auth.c.orig Wed Feb 24 \n14:05:26 1999\n--- postgresql-6.4.2/src/interfaces/libpq/fe-auth.c Wed Feb 24 14:12:56 \n1999\n***************\n*** 144,151 ****\n static char *\n pg_krb4_authname(char *PQerrormsg)\n {\n! char instance[INST_SZ];\n! char realm[REALM_SZ];\n int status;\n static char name[SNAME_SZ + 1] = \"\";\n \n--- 144,151 ----\n static char *\n pg_krb4_authname(char *PQerrormsg)\n {\n! char instance[INST_SZ + 1];\n! char realm[REALM_SZ + 1];\n int status;\n static char name[SNAME_SZ + 1] = \"\";\n \n\n-- \n\n +-----------------+------------------------------------------+\n | _ ^ _ | Dr. Rodney McDuff |\n | |\\ /|\\ /| | Network Development, ITS |\n | \\ | / | The University of Queensland |\n | \\ | / | St. Lucia, Brisbane |\n | \\|/ | Queensland, Australia. 4072. |\n |<-------+------->| TELEPHONE: +61 7 3365 8220 |\n | /|\\ | FACSIMILE: +61 7 3365 4477 |\n | / | \\ | EMAIL: [email protected] |\n | / | \\ | |\n | |/ \\|/ \\| | Ex ignorantia ad sapientiam |\n | - v - | Ex luce ad tenebras |\n +-----------------+------------------------------------------+\n\n\n", "msg_date": "Wed, 24 Feb 1999 15:04:32 +1000", "msg_from": "Rodney McDuff <[email protected]>", "msg_from_op": true, "msg_subject": "KTH-KRB kerberos 4 patch" }, { "msg_contents": "Rodney McDuff <[email protected]> writes:\n\n> \tI've been having also sorts of fun trying to get kerberos 4\n> authentification working with postgresql-6.4.2 and KTH-KRB Ebones\n> (http://www.pdc.kth.se/kth-kr b) on a dec alpha running DU 4.0D\n> using the native compiler. The following patch does the trick.\n\nGreat! This got a February 15th snapshot of PostgreSQL working for\nme, too! Thanks! :-)\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. 
--Niles Crane, \"Frasier\"\n", "msg_date": "25 Feb 1999 07:53:17 +0100", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] KTH-KRB kerberos 4 patch" }, { "msg_contents": "Applied.\n\n\n\n> Hi\n> \tI've been having also sorts of fun trying to get kerberos 4 authentification \n> working with postgresql-6.4.2 and KTH-KRB Ebones (http://www.pdc.kth.se/kth-kr\n> b) on a dec alpha running DU 4.0D using the native compiler. The following \n> patch does the trick.\n> \n> The rationale behind this is as follows. The KTH-KRB code header files defines \n> lots of lengths like INST_SZ,REALM_SZ and KRB_SENDAUTH_VLEN. It also has a \n> habit of doing things like\n> \n> \tchararray[LENGTH] = '\\0'\n> \n> to ensure null terminated strings. In my instance this just happens to blat \n> the kerberos principal instance string leading to error like\n> \n> \tpg_krb4_recvauth: kerberos error: Can't decode authenticator (krb_rd_req)\n> \n> The application code that comes with KTH-KRB uses \"KRB_SENDAUTH_VLEN + 1\" and \n> sometimes uses \"INST_SZ + 1\" so it seems safest to put that 1 char buffer in \n> the appropriate place.\n> \n> \n> \n> *** postgresql-6.4.2/src/backend/libpq/auth.c.orig Wed Feb 24 12:14:55 \n> 1999\n> --- postgresql-6.4.2/src/backend/libpq/auth.c Wed Feb 24 14:03:46 1999\n> ***************\n> *** 77,86 ****\n> {\n> long krbopts = 0; /* one-way authentication */\n> KTEXT_ST clttkt;\n> ! char instance[INST_SZ];\n> AUTH_DAT auth_data;\n> Key_schedule key_sched;\n> ! char version[KRB_SENDAUTH_VLEN];\n> int status;\n> \n> strcpy(instance, \"*\"); /* don't care, but arg gets expanded\n> --- 77,86 ----\n> {\n> long krbopts = 0; /* one-way authentication */\n> KTEXT_ST clttkt;\n> ! char instance[INST_SZ + 1]; \n> AUTH_DAT auth_data;\n> Key_schedule key_sched;\n> ! char version[KRB_SENDAUTH_VLEN + 1];\n> int status;\n> \n> strcpy(instance, \"*\"); /* don't care, but arg gets expanded\n> *** postgresql-6.4.2/src/interfaces/libpq/fe-auth.c.orig Wed Feb 24 \n> 14:05:26 1999\n> --- postgresql-6.4.2/src/interfaces/libpq/fe-auth.c Wed Feb 24 14:12:56 \n> 1999\n> ***************\n> *** 144,151 ****\n> static char *\n> pg_krb4_authname(char *PQerrormsg)\n> {\n> ! char instance[INST_SZ];\n> ! char realm[REALM_SZ];\n> int status;\n> static char name[SNAME_SZ + 1] = \"\";\n> \n> --- 144,151 ----\n> static char *\n> pg_krb4_authname(char *PQerrormsg)\n> {\n> ! char instance[INST_SZ + 1];\n> ! char realm[REALM_SZ + 1];\n> int status;\n> static char name[SNAME_SZ + 1] = \"\";\n> \n> \n> -- \n> \n> +-----------------+------------------------------------------+\n> | _ ^ _ | Dr. Rodney McDuff |\n> | |\\ /|\\ /| | Network Development, ITS |\n> | \\ | / | The University of Queensland |\n> | \\ | / | St. Lucia, Brisbane |\n> | \\|/ | Queensland, Australia. 4072. |\n> |<-------+------->| TELEPHONE: +61 7 3365 8220 |\n> | /|\\ | FACSIMILE: +61 7 3365 4477 |\n> | / | \\ | EMAIL: [email protected] |\n> | / | \\ | |\n> | |/ \\|/ \\| | Ex ignorantia ad sapientiam |\n> | - v - | Ex luce ad tenebras |\n> +-----------------+------------------------------------------+\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 14 Mar 1999 11:06:41 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] KTH-KRB kerberos 4 patch" } ]
[ { "msg_contents": "\nHow do you propose doing outer joins in non-mergejoin situations?\nMergejoins can only be used currently in equal joins.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Feb 1999 04:53:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "OUTER joins" }, { "msg_contents": "> \n> How do you propose doing outer joins in non-mergejoin situations?\n> Mergejoins can only be used currently in equal joins.\n\nIs your solution going to be to make sure the OUTER table is always a\nMergeJoin, or on the outside of a join loop? That could work.\n\nThat could get tricky if the table is joined to _two_ other tables. \nWith the cleaned-up optimizer, we can disable non-merge joins in certain\ncircumstances, and prevent OUTER tables from being inner in the others. \nIs that the plan?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Feb 1999 05:27:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] OUTER joins" }, { "msg_contents": "(back from a short vacation...)\n\n> How do you propose doing outer joins in non-mergejoin situations?\n> Mergejoins can only be used currently in equal joins.\n\nHadn't thought about it, other than figuring that implementing the\nequi-join first was a good start. There is a class of outer join syntax\n(the USING clause) which is implicitly an equi-join...\n\n - Tom\n", "msg_date": "Mon, 01 Mar 1999 17:10:49 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OUTER joins" }, { "msg_contents": "> (back from a short vacation...)\n> \n> > How do you propose doing outer joins in non-mergejoin situations?\n> > Mergejoins can only be used currently in equal joins.\n> \n> Hadn't thought about it, other than figuring that implementing the\n> equi-join first was a good start. There is a class of outer join syntax\n> (the USING clause) which is implicitly an equi-join...\n\nNot that easy. You don't automatically get a mergejoin from an\nequijoin. I will have to force outer's to be either mergejoins, or\ninners of non-merge joins. Can you add code to non-merge joins in the\nexecutor to throw out a null row if it does not find an inner match for\nthe outer row, and I will handle the optimizer so it doesn't throw a\nnon-conforming plan to the executor.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 Mar 1999 22:25:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OUTER joins" }, { "msg_contents": "> > Hadn't thought about it, other than figuring that implementing the\n> > equi-join first was a good start. There is a class of outer join \n> > syntax (the USING clause) which is implicitly an equi-join...\n> Not that easy. You don't automatically get a mergejoin from an\n> equijoin. I will have to force outer's to be either mergejoins, or\n> inners of non-merge joins. 
Can you add code to non-merge joins in the\n> executor to throw out a null row if it does not find an inner match \n> for the outer row, and I will handle the optimizer so it doesn't throw \n> a non-conforming plan to the executor.\n\nSo far I don't have enough info in the parser to get the\nplanner/optimizer going. Should we work from the front to the back, or\nshould I go ahead and look at the non-merge joins? It's painfully\nobvious that I don't know enough about the middle parts of this to\nproceed without lots more research.\n\n - Tom\n", "msg_date": "Tue, 09 Mar 1999 02:46:40 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OUTER joins" }, { "msg_contents": "> > > Hadn't thought about it, other than figuring that implementing the\n> > > equi-join first was a good start. There is a class of outer join \n> > > syntax (the USING clause) which is implicitly an equi-join...\n> > Not that easy. You don't automatically get a mergejoin from an\n> > equijoin. Can you add code to non-merge joins in the\n> > executor to throw out a null row if it does not find an inner match \n> > for the outer row, and I will handle the optimizer so it doesn't throw \n> > a non-conforming plan to the executor.\n> \n> So far I don't have enough info in the parser to get the\n> planner/optimizer going. Should we work from the front to the back, or\n> should I go ahead and look at the non-merge joins? It's painfully\n> obvious that I don't know enough about the middle parts of this to\n> proceed without lots more research.\n\nWe need to do phone or IRC to discuss this. Let me know when.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 8 Mar 1999 22:25:41 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OUTER joins" } ]
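For reference, the SQL92 syntax under discussion, including the USING form Lockhart notes is implicitly an equi-join (illustrative only; the emp/dept tables are hypothetical, and none of this was supported in PostgreSQL at the time of the thread):

    -- Explicit join condition:
    SELECT e.ename, d.dname
      FROM emp e LEFT OUTER JOIN dept d ON e.deptno = d.deptno;
    -- USING form: an implicit equi-join on the named column; emp rows
    -- without a matching dept row come back extended with NULLs.
    SELECT e.ename, d.dname
      FROM emp e LEFT OUTER JOIN dept d USING (deptno);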
[ { "msg_contents": "Hello!\n\nQuery:\n\nSELECT DISTINCT p.subsec_id\n FROM central cn, shops sh, districts d, positions p\n WHERE cn.shop_id = sh.shop_id AND sh.distr_id = d.distr_id\n AND d.city_id = %d AND cn.pos_id = p.pos_id\n AND cn.date_i >= current_date - '7 days'::timespan\n\nWhile running postgres slowly eats all swap space (30 Meg) and aborts:\n\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\n\n Is it I just have not enough memory or bug?\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Wed, 24 Feb 1999 16:03:13 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with complex query" }, { "msg_contents": "Oleg Broytmann <[email protected]> writes:\n> SELECT DISTINCT p.subsec_id\n> FROM central cn, shops sh, districts d, positions p\n> WHERE cn.shop_id = sh.shop_id AND sh.distr_id = d.distr_id\n> AND d.city_id = %d AND cn.pos_id = p.pos_id\n> AND cn.date_i >= current_date - '7 days'::timespan\n> While running postgres slowly eats all swap space (30 Meg) and aborts:\n> pqReadData() -- backend closed the channel unexpectedly.\n> Is it I just have not enough memory or bug?\n\nWhat version are you running? Also, does it act the same if you try to\nEXPLAIN that same query? If EXPLAIN fails then the problem is in the\nplan/optimize stage, not actual execution of the query.\n\nThis kinda sounds like the optimizer problems that Bruce has fixed for\n6.5, but I don't recall anyone reporting serious problems with only\n4 tables in the query --- you had to get up to 7 or 8 or so before\nit really went nuts.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Feb 1999 10:25:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with complex query " }, { "msg_contents": "Hi!\n\nOn Wed, 24 Feb 1999, Tom Lane wrote:\n> What version are you running? Also, does it act the same if you try to\n\n 6.4.2 on Sparc-solaris2.5.1\n\n> EXPLAIN that same query? 
If EXPLAIN fails then the problem is in the\n> plan/optimize stage, not actual execution of the query.\n\n EXPLAIN works fine:\n\nEXPLAIN SELECT DISTINCT p.subsec_id\n FROM central cn, shops sh, districts d, positions p\n WHERE cn.shop_id = sh.shop_id AND sh.distr_id = d.distr_id\n AND d.city_id = 2 AND cn.pos_id = p.pos_id\n AND cn.date_i >= current_date - '7 days'::timespan\n;\nNOTICE: QUERY PLAN:\n\nUnique (cost=0.00 size=0 width=0)\n -> Sort (cost=0.00 size=0 width=0)\n -> Nested Loop (cost=0.00 size=1 width=16)\n -> Nested Loop (cost=0.00 size=1 width=12)\n -> Merge Join (cost=0.00 size=1 width=8)\n -> Seq Scan (cost=0.00 size=0 width=0)\n -> Sort (cost=0.00 size=0 width=0)\n -> Seq Scan on districts d (cost=0.00 size=0 width=2)\n -> Seq Scan (cost=0.00 size=0 width=0)\n -> Sort (cost=0.00 size=0 width=0)\n -> Seq Scan on shops sh (cost=0.00 size=0 width=6)\n -> Seq Scan on central cn (cost=0.00 size=0 width=4)\n -> Seq Scan on positions p (cost=0.00 size=0 width=4)\n\nEXPLAIN\n", "msg_date": "Wed, 24 Feb 1999 18:36:05 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Problem with complex query " }, { "msg_contents": "> Hello!\n> \n> Query:\n> \n> SELECT DISTINCT p.subsec_id\n> FROM central cn, shops sh, districts d, positions p\n> WHERE cn.shop_id = sh.shop_id AND sh.distr_id = d.distr_id\n> AND d.city_id = %d AND cn.pos_id = p.pos_id\n> AND cn.date_i >= current_date - '7 days'::timespan\n> \n> While running, postgres slowly eats all swap space (30 Meg) and aborts:\n> \n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally before or\n> while processing the request.\n> \n> Is it that I just don't have enough memory, or is it a bug?\n\nNot sure how to comment on this. Is 6.5beta any better?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 May 1999 10:57:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with complex query" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> SELECT DISTINCT p.subsec_id\n>> FROM central cn, shops sh, districts d, positions p\n>> WHERE cn.shop_id = sh.shop_id AND sh.distr_id = d.distr_id\n>> AND d.city_id = %d AND cn.pos_id = p.pos_id\n>> AND cn.date_i >= current_date - '7 days'::timespan\n>> \n>> While running, postgres slowly eats all swap space (30 Meg) and aborts:\n\n> Not sure how to comment on this. Is 6.5beta any better?\n\nProbably not :-(. My guess is that the expression \"current_date -\n'7 days'::timespan\" is being re-evaluated at each tuple, and since\nwe don't yet have intra-statement space recovery, the palloc'd space\njust grows and grows. 
Oleg, can you try evaluating that expression\non the application side and sending over a constant instead?\n\nI think being able to recover palloc'd space after every few tuples\nwill have to be a top priority for 6.6; we've seen too many complaints\nthat trace back to this sort of thing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 May 1999 13:49:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with complex query " }, { "msg_contents": "Hello!\n\n Tom, I want to remind you that you looked into my database and found the\nproblem was that central.shop_id was int4 but shops.shop_id int2. After\nmaking all fields identical, most of the problem was fixed.\n I just reran the query now - and it worked!\n\nOn Sun, 9 May 1999, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> >> SELECT DISTINCT p.subsec_id\n> >> FROM central cn, shops sh, districts d, positions p\n> >> WHERE cn.shop_id = sh.shop_id AND sh.distr_id = d.distr_id\n> >> AND d.city_id = %d AND cn.pos_id = p.pos_id\n> >> AND cn.date_i >= current_date - '7 days'::timespan\n> >> \n> >> While running, postgres slowly eats all swap space (30 Meg) and aborts:\n> \n> > Not sure how to comment on this. Is 6.5beta any better?\n> \n> Probably not :-(. My guess is that the expression \"current_date -\n> '7 days'::timespan\" is being re-evaluated at each tuple, and since\n> we don't yet have intra-statement space recovery, the palloc'd space\n> just grows and grows. Oleg, can you try evaluating that expression\n> on the application side and sending over a constant instead?\n> \n> I think being able to recover palloc'd space after every few tuples\n> will have to be a top priority for 6.6; we've seen too many complaints\n> that trace back to this sort of thing.\n> \n> \t\t\tregards, tom lane\n> \n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Mon, 10 May 1999 14:34:27 +0400 (MSD)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Problem with complex query " } ]
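Lane's suggested workaround, sketched against the query that opened the thread (the literal date below is hypothetical; in practice the application computes current_date minus seven days itself and sends the result):

    SELECT DISTINCT p.subsec_id
      FROM central cn, shops sh, districts d, positions p
     WHERE cn.shop_id = sh.shop_id AND sh.distr_id = d.distr_id
       AND d.city_id = 2 AND cn.pos_id = p.pos_id
       AND cn.date_i >= '05/02/1999'::date;  -- constant computed client-side,
                                             -- replacing current_date - '7 days'::timespan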
[ { "msg_contents": "What user ID are you using? Are you \"su\"ing over\nthe the postgres user ID.\n\nAlso, make sure all of the environment variables\nare exported in the script. PGPORT, PGUSER, etc...\n\nD.\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Wednesday, February 24, 1999 1:35 PM\nTo: [email protected]\nCc: PostgreSQL-development\nSubject: Re: [HACKERS] VACUUM ANALYZE problem on linux \n\n\nOleg Broytmann <[email protected]> writes:\n> 3. Run postmaster -b -D/usr/local/pgsql/data -o -Fe -S (to detach it)\n> and run VACUUM ANALYZE - worked\n> (I took these parameters from script /etc/init.d/postgres)\n> 4. Run /etc/init.d/postgres start\n> and run VACUUM ANALYZE - failed, no core file.\n\nSo there is something different about the environment of your postmaster\nwhen it's started by init.d versus when it's started by hand. Now you\njust have to figure out what.\n\nI thought of environment variables, ulimit settings,\nownership/permission settings ... but it's not clear why any of these\nwould affect VACUUM in particular yet leave you able to do other stuff\nsuccessfully. Puzzling.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Feb 1999 14:30:11 -0500", "msg_from": "Dan Gowin <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] VACUUM ANALYZE problem on linux " }, { "msg_contents": "Hello!\n\nOn Wed, 24 Feb 1999, Dan Gowin wrote:\n> What user ID are you using? Are you \"su\"ing over\n> the the postgres user ID.\n\n But of course!\n\n> Also, make sure all of the environment variables\n> are exported in the script. PGPORT, PGUSER, etc...\n\n I want to remind you and all who watch the thread: I have the problem\non 3 different glibc2 linucies - heavily modified Debain 2.0, newly\ninstalled Debain 2.0 and modified RedHat 5.1. I have no problem with\nsparc-solaris, and I have a report from Tom Lane he has no problem with\nHP-UX.\n IWB interesting to test libc5-linux...\n\n> D.\n> \n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Wednesday, February 24, 1999 1:35 PM\n> To: [email protected]\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] VACUUM ANALYZE problem on linux \n> \n> \n> Oleg Broytmann <[email protected]> writes:\n> > 3. Run postmaster -b -D/usr/local/pgsql/data -o -Fe -S (to detach it)\n> > and run VACUUM ANALYZE - worked\n> > (I took these parameters from script /etc/init.d/postgres)\n> > 4. Run /etc/init.d/postgres start\n> > and run VACUUM ANALYZE - failed, no core file.\n> \n> So there is something different about the environment of your postmaster\n> when it's started by init.d versus when it's started by hand. Now you\n> just have to figure out what.\n> \n> I thought of environment variables, ulimit settings,\n> ownership/permission settings ... but it's not clear why any of these\n> would affect VACUUM in particular yet leave you able to do other stuff\n> successfully. Puzzling.\n> \n> \t\t\tregards, tom lane\n> \n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 25 Feb 1999 11:58:08 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] VACUUM ANALYZE problem on linux " }, { "msg_contents": "Say, is it possible that your VACUUM problem is a locale-related bug?\n\nIf init runs with a different locale setting than hand-started\nprocesses, then that would affect index ordering ... 
which could\nperhaps cause fatal problems while vacuuming indexes. I could\nbelieve that VACUUM is not able to cope with indexes that appear\nto be out of order according to the sort operators it's using.\n\nThis line of thought leads to the idea that indexes had better be\nmarked explicitly with the locale that they're for. Or else we\nneed to change Postgres so that the locale setting is hard-wired\nat compile time and not dependent on environment variables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Feb 1999 09:33:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux " }, { "msg_contents": "Hi!\n\nOn Thu, 25 Feb 1999, Tom Lane wrote:\n> Say, is it possible that your VACUUM problem is a locale-related bug?\n\n Hmm... Maybe.\n\n> If init runs with a different locale setting than hand-started\n> processes, then that would affect index ordering ... which could\n> perhaps cause fatal problems while vacuuming indexes. I could\n> believe that VACUUM is not able to cope with indexes that appear\n> to be out of order according to the sort operators it's using.\n\n I have no indexes in this database.\n Looks like I need to reproduce the same env on the command line. I'll try.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 25 Feb 1999 17:40:50 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux " }, { "msg_contents": "On Thu, 25 Feb 1999, Oleg Broytmann wrote:\n\n> \n> I have no indexes in this database.\n> Looks like I need to reproduce the same env on the command line. I'll try.\n\nwhy don't you create a sh script to start postgres out of, and set -x and \nof course a setenv etc.. Call the _SAME_ file from your init and from your \ncommand line and diff the results.\n\n-- \nIncredible Networks LTD Angelos Karageorgiou\n20 Karea st, +30.1.92.12.312 (voice)\n116 36 Athens, Greece. +30.1.92.12.314 (fax)\nhttp://www.incredible.com [email protected] (e-mail)\n\n", "msg_date": "Thu, 25 Feb 1999 17:41:50 +0200 (EET)", "msg_from": "Angelos Karageorgiou <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux " }, { "msg_contents": "> Say, is it possible that your VACUUM problem is a locale-related bug?\n> \n> If init runs with a different locale setting than hand-started\n> processes, then that would affect index ordering ... which could\n> perhaps cause fatal problems while vacuuming indexes. I could\n> believe that VACUUM is not able to cope with indexes that appear\n> to be out of order according to the sort operators it's using.\n> \n> This line of thought leads to the idea that indexes had better be\n> marked explicitly with the locale that they're for. Or else we\n> need to change Postgres so that the locale setting is hard-wired\n> at compile time and not dependent on environment variables.\n\nThat is an excellent point. I never considered that. In fact, initdb\nwould also need identical locale, wouldn't it?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Feb 1999 12:40:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux" }, { "msg_contents": "Oleg Broytmann wrote:\n\n> I have no indexes in this database.\n> Looks like I need to reproduce the same env on the command line. I'll try.\n\nThis (locale of indices) still might be it - the system tables have\nindices, which I believe get updated when you VACUUM (in fact, that's\nkind of the point).\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Thu, 25 Feb 1999 12:25:04 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux" }, { "msg_contents": "Hi!\n\nOn Thu, 25 Feb 1999, Ross J. Reedstrom wrote:\n> This (locale of indices) still might be it - the system tables have\n\n I recompiled postgres without locale, ran initdb, loaded my dump - VACUUM\nANALYZE worked :(\n I wonder what may be wrong with locale support that triggers only on\nglibc2?\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Fri, 26 Feb 1999 13:54:23 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE problem on linux" } ]
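A small illustration of why the locale setting matters to index order (a sketch; the exact orderings depend on the locale definitions installed): the same strings compare differently under the C locale and a typical national locale, so a btree built under one ordering can look out of order to sort operators running under the other, which is just the situation Lane suspects VACUUM cannot cope with.

    CREATE TABLE loctest (s text);
    INSERT INTO loctest VALUES ('a');
    INSERT INTO loctest VALUES ('B');
    SELECT s FROM loctest ORDER BY s;
    -- C locale:                B, a   (uppercase first, by ASCII code)
    -- typical national locale: a, B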
[ { "msg_contents": "Just found out Ingres 6.4 doesn't support OUTER joins, and that is a\npain.\n\nGives me motivation to get it working in PostgreSQL. When Thomas\nreturns and I am available, hopefully we can get it into the code for\n6.5.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Feb 1999 15:42:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "OUTER joins" } ]
[ { "msg_contents": "That would be awesome and I would be very grateful.\n\n\t-----Original Message-----\n\tFrom:\tBruce Momjian [SMTP:[email protected]]\n\tSent:\tWednesday, February 24, 1999 1:42 PM\n\tTo:\[email protected]\n\tSubject:\t[HACKERS] OUTER joins\n\n\tJust found out Ingres 6.4 doesn't support OUTER joins, and that is a\n\tpain.\n\n\tGives me motivation to get it working in PostgreSQL. When Thomas\n\treturns and I am available, hopefully we can get it into the code\nfor\n\t6.5.\n\n\t-- \n\t Bruce Momjian | http://www.op.net/~candle\n\t [email protected] | (610) 853-3000\n\t + If your life is a hard drive, | 830 Blythe Avenue\n\t + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Wed, 24 Feb 1999 15:32:15 -0600", "msg_from": "Michael Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] OUTER joins" } ]