[
{
"msg_contents": "With current sources:\n\nregression=> CREATE TABLE x (y text);\nCREATE\nregression=> CREATE VIEW z AS select * from x;\nCREATE\nregression=> INSERT INTO x VALUES ('foo');\nINSERT 411635 1\nregression=> INSERT INTO z VALUES ('bar');\nINSERT 411636 1\nregression=> select * from x;\ny\n---\nfoo\n(1 row)\n\nregression=> select * from z;\ny\n---\nfoo\n(1 row)\n\nOK, where'd tuple 411636 go? Seems to me that the insert should either\nhave been rejected or caused an insert into x, depending on how\ntransparent you think views are (I always thought they were\nread-only?). Dropping the data into never-never land and giving a\nmisleading success response code is not my idea of proper behavior.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 May 1999 10:42:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "INSERT INTO view means what exactly?"
},
{
"msg_contents": "Tom Lane wrote:\n\n>\n> With current sources:\n>\n> regression=> CREATE TABLE x (y text);\n> CREATE\n> regression=> CREATE VIEW z AS select * from x;\n> CREATE\n> regression=> INSERT INTO x VALUES ('foo');\n> INSERT 411635 1\n> regression=> INSERT INTO z VALUES ('bar');\n> INSERT 411636 1\n> regression=> select * from x;\n> y\n> ---\n> foo\n> (1 row)\n>\n> regression=> select * from z;\n> y\n> ---\n> foo\n> (1 row)\n>\n> OK, where'd tuple 411636 go? Seems to me that the insert should either\n> have been rejected or caused an insert into x, depending on how\n> transparent you think views are (I always thought they were\n> read-only?). Dropping the data into never-never land and giving a\n> misleading success response code is not my idea of proper behavior.\n\n Tuple 411636 went into data/base/regression/x :-)\n\n You can verify that by looking at the file - it surely lost\n its zero size and has a data block now. Also vacuum on that\n relation will tell that there is a tuple now!\n\n This is because from the parser's point of view there is no\n difference between a table and a view. There is no rule ON\n INSERT set up for relation x, so the rewrite system does\n nothing and thus the plan will become a real insert into\n relation x. But when doing the \"SELECT * FROM z\", the rule\n _RETz is triggered and it's rewritten into a \"SELECT * FROM\n x\". Thus you'll never see your data again (unless you drop\n the rule _RETz and select after that).\n\n Making views auto-transparent (by setting up INSERT, UPDATE\n and DELETE rules as well) is impossible, because in a join\n not selecting all attributes the system cannot guess where to\n take the missing ones from.\n\n It might be a good idea to abort if there's a SELECT rule on\n the result relation but not one for the actual operation\n performed. I'll put that onto my personal TODO for after\n v6.5.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 25 May 1999 17:52:05 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INSERT INTO view means what exactly?"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> Tom Lane wrote:\n>> regression=> INSERT INTO z VALUES ('bar');\n>> INSERT 411636 1\n>> \n>> OK, where'd tuple 411636 go?\n\n> Tuple 411636 went into data/base/regression/x :-)\n\n.../z, you meant --- yup, I see you are right. Weird. I didn't\nrealize that views had an underlying table.\n\n> It might be a good idea to abort if there's a SELECT rule on\n> the result relation but not one for the actual operation\n> performed. I'll put that onto my personal TODO for after\n> v6.5.\n\nI agree, that would be a good safety feature.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 May 1999 14:53:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] INSERT INTO view means what exactly? "
},
{
"msg_contents": "Tom Lane wrote:\n\n>\n> [email protected] (Jan Wieck) writes:\n> > Tom Lane wrote:\n> >> regression=> INSERT INTO z VALUES ('bar');\n> >> INSERT 411636 1\n> >>\n> >> OK, where'd tuple 411636 go?\n>\n> > Tuple 411636 went into data/base/regression/x :-)\n>\n> .../z, you meant --- yup, I see you are right. Weird. I didn't\n> realize that views had an underlying table.\n\n They ARE a table - only that a rule ON SELECT hides their\n (normal) emptiness :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 25 May 1999 21:22:51 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INSERT INTO view means what exactly?"
},
{
"msg_contents": "Does anyone know a cause for this?\n\n\n> With current sources:\n> \n> regression=> CREATE TABLE x (y text);\n> CREATE\n> regression=> CREATE VIEW z AS select * from x;\n> CREATE\n> regression=> INSERT INTO x VALUES ('foo');\n> INSERT 411635 1\n> regression=> INSERT INTO z VALUES ('bar');\n> INSERT 411636 1\n> regression=> select * from x;\n> y\n> ---\n> foo\n> (1 row)\n> \n> regression=> select * from z;\n> y\n> ---\n> foo\n> (1 row)\n> \n> OK, where'd tuple 411636 go? Seems to me that the insert should either\n> have been rejected or caused an insert into x, depending on how\n> transparent you think views are (I always thought they were\n> read-only?). Dropping the data into never-never land and giving a\n> misleading success response code is not my idea of proper behavior.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Sep 1999 15:44:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INSERT INTO view means what exactly?"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n>\n> Does anyone know a cause for this?\n\n This is one of the frequently asked RULE-/VIEW-questions. I\n think I've answered it at least a half dozen times up to now\n and if I recall right, explained it in detail in the\n documentation of the rule system too. Seems I failed to make\n it funny enough to let people read until the end ;-)\n\n Well, the cause is that there is a rewrite rule for SELECT,\n but none for INSERT. Thus, the INSERT goes through and gets\n executed as if \"z\" were a table, which it in fact is, because\n there are all catalog entries plus a relation file for\n tuples. So why should the executor throw them away?\n\n At the time of the INSERT, the relation file \"z\" lost its\n zero size, and as soon as you drop the _RETz rule, you can\n SELECT the \"bar\" (and order a beer).\n\n One possible solution would be to let the rewriter check on\n INSERT/UPDATE/DELETE if a SELECT rule exists but none for the\n requested event and complain about it. But I thought the\n rewriter is already complicated enough, so I've left it out.\n\n Another solution would be to set the ACL by default to\n owner=r and force people to change ACLs when they set up\n rules to make views updateable. Maybe the better solution.\n\n\nJan\n\n>\n>\n> > With current sources:\n> >\n> > regression=> CREATE TABLE x (y text);\n> > CREATE\n> > regression=> CREATE VIEW z AS select * from x;\n> > CREATE\n> > regression=> INSERT INTO x VALUES ('foo');\n> > INSERT 411635 1\n> > regression=> INSERT INTO z VALUES ('bar');\n> > INSERT 411636 1\n> > regression=> select * from x;\n> > y\n> > ---\n> > foo\n> > (1 row)\n> >\n> > regression=> select * from z;\n> > y\n> > ---\n> > foo\n> > (1 row)\n> >\n> > OK, where'd tuple 411636 go? Seems to me that the insert should either\n> > have been rejected or caused an insert into x, depending on how\n> > transparent you think views are (I always thought they were\n> > read-only?). Dropping the data into never-never land and giving a\n> > misleading success response code is not my idea of proper behavior.\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 21 Sep 1999 22:19:33 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INSERT INTO view means what exactly?"
}
]
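Jan's explanation in the thread above — a view is an ordinary relation whose ON SELECT rule hides its own (normally empty) storage, so an INSERT with no matching rule lands in the view's heap file — can be sketched as a toy rewriter. This is illustrative Python, not the PostgreSQL rewrite system; the relation names mirror the psql session quoted in the thread.

```python
# Toy model of the 1999-era behavior: every relation, view or not, has
# storage, and the rewriter only redirects events that have a rule.
storage = {"x": [], "z": []}        # z is a "view": a table plus a SELECT rule
rules = {("z", "SELECT"): "x"}      # _RETz: rewrite SELECT on z into SELECT on x

def run(event, rel, value=None):
    target = rules.get((rel, event), rel)  # no rule for this event -> untouched
    if event == "INSERT":
        storage[target].append(value)      # no ON INSERT rule on z, so this
        return None                        # lands in z's own heap file
    return list(storage[target])           # SELECT

run("INSERT", "x", "foo")
run("INSERT", "z", "bar")       # the "lost" tuple 411636
print(run("SELECT", "z"))       # ['foo'] -- rewritten to x; 'bar' is invisible
print(storage["z"])             # ['bar'] -- still physically there, as Jan says
```

Dropping the rule (deleting the `("z", "SELECT")` entry) would make `run("SELECT", "z")` return `['bar']`, matching Jan's "drop the rule _RETz and select after that".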
[
{
"msg_contents": "a create index updates the statistics in pg_class,\nthis leads to substantial performance degradation compared to\n6.4.2.\n\nIf you want to see what I mean simply run the performance test in\nour test subdirectory.\n\nI think the create index statement should not update this statistic.\n(at least not in the newly created empty table case) \nThis behavior would then be in sync with the create table behavior.\n\nAndreas\n",
"msg_date": "Tue, 25 May 1999 17:04:17 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "create index updates nrows statistics"
},
{
"msg_contents": "ZEUGSWETTER Andreas IZ5 <[email protected]> writes:\n> a create index updates the statistics in pg_class,\n> this leads to substantial performance degradation compared to\n> 6.4.2.\n\nCreate index did that in 6.4.2 as well --- how could it be making\nperformance worse?\n\n> I think the create index statement should not update this statistic.\n> (at least not in the newly created empty table case) \n> This behavior would then be in sync with the create table behavior.\n\nHmm, skip the update if size is found to be 0 you mean? Might be\nreasonable ... it would eliminate the problem that\n\tCREATE TABLE\n\tCREATE INDEX\n\tCOPY ...\nresults in horrible plans compared to doing it in the \"right\" order.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 May 1999 14:48:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create index updates nrows statistics "
}
]
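The bad-plan scenario Tom describes — CREATE TABLE; CREATE INDEX; COPY — can be illustrated with a toy cost decision. This is a hypothetical Python sketch, not the real optimizer: the point is only that the planner trusts the recorded row count, so a statistic frozen at zero by an index build on an empty table keeps steering it wrong after the data is loaded.

```python
def choose_scan(reltuples, has_index):
    # The planner believes the recorded statistic (pg_class.reltuples),
    # not the table's actual size on disk.
    if has_index and reltuples > 1000:
        return "index scan"
    return "sequential scan"

# CREATE INDEX on the empty table recorded reltuples = 0, so the rows
# loaded afterwards by COPY are still planned as if the table were tiny:
assert choose_scan(reltuples=0, has_index=True) == "sequential scan"

# Loading first and indexing afterwards records the true count instead:
assert choose_scan(reltuples=1_000_000, has_index=True) == "index scan"
```

Skipping the statistics update when the table is found empty, as Andreas proposes, leaves `reltuples` at whatever a later VACUUM records, avoiding the frozen-at-zero trap.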
[
{
"msg_contents": "> It bothers me that the GEQO results are not reliably reproducible\n> across platforms; that complicates debugging. I have been thinking\n> about suggesting that we ought to change GEQO to use a fixed random\n> seed value by default, with the variable random seed being available\n> only as a *non default* option. Comments anyone?\n> \nA few platforms (e.g. AIX) have their own random implementation, so even\nwith a fixed seed they produce different randoms than others :-(\nIt probably still helps iff behavior is predictable on the local machine.\n\nBut: I think we use rand for some security issue. We wouldn't want to make\nthat predictable.\n\nAndreas\n",
"msg_date": "Tue, 25 May 1999 17:23:06 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] 6.5 cvs: can't drop table "
}
]
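Andreas's two caveats — a fixed seed only buys reproducibility if the generator itself is platform-independent, and security-sensitive callers must keep their own unpredictable source — can be sketched as follows. Illustrative Python only; GEQO itself is C code calling the platform `rand()`, which is exactly why results differed across libcs.

```python
import random

# Reproducible search: the same fixed seed replays the same sequence.
# Python ships its own generator, so this holds on every platform --
# the property a libc rand() does not guarantee (the AIX-style mismatch).
geqo_a = random.Random(42)
geqo_b = random.Random(42)
assert [geqo_a.random() for _ in range(5)] == [geqo_b.random() for _ in range(5)]

# Security-sensitive randomness stays on a separate, unseeded source
# drawing from OS entropy, so making GEQO deterministic never touches it.
token = random.SystemRandom().getrandbits(64)
assert 0 <= token < 2**64
```

The design split shown here (deterministic seeded generator for debugging, OS-entropy generator for secrets) is the general resolution of the concern raised in the message.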
[
{
"msg_contents": "Because the nightly snapshot is named:\n\n postgresql.snapshot.tar.gz\n\nIf an FTP proxy server is running, the file actually\nreceived is whatever file was last cached in the proxy.\n\nIt would be better to name each nightly snapshot with a\ndate code, so snapshots can be downloaded as expected.\n\nThanks.\n",
"msg_date": "Tue, 25 May 1999 12:35:22 -0500",
"msg_from": "\"David R. Favor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with nightly snapshot naming"
}
]
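David's suggestion amounts to embedding a date code in the tarball name so every nightly file is unique and a stale FTP-proxy cache entry can never masquerade as the current snapshot. A minimal sketch of such a naming scheme (the exact format is hypothetical, Python):

```python
from datetime import date

def snapshot_name(d: date) -> str:
    # e.g. postgresql.snapshot.19990525.tar.gz -- unique per night, so a
    # proxy cannot serve yesterday's cached file under today's name
    return f"postgresql.snapshot.{d:%Y%m%d}.tar.gz"

print(snapshot_name(date(1999, 5, 25)))  # postgresql.snapshot.19990525.tar.gz
```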
[
{
"msg_contents": "> a create index updates the statistics in pg_class,\n> this leads to substantial performance degradation compared to\n> 6.4.2.\n> \n> If you want to see what I mean simply run the performance test in\n> our test subdirectory.\n> \n> I think the create index statement should not update this statistic.\n> (at least not in the newly created empty table case) \n> This behavior would then be in sync with the create table behavior.\n> \nTo fix this I urgently suggest the following patch:\n <<index.patch>> \nregression passes and is a little faster :-)\nperformance test: without patch 7min now 22 sec\n\nAndreas",
"msg_date": "Tue, 25 May 1999 20:05:50 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: create index updates nrows statistics"
},
{
"msg_contents": "I had this patch, but was unsure if it was safe for 6.5.\n\nLooks like Tom has decided. Good.\n\n\n> > a create index updates the statistics in pg_class,\n> > this leads to substantial performance degradation compared to\n> > 6.4.2.\n> > \n> > If you want to see what I mean simply run the performance test in\n> > our test subdirectory.\n> > \n> > I think the create index statement should not update this statistic.\n> > (at least not in the newly created empty table case) \n> > This behavior would then be in sync with the create table behavior.\n> > \n> To fix this I urgently suggest the following patch:\n> <<index.patch>> \n> regression passes and is a little faster :-)\n> performance test: without patch 7min now 22 sec\n> \n> Andreas\n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 May 1999 20:10:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: create index updates nrows statistics"
}
]
[
{
"msg_contents": "Wow, the beta is actually for people to test out if their applications will\nwork on the new version 6.5. It is in no way for actual use, only for\ntesting.\nPlease use 6.4.2 until 6.5 is released.\n\nMarc, can you name the betas something like alpha and not beta,\nsince beta seems to sound like something to use for production work\nto a lot of people (like everyone uses squid beta versions)\n\nAndreas\n> ----------\n> From: \tAri Halberstadt[SMTP:[email protected]]\n> Sent: \tTuesday, 25 May 1999 20:21\n> To: \tPostgres Hackers List\n> Subject: \t[HACKERS] pg_dump core dump, upgrading from 6.5b1 to 5/24\n> snapshot\n> \n> Hi,\n> \n> I posted a related question yesterday to the general list, but Bruce\n> Momjian <[email protected]> suggested it belongs on the hackers\n> list.\n> \n> Using pg_dump with 6.5b1 on solaris sparc crashes with a core dump. This\n> means I can't keep backups and I can't upgrade my data model without being\n> able to export the old data.\n> \n> On Bruce's suggestion I tried upgrading to a recent snapshot (in this case\n> the one from 5/24). When I killed the old postmaster and started the new\n> postmaster I got the following error in psql:\n> \n> bboard=> \\d\n> ERROR: nodeRead: Bad type 0\n> \n> which sounds to me like maybe there was a change in the data format. Since\n> I can't use pg_dump to get my data out of 6.5b1 I'm a bit stuck now. For\n> now I've (quickly) gone back to using 6.5b1 and not the snapshot.\n> \n> If one of the developers wants debug info on the coredump let me know what\n> you need (e.g., what commands to do in gdb--though I'll have to install\n> this or get run permissions from the sysadmin). I did a quick stack trace\n> in adb and it was dead in a call to fprintf, which I know does nothing to\n> help pinpoint the bug but it sounds like a typical way to kill a C\n> program.\n> \n> -- Ari Halberstadt mailto:[email protected] <http://www.magiccookie.com/>\n> PGP public key available at <http://www.magiccookie.com/pgpkey.txt>\n> \n> \n> \n> \n",
"msg_date": "Tue, 25 May 1999 20:39:00 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] pg_dump core dump, upgrading from 6.5b1 to 5/24 snapshot"
}
]
[
{
"msg_contents": "Ari Halberstadt <[email protected]> writes:\n> Using pg_dump with 6.5b1 on solaris sparc crashes with a core dump. This\n> means I can't keep backups and I can't upgrade my data model without being\n> able to export the old data.\n\n> On Bruce's suggestion I tried upgrading to a recent snapshot (in this case\n> the one from 5/24). When I killed the old postmaster and started the new\n> postmaster I got the following error in psql:\n\nHi Ari,\n I believe there's been at least one initdb-forcing change since 6.5b1,\nso you're right, you can't run a newer postmaster until you have\nextracted your data.\n\n But I think what Bruce really had in mind was to run pg_dump out of\nthe latest snapshot against your 6.5b1 postmaster. That should work\nas far as compatibility issues go. If we're really lucky, the bug has\nbeen fixed since then --- if not, it'd be easiest to try to find it in\ncurrent sources anyway...\n\n The coredump is happening in pg_dump, right, not in the connected\nbackend?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 May 1999 15:00:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump core dump,\n\tupgrading from 6.5b1 to 5/24 snapshot"
},
{
"msg_contents": "Hi,\n\nI posted a related question yesterday to the general list, but Bruce\nMomjian <[email protected]> suggested it belongs on the hackers\nlist.\n\nUsing pg_dump with 6.5b1 on solaris sparc crashes with a core dump. This\nmeans I can't keep backups and I can't upgrade my data model without being\nable to export the old data.\n\nOn Bruce's suggestion I tried upgrading to a recent snapshot (in this case\nthe one from 5/24). When I killed the old postmaster and started the new\npostmaster I got the following error in psql:\n\nbboard=> \\d\nERROR: nodeRead: Bad type 0\n\nwhich sounds to me like maybe there was a change in the data format. Since\nI can't use pg_dump to get my data out of 6.5b1 I'm a bit stuck now. For\nnow I've (quickly) gone back to using 6.5b1 and not the snapshot.\n\nIf one of the developers wants debug info on the coredump let me know what\nyou need (e.g., what commands to do in gdb--though I'll have to install\nthis or get run permissions from the sysadmin). I did a quick stack trace\nin adb and it was dead in a call to fprintf, which I know does nothing to\nhelp pinpoint the bug but it sounds like a typical way to kill a C program.\n\n-- Ari Halberstadt mailto:[email protected] <http://www.magiccookie.com/>\nPGP public key available at <http://www.magiccookie.com/pgpkey.txt>\n\n\n",
"msg_date": "Tue, 25 May 1999 14:03:59 -0500",
"msg_from": "Ari Halberstadt <[email protected]>",
"msg_from_op": false,
"msg_subject": "pg_dump core dump, upgrading from 6.5b1 to 5/24 snapshot"
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n>...\n> But I think what Bruce really had in mind was to run pg_dump out of\n>the latest snapshot against your 6.5b1 postmaster. That should work\n\nRunning the new pg_dump:\n\n$ pg_dump -p 4001 -D -f bboard.out bboard\nSegmentation Fault (core dumped)\n\n> The coredump is happening in pg_dump, right, not in the connected\n>backend?\n\nYes, it's pg_dump, not the backend.\n\n-- Ari Halberstadt mailto:[email protected] <http://www.magiccookie.com/>\nPGP public key available at <http://www.magiccookie.com/pgpkey.txt>\n\n\n",
"msg_date": "Tue, 25 May 1999 15:25:48 -0500",
"msg_from": "Ari Halberstadt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump core dump,\n upgrading from 6.5b1 to 5/24 snapshot"
},
{
"msg_contents": "Ari Halberstadt <[email protected]> writes:\n> Running the new pg_dump:\n\n> $ pg_dump -p 4001 -D -f bboard.out bboard\n> Segmentation Fault (core dumped)\n\n>> The coredump is happening in pg_dump, right, not in the connected\n>> backend?\n\n> Yes, it's pg_dump, not the backend.\n\nOK, could you see whether omitting -D and/or -f makes a difference?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 May 1999 18:03:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump core dump,\n\tupgrading from 6.5b1 to 5/24 snapshot"
}
]
[
{
"msg_contents": "The problem is with file conv.c in backend/utils/mb/\n....\nbig52mic(unsigned char *big5, unsigned char *p, int len)\n\n unsigned short c1;\n unsigned short big5buf,\n cnsBuf;\n \tunsigned char lc;\n char \tbogusBuf[2];\n int \t\ti;\n\n while (len > 0 && (c1 = *big5++))\n\t{\n \t\tif (c1 <= 0x007f U)\n ^^^^ my egcs 1.1.2 on linux(rh6.0) doesn't\n accept this space. Should be\n (probably) 0x007fU\n......\n\nThe same problem repeats in some more places.\n\n-- \n* David Sauer, student of Czech Technical University\n* electronic mail: [email protected] (mime compatible)\n",
"msg_date": "25 May 1999 21:48:23 +0200",
"msg_from": "David Sauer <[email protected]>",
"msg_from_op": true,
"msg_subject": "I can't compile cvs snapshot ..."
},
{
"msg_contents": "Thanks. I am working on it now.\n\n\n> The problem is with file conv.c in backend/utils/mb/\n> ....\n> big52mic(unsigned char *big5, unsigned char *p, int len)\n> \n> unsigned short c1;\n> unsigned short big5buf,\n> cnsBuf;\n> \tunsigned char lc;\n> char \tbogusBuf[2];\n> int \t\ti;\n> \n> while (len > 0 && (c1 = *big5++))\n> \t{\n> \t\tif (c1 <= 0x007f U)\n> ^^^^ my egcs 1.1.2 on linux(rh6.0) doesn't\n> accept this space. Should be\n> (probably) 0x007fU\n> ......\n> \n> The same problem repeats on some more places.\n> \n> -- \n> * David Sauer, student of Czech Technical University\n> * electronic mail: [email protected] (mime compatible)\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 25 May 1999 15:52:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I can't compile cvs snapshot ..."
},
{
"msg_contents": "> The problem is with file conv.c in backend/utils/mb/\n> ....\n> big52mic(unsigned char *big5, unsigned char *p, int len)\n> \n> unsigned short c1;\n> unsigned short big5buf,\n> cnsBuf;\n> \tunsigned char lc;\n> char \tbogusBuf[2];\n> int \t\ti;\n> \n> while (len > 0 && (c1 = *big5++))\n> \t{\n> \t\tif (c1 <= 0x007f U)\n> ^^^^ my egcs 1.1.2 on linux(rh6.0) doesn't\n> accept this space. Should be\n> (probably) 0x007fU\n> ......\n> \n> The same problem repeats on some more places.\n\nOK, I have changed it to:\n\n\t0x007f -> (unsigned)0x7f\n\nThis seems like the intent, and pgindent doesn't like the old format.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 25 May 1999 17:47:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I can't compile cvs snapshot ..."
},
{
"msg_contents": "> The problem is with file conv.c in backend/utils/mb/\n> ....\n> big52mic(unsigned char *big5, unsigned char *p, int len)\n> \n> unsigned short c1;\n> unsigned short big5buf,\n> cnsBuf;\n> \tunsigned char lc;\n> char \tbogusBuf[2];\n> int \t\ti;\n> \n> while (len > 0 && (c1 = *big5++))\n> \t{\n> \t\tif (c1 <= 0x007f U)\n> ^^^^ my egcs 1.1.2 on linux(rh6.0) doesn't\n> accept this space. Should be\n> (probably) 0x007fU\n\nBefore it was \"0x007fU\" and now is \"0x007f U\". Probably pgindent did\nsomething.\n\n>This seems like the intent, and pgindent doesn't like the old format.\n\nDoes this mean we are not allowed to use \"U\"? I think this is legal\naccording to the standard C grammar.\n---\nTatsuo Ishii\n\n",
"msg_date": "Wed, 26 May 1999 10:08:57 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I can't compile cvs snapshot ... "
},
{
"msg_contents": "> Before it was \"0x007fU\" and now is \"0x007f U\". Probably pgindent did\n> something.\n> \n> >This seems like the intent, and pgindent doesn't like the old format.\n> \n> Does this mean we are not allowed to use \"U\"? I think this is leagal\n> according to the standard C grammer.\n\nWell, it seems BSD indent mucks up 0x7fU, so I would prefer if we didn't\nuse it. If you use it, pgindent will break it the next time I run it,\nand I will manually convert it to (unsigned). Is that OK?\n\nI can probably add some code to pgindent to work around the problem, as\nI have done for other indent issues if you would prefer.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 25 May 1999 22:50:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I can't compile cvs snapshot ..."
},
{
"msg_contents": ">> Does this mean we are not allowed to use \"U\"? I think this is leagal\n>> according to the standard C grammer.\n>\n>Well, it seems BSD indent mucks up 0x7fU, so I would prefer if we didn't\n>use it. If you use it, pgindent will break it the next time I run it,\n>and I will manually convert it to (unsigned). Is that OK?\n\nOk.\n\n>I can probably add some code to pgindent to work around the problem, as\n>I have done for other indent issues if you would prefer.\n\nWell, it's a trivial problem, so I don't care about it.\n---\nTatsuo Ishii\n\n",
"msg_date": "Wed, 26 May 1999 13:03:34 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I can't compile cvs snapshot ... "
},
{
"msg_contents": "> >> Does this mean we are not allowed to use \"U\"? I think this is leagal\n> >> according to the standard C grammer.\n> >\n> >Well, it seems BSD indent mucks up 0x7fU, so I would prefer if we didn't\n> >use it. If you use it, pgindent will break it the next time I run it,\n> >and I will manually convert it to (unsigned). Is that OK?\n> \n> Ok.\n\nI hate for pgindent to be mucking with your multi-byte code.\n\n> >I can probably add some code to pgindent to work around the problem, as\n> >I have done for other indent issues if you would prefer.\n> \n> Well, it's a trivial problem, so I don't care about it.\n\nI was hoping you would say that. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 May 1999 00:14:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I can't compile cvs snapshot ..."
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Does this mean we are not allowed to use \"U\"? I think this is leagal\n>> according to the standard C grammer.\n\n> Well, it seems BSD indent mucks up 0x7fU, so I would prefer if we didn't\n> use it.\n\nIf pgindent mucks up standard C constructs then pgindent is broken.\n\nThis is not open to debate --- if you are going to run our entire\nsource base through pgindent just a few days before every release,\nthen the tool has to be something we can have 100 percent, no-questions-\nasked confidence in. Telling people to obey weird little coding\nconventions is no answer. (If everyone reliably did that, we'd not\nneed pgindent in the first place.)\n\nIt appears that BSD indent doesn't have a problem with 0xnnnL, so\nteaching it about 0xnnnU can't be that hard if you have the source.\n(I don't...)\n\nMaybe it is time to take another look at GNU indent?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 May 1999 09:55:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I can't compile cvs snapshot ... "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Does this mean we are not allowed to use \"U\"? I think this is leagal\n> >> according to the standard C grammer.\n> \n> > Well, it seems BSD indent mucks up 0x7fU, so I would prefer if we didn't\n> > use it.\n> \n> If pgindent mucks up standard C constructs then pgindent is broken.\n> \n> This is not open to debate --- if you are going to run our entire\n> source base through pgindent just a few days before every release,\n> then the tool has to be something we can have 100 percent, no-questions-\n> asked confidence in. Telling people to obey weird little coding\n> conventions is no answer. (If everyone reliably did that, we'd not\n> need pgindent in the first place.)\n> \n\npgindent gives us so many advantages, why worry about a small thing like\n0xffU? I will add to the patch I supply in the pgindent directory to\nhandle U also.\n\n> It appears that BSD indent doesn't have a problem with 0xnnnL, so\n> teaching it about 0xnnnU can't be that hard if you have the source.\n> (I don't...)\n> \n> Maybe it is time to take another look at GNU indent?\n\nYou don't want to go there. See the tools/pgindent directory for an\nexplanation. GNU indent has many bugs that whack the code silly. Try\nrunning any directory with GNU indent and compare it to the pgindent\nversion.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 May 1999 10:00:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I can't compile cvs snapshot ..."
},
{
"msg_contents": "> If pgindent mucks up standard C constructs then pgindent is broken.\n> \n> This is not open to debate --- if you are going to run our entire\n> source base through pgindent just a few days before every release,\n> then the tool has to be something we can have 100 percent, no-questions-\n> asked confidence in. Telling people to obey weird little coding\n> conventions is no answer. (If everyone reliably did that, we'd not\n> need pgindent in the first place.)\n> \n> It appears that BSD indent doesn't have a problem with 0xnnnL, so\n> teaching it about 0xnnnU can't be that hard if you have the source.\n> (I don't...)\n\nOK, here is the patch that is not in the tools/pgindent directory to\nunderstand 0x7fU constants. I will put the U's back in the constants.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n*** ./lexi.c.orig\tWed May 26 10:50:54 1999\n--- ./lexi.c\tWed May 26 10:51:08 1999\n***************\n*** 186,192 ****\n \t\t\t\t*e_token++ = *buf_ptr++;\n \t\t\t}\n \t\t}\n! \t if (*buf_ptr == 'L' || *buf_ptr == 'l')\n \t\t*e_token++ = *buf_ptr++;\n \t}\n \telse\n--- 186,193 ----\n \t\t\t\t*e_token++ = *buf_ptr++;\n \t\t\t}\n \t\t}\n! \t if (*buf_ptr == 'L' || *buf_ptr == 'U' ||\n! \t\t*buf_ptr == 'l' || *buf_ptr == 'u')\n \t\t*e_token++ = *buf_ptr++;\n \t}\n \telse",
"msg_date": "Wed, 26 May 1999 11:06:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I can't compile cvs snapshot ..."
}
]
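The lexi.c patch in the thread above extends the numeric-token scanner to consume `U`/`u` suffixes the same way it already consumed `L`/`l`, so `0x007fU` survives reindentation as a single token. The same idea as a standalone sketch (illustrative Python, not the BSD indent source):

```python
def scan_number(src: str, i: int) -> str:
    # Scan one numeric token starting at src[i], including integer
    # suffixes, so "0x007fU" stays whole instead of becoming "0x007f U".
    j = i
    if src[j:j + 2].lower() == "0x":
        j += 2
        while j < len(src) and src[j] in "0123456789abcdefABCDEF":
            j += 1
    else:
        while j < len(src) and src[j].isdigit():
            j += 1
    while j < len(src) and src[j] in "uUlL":  # the cases the patch adds
        j += 1
    return src[i:j]

assert scan_number("0x007fU)", 0) == "0x007fU"  # unbroken with the fix
assert scan_number("1234L;", 0) == "1234L"      # L already worked pre-patch
```

Without the suffix loop, the scanner stops at the `f` of `0x007f` and a space gets inserted before the orphaned `U`, which is exactly the compile failure David reported.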
[
{
"msg_contents": "ZEUGSWETTER Andreas IZ5 <[email protected]> wrote:\n>Wow, the beta is actually for people to test out if their applications will\n>work on the new version 6.5. It is by no way for actual use, only for\n>testing.\n>Please use 6.4.2 until 6.5 is released.\n\nI was running 6.4.2 but it had a terrible memory and/or table leak that\nmade it unusable--one table got to 1GB the last week I used 6.4.2, with\nonly a few MB of real data. Vacuum was crashing, pg_dump was crashing, the\nbackend was dying. I had to go to 6.5b, which fortunately seems to have\nfixed the exploding table problem.\n\n>Marc, can you name the betas something like alpha and not beta,\n>since beta seems to sound like something to use for productive work\n>to a lot of people (like everyone uses squid beta versions)\n\nBeta does sound more reliable than alpha.\n\n-- Ari Halberstadt mailto:[email protected] <http://www.magiccookie.com/>\nPGP public key available at <http://www.magiccookie.com/pgpkey.txt>\n\n\n",
"msg_date": "Tue, 25 May 1999 15:20:50 -0500",
"msg_from": "Ari Halberstadt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: [HACKERS] pg_dump core dump, upgrading from 6.5b1 to 5/24 snapshot"
}
] |
[
{
"msg_contents": "I have built PostgreSQL 6.5 on SCO UnixWare 7 and SCO OpenServer 5.\n\nI've also written a SCO-specific installation FAQ, which I'd appreciate\nit if you could drop into ~pgsql/doc in the source tree, along with the\nother platform-specific FAQs. It would be nice to have a copy on the\nweb site along with the others as well.\n\nThanks!\n\nAndrew Merrill",
"msg_date": "Tue, 25 May 1999 14:47:31 -0700",
"msg_from": "Andrew Merrill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: supported platform list"
},
{
"msg_contents": "Thanks. Will commit soon...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 26 May 1999 02:07:50 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: supported platform list"
}
] |
[
{
"msg_contents": "\nStandard failures:\n\ngrep failed regress.out\nfloat8 .. failed\ngeometry .. failed\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 25 May 1999 23:07:54 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "v6.5 under FreeBSD ..."
},
{
"msg_contents": "> Standard failures:\n> grep failed regress.out\n> float8 .. failed\n> geometry .. failed\n\nOK. Version of FreeBSD??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 27 May 1999 03:46:44 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.5 under FreeBSD ..."
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > Standard failures:\n> > grep failed regress.out\n> > float8 .. failed\n> > geometry .. failed\n> \n> OK. Version of FreeBSD??\n\nAFAIK, all versions.\nI used 2.2.6 before and use 3.0 now.\n\nVadim\n",
"msg_date": "Thu, 27 May 1999 11:48:10 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.5 under FreeBSD ..."
},
{
"msg_contents": "On Thu, 27 May 1999, Thomas Lockhart wrote:\n\n> > Standard failures:\n> > grep failed regress.out\n> > float8 .. failed\n> > geometry .. failed\n> \n> OK. Version of FreeBSD??\n\nSorry...this is FreeBSD 4.0-CURRENT, which is what our Ports collection is\nbased off of. Elf, with EGCS as default compiler...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 27 May 1999 08:06:48 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] v6.5 under FreeBSD ..."
}
] |
[
{
"msg_contents": "To return consistent results pg_dump should run all queries\nin a single transaction, in serializable mode. It's an old problem.\nBut now when selects don't block writers we are able to do this.\n\nComments/objections?\n\nVadim\n",
"msg_date": "Wed, 26 May 1999 12:09:11 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump inconsistences"
},
{
"msg_contents": "> To return consistent results pg_dump should run all queries\n> in single transaction, in serializable mode. It's old problem.\n> But now when selects don't block writers we are able to do this.\n> \n> Comments/objections?\n\nIf I understood what you were saying, I may object, but I don't, so go\nahead. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 May 1999 00:33:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump inconsistences"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > To return consistent results pg_dump should run all queries\n> > in single transaction, in serializable mode. It's old problem.\n> > But now when selects don't block writers we are able to do this.\n> >\n> > Comments/objections?\n> \n> If I understood what you were saying, I may object, but I don't, so go\nahead. :-)\n\nAs far as I see each COPY table TO STDOUT is executed in\nits own transaction. This may cause referential inconsistencies\n(pg_dump saves foreign keys, then other transaction deletes some\nforeign and primary keys and commits, now pg_dump saves\nprimary keys and loses some of them, breaking referential\nintegrity).\n\nVadim\n",
"msg_date": "Wed, 26 May 1999 12:55:34 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump inconsistences"
},
{
"msg_contents": "> \n> As far as I see each COPY table TO STDOUT is executed in\n> its own transaction. This may cause referential inconsistences\n> (pg_dump saves foreign keys, then other transaction deletes some\n> foreign and primary keys and commits, now pg_dump saves\n> primary keys and loses some of them, breaking referential\n> integrity).\n\nOh, I get it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 May 1999 00:57:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump inconsistences"
},
{
"msg_contents": "At 12:09 PM 5/26/99 +0800, Vadim Mikheev wrote:\n>To return consistent results pg_dump should run all queries\n>in single transaction, in serializable mode. It's old problem.\n>But now when selects don't block writers we are able to do this.\n\n>Comments/objections?\n\nThis would remove one of the major barriers to deployment of\nPostgres in serious, heavy-traffic environments, particularly\nthe Web, where no clock boundaries are respected for globally\ninteresting sites.\n\nThe use of db's to back web sites is the most intriguing aspect\nof modern db development, IMO. Why else would an old compiler\nlike me have an interest? :)\n\nAnd Postgres has problems in this regard, one of which you\npoint out in this post.\n\nLet me hasten to add that the development direction of the\ndb is congruent with the needs of web site users like myself.\nThe removal of table-level locking, for instance. Removing\nof fsync after select-only queries would help a lot, too,\nafter experimentation verified by a comment to the effect\nthat postgres does this (from a future enhancements list) I\nsped my site a lot by including selects in begin/end transactions.\nYUK.\n\nOn and on. Anyway, y'all are moving in the right direction(s)\nand at a good pace, too. Why not do things such as you \ndescribe, when doing so better places your product in the\nmix for continuous use ala the Web?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Tue, 25 May 1999 22:03:06 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump inconsistences"
},
{
"msg_contents": "At 12:57 AM 5/26/99 -0400, Bruce Momjian wrote:\n\n>> As far as I see each COPY table TO STDOUT is executed in\n>> its own transaction. This may cause referential inconsistences\n>> (pg_dump saves foreign keys, then other transaction deletes some\n>> foreign and primary keys and commits, now pg_dump saves\n>> primary keys and loses some of them, breaking referential\n>> integrity).\n\n>Oh, I get it.\n\nFor some reason, I always thought this was a \"feature\", which is\nwhy Vadim's suggestion to fix it seems ... obvious?\n\nThis gives a consistent snapshot ability, right? Currently, \none must shut down access to the db in order to ensure referential\nconsistencies, a pain on a 24/7 web site, admittedly a relatively\nnew application.\n\nOracle, last I heard, wants about $9,000 for deployment on a\nweb platform (despite the lower price of $1,350 for a five\nuser license). This is a major reason I'm here. Sybase is\nfree at the moment, but has performance problems with the\nparticular web server I'm using (AOLServer), weird because \nthat server's so efficient with other dbs like Oracle and\nPostgres (the interface 'tween web server and db is different,\nthat's why, Sybase is a good performing db on its own).\n\nPostgres could be a major factor in this world IF it can\nget past its flakey reputation. The recent large memory\nleak bug fix is a step in the right direction, a large one.\nSo is the notion of a consistent dump by pg_dump.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Tue, 25 May 1999 22:13:21 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump inconsistences"
}
] |
[
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n>OK, could you see whether omitting -D and/or -f makes a difference?\n\nIt works with 6.5b1 and the 5/24 snapshot when omitting the -D. It crashes\nwith -D or -d. It works with -f. So,\n\npg_dump -f bboard.out bboard\n\nworks ok. Thanks for suggesting this.\n\n-- Ari Halberstadt mailto:[email protected] <http://www.magiccookie.com/>\nPGP public key available at <http://www.magiccookie.com/pgpkey.txt>\n\n\n",
"msg_date": "Wed, 26 May 1999 01:10:47 -0500",
"msg_from": "Ari Halberstadt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump core dump,\n upgrading from 6.5b1 to 5/24 snapshot"
}
] |
[
{
"msg_contents": "> - update bench set k500k = k500k + 1 where k100 = 30\n> with indices unknown\n> without indices 36 seconds\n\ncan you run an explain: explain update bench set k500k = k500k + 1 where k100 = 30;\n\n> Still the poor update routines do not explain the\n> strange behavior, that the postmaster runs for\n> hours using at most 10% CPU, and all the time\n> heavy disk activity is observed.\n\nI suspect it is doing a seq scan. Thus explaining the heavy disk activity.\nI have previously sent in a patch which will fix this if someone applies it.\n\nAndreas\n\n",
"msg_date": "Wed, 26 May 1999 09:25:59 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE"
},
{
"msg_contents": "ZEUGSWETTER Andreas IZ5 <[email protected]> writes:\n> can you run an: explain update bench set k500k = k500k + 1 where k100 = 30;\n> I suspect it is doing a seq scan.\n\nNo, that's not it:\n\ntest=> explain update bench set k500k = k500k + 1 where k100 = 30;\nNOTICE: QUERY PLAN:\n\nIndex Scan using k100 on bench (cost=179.05 rows=2082 width=154)\n\n\nThe benchmark loads the tables first and then builds indexes, and\nin fact does a vacuum analyze after that! So the stats should be fine.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 May 1999 09:39:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] strange behavior of UPDATE "
}
] |
[
{
"msg_contents": "> > a create index updates the statistics in pg_class,\n> > this leads to substantial performance degradation compared to\n> > 6.4.2.\n> \n> Create index did that in 6.4.2 as well --- how could it be making\n> performance worse?\n> \nI am not sure why, but in 6.4.2 a create table, create index, insert,\nselect * from tab where indexedcol=5 did actually use the index path,\neven if table reltuples and relpages was 0.\nIt currently uses a seq scan, which is exactly what we wanted to avoid \nin the newly created table case, but do want on an actually small table.\n\nPlease apply the patch I previously sent.\n\nAndreas\n",
"msg_date": "Wed, 26 May 1999 09:28:07 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] create index updates nrows statistics "
},
{
"msg_contents": ">\n> > > a create index updates the statistics in pg_class,\n> > > this leads to substantial performance degradation compared to\n> > > 6.4.2.\n> >\n> > Create index did that in 6.4.2 as well --- how could it be making\n> > performance worse?\n> >\n> I am not sure why, but in 6.4.2 a create table, create index, insert,\n> select * from tab where indexedcol=5 did actually use the index path,\n> even if table reltuples and relpages was 0.\n> It currently uses a seq scan, which is exactly what we wanted to avoid\n> in the newly created table case, but do want on an actually small table.\n>\n> Please apply the patch I previously sent.\n\n From memory not verified:\n\n Doesn't CREATE INDEX update pg_statistics? I think it does so\n the faked statistics only cause different joins to happen as\n long as there is no index created immediately after CREATE\n TABLE (HASHJOIN vs. NESTLOOP).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 26 May 1999 09:43:29 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] create index updates nrows statistics"
},
{
"msg_contents": "ZEUGSWETTER Andreas IZ5 <[email protected]> writes:\n>>>> a create index updates the statistics in pg_class,\n>>>> this leads to substantial performance degradation compared to\n>>>> 6.4.2.\n>> \n>> Create index did that in 6.4.2 as well --- how could it be making\n>> performance worse?\n>> \n> I am not sure why, but in 6.4.2 a create table, create index, insert,\n> select * from tab where indexedcol=5 did actually use the index path,\n> even if table reltuples and relpages was 0.\n\nHmm, you're right. Using 6.4.2:\n\nplay=> create table foobar (f1 int4);\nCREATE\nplay=> explain select * from foobar where f1 = 4;\nNOTICE: QUERY PLAN:\n\nSeq Scan on foobar (cost=0.00 size=0 width=4)\n\nplay=> create index foobar_f1 on foobar(f1);\nCREATE\nplay=> explain select * from foobar where f1 = 4;\nNOTICE: QUERY PLAN:\n\nIndex Scan using foobar_f1 on foobar (cost=0.00 size=0 width=4)\n\nwhereas in 6.5 you still get a sequential scan because it estimates the\ncost of the index scan at 1.0 not 0.0. I think I'm to blame for this\nbehavior change: I remember twiddling costsize.c to provide more\nrealistic numbers for an index scan, and in particular to ensure that\nan index scan would be considered more expensive than a sequential scan\nunless it was able to eliminate a useful number of rows. But when\nthe estimated relation size is zero (or very small) the selectivity\nbenefit can't make up even a mere 1.0 cost bias.\n\nI believe 6.5 is operating as it should --- 6.4 was producing inferior\nplans for small tables. But it is clearly a Bad Thing to allow the 6.5\noptimizer to believe that a relation is empty when it isn't. I concur\nwith your suggestion to hack up CREATE INDEX so that creating an index\nbefore you load the table isn't quite such a losing proposition.\n\n> Please apply the patch I previously sent.\n\nWill do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 May 1999 18:18:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] create index updates nrows statistics "
}
] |
[
{
"msg_contents": "Jan wrote:\n> From memory not verified:\n> \n> Doesn't CREATE INDEX update pg_statistics? \n> \nNo.\n\n> I think it does so\n> the faked statistics only cause different joins to happen as\n> long as there is no index created immediately after CREATE\n> TABLE (HASHJOIN vs. NESTLOOP).\n> \nNo, create index on a newly created table does:\n\t1. set reltuples and relpages of the table to 0\n\t2. set relpages=2 and a calculated reltuples of 2048 or below\n\t on the index depending on how many columns it has\n\nThis leads to a rather strange state where reltuples of table <\nreltuples of index. It forces seq scans on update and select\nof single table. (see E. Mergl's update problem)\n\nAndreas\n\n",
"msg_date": "Wed, 26 May 1999 09:58:44 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: [HACKERS] create index updates nrows statistics"
}
] |
[
{
"msg_contents": "Tom,\n\nI continue my research of irregular crashes I experienced with\ncurrent 6.5 and big joins.\nInitial data:\n1. My Crash postgres script\n mkjoindata.pl --joins 14 --rows 20 | psql test\n2. Linux 2.0.36, 64Mb Ram, 200Mhz, egcs 1.12 release\n\nResults:\n1. No crashes if I compiled with -O0 -g options !\n2. No crashes at home - it's the same computer/compiler but\n different kernel - 2.2.9.\n3. No crashes under FreeBSD -3.1 elf\n4. It crashes much less if I separate creating table/indices and doing select !\n a) mkjoindata.pl --joins 14 --rows 20 | psql test\n sometimes crashes\n b) save just select query into tt.sql and run many times psql test < tt.sql\n crashes very rarely\n\nCould you explain such strange behaviour in 4)?\nI was able to run big join with 40, 60 tables in reasonable time.\n\n\nI could provide a backtrace but it's the same as I posted several times.\n\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Wed, 26 May 1999 12:25:43 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.5 cvs: weird crashes "
}
] |
[
{
"msg_contents": "\n> Hi. I'd like to update the ports list in the docs to include\n> references to v6.5 for the various platforms for which PostgreSQL-6.5b\n> has been tested.\n> \nCurrent CVS (after pgindent) compiles and regresses ok on AIX 4.3.2 \nusing the IBM compiler. It has the following problems:\n1. AIX has int8,int16,int32,int64 in /usr/include/inttypes.h \n\t--> configure fails to find snprintf support for int8 (because it\nincludes stdio.h)\n\tI feel this is an IBM problem. I changed my inttypes.h\n2. No AIX in Makefile.shlib --> plpgsql.so is not built / no rule.\n\ta number of other platforms are also missing there\n\ta working rule is often in Makefile.port, but only for a single\nobject\n\tnot multiple, which plpgsql has.\n\tThe single object rule in Makefile.aix can be used to make a\nplpgsql.so\n\tfrom libplpgsql.a. I built it manually.\n3. libpq++ does not work because xlC does not have the string type/class ? \n\nAndreas\n",
"msg_date": "Wed, 26 May 1999 12:02:48 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Call for updates!"
}
] |
[
{
"msg_contents": "I still can't get this type creation working. I get the subject error\nwhenever I try to select on the new type if it is indexed. Here is a sample.\n\ndarcy=> create table x (g glaccount, i int);\nCREATE\ndarcy=> insert into x values ('12345-0000', 1);\nINSERT 29124 1\ndarcy=> select * from x where g = '12345-0000';\n g|i\n----------+-\n12345-0000|1\n(1 row)\n\ndarcy=> create unique index y on x (g);\nCREATE\ndarcy=> select * from x where g = '12345-0000';\nERROR: fmgr_info: function 0: cache lookup failed\n\nAs you can see, the select worked until I added the index. Here is the\nSQL that created the glaccount type. I hope to rewrite the documentation\nbased on this but I need to get it working first. Any ideas?\n\n--\n--\tPostgreSQL code for GLACCOUNTs.\n--\n--\t$Id$\n--\n\nload '/usr/local/pgsql/modules/glaccount.so';\n\n--\n--\tInput and output functions and the type itself:\n--\n\ncreate function glaccount_in(opaque)\n\treturns opaque\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_out(opaque)\n\treturns opaque\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate type glaccount (\n\tinternallength = 16,\n\texternallength = 13,\n\tinput = glaccount_in,\n\toutput = glaccount_out\n);\n\n--\n-- Some extra functions\n--\n\ncreate function glaccount_major(glaccount)\n\treturns int\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_minor(glaccount)\n\treturns int\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_cmp(glaccount, glaccount)\n\treturns int\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\n--\n--\tThe various boolean tests:\n--\n\ncreate function glaccount_eq(glaccount, glaccount)\n\treturns bool\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_ne(glaccount, glaccount)\n\treturns bool\n\tas 
'/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_lt(glaccount, glaccount)\n\treturns bool\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_gt(glaccount, glaccount)\n\treturns bool\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_le(glaccount, glaccount)\n\treturns bool\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\ncreate function glaccount_ge(glaccount, glaccount)\n\treturns bool\n\tas '/usr/local/pgsql/modules/glaccount.so'\n\tlanguage 'c';\n\n--\n--\tNow the operators. Note how some of the parameters to some\n--\tof the 'create operator' commands are commented out. This\n--\tis because they reference as yet undefined operators, and\n--\twill be implicitly defined when those are, further down.\n--\n\ncreate operator < (\n\tleftarg = glaccount,\n\trightarg = glaccount,\n--\tnegator = >=,\n\tprocedure = glaccount_lt\n);\n\ncreate operator <= (\n\tleftarg = glaccount,\n\trightarg = glaccount,\n--\tnegator = >,\n\tprocedure = glaccount_le\n);\n\ncreate operator = (\n\tleftarg = glaccount,\n\trightarg = glaccount,\n\tcommutator = =,\n--\tnegator = <>,\n\tprocedure = glaccount_eq\n);\n\ncreate operator >= (\n\tleftarg = glaccount,\n\trightarg = glaccount,\n\tnegator = <,\n\tprocedure = glaccount_ge\n);\n\ncreate operator > (\n\tleftarg = glaccount,\n\trightarg = glaccount,\n\tnegator = <=,\n\tprocedure = glaccount_gt\n);\n\ncreate operator <> (\n\tleftarg = glaccount,\n\trightarg = glaccount,\n\tnegator = =,\n\tprocedure = glaccount_ne\n);\n\n-- Now, let's see if we can set it up for indexing\n\nINSERT INTO pg_opclass (opcname, opcdeftype) \n\tSELECT 'glaccount_ops', oid FROM pg_type WHERE typname = 'glaccount';\n\nSELECT o.oid AS opoid, o.oprname\n\tINTO TEMP TABLE glaccount_ops_tmp\n\tFROM pg_operator o, pg_type t\n\tWHERE o.oprleft = t.oid AND\n\t\to.oprright = t.oid AND\n\t\tt.typname = 'glaccount';\n\nINSERT INTO pg_amop 
(amopid, amopclaid, amopopr, amopstrategy,\n\t\t\tamopselect, amopnpages)\n\tSELECT am.oid, opcl.oid, c.opoid, 1,\n\t\t\t'btreesel'::regproc, 'btreenpage'::regproc\n\tFROM pg_am am, pg_opclass opcl, glaccount_ops_tmp c\n\tWHERE amname = 'btree' AND\n\t\topcname = 'glaccount_ops' AND\n\t\tc.oprname = '<';\n\nINSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy,\n\t\t\tamopselect, amopnpages)\n\tSELECT am.oid, opcl.oid, c.opoid, 2,\n\t\t\t'btreesel'::regproc, 'btreenpage'::regproc\n\tFROM pg_am am, pg_opclass opcl, glaccount_ops_tmp c\n\tWHERE amname = 'btree' AND\n\t\topcname = 'glaccount_ops' AND\n\t\tc.oprname = '<=';\n\nINSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy,\n\t\t\tamopselect, amopnpages)\n\tSELECT am.oid, opcl.oid, c.opoid, 3,\n\t\t\t'btreesel'::regproc, 'btreenpage'::regproc\n\tFROM pg_am am, pg_opclass opcl, glaccount_ops_tmp c\n\tWHERE amname = 'btree' AND\n\t\topcname = 'glaccount_ops' AND\n\t\tc.oprname = '=';\n\nINSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy,\n\t\t\tamopselect, amopnpages)\n\tSELECT am.oid, opcl.oid, c.opoid, 4,\n\t\t\t'btreesel'::regproc, 'btreenpage'::regproc\n\tFROM pg_am am, pg_opclass opcl, glaccount_ops_tmp c\n\tWHERE amname = 'btree' AND\n\t\topcname = 'glaccount_ops' AND\n\t\tc.oprname = '>=';\n\nINSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy,\n\t\t\tamopselect, amopnpages)\n\tSELECT am.oid, opcl.oid, c.opoid, 5,\n\t\t\t'btreesel'::regproc, 'btreenpage'::regproc\n\tFROM pg_am am, pg_opclass opcl, glaccount_ops_tmp c\n\tWHERE amname = 'btree' AND\n\t\topcname = 'glaccount_ops' AND\n\t\tc.oprname = '>';\n\nINSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n\tSELECT a.oid, b.oid, c.oid, 1\n\t\tFROM pg_am a, pg_opclass b, pg_proc c\n\t\tWHERE a.amname = 'btree' AND\n\t\t\tb.opcname = 'glaccount_ops' AND\n\t\t\tc.proname = 'glaccount_cmp';\n\nINSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy,\n\t\t\tamopselect, amopnpages)\n\tSELECT am.oid, opcl.oid, 
c.opoid, 1,\n\t\t\t'hashsel'::regproc, 'hashnpage'::regproc\n\tFROM pg_am am, pg_opclass opcl, glaccount_ops_tmp c\n\tWHERE amname = 'hash' AND\n\t\topcname = 'glaccount_ops' AND\n\t\tc.oprname = '=';\n\nINSERT INTO pg_description (objoid, description)\n\tSELECT oid, 'Two part G/L account'\n\t\tFROM pg_type WHERE typname = 'glaccount';\n\n--\n--\teof\n--\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 26 May 1999 08:08:47 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help: fmgr_info: function 0: cache lookup failed"
},
{
"msg_contents": ">\n> I still can't get this type creation working. I get the subject error\n> whenever I try to select on the new type if it is indexed. Here is a sample.\n>\n> darcy=> create table x (g glaccount, i int);\n> CREATE\n> darcy=> insert into x values ('12345-0000', 1);\n> INSERT 29124 1\n> darcy=> select * from x where g = '12345-0000';\n> g|i\n> ----------+-\n> 12345-0000|1\n> (1 row)\n>\n> darcy=> create unique index y on x (g);\n> CREATE\n> darcy=> select * from x where g = '12345-0000';\n> ERROR: fmgr_info: function 0: cache lookup failed\n>\n> As you can see, the select worked until I added the index. Here is the\n> SQL that created the glaccount type. I hope to rewrite the documentation\n> based on this but I need to get it working first. Any ideas?\n\n I can only guess - in contrast to the builtin operators, user\n created ones don't specify the index selectivity functions.\n Maybe you need to manipulate the pg_operator entries manually\n to be able to create indices too. AFAICS there is no check\n made on the fmgr call in selfuncs.c.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 26 May 1999 15:48:52 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Help: fmgr_info: function 0: cache lookup failed"
},
{
"msg_contents": "[email protected] (\"D'Arcy\" \"J.M.\" Cain) writes:\n> darcy=> select * from x where g = '12345-0000';\n> ERROR: fmgr_info: function 0: cache lookup failed\n\n> As you can see, the select worked until I added the index.\n\nThis is a bit of a reach, but maybe it would work if you added\ncommutator links to your operator definitions? You should add 'em\nanyway on general principles.\n\nIf that *does* fix it, I'd say it's still a bug; index operators\nshould not have to have commutator links.\n\nNext step would be to burrow in with a debugger and figure out what\nfunction the thing thinks it's trying to call. A backtrace from\nthe call to elog() would help here.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 May 1999 11:06:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Help: fmgr_info: function 0: cache lookup failed "
},
{
"msg_contents": "Thus spake Jan Wieck\n> > darcy=> select * from x where g = '12345-0000';\n> > ERROR: fmgr_info: function 0: cache lookup failed\n> >\n> > As you can see, the select worked until I added the index. Here is the\n> > SQL that created the glaccount type. I hope to rewrite the documentation\n> > based on this but I need to get it working first. Any ideas?\n> \n> I can only guess - in contrast to the builtin operators, user\n> created ones don't specify the index selectivity functions.\n> Maybe you need to manipulate the pg_operator entries manually\n> to be able to create indices too. AFAICS there is no check\n> made on the fmgr call in selfuncs.c.\n\nI tried just setting oprcanhash to true but that didn't do it. Can\nyou suggest what fields I need to look at in pg_operator?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 26 May 1999 22:19:42 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Help: fmgr_info: function 0: cache lookup failed"
},
{
"msg_contents": "Thus spake Tom Lane\n> [email protected] (\"D'Arcy\" \"J.M.\" Cain) writes:\n> > darcy=> select * from x where g = '12345-0000';\n> > ERROR: fmgr_info: function 0: cache lookup failed\n> \n> > As you can see, the select worked until I added the index.\n> \n> This is a bit of a reach, but maybe it would work if you added\n> commutator links to your operator definitions? You should add 'em\n> anyway on general principles.\n\nWhat are commutator links and how do I add them?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 26 May 1999 22:20:56 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Help: fmgr_info: function 0: cache lookup failed"
},
{
"msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n>> This is a bit of a reach, but maybe it would work if you added\n>> commutator links to your operator definitions? You should add 'em\n>> anyway on general principles.\n\n> What are commutator links and how do I add them?\n\nThere's some doco in xoper.sgml now...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 May 1999 00:10:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Help: fmgr_info: function 0: cache lookup failed "
},
{
"msg_contents": "> \n> Thus spake Jan Wieck\n> > > darcy=> select * from x where g = '12345-0000';\n> > > ERROR: fmgr_info: function 0: cache lookup failed\n> > >\n> > > As you can see, the select worked until I added the index. Here is the\n> > > SQL that created the glaccount type. I hope to rewrite the documentation\n> > > based on this but I need to get it working first. Any ideas?\n> > \n> > I can only guess - in contrast to the builtin operators, user\n> > created ones don't specify the index selectivity functions.\n> > Maybe you need to manipulate the pg_operator entries manually\n> > to be able to create indices too. AFAICS there is no check\n> > made on the fmgr call in selfuncs.c.\n> \n> I tried just setting oprcanhash to true but that didn't do it. Can\n> you suggest what fields I need to look at in pg_operator?\n\n oprrest and oprjoin\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n",
"msg_date": "Thu, 27 May 1999 10:34:53 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Help: fmgr_info: function 0: cache lookup failed"
},
{
"msg_contents": "Thus spake Jan Wieck\n> > I tried just setting oprcanhash to true but that didn't do it. Can\n> > you suggest what fields I need to look at in pg_operator?\n> \n> oprrest and oprjoin\n\nOK, I did this and it worked. I'll go work on the documentation now.\nThanks.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 27 May 1999 07:16:44 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Help: fmgr_info: function 0: cache lookup failed"
},
{
"msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n>>>> I tried just setting oprcanhash to true but that didn't do it. Can\n>>>> you suggest what fields I need to look at in pg_operator?\n>> \n>> oprrest and oprjoin\n\n> OK, I did this and it worked. I'll go work on the documentation now.\n\nOK, I see the problem: btreesel() and friends blithely assume that the\noperator used in an index will have a selectivity function (oprrest).\n\nI can see two reasonable fixes:\n * Default to an 0.5 estimate if no oprrest link (this is what the\n optimizer does for operators that have no oprrest).\n * Generate an error message along the lines of \"index operators must\n have a restriction selectivity estimator\", if we think that they\n really really oughta.\n\nI'm not sure which way to jump. The former would be more friendly for\npeople just starting to develop index support for a new data type ...\nbut then they might never realize that lack of an estimator is hurting\nperformance for them. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 May 1999 09:56:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Help: fmgr_info: function 0: cache lookup failed "
},
{
"msg_contents": "\nTom, was this dealth with?\n\n> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> >>>> I tried just setting oprcanhash to true but that didn't do it. Can\n> >>>> you suggest what fields I need to look at in pg_operator?\n> >> \n> >> oprrest and oprjoin\n> \n> > OK, I did this and it worked. I'll go work on the documentation now.\n> \n> OK, I see the problem: btreesel() and friends blithely assume that the\n> operator used in an index will have a selectivity function (oprrest).\n> \n> I can see two reasonable fixes:\n> * Default to an 0.5 estimate if no oprrest link (this is what the\n> optimizer does for operators that have no oprrest).\n> * Generate an error message along the lines of \"index operators must\n> have a restriction selectivity estimator\", if we think that they\n> really really oughta.\n> \n> I'm not sure which way to jump. The former would be more friendly for\n> people just starting to develop index support for a new data type ...\n> but then they might never realize that lack of an estimator is hurting\n> performance for them. Comments?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 17:32:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Help: fmgr_info: function 0: cache lookup failed"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, was this dealth with?\n\nWhat I originally did was the second choice (generate an error message)\nbut I had to back off to using a default when we discovered that the\nrtree index operators don't have oprrest links in 6.5 :-(. I would\nlike to change it back after the rtree index entries are fixed, but\nfor the meanwhile you can mark this item done.\n\n\t\t\tregards, tom lane\n\n\n>> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n>>>>>>> I tried just setting oprcanhash to true but that didn't do it. Can\n>>>>>>> you suggest what fields I need to look at in pg_operator?\n>>>>> \n>>>>> oprrest and oprjoin\n>> \n>>>> OK, I did this and it worked. I'll go work on the documentation now.\n>> \n>> OK, I see the problem: btreesel() and friends blithely assume that the\n>> operator used in an index will have a selectivity function (oprrest).\n>> \n>> I can see two reasonable fixes:\n>> * Default to an 0.5 estimate if no oprrest link (this is what the\n>> optimizer does for operators that have no oprrest).\n>> * Generate an error message along the lines of \"index operators must\n>> have a restriction selectivity estimator\", if we think that they\n>> really really oughta.\n>> \n>> I'm not sure which way to jump. The former would be more friendly for\n>> people just starting to develop index support for a new data type ...\n>> but then they might never realize that lack of an estimator is hurting\n>> performance for them. Comments?\n>> \n>> regards, tom lane\n",
"msg_date": "Wed, 07 Jul 1999 18:06:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Help: fmgr_info: function 0: cache lookup failed "
}
] |
[
{
"msg_contents": "Any ideas?\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n---------- Forwarded message ----------\nDate: Wed, 26 May 1999 08:41:26 -0400 (EDT)\nFrom: Brandon Palmer <[email protected]>\nTo: Peter T Mount <[email protected]>\nSubject: Re: Postgreqsl Large Objects\n\nOk, thanks, I will talk to him about the docs. Another problem that you\nmay know about, want to know about. I have been using 6.5b for a while now \nwith large objects. The problem that I am having is that I am running out of\nram REAL FAST. I call a function that opens a given lo, then closes it (and\nsome other stuff in the middle). When I do not open and close the objects, I \nam ok, but with them commented in, I have problems. I would think that it\nis not freeing all the memory from the open/close pair. I have talked to \nmy LUG about this and they can not see any problems with the code that I have \nnot. Take a look if you please: http://x.cwru.edu/~bap/search_3.c\n\nI get a few :\n\nNOTICE: LockReplace: xid table corrupted\n\nerrors and then a \n\nNOTICE: ShmemAlloc: out of memory\n\nerror. Thoughts?\n\n- Brandon\n\n(The only reason I am mailing this to you is that it seems there is a mem leak\nin the lo_open and lo_close functions)\n\n",
"msg_date": "Wed, 26 May 1999 14:10:15 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "Memory leak in large objects (was Re: Postgreqsl Large Objects)"
},
{
"msg_contents": "Peter T Mount <[email protected]> writes:\n> Ok, thanks, I will talk to him about the docs. Another problem that you\n> may know about, want to know about. I have been using 6.5b for a while now \n> with large objects.\n\nBeta1 you mean? There have been several LO bugs fixed since then, I\nbelieve. Please try it with a current snapshot.\n\n> I get a few :\n> NOTICE: LockReplace: xid table corrupted\n> errors and then a \n> NOTICE: ShmemAlloc: out of memory\n> error. Thoughts?\n\nI think this is memory corruption (something tromping on the shared\nmemory allocation information), not a leak.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 May 1999 10:18:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Memory leak in large objects (was Re: Postgreqsl Large\n\tObjects)"
}
] |
[
{
"msg_contents": "Hi,\n\nI have tested current snapshot (from CVS) to compile and run on Windows NT.\n\nIt compiles mostly OK. The only problem is with linking the libpq++, but it\ncan be a general problem:\n\npgcursordb.o: In function `_8PgCursorRC12PgConnectionPCc':\n/usr/src/pgsql.test/src/interfaces/libpq++/pgcursordb.cc:37: undefined\nreference\n to `PgTransaction::PgTransaction(PgConnection const &)'\n\nand it also need this small patch:\n------------- cut here -------------\n--- /usr/src/pgsql/src/interfaces/libpq++/Makefile.in\tMon May 24 12:04:49\n1999\n+++ src/interfaces/libpq++/Makefile.in\tWed May 26 15:29:05 1999\n@@ -44,7 +44,11 @@\n \n OBJS = pgconnection.o pgdatabase.o pgtransdb.o pgcursordb.o pglobject.o \n \n+ifeq ($(PORTNAME), win)\n+SHLIB_LINK+= --driver-name g++ -L../libpq -lpq\n+else\n SHLIB_LINK= -L../libpq -lpq\n+endif\n \n # Shared library stuff, also default 'all' target\n include $(SRCDIR)/Makefile.shlib\n------------- cut here -------------\n\nHere is current regress.out:\nint2 .. failed\nint4 .. failed\nfloat8 .. failed\ngeometry .. failed\n-> these are unimportant (libc messages, precision)\n\ndatetime .. failed\nabstime .. failed\ntinterval .. failed\nhorology .. failed\n-> it seems so that there are only differences in strings for timezones\nthere\n\nrandom .. failed\n*** expected/random.out Wed May 26 13:05:47 1999\n--- results/random.out Wed May 26 15:04:57 1999\n***************\n*** 19,23 ****\n WHERE random NOT BETWEEN 80 AND 120;\n random\n ------\n! (0 rows)\n\n--- 19,24 ----\n WHERE random NOT BETWEEN 80 AND 120;\n random\n ------\n! 123\n! (1 row)\n\n\nrules .. failed\n-> different order of some lines (unimportant)\n\nThe remaining test are OK.\n\n\t\t\tDan\n\nPS: Change my name in the doc/src/sgml/ports.sgml from \"Horak Daniel\" to\n\"Daniel Horak\", please.\n\n----------------------------------------------\nDaniel Horak\nnetwork and system administrator\ne-mail: [email protected]\nprivat e-mail: [email protected] ICQ:36448176\n----------------------------------------------\n",
"msg_date": "Wed, 26 May 1999 15:28:00 +0200",
"msg_from": "Horak Daniel <[email protected]>",
"msg_from_op": true,
"msg_subject": "report for Win32 port"
},
{
"msg_contents": "On Wed, 26 May 1999, Horak Daniel wrote:\n\n> Hi,\n> \n> I have tested current snapshot (from CVS) to compile and run on Windows NT.\n> \n> It compiles mostly OK. The only problem is with linking the libpq++, but it\n> can be a general problem:\n> \n> pgcursordb.o: In function `_8PgCursorRC12PgConnectionPCc':\n> /usr/src/pgsql.test/src/interfaces/libpq++/pgcursordb.cc:37: undefined\n> reference\n> to `PgTransaction::PgTransaction(PgConnection const &)'\n\nInteresting. I wonder if any other platforms or compilers are also \nshowing this... I'll submit the patch later today.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 26 May 1999 11:30:43 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] report for Win32 port"
},
{
"msg_contents": "Applied(the libpq++ part).\n\n\n[Charset iso-8859-2 unsupported, filtering to ASCII...]\n> Hi,\n> \n> I have tested current snapshot (from CVS) to compile and run on Windows NT.\n> \n> It compiles mostly OK. The only problem is with linking the libpq++, but it\n> can be a general problem:\n> \n> pgcursordb.o: In function `_8PgCursorRC12PgConnectionPCc':\n> /usr/src/pgsql.test/src/interfaces/libpq++/pgcursordb.cc:37: undefined\n> reference\n> to `PgTransaction::PgTransaction(PgConnection const &)'\n> \n> and it also need this small patch:\n> ------------- cut here -------------\n> --- /usr/src/pgsql/src/interfaces/libpq++/Makefile.in\tMon May 24 12:04:49\n> 1999\n> +++ src/interfaces/libpq++/Makefile.in\tWed May 26 15:29:05 1999\n> @@ -44,7 +44,11 @@\n> \n> OBJS = pgconnection.o pgdatabase.o pgtransdb.o pgcursordb.o pglobject.o \n> \n> +ifeq ($(PORTNAME), win)\n> +SHLIB_LINK+= --driver-name g++ -L../libpq -lpq\n> +else\n> SHLIB_LINK= -L../libpq -lpq\n> +endif\n> \n> # Shared library stuff, also default 'all' target\n> include $(SRCDIR)/Makefile.shlib\n> ------------- cut here -------------\n> \n> Here is current regress.out:\n> int2 .. failed\n> int4 .. failed\n> float8 .. failed\n> geometry .. failed\n> -> these are unimportant (libc messages, precision)\n> \n> datetime .. failed\n> abstime .. failed\n> tinterval .. failed\n> horology .. failed\n> -> it seems so that there are only differences in strings for timezones\n> there\n> \n> random .. failed\n> *** expected/random.out Wed May 26 13:05:47 1999\n> --- results/random.out Wed May 26 15:04:57 1999\n> ***************\n> *** 19,23 ****\n> WHERE random NOT BETWEEN 80 AND 120;\n> random\n> ------\n> ! (0 rows)\n> \n> --- 19,24 ----\n> WHERE random NOT BETWEEN 80 AND 120;\n> random\n> ------\n> ! 123\n> ! (1 row)\n> \n> \n> rules .. failed\n> -> different order of some lines (unimportant)\n> \n> The remaining test are OK.\n> \n> \t\t\tDan\n> \n> PS: Change my name in the doc/src/sgml/ports.sgml from \"Horak Daniel\" to\n> \"Daniel Horak\", please.\n> \n> ----------------------------------------------\n> Daniel Horak\n> network and system administrator\n> e-mail: [email protected]\n> privat e-mail: [email protected] ICQ:36448176\n> ----------------------------------------------\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 May 1999 12:08:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] report for Win32 port"
},
{
"msg_contents": ">\n> On Wed, 26 May 1999, Horak Daniel wrote:\n>\n> > Hi,\n> >\n> > I have tested current snapshot (from CVS) to compile and run on Windows NT.\n> >\n> > It compiles mostly OK. The only problem is with linking the libpq++, but it\n> > can be a general problem:\n> >\n> > pgcursordb.o: In function `_8PgCursorRC12PgConnectionPCc':\n> > /usr/src/pgsql.test/src/interfaces/libpq++/pgcursordb.cc:37: undefined\n> > reference\n> > to `PgTransaction::PgTransaction(PgConnection const &)'\n>\n> Interesting. I wonder if any other platforms or compilers are also\n> showing this... I'll submit the patch later today.\n\ng++ -Wno-error -Wno-unused -Wl,-Bdynamic -I/usr/local/pgsql/include -o testlibpq0 testlibpq0.cc -L/usr/local/pgsql/lib -lpq++\n/tmp/cca280301.o: In function `main':\n/tmp/cca280301.o(.text+0x14f): undefined reference to `getline__H2ZcZt18string_char_traits1Zc_R7istreamRt12basic_string2ZX01ZX11X01_R7istream'\n/tmp/cca280301.o(.text+0x162): undefined reference to `__ne__H2ZcZt18string_char_traits1Zc_RCt12basic_string2ZX01ZX11PCX01_b'\n/usr/local/pgsql/lib/libpq++.so: undefined reference to `crypt'\n/usr/local/pgsql/lib/libpq++.so: undefined reference to `PgTransaction::PgTransaction(PgConnection const &)'\nmake: *** [testlibpq0] Error 1\n[pgsql@orion] ~/devel/src/interfaces/libpq++/examples >\n\n Linux 2.1.88, glibc-2, gcc 2.8.1\n\n Whatever these errors mean and whatever they might be good\n for.\n\n Up to now I thought it's due to a self made upgrade of shared\n libs. That one was a little hairy and didn't worked as I\n wanted it. BTW: since glibc-2 crypt() is in it's own library.\n\n Another interesting detail is that I have a Makefile.custom\n telling \"COPT=-g\", but I don't see -g in the compiler\n switches in the examples section.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 26 May 1999 19:34:05 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] report for Win32 port"
},
{
"msg_contents": "\nOn 26-May-99 Jan Wieck wrote:\n>>\n>> On Wed, 26 May 1999, Horak Daniel wrote:\n>>\n>> > Hi,\n>> >\n>> > I have tested current snapshot (from CVS) to compile and run on Windows NT.\n>> >\n>> > It compiles mostly OK. The only problem is with linking the libpq++, but it\n>> > can be a general problem:\n>> >\n>> > pgcursordb.o: In function `_8PgCursorRC12PgConnectionPCc':\n>> > /usr/src/pgsql.test/src/interfaces/libpq++/pgcursordb.cc:37: undefined\n>> > reference\n>> > to `PgTransaction::PgTransaction(PgConnection const &)'\n>>\n>> Interesting. I wonder if any other platforms or compilers are also\n>> showing this... I'll submit the patch later today.\n> \n> g++ -Wno-error -Wno-unused -Wl,-Bdynamic -I/usr/local/pgsql/include -o\n> testlibpq0 testlibpq0.cc -L/usr/local/pgsql/lib -lpq++\n> /tmp/cca280301.o: In function `main':\n> /tmp/cca280301.o(.text+0x14f): undefined reference to\n> `getline__H2ZcZt18string_char_traits1Zc_R7istreamRt12basic_string2ZX01ZX11X01_R7is\n> tream'\n> /tmp/cca280301.o(.text+0x162): undefined reference to\n> `__ne__H2ZcZt18string_char_traits1Zc_RCt12basic_string2ZX01ZX11PCX01_b'\n> /usr/local/pgsql/lib/libpq++.so: undefined reference to `crypt'\n> /usr/local/pgsql/lib/libpq++.so: undefined reference to\n> `PgTransaction::PgTransaction(PgConnection const &)'\n> make: *** [testlibpq0] Error 1\n> [pgsql@orion] ~/devel/src/interfaces/libpq++/examples >\n> \n> Linux 2.1.88, glibc-2, gcc 2.8.1\n> \n> Whatever these errors mean and whatever they might be good\n> for.\n> \n> Up to now I thought it's due to a self made upgrade of shared\n> libs. That one was a little hairy and didn't worked as I\n> wanted it. BTW: since glibc-2 crypt() is in it's own library.\n> \n> Another interesting detail is that I have a Makefile.custom\n> telling \"COPT=-g\", but I don't see -g in the compiler\n> switches in the examples section.\n\nI've just discovered that libpq++'s makefile uses whatever is defined as\nCXX for the compiler. It's defined as c++, which is ver 2.7.2.1 here.\nWhen I force it to use g++28 (ver 2.8.1), it misses /usr/include/g++.\nAdding that to the list of CXXFLAGS fixes that. Now then.. Will it\nbreak something on another platform if I were to leave that in the\nlist? Anyone know?\n\nAlso with g++ 2.7.2.1 and 2.8.1 I can't duplicate the problem that Dan\nmentions above. Dan, what compiler/compiler version are you using???\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Wed, 26 May 1999 19:55:29 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] report for Win32 port"
},
{
"msg_contents": "Vince Vielhaber <[email protected]> writes:\n> I've just discovered that libpq++'s makefile uses whatever is defined as\n> CXX for the compiler.\n\nAs it should ...\n\n> When I force it to use g++28 (ver 2.8.1), it misses /usr/include/g++.\n> Adding that to the list of CXXFLAGS fixes that. Now then.. Will it\n> break something on another platform if I were to leave that in the\n> list?\n\nAbsolutely. For example: if someone has both g++ and a vendor C++\ncompiler installed, and tries to compile with the vendor C++, that\nwould fail because you'd be forcing the vendor C++ to try to eat\ng++-specific include files.\n\nThe right place to fix any problem along this line is in configure,\n*not* by hardwiring platform-dependent assumptions into libpq++'s\nmakefile.\n\nIf it's actually necessary to do what you suggest, then the way to\ndo it would be for configure to add -I/usr/include/g++ to CXXFLAGS\nafter checking that CXX is g++. However, I misdoubt that you have\ndiagnosed the problem correctly, because the versions of gcc/g++\nthat I've used automatically include their private include areas into\nthe -I list. This smells more like an incorrect installation of\ng++ than a problem that Postgres ought to be solving.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 May 1999 20:40:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] report for Win32 port "
},
{
"msg_contents": "\nOn 27-May-99 Tom Lane wrote:\n> Vince Vielhaber <[email protected]> writes:\n>> I've just discovered that libpq++'s makefile uses whatever is defined as\n>> CXX for the compiler.\n> \n> As it should ...\n> \n>> When I force it to use g++28 (ver 2.8.1), it misses /usr/include/g++.\n>> Adding that to the list of CXXFLAGS fixes that. Now then.. Will it\n>> break something on another platform if I were to leave that in the\n>> list?\n> \n> Absolutely. For example: if someone has both g++ and a vendor C++\n> compiler installed, and tries to compile with the vendor C++, that\n> would fail because you'd be forcing the vendor C++ to try to eat\n> g++-specific include files.\n> \n> The right place to fix any problem along this line is in configure,\n> *not* by hardwiring platform-dependent assumptions into libpq++'s\n> makefile.\n> \n> If it's actually necessary to do what you suggest, then the way to\n> do it would be for configure to add -I/usr/include/g++ to CXXFLAGS\n> after checking that CXX is g++. However, I misdoubt that you have\n> diagnosed the problem correctly, because the versions of gcc/g++\n> that I've used automatically include their private include areas into\n> the -I list. This smells more like an incorrect installation of\n> g++ than a problem that Postgres ought to be solving.\n> \n> regards, tom lane\n\nMore than likely this is the case. FreeBSD comes with a version of gcc\nand g++ installed. In this case it's 2.7.2.1. In ports/packages it has\ngcc-2.8.1, but being pressed for time I installed the package (20 mins\nbefore trying to build with it). I was a bit surprised to see that it\ninstalled in /usr/local/bin and didn't even put a link in /usr/local/include\nor /usr/local/lib, so I probably need to look into the installation more.\nThe makefile *is* doing a test for g++ tho (it was already there, I didn't\ndo it :)\n\nFortunately xemacs saves a backup of the file you're working on with a ~\ntacked onto the end. That saved me some work (I have a tape backup but\ndidn't really want to have to restore from it). I'm referring to libpq++.sgml\nthat I'm about to send to TommyG before I wipe it out again.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Wed, 26 May 1999 20:57:12 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] report for Win32 port"
}
] |
[
{
"msg_contents": "Tom Lane <[email protected]> noted that MAXQUERYLEN's value in pg_dump is\n5000. Some of my fields are the maximum length for a text field.\n\nUsing the 5/26 snapshot, I increased MAXQUERYLEN to 16384 and it completed\nwithout crashing. I also tried it at 8192 but it still crashed at that size.\n\nThe dumped data file is 15MB (no -d or -D option) or 22MB (with -D). The\ncore file is 13.8MB, which sounds like a memory leak in pg_dump.\n\n-- Ari Halberstadt mailto:[email protected] <http://www.magiccookie.com/>\nPGP public key available at <http://www.magiccookie.com/pgpkey.txt>\n\n\n",
"msg_date": "Wed, 26 May 1999 13:12:43 -0500",
"msg_from": "Ari Halberstadt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump core dump,\n upgrading from 6.5b1 to 5/24 snapshot"
},
{
"msg_contents": "Ari Halberstadt <[email protected]> writes:\n> Tom Lane <[email protected]> noted that MAXQUERYLEN's value in pg_dump is\n> 5000. Some of my fields are the maximum length for a text field.\n\nThere are two bugs here: dumpClasses_dumpData() should not be making any\nassumption at all about the maximum size of a tuple field, and pg_dump's\nvalue for MAXQUERYLEN ought to match the backend's. I hadn't realized\nthat it wasn't using the same query buffer size as the backend does ---\nthis might possibly explain some other complaints we've seen about being\nunable to dump complex table or rule definitions.\n\nWill fix both problems this evening.\n\n> The dumped data file is 15MB (no -d or -D option) or 22MB (with -D). The\n> core file is 13.8MB, which sounds like a memory leak in pg_dump.\n\nNot necessarily --- are the large text fields in a multi-megabyte table?\nWhen you're using -D, pg_dump just does a \"SELECT * FROM table\" and then\niterates through the returned result, which must hold the whole table.\n(This is another reason why I prefer not to use -d/-D ... the COPY\nmethod doesn't require buffering the whole table inside pg_dump.)\n\nSome day we should enhance libpq to allow a select result to be received\nand processed in chunks smaller than the whole result.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 May 1999 14:45:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump core dump,\n\tupgrading from 6.5b1 to 5/24 snapshot"
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n>...\n>Will fix both problems this evening.\n\nThanks!\n\n>> The dumped data file is 15MB (no -d or -D option) or 22MB (with -D). The\n>> core file is 13.8MB, which sounds like a memory leak in pg_dump.\n>\n>Not necessarily --- are the large text fields in a multi-megabyte table?\n\nYes, it's a 15MB file for the table.\n\n>When you're using -D, pg_dump just does a \"SELECT * FROM table\" and then\n>iterates through the returned result, which must hold the whole table.\n>(This is another reason why I prefer not to use -d/-D ... the COPY\n>method doesn't require buffering the whole table inside pg_dump.)\n\nThe -d/-D options are out now for my nightly backups. (Foolish of me to\nhave used them with backups in the first place!)\n\n-- Ari Halberstadt mailto:[email protected] <http://www.magiccookie.com/>\nPGP public key available at <http://www.magiccookie.com/pgpkey.txt>\n\n\n",
"msg_date": "Wed, 26 May 1999 16:41:42 -0500",
"msg_from": "Ari Halberstadt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump core dump,\n upgrading from 6.5b1 to 5/24 snapshot"
}
] |
[
{
"msg_contents": "Do we have a numeric-type regression test. I wanted an example of a\ntable that uses them, and couldn't find anything in the regression\ntests.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 May 1999 15:49:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "NUMERIC regression test?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Do we have a numeric-type regression test.\n\nNo. I complained about that a while ago --- I think it's a serious\nomission.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 May 1999 17:06:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NUMERIC regression test? "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Do we have a numeric-type regression test.\n> \n> No. I complained about that a while ago --- I think it's a serious\n> omission.\n\nWelcome that new item to our Open Items list. (applause) :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 May 1999 17:09:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] NUMERIC regression test?"
},
{
"msg_contents": ">\n> > Bruce Momjian <[email protected]> writes:\n> > > Do we have a numeric-type regression test.\n> >\n> > No. I complained about that a while ago --- I think it's a serious\n> > omission.\n>\n> Welcome that new item to our Open Items list. (applause) :-)\n\n Whoooho!!!\n\n Yes, I know I should have made one. Especially one that\n calculates logarithms and all the others with really high\n precision and compare them with results from bc(1).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 27 May 1999 10:28:11 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NUMERIC regression test?"
}
] |
[
{
"msg_contents": "psql \\d shows numeric precision now:\n\ntest=> \\d num\nTable = num\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| x | numeric | 10.2 |\n+----------------------------------+----------------------------------+-------+\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 May 1999 15:57:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Numeric precision on psql \\d"
}
] |
[
{
"msg_contents": "\nFreeBSD 2.2.8. cvsup a half hour ago. (blew away my changes in the doc directory\ntoo - grrrr) I'm getting this. Don't recall seeing it from a snapshot I tried a \nfew days ago. It normally compiles without incident. gcc 2.7.2.1\n\n------\ngmake[3]: Entering directory `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg/preproc\n'\n/usr/bin/bison -y -d preproc.y\nmv y.tab.c preproc.c\nmv y.tab.h preproc.h\ngcc -I../../../include -I../../../backend -O2 -m486 -pipe -Wall -Wmissing-prototyp\nes -I../include -DMAJOR_VERSION=2 -DMINOR_VERSION=6 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"\n/usr/local/pgsql/include\\\" -c preproc.c -o preproc.o\npreproc.y:5824: parse error before character 026\n/usr/share/misc/bison.simple: In function `yyparse':\n/usr/share/misc/bison.simple:387: warning: implicit declaration of function `yylex'\ngmake[3]: *** [preproc.o] Error 1\ngmake[3]: Leaving directory `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg/preproc'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/usr/local/src/pgsql/pgsql/src/interfaces'\ngmake: *** [all] Error 2\n\n$ \n------\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Wed, 26 May 1999 16:26:27 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Uh oh?"
},
{
"msg_contents": "> \n> FreeBSD 2.2.8. cvsup a half hour ago. (blew away my changes in the doc directory\n> too - grrrr) I'm getting this. Don't recall seeing it from a snapshot I tried a \n> few days ago. It normally compiles without incident. gcc 2.7.2.1\n> \n> ------\n> gmake[3]: Entering directory `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg/preproc\n> '\n> /usr/bin/bison -y -d preproc.y\n> mv y.tab.c preproc.c\n> mv y.tab.h preproc.h\n> gcc -I../../../include -I../../../backend -O2 -m486 -pipe -Wall -Wmissing-prototyp\n> es -I../include -DMAJOR_VERSION=2 -DMINOR_VERSION=6 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"\n> /usr/local/pgsql/include\\\" -c preproc.c -o preproc.o\n> preproc.y:5824: parse error before character 026\n> /usr/share/misc/bison.simple: In function `yyparse':\n> /usr/share/misc/bison.simple:387: warning: implicit declaration of function `yylex'\n> gmake[3]: *** [preproc.o] Error 1\n> gmake[3]: Leaving directory `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg/preproc'\n> gmake[2]: *** [all] Error 2\n> gmake[2]: Leaving directory `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg'\n> gmake[1]: *** [all] Error 2\n> gmake[1]: Leaving directory `/usr/local/src/pgsql/pgsql/src/interfaces'\n> gmake: *** [all] Error 2\n\nI am OK there, but have a problem with plpgsql:\n\n\tgcc2 -I../../../include -I../../../backend -I/usr/local/include/readline -O2 -\n\tm486 -pipe -g -Wall -O1 -I../../../interfaces/libpq -I../../../include -I../../.\n\t./backend -fpic -c -o pl_parse.o pl_gram.c\n\tscan.l: In function `plpgsql_yylex':\n\tIn file included from gram.y:43:\n\tscan.l:150: `plpgsql_yylineno' undeclared (first use this function)\n\tscan.l:150: (Each undeclared identifier is reported only once\n\tscan.l:150: for each function it appears in.)\n\tscan.l: In function `plpgsql_setinput':\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 May 1999 16:38:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Uh oh?"
},
{
"msg_contents": "\nOn 26-May-99 Bruce Momjian wrote:\n>> \n>> FreeBSD 2.2.8. cvsup a half hour ago. (blew away my changes in the doc\n>> directory\n>> too - grrrr) I'm getting this. Don't recall seeing it from a snapshot I tried\n>> a \n>> few days ago. It normally compiles without incident. gcc 2.7.2.1\n>> \n>> ------\n>> gmake[3]: Entering directory\n>> `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg/preproc\n>> '\n>> /usr/bin/bison -y -d preproc.y\n>> mv y.tab.c preproc.c\n>> mv y.tab.h preproc.h\n>> gcc -I../../../include -I../../../backend -O2 -m486 -pipe -Wall\n>> -Wmissing-prototyp\n>> es -I../include -DMAJOR_VERSION=2 -DMINOR_VERSION=6 -DPATCHLEVEL=0\n>> -DINCLUDE_PATH=\\\"\n>> /usr/local/pgsql/include\\\" -c preproc.c -o preproc.o\n>> preproc.y:5824: parse error before character 026\n>> /usr/share/misc/bison.simple: In function `yyparse':\n>> /usr/share/misc/bison.simple:387: warning: implicit declaration of function\n>> `yylex'\n>> gmake[3]: *** [preproc.o] Error 1\n>> gmake[3]: Leaving directory\n>> `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg/preproc'\n>> gmake[2]: *** [all] Error 2\n>> gmake[2]: Leaving directory `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg'\n>> gmake[1]: *** [all] Error 2\n>> gmake[1]: Leaving directory `/usr/local/src/pgsql/pgsql/src/interfaces'\n>> gmake: *** [all] Error 2\n> \n> I am OK there, but have a problem with plpgsql:\n> \n> gcc2 -I../../../include -I../../../backend -I/usr/local/include/readline -\nO2 -\n> m486 -pipe -g -Wall -O1 -I../../../interfaces/libpq -I../../../include -I../\n../.\n> ./backend -fpic -c -o pl_parse.o pl_gram.c\n> scan.l: In function `plpgsql_yylex':\n> In file included from gram.y:43:\n> scan.l:150: `plpgsql_yylineno' undeclared (first use this function)\n> scan.l:150: (Each undeclared identifier is reported only once\n> scan.l:150: for each function it appears in.)\n> scan.l: In function `plpgsql_setinput':\n\nAnd I only had some complaints about \"defined but not used\". 
I commented\nout the ecpg line from the makefile and got past the above problem. But\nhad no trouble compiling libpq++. Not even a warning. I'm going to try\nbuilding with 2.8.1 and see if that makes any difference.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Wed, 26 May 1999 19:07:49 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Uh oh?"
},
{
"msg_contents": "On Wed, May 26, 1999 at 04:26:27PM -0400, Vince Vielhaber wrote:\n> gcc -I../../../include -I../../../backend -O2 -m486 -pipe -Wall -Wmissing-prototyp\n> es -I../include -DMAJOR_VERSION=2 -DMINOR_VERSION=6 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"\n> /usr/local/pgsql/include\\\" -c preproc.c -o preproc.o\n> preproc.y:5824: parse error before character 026\n\nI tried recompiling the stuff and this does not happen for me.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Thu, 27 May 1999 14:36:51 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Uh oh?"
}
] |
[
{
"msg_contents": "\nSELECT * FROM test WHERE test IN (SELECT * FROM test) fails with strange error\nWhen creating a table with either type inet or type cidr as a primary,unique\n key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\ncrypt_loadpwdfile() is mixing and (mis)matching memory allocation\n protocols, trying to use pfree() to release pwd_cache vector from realloc()\nFix function pointer calls to take Datum args for char and int2 args(ecgs)\n\nDo we want pg_dump -z to be the default?\n\nMake psql \\help, man pages, and sgml reflect changes in grammar\nMarkup sql.sgml, Stefan's intro to SQL\nMarkup cvs.sgml, cvs and cvsup howto\nAdd figures to sql.sgml and arch-dev.sgml, both from Stefan\nInclude Jose's date/time history in User's Guide (neat!)\nGenerate Admin, User, Programmer hardcopy postscript\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 May 1999 16:29:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Open 6.5 items"
},
{
"msg_contents": "> Markup sql.sgml, Stefan's intro to SQL\n\nStill needs a look, since the math markup is not looking good in my\nbrowser. But I can easily fall back to disabling those parts of the\ndoc for now.\n\n> Markup cvs.sgml, cvs and cvsup howto\n\nDone.\n\n> Add figures to sql.sgml and arch-dev.sgml, both from Stefan\n\nDone.\n\n> Include Jose's date/time history in User's Guide (neat!)\n\nDone.\n\n> Generate Admin, User, Programmer hardcopy postscript\n\nNot yet, and not likely by June 1. Also, there are additional items:\n\nGenerate INSTALL and HISTORY from sgml sources.\nUpdate ref/lock.sgml, ref/set.sgml to reflect MVCC and locking\nchanges.\n\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 27 May 1999 16:34:13 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> crypt_loadpwdfile() is mixing and (mis)matching memory allocation\n> protocols, trying to use pfree() to release pwd_cache vector from realloc()\n\nDidn't this just get fixed?\n\n> Fix function pointer calls to take Datum args for char and int2 args(ecgs)\n\nThis still needs to be done, and it looks like a lot of tedious\ngruntwork :-(. Do we have a volunteer?\n\nI think we still have some unresolved issues about locking and about\nhandling of multi-segment tables. Shouldn't those be on the TODO list?\nIf they were fixed to everyone's satisfaction, it wasn't apparent from\nthe list traffic...\n\nI am currently trying to investigate the poor performance reported by\nEdmund Mergl --- since gprof doesn't really work on my Linux box, I\nam reduced to running a profilable postmaster on my HPUX box with the\ndatabase area NFS-mounted from the Linux box, where there is enough disk\nspace for the benchmark. This setup gives new meaning to the term\n\"slow\", but I should be able to get a useful profile out of it. If that\nturns up anything significant and readily fixable, I might propose\ndelaying 6.5 for a fix.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 May 1999 13:44:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > crypt_loadpwdfile() is mixing and (mis)matching memory allocation\n> > protocols, trying to use pfree() to release pwd_cache vector from realloc()\n> \n> Didn't this just get fixed?\n> \n> > Fix function pointer calls to take Datum args for char and int2 args(ecgs)\n> \n> This still needs to be done, and it looks like a lot of tedious\n> gruntwork :-(. Do we have a volunteer?\n\nCan we throw ecpg a flag to disable it from doing this until we can\naddress the problem more globally?\n\n> \n> I think we still have some unresolved issues about locking and about\n> handling of multi-segment tables. Shouldn't those be on the TODO list?\n> If they were fixed to everyone's satisfaction, it wasn't apparent from\n> the list traffic...\n\nI have heard grumbling about these, but have not seen an \"Oh, I see the\nproblem now\" report, so I am hoping someone will reiterate that it is\na problem. I have trouble telling if something is resolved. Does anyone\nknow of problems? My recollection is that the multi-segment stuff is\ndone, and the objector retracted his objection. I thought the locking\nwas behaving as it should. Comments?\n\n\n> I am currently trying to investigate the poor performance reported by\n> Edmund Mergl --- since gprof doesn't really work on my Linux box, I\n> am reduced to running a profilable postmaster on my HPUX box with the\n> database area NFS-mounted from the Linux box, where there is enough disk\n> space for the benchmark. This setup gives new meaning to the term\n> \"slow\", but I should be able to get a useful profile out of it. If that\n> turns up anything significant and readily fixable, I might propose\n> delaying 6.5 for a fix.\n\nI can run it here. What do you want me to do?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 27 May 1999 13:52:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I am currently trying to investigate the poor performance reported by\n>> Edmund Mergl --- since gprof doesn't really work on my Linux box, I\n>> am reduced to running a profilable postmaster on my HPUX box with the\n>> database area NFS-mounted from the Linux box, where there is enough disk\n>> space for the benchmark. This setup gives new meaning to the term\n>> \"slow\", but I should be able to get a useful profile out of it.\n\n> I can run it here. What do you want me to do?\n\nWhat I'm after currently is a profile of the no-indexes case. Build a\none-million-row test database using the script Edmund provided (see\nhis message of 22 May 1999 06:39:25 +0200), but stop the script before\nit invokes \"make_idx\". Then, with a backend compiled -pg, run this\nquery:\n\tupdate bench set k500k = k500k + 1 where k100 = 30\nand send me the gprof results.\n\nIf you have time, it'd also be useful to have a profile of the same\nupdate with indexes in place (run Edmund's make_idx script and then do\nthe update again).\n\nI'm pretty close to having a profile from my NFS lashup, but it would\nbe nice to have profiles of the same thing from other machines as\na cross-check that no artifacts are getting introduced...\n\n\t\t\tthanks, tom lane\n",
"msg_date": "Thu, 27 May 1999 16:39:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
}
] |
[
{
"msg_contents": "I have added to the top of plpgsql/src/scan.l:\n\n\textern int yylineno;\n\nMy lex needs that to compile properly. I assume this is the proper fix.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 May 1999 16:44:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "plpgsql compile"
}
] |
[
{
"msg_contents": "\nI re-cvsup'd, and now I get this:\n\n------\ngmake[3]: Entering directory `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg/preproc\n'\ngcc -I../../../include -I../../../backend -O2 -m486 -pipe -Wall -Wmissing-prototyp\nes -I../include -DMAJOR_VERSION=2 -DMINOR_VERSION=6 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"\n/usr/local/pgsql/include\\\" -c preproc.c -o preproc.o\npreproc.y:4609: parse error at null character\n/usr/share/misc/bison.simple: In function `yyparse':\n/usr/share/misc/bison.simple:387: warning: implicit declaration of function `yylex'\ngmake[3]: *** [preproc.o] Error 1\ngmake[3]: Leaving directory `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg/preproc'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/usr/local/src/pgsql/pgsql/src/interfaces'\ngmake: *** [all] Error 2\n$\n------\n\nIs this a required interface? How do I build without it? I tried eliminating\nit with ./configure --without-ecpg but that didn't work. I'm running short on\ntime and want to fix the libpq++ problems (if I can duplicate them) before I go\nouta town, but this is a roadblock. :(\n\nVince.\n\n\n\n\n\n\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Wed, 26 May 1999 18:25:27 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Uh-oh II - ecpg"
},
{
"msg_contents": "> \n> I re-cvsup'd, and now I get this:\n> \n> ------\n> gmake[3]: Entering directory `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg/preproc\n> '\n> gcc -I../../../include -I../../../backend -O2 -m486 -pipe -Wall -Wmissing-prototyp\n> es -I../include -DMAJOR_VERSION=2 -DMINOR_VERSION=6 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"\n> /usr/local/pgsql/include\\\" -c preproc.c -o preproc.o\n> preproc.y:4609: parse error at null character\n> /usr/share/misc/bison.simple: In function `yyparse':\n> /usr/share/misc/bison.simple:387: warning: implicit declaration of function `yylex'\n> gmake[3]: *** [preproc.o] Error 1\n> gmake[3]: Leaving directory `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg/preproc'\n> gmake[2]: *** [all] Error 2\n> gmake[2]: Leaving directory `/usr/local/src/pgsql/pgsql/src/interfaces/ecpg'\n> gmake[1]: *** [all] Error 2\n> gmake[1]: Leaving directory `/usr/local/src/pgsql/pgsql/src/interfaces'\n> gmake: *** [all] Error 2\n> $\n\nComment it out of interfaces/Makefile.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 26 May 1999 20:09:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Uh-oh II - ecpg"
},
{
"msg_contents": "On Wed, May 26, 1999 at 06:25:27PM -0400, Vince Vielhaber wrote:\n> es -I../include -DMAJOR_VERSION=2 -DMINOR_VERSION=6 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"\n> /usr/local/pgsql/include\\\" -c preproc.c -o preproc.o\n> preproc.y:4609: parse error at null character\n\nThis certainly looks like an incomplete file.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Thu, 27 May 1999 14:37:25 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Uh-oh II - ecpg"
},
{
"msg_contents": "Michael Meskes <[email protected]> writes:\n> On Wed, May 26, 1999 at 06:25:27PM -0400, Vince Vielhaber wrote:\n>> es -I../include -DMAJOR_VERSION=2 -DMINOR_VERSION=6 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"\n>> /usr/local/pgsql/include\\\" -c preproc.c -o preproc.o\n>> preproc.y:4609: parse error at null character\n\n> This certainly looks like an incomplete file.\n\nI'm not seeing any problem here either. Nor are there any off-color\ncharacters visible in that file near that line. I've got to think\nthat Vince's copy got corrupted somehow...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 May 1999 10:02:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Uh-oh II - ecpg "
},
{
"msg_contents": "\nOn 27-May-99 Tom Lane wrote:\n> Michael Meskes <[email protected]> writes:\n>> On Wed, May 26, 1999 at 06:25:27PM -0400, Vince Vielhaber wrote:\n>>> es -I../include -DMAJOR_VERSION=2 -DMINOR_VERSION=6 -DPATCHLEVEL=0\n>>> -DINCLUDE_PATH=\\\"\n>>> /usr/local/pgsql/include\\\" -c preproc.c -o preproc.o\n>>> preproc.y:4609: parse error at null character\n> \n>> This certainly looks like an incomplete file.\n> \n> I'm not seeing any problem here either. Nor are there any off-color\n> characters visible in that file near that line. I've got to think\n> that Vince's copy got corrupted somehow...\n\nThat musta been it. I deleted it and re-cvsupped and it just built \ncleanly (sans a couple of bison warnings I'm not worried about).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Thu, 27 May 1999 10:50:15 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Uh-oh II - ecpg"
}
] |
[
{
"msg_contents": "> > \n> > Can you send me that patch? Thanks.\n> > \n> \n\n\nI have applied the following patch to fix one of our Open Items, by\nJames Thompson:\n\n---------------------------------------------------------------------------\n\n*** crypt.c\tTue May 25 10:05:43 1999\n--- /home/postgresql/crypt.c\tTue May 25 09:59:46 1999\n***************\n*** 148,156 ****\n \t\t\t\t\t\t\t\t * reload */\n \t\t\twhile (pwd_cache_count--)\n \t\t\t{\n! \t\t\t\tpfree((void *) pwd_cache[pwd_cache_count]);\n \t\t\t}\n! \t\t\tpfree((void *) pwd_cache);\n \t\t\tpwd_cache = NULL;\n \t\t\tpwd_cache_count = 0;\n \t\t}\n--- 148,156 ----\n \t\t\t\t\t\t\t\t * reload */\n \t\t\twhile (pwd_cache_count--)\n \t\t\t{\n! \t\t\t\tfree((void *) pwd_cache[pwd_cache_count]);\n \t\t\t}\n! \t\t\tfree((void *) pwd_cache);\n \t\t\tpwd_cache = NULL;\n \t\t\tpwd_cache_count = 0;\n \t\t}\n***************\n*** 172,178 ****\n \t\t\t\tbuffer[result] = '\\0';\n \n \t\t\tpwd_cache = (char **) realloc((void *) pwd_cache, sizeof(char *) * (pwd_cache_count + 1));\n! \t\t\tpwd_cache[pwd_cache_count++] = pstrdup(buffer);\n \t\t}\n \t\tFreeFile(pwd_file);\n \n--- 172,178 ----\n \t\t\t\tbuffer[result] = '\\0';\n \n \t\t\tpwd_cache = (char **) realloc((void *) pwd_cache, sizeof(char *) * (pwd_cache_count + 1));\n! \t\t\tpwd_cache[pwd_cache_count++] = strdup(buffer);\n \t\t}\n \t\tFreeFile(pwd_file);\n \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 27 May 1999 00:10:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: patch for pgsql password"
}
] |
[
{
"msg_contents": "\n> Do we want pg_dump -z to be the default?\n> \nI would say yes. A dump without switches should dump everything\nincluding permissions. This is what people use to back up their db,\nbut it would be of no use if they lose their permissions.\n\nAndreas\n",
"msg_date": "Thu, 27 May 1999 09:36:57 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> \n> > Do we want pg_dump -z to be the default?\n> > \n> I would say yes. A dump without switches should dump everything\n> including permissions. This is what people use to backup their db,\n> but would be of no use if they lose their permissions.\n\nThis is the first person to express an opinion, and I agree with it. \nDoes anyone have more comments? My idea is to print out a message that\nthe -z flag is now the default, and supply another flag to skip\npermission dumping.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 27 May 1999 10:25:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "\nOn 27-May-99 Bruce Momjian wrote:\n>> \n>> > Do we want pg_dump -z to be the default?\n>> > \n>> I would say yes. A dump without switches should dump everything\n>> including permissions. This is what people use to backup their db,\n>> but would be of no use if they lose their permissions.\n> \n> This is the first person to express an opinion, and I agree with it. \n> Does anyone have more comments? My idea is to print out a message that\n> the -z flag is now the default, and supply another flag to skip\n> permission dumping.\n\nI agree. It was brought up a number of months ago when I moved a \ndatabase to a new machine, but that was about it.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Thu, 27 May 1999 11:08:46 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
}
] |
[
{
"msg_contents": "\n> I believe 6.5 is operating as it should --- 6.4 was producing inferior\n> plans for small tables.\n> \nYes, absolutely.\n\n> But it is clearly a Bad Thing to allow the 6.5\n> optimizer to believe that a relation is empty when it isn't. I concur\n> with your suggestion to hack up CREATE INDEX so that creating an index\n> before you load the table isn't quite such a losing proposition.\n> \n> > Please apply the patch I previously sent.\n> \n> Will do.\n> \nI think this will save us a lot of complaints. Thanx\n\nAndreas\n",
"msg_date": "Thu, 27 May 1999 09:45:21 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] create index updates nrows statistics "
}
] |
[
{
"msg_contents": "I am not sure if libpq++ will compile with non-g++ compilers,\nbut the Makefile does break non-g++.\n\n <<mak.patch>> \nAndreas",
"msg_date": "Thu, 27 May 1999 10:31:21 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Makefile of libpq++"
},
{
"msg_contents": "> I am not sure if libpq++ will compile with non g++ compilers,\n> but the Makefile does break non g++.\n> \n> <<mak.patch>> \n> Andreas\n\nApplied.\n\n\n---------------------------------------------------------------------------\n\n*** ./src/interfaces/libpq++/Makefile.in.orig\tSun May 23 03:03:57 1999\n--- ./src/interfaces/libpq++/Makefile.in\tWed May 26 11:31:49 1999\n***************\n*** 51,57 ****\n \n \n # Pull shared-lib CFLAGS into CXXFLAGS\n! CXXFLAGS+= $(CFLAGS) -Wno-unused\n \n \n .PHONY: examples\n--- 51,57 ----\n \n \n # Pull shared-lib CFLAGS into CXXFLAGS\n! CXXFLAGS+= $(CFLAGS)\n \n \n .PHONY: examples\n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 27 May 1999 10:27:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Makefile of libpq++"
}
] |
[
{
"msg_contents": "> >> > pgcursordb.o: In function `_8PgCursorRC12PgConnectionPCc':\n> >> > \n> /usr/src/pgsql.test/src/interfaces/libpq++/pgcursordb.cc:37: undefined\n> >> > reference\n> >> > to `PgTransaction::PgTransaction(PgConnection const &)'\n> >>\n> \n> Also with g++ 2.7.2.1 and 2.8.1 I can't duplicate the problem that Dan\n> mentions above. Dan, what compiler/compiler version are you using???\n\nIt is egcs-2.91.57 (1.1?) and a bit dirty cygwin installation (used for more\nthan one year ;-)). I will try it on a clean installation of cygwin with\negcs 1.1.2 during the weekend.\n\n\t\t\tDan\n",
"msg_date": "Thu, 27 May 1999 10:51:38 +0200",
"msg_from": "Horak Daniel <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] report for Win32 port"
}
] |
[
{
"msg_contents": "I have made a number of changes to xindex.sgml. I reformatted it to make\nthe source a little easier to read. I also added the bits that I recently\ndiscovered making my own user-defined type work with indices in where\nclauses. Can a few people pick at this and see if I am correct, then\nput it into the tree if it is OK?\n\nAnd thanks for the help getting my type working.\n\n<Chapter Id=\"xindex\">\n<Title>Interfacing Extensions To Indices</Title>\n\n<Para>\n The procedures described thus far let you define a new type, new\n functions and new operators. However, we cannot yet define a secondary\n index (such as a <Acronym>B-tree</Acronym>, <Acronym>R-tree</Acronym> or\n hash access method) over a new type or its operators.\n</Para>\n\n<Para>\n Look back at\n <XRef LinkEnd=\"EXTEND-CATALOGS\" EndTerm=\"EXTEND-CATALOGS\">.\n The right half shows the catalogs that we must modify in order to tell\n <ProductName>Postgres</ProductName> how to use a user-defined type and/or\n user-defined operators with an index (i.e., <FileName>pg_am, pg_amop,\n pg_amproc, pg_operator</FileName> and <FileName>pg_opclass</FileName>).\n Unfortunately, there is no simple command to do this. We will demonstrate\n how to modify these catalogs through a running example: a new operator\n class for the <Acronym>B-tree</Acronym> access method that stores and\n sorts complex numbers in ascending absolute value order.\n</Para>\n\n<Para>\n The <FileName>pg_am</FileName> class contains one instance for every user\n defined access method. Support for the heap access method is built into\n <ProductName>Postgres</ProductName>, but every other access method is\n described here. 
The schema is\n\n<TABLE TOCENTRY=\"1\">\n<Title>Index Schema</Title>\n<TitleAbbrev>Indices</TitleAbbrev>\n<TGroup Cols=\"2\">\n<THead>\n<Row>\n <Entry>Attribute</Entry>\n <Entry>Description</Entry>\n</Row>\n</THead>\n<TBody>\n<Row>\n <Entry>amname</Entry>\n <Entry>name of the access method</Entry>\n</Row>\n<Row>\n<Entry>amowner</Entry>\n<Entry>object id of the owner's instance in pg_user</Entry>\n</Row>\n<Row>\n<Entry>amkind</Entry>\n<Entry>not used at present, but set to 'o' as a place holder</Entry>\n</Row>\n<Row>\n<Entry>amstrategies</Entry>\n<Entry>number of strategies for this access method (see below)</Entry>\n</Row>\n<Row>\n<Entry>amsupport</Entry>\n<Entry>number of support routines for this access method (see below)</Entry>\n</Row>\n<Row>\n<Entry>amgettuple\n aminsert\n ...</Entry>\n\n<Entry>procedure identifiers for interface routines to the access\n method. For example, regproc ids for opening, closing, and\n getting instances from the access method appear here. </Entry>\n</Row>\n</TBody>\n</TGroup>\n</TABLE>\n</Para>\n\n<Para>\n The <Acronym>object ID</Acronym> of the instance in\n <FileName>pg_am</FileName> is used as a foreign key in lots of other\n classes. You don't need to add a new instance to this class; all\n you're interested in is the <Acronym>object ID</Acronym> of the access\n method instance you want to extend:\n\n<ProgramListing>\nSELECT oid FROM pg_am WHERE amname = 'btree';\n\n +----+\n |oid |\n +----+\n |403 |\n +----+\n</ProgramListing>\n</Para>\n\n<Para>\n We will use that select in a where clause later.\n</Para>\n\n<Para>\n The <FileName>amstrategies</FileName> attribute exists to standardize\n comparisons across data types. For example, <Acronym>B-tree</Acronym>s\n impose a strict ordering on keys, lesser to greater. Since\n <ProductName>Postgres</ProductName> allows the user to define operators,\n <ProductName>Postgres</ProductName> cannot look at the name of an operator\n (eg, \">\" or \"<\") and tell what kind of comparison it is. 
In fact,\n some access methods don't impose any ordering at all. For example,\n <Acronym>R-tree</Acronym>s express a rectangle-containment relationship,\n whereas a hashed data structure expresses only bitwise similarity based\n on the value of a hash function. <ProductName>Postgres</ProductName>\n needs some consistent way of taking a qualification in your query,\n looking at the operator and then deciding if a usable index exists. This\n implies that <ProductName>Postgres</ProductName> needs to know, for\n example, that the \"<=\" and \">\" operators partition a\n <Acronym>B-tree</Acronym>. <ProductName>Postgres</ProductName>\n uses strategies to express these relationships between\n operators and the way they can be used to scan indices.\n</Para>\n\n<Para>\n Defining a new set of strategies is beyond the scope of this discussion,\n but we'll explain how <Acronym>B-tree</Acronym> strategies work because\n you'll need to know that to add a new operator class. In the\n <FileName>pg_am</FileName> class, the amstrategies attribute is the\n number of strategies defined for this access method. For\n <Acronym>B-tree</Acronym>s, this number is 5. 
These strategies\n correspond to\n\n<TABLE TOCENTRY=\"1\">\n<Title>B-tree Strategies</Title>\n<TitleAbbrev>B-tree</TitleAbbrev>\n<TGroup Cols=\"2\">\n<THead>\n<Row>\n<Entry>Operation</Entry>\n<Entry>Index</Entry>\n</Row>\n</THead>\n<TBody>\n<Row>\n<Entry>less than</Entry>\n<Entry>1</Entry>\n</Row>\n<Row>\n<Entry>less than or equal</Entry>\n<Entry>2</Entry>\n</Row>\n<Row>\n<Entry>equal</Entry>\n<Entry>3</Entry>\n</Row>\n<Row>\n<Entry>greater than or equal</Entry>\n<Entry>4</Entry>\n</Row>\n<Row>\n<Entry>greater than</Entry>\n<Entry>5</Entry>\n</Row>\n</TBody>\n</TGroup>\n</TABLE>\n</Para>\n\n<Para>\n The idea is that you'll need to add procedures corresponding to the\n comparisons above to the <FileName>pg_amop</FileName> relation (see below).\n The access method code can use these strategy numbers, regardless of data\n type, to figure out how to partition the <Acronym>B-tree</Acronym>,\n compute selectivity, and so on. Don't worry about the details of adding\n procedures yet; just understand that there must be a set of these\n procedures for <FileName>int2, int4, oid,</FileName> and every other\n data type on which a <Acronym>B-tree</Acronym> can operate.\n\n Sometimes, strategies aren't enough information for the system to figure\n out how to use an index. Some access methods require other support\n routines in order to work. For example, the <Acronym>B-tree</Acronym>\n access method must be able to compare two keys and determine whether one\n is greater than, equal to, or less than the other. Similarly, the\n <Acronym>R-tree</Acronym> access method must be able to compute\n intersections, unions, and sizes of rectangles. 
These\n operations do not correspond to user qualifications in\n SQL queries; they are administrative routines used by\n the access methods, internally.\n</Para>\n\n<Para>\n In order to manage diverse support routines consistently across all\n <ProductName>Postgres</ProductName> access methods,\n <FileName>pg_am</FileName> includes an attribute called\n <FileName>amsupport</FileName>. This attribute records the number of\n support routines used by an access method. For <Acronym>B-tree</Acronym>s,\n this number is one -- the routine to take two keys and return -1, 0, or\n +1, depending on whether the first key is less than, equal\n to, or greater than the second.\n<Note>\n<Para>\nStrictly speaking, this routine can return a negative\nnumber (< 0), 0, or a non-zero positive number (> 0).\n</Para>\n</Note>\n</para>\n<Para>\n The <FileName>amstrategies</FileName> entry in pg_am is just the number\n of strategies defined for the access method in question. The procedures\n for less than, less equal, and so on don't appear in\n <FileName>pg_am</FileName>. Similarly, <FileName>amsupport</FileName>\n is just the number of support routines required by the access\n method. The actual routines are listed elsewhere.\n</Para>\n\n<Para>\n The next class of interest is pg_opclass. This class exists only to\n associate a name and default type with an oid. In pg_amop, every\n <Acronym>B-tree</Acronym> operator class has a set of procedures, one\n through five, above. Some existing opclasses are <FileName>int2_ops,\n int4_ops, and oid_ops</FileName>. You need to add an instance with your\n opclass name (for example, <FileName>complex_abs_ops</FileName>) to\n <FileName>pg_opclass</FileName>. 
The <FileName>oid</FileName> of\n this instance is a foreign key in other classes.\n\n<ProgramListing>\nINSERT INTO pg_opclass (opcname, opcdeftype)\n SELECT 'complex_abs_ops', oid FROM pg_type WHERE typname = 'complex_abs';\n\nSELECT oid, opcname, opcdeftype\n FROM pg_opclass\n WHERE opcname = 'complex_abs_ops';\n\n +------+-----------------+------------+\n |oid | opcname | opcdeftype |\n +------+-----------------+------------+\n |17314 | complex_abs_ops | 29058 |\n +------+-----------------+------------+\n</ProgramListing>\n\n Note that the oid for your <FileName>pg_opclass</FileName> instance will\n be different! Don't worry about this though. We'll get this number\n from the system later just like we got the oid of the type here.\n</Para>\n\n<Para>\n So now we have an access method and an operator class.\n We still need a set of operators; the procedure for\n defining operators was discussed earlier in this manual.\n For the complex_abs_ops operator class on Btrees,\n the operators we require are:\n\n<ProgramListing>\n absolute value less-than\n absolute value less-than-or-equal\n absolute value equal\n absolute value greater-than-or-equal\n absolute value greater-than\n</ProgramListing>\n</Para>\n\n<Para>\n Suppose the code that implements the functions defined\n is stored in the file\n<FileName>PGROOT/src/tutorial/complex.c</FileName>\n</Para>\n\n<Para>\n Part of the code looks like this: (note that we will only show the\n equality operator for the rest of the examples. The other four\n operators are very similar. 
Refer to <FileName>complex.c</FileName>\n or <FileName>complex.source</FileName> for the details.)\n\n<ProgramListing>\n#define Mag(c) ((c)->x*(c)->x + (c)->y*(c)->y)\n\n bool\n complex_abs_eq(Complex *a, Complex *b)\n {\n double amag = Mag(a), bmag = Mag(b);\n return (amag==bmag);\n }\n</ProgramListing>\n</Para>\n\n<Para>\n There are a couple of important things that are happening below.\n</Para>\n\n<Para>\n First, note that operators for less-than, less-than-or equal, equal,\n greater-than-or-equal, and greater-than for <FileName>int4</FileName>\n are being defined. All of these operators are already defined for\n <FileName>int4</FileName> under the names <, <=, =, >=,\n and >. The new operators behave differently, of course. In order\n to guarantee that <ProductName>Postgres</ProductName> uses these\n new operators rather than the old ones, they need to be named differently\n from the old ones. This is a key point: you can overload operators in\n <ProductName>Postgres</ProductName>, but only if the operator isn't\n already defined for the argument types. That is, if you have <\n defined for (int4, int4), you can't define it again.\n <ProductName>Postgres</ProductName> does not check this when you define\n your operator, so be careful. To avoid this problem, odd names will be\n used for the operators. If you get this wrong, the access methods\n are likely to crash when you try to do scans.\n</Para>\n\n<Para>\n The other important point is that all the operator functions return\n Boolean values. The access methods rely on this fact. (On the other\n hand, the support function returns whatever the particular access method\n expects -- in this case, a signed integer.) The final routine in the\n file is the \"support routine\" mentioned when we discussed the amsupport\n attribute of the <FileName>pg_am</FileName> class. We will use this\n later on. 
For now, ignore it.\n</Para>\n\n<Para>\n<ProgramListing>\nCREATE FUNCTION complex_abs_eq(complex_abs, complex_abs)\n RETURNS bool\n AS 'PGROOT/tutorial/obj/complex.so'\n LANGUAGE 'c';\n</ProgramListing>\n</Para>\n\n<Para>\n Now define the operators that use them. As noted, the operator names\n must be unique among all operators that take two <FileName>int4</FileName>\n operands. In order to see if the operator names listed below are taken,\n we can do a query on <FileName>pg_operator</FileName>:\n\n<ProgramListing>\n /*\n * this query uses the regular expression operator (~)\n * to find three-character operator names that end in\n * the character &\n */\n SELECT *\n FROM pg_operator\n WHERE oprname ~ '^..&$'::text;\n</ProgramListing>\n\n</Para>\n\n<Para>\n to see if your name is taken for the types you want. The important\n things here are the procedures (which are the <Acronym>C</Acronym>\n functions defined above) and the restriction and join selectivity\n functions. You should just use the ones used below--note that there\n are different such functions for the less-than, equal, and greater-than\n cases. These must be supplied, or the access method will crash when it\n tries to use the operator. You should copy the names for restrict and\n join, but use the procedure names you defined in the last step.\n\n<ProgramListing>\nCREATE OPERATOR = (\n leftarg = complex_abs, rightarg = complex_abs,\n procedure = complex_abs_eq,\n restrict = eqsel, join = eqjoinsel\n )\n</ProgramListing>\n</Para>\n\n<Para>\n Notice that five operators corresponding to less, less equal, equal,\n greater, and greater equal are defined.\n</Para>\n\n<Para>\n We're just about finished. The last thing we need to do is to update\n the <FileName>pg_amop</FileName> relation. 
To do this, we need the\n following attributes:\n\n<TABLE TOCENTRY=\"1\">\n<Title><FileName>pg_amproc</FileName> Schema</Title>\n<TitleAbbrev><FileName>pg_amproc</FileName></TitleAbbrev>\n<TGroup Cols=\"2\">\n<THead>\n<Row>\n<Entry>Attribute</Entry>\n<Entry>Description</Entry>\n</Row>\n</THead>\n<TBody>\n<Row>\n<Entry>amopid</Entry>\n<Entry>the <FileName>oid</FileName> of the <FileName>pg_am</FileName> instance\n for B-tree (== 403, see above)</Entry>\n</Row>\n<Row>\n<Entry>amopclaid</Entry>\n<Entry>the <FileName>oid</FileName> of the\n<FileName>pg_opclass</FileName> instance for <FileName>complex_abs_ops</FileName>\n (== whatever you got instead of <FileName>17314</FileName>, see above)</Entry>\n</Row>\n<Row>\n<Entry>amopopr</Entry>\n<Entry>the <FileName>oid</FileName>s of the operators for the opclass\n (which we'll get in just a minute)</Entry>\n</Row>\n<Row>\n<Entry>amopselect, amopnpages</Entry>\n<Entry>cost functions</Entry>\n</Row>\n</TBody>\n</TGroup>\n</TABLE>\n\n The cost functions are used by the query optimizer to decide whether or\n not to use a given index in a scan. Fortunately, these already exist.\n The two functions we'll use are <FileName>btreesel</FileName>, which\n estimates the selectivity of the <Acronym>B-tree</Acronym>, and\n <FileName>btreenpage</FileName>, which estimates the number of pages a\n search will touch in the tree.\n</Para>\n\n<Para>\n So we need the <FileName>oid</FileName>s of the operators we just\n defined. 
We'll look up the names of all the operators that take\n two <FileName>complex</FileName>es, and pick ours out:\n\n<ProgramListing>\n SELECT o.oid AS opoid, o.oprname\n INTO TABLE complex_ops_tmp\n FROM pg_operator o, pg_type t\n WHERE o.oprleft = t.oid and o.oprright = t.oid\n and t.typname = 'complex_abs';\n\n +------+---------+\n |oid | oprname |\n +------+---------+\n |17321 | < |\n +------+---------+\n |17322 | <= |\n +------+---------+\n |17323 | = |\n +------+---------+\n |17324 | >= |\n +------+---------+\n |17325 | > |\n +------+---------+\n</ProgramListing>\n\n (Again, some of your <FileName>oid</FileName> numbers will almost\n certainly be different.) The operators we are interested in are those\n with <FileName>oid</FileName>s 17321 through 17325. The values you\n get will probably be different, and you should substitute them for the\n values below. We will do this with a select statement.\n</Para>\n\n<Para>\n Now we're ready to update <FileName>pg_amop</FileName> with our new\n operator class. The most important thing in this entire discussion\n is that the operators are ordered, from less equal through greater\n equal, in <FileName>pg_amop</FileName>. We add the instances we need:\n\n<ProgramListing>\n INSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy,\n amopselect, amopnpages) \n SELECT am.oid, opcl.oid, c.opoid, 1,\n 'btreesel'::regproc, 'btreenpage'::regproc\n FROM pg_am am, pg_opclass opcl, complex_abs_ops_tmp c\n WHERE amname = 'btree' AND\n opcname = 'complex_abs_ops' AND\n c.oprname = '<';\n</ProgramListing>\n\n Now do this for the other operators substituting for the \"1\" in the\n third line above and the \"<\" in the last line. Note the order:\n \"less than\" is 1, \"less than or equal\" is 2, \"equal\" is 3, \"greater\n than or equal\" is 4, and \"greater than\" is 5.\n</Para>\n\n<Para>\n The next step is registration of the \"support routine\" previously\n described in our discussion of <FileName>pg_am</FileName>. 
The\n <FileName>oid</FileName> of this support routine is stored in the\n <FileName>pg_amproc</FileName> class, keyed by the access method\n <FileName>oid</FileName> and the operator class <FileName>oid</FileName>.\n First, we need to register the function in\n <ProductName>Postgres</ProductName> (recall that we put the\n <Acronym>C</Acronym> code that implements this routine in the bottom of\n the file in which we implemented the operator routines):\n\n<ProgramListing>\n CREATE FUNCTION complex_abs_cmp(complex, complex)\n RETURNS int4\n AS 'PGROOT/tutorial/obj/complex.so'\n LANGUAGE 'c';\n\n SELECT oid, proname FROM pg_proc\n WHERE proname = 'complex_abs_cmp';\n\n +------+-----------------+\n |oid | proname |\n +------+-----------------+\n |17328 | complex_abs_cmp |\n +------+-----------------+\n</ProgramListing>\n\n (Again, your <FileName>oid</FileName> number will probably be different\n and you should substitute the value you see for the value below.)\n We can add the new instance as follows:\n\n<ProgramListing>\n INSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n SELECT a.oid, b.oid, c.oid, 1\n FROM pg_am a, pg_opclass b, pg_proc c\n WHERE a.amname = 'btree' AND\n b.opcname = 'complex_abs_ops' AND\n c.proname = 'complex_abs_cmp';\n</ProgramListing>\n</Para>\n\n<Para>\n Now we need to add a hashing strategy to allow the type to be indexed.\n We do this by using another access method in pg_am, but we reuse the same ops.\n\n<ProgramListing>\n INSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy,\n amopselect, amopnpages)\n SELECT am.oid, opcl.oid, c.opoid, 1,\n 'hashsel'::regproc, 'hashnpage'::regproc\n FROM pg_am am, pg_opclass opcl, complex_abs_ops_tmp c\n WHERE amname = 'hash' AND\n opcname = 'complex_abs_ops' AND\n c.oprname = '=';\n</ProgramListing>\n</Para>\n\n<Para>\n In order to use this index in a where clause, we need to modify the\n <FileName>pg_operator</FileName> class as follows.\n\n<ProgramListing>\n UPDATE pg_operator\n SET oprrest = 
'eqsel'::regproc, oprjoin = 'eqjoinsel'\n WHERE oprname = '=' AND\n oprleft = oprright AND\n oprleft = (SELECT oid FROM pg_type WHERE typname = 'complex_abs');\n \n UPDATE pg_operator\n SET oprrest = 'neqsel'::regproc, oprjoin = 'neqjoinsel'\n WHERE oprname = '<>' AND\n oprleft = oprright AND\n oprleft = (SELECT oid FROM pg_type WHERE typname = 'complex_abs');\n \n UPDATE pg_operator\n SET oprrest = 'intltsel'::regproc, oprjoin = 'intltjoinsel'\n WHERE oprname = '<' AND \n oprleft = oprright AND\n oprleft = (SELECT oid FROM pg_type WHERE typname = 'complex_abs');\n \n UPDATE pg_operator\n SET oprrest = 'intltsel'::regproc, oprjoin = 'intltjoinsel'\n WHERE oprname = '<=' AND\n oprleft = oprright AND\n oprleft = (SELECT oid FROM pg_type WHERE typname = 'complex_abs');\n \n UPDATE pg_operator\n SET oprrest = 'intgtsel'::regproc, oprjoin = 'intgtjoinsel'\n WHERE oprname = '>' AND\n oprleft = oprright AND\n oprleft = (SELECT oid FROM pg_type WHERE typname = 'complex_abs');\n \n UPDATE pg_operator\n SET oprrest = 'intgtsel'::regproc, oprjoin = 'intgtjoinsel'\n WHERE oprname = '>=' AND\n oprleft = oprright AND\n oprleft = (SELECT oid FROM pg_type WHERE typname = 'complex_abs');\n</ProgramListing> \n</Para>\n\n<Para> \nAnd last (Finally!) we register a description of this type.\n\n<ProgramListing>\n INSERT INTO pg_description (objoid, description) \n SELECT oid, 'Two part G/L account'\n\t FROM pg_type WHERE typname = 'complex_abs';\n</ProgramListing> \n</Para> \n\n</Chapter>\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 27 May 1999 09:58:14 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "New xindex.sgml"
},
{
"msg_contents": "> I have made a number of changes to xindex.sgml.\n> Can a few people pick at this and see if I am correct then\n> put it into the tree if it is OK?\n\nIt is going into the tree this morning, so be sure to make any\nadditional changes from that version...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 27 May 1999 14:18:23 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New xindex.sgml"
}
] |
[
{
"msg_contents": "> Are you going to generate HISTORY from (sgml) for this release? If so, I\n> will skip updating that.\n\nYes. So, I'm going to be out of town from this evening through the\nholiday weekend. And I would need a few days (4?; could shrink it a\nbit by taking a day from work) to prepare postscript hardcopy for a\nrelease.\n\nAre we on track to release June 1? If so, I'd like to derail it by a\nfew days to get the hardcopy docs prepared. If not, I won't need to\ntake the fall for asking for a slip ;)\n\nAn alternative which I would support but am not yet as satisfied with\nwould be to decouple the hardcopy docs from the release package. Then\nI could release the hardcopy a few days after the actual release.\n\nThis alternative has the attractive feature that I don't need to get\nfinal updates two weeks in advance of a release to start prepping the\nhardcopy. Especially since I *never* manage to get those updates from\neveryone that early. html and text files like INSTALL would ship with\nthe first release, since those are either automatic (html) or easy (~5\nminutes each for INSTALL or HISTORY).\n\nComments?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 27 May 1999 16:03:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Release date and docs"
},
{
"msg_contents": "> > Are you going to generate HISTORY from (sgml) for this release? If so, I\n> > will skip updating that.\n> \n> Yes. So, I'm going to be out of town from this evening through the\n> holiday weekend. And I would need a few days (4?; could shrink it a\n> bit by taking a day from work) to prepare postscript hardcopy for a\n> release.\n> \n> Are we on track to release June 1? If so, I'd like to derail it by a\n> few days to get the hardcopy docs prepared. If not, I won't need to\n> take the fall for asking for a slip ;)\n\nNo one has requested any kind of slip from June 1. I may if we can't\nget these final items done on the open list. My major item is to fix\nthe psql \\h, man pages, and docs for the new locking parameters from\nVadim. That will have to be done before the release, and I haven't done\nthem yet. We also need regression tests from numeric. I would say we\nshould target June 4 or June 7 for the release, depending on whether we\nneed the weekend. We are certainly in good shape for this release,\nthough.\n\n\n> \n> An alternative which I would support but am not yet as satisfied with\n> would be to decouple the hardcopy docs from the release package. Then\n> I could release the hardcopy a few days after the actual release.\n\nNo, no reason to do that, and once we release, we will be very busy\nfielding problem reports, so it doesn't buy us anything to decouple\nthem.\n\n> \n> This alternative has the attractive feature that I don't need to get\n> final updates two weeks in advance of a release to start prepping the\n> hardcopy. Especially since I *never* manage to get those updates from\n> everyone that early. 
html and text files like INSTALL would ship with\n> the first release, since those are either automatic (html) or easy (~5\n> minutes each for INSTALL or HISTORY).\n\nAgain, let's not decouple them.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 27 May 1999 12:10:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Release date and docs"
},
{
"msg_contents": "> > Are we on track to release June 1? If so, I'd like to derail it by a\n> > few days to get the hardcopy docs prepared. If not, I won't need to\n> > take the fall for asking for a slip ;)\n> No one has requested any kind of slip from June 1. I may if we can't\n> get these final items done on the open list. My major item is to fix\n> the psql \\h, man pages, and docs for the new locking parameters from\n> Vadim. That will have to be done before the release, and I haven't done\n> them yet. We also need regression tests from numeric. I would say we\n> should target June 4 or June 7 for the release, depending on whether we\n> need the weekend. We are certainly in good shape for this release,\n> though.\n\nOK. I've just sent some updates for your ToDo list. The regression\ntest stuff is coming from Jan afaik, but someone else can do it if he\ncan't.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 27 May 1999 16:35:43 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Release date and docs"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> An alternative which I would support but am not yet as satisfied with\n> would be to decouple the hardcopy docs from the release package.\n\nTotally apart from any schedule considerations, I think it would be good\nif the derived forms of the docs (the .ps files and tarred .html files\nin pgsql/doc) were decoupled from the source distribution. In\nparticular, remove those files from the CVS archives and distribute\nthem as a separate tarball rather than as part of the source tarballs.\n\nThis'd be good on general principles (derived files should not be in\nCVS) and it'd also reduce the size of snapshot tarballs by a couple of\nmeg, which is a useful savings. Since the derived docs are always a\nversion behind during the runup to a new release, I don't see much\nvalue in forcing people to download 'em.\n\nA further improvement, which oughta be pretty easy if the doc prep tools\nare installed at hub.org, is to produce a nightly tarball of the derived\ndocs *generated from the currently checked-in sources*. As someone who\ndoesn't have the doc prep tools installed locally, I know I would find\nthat very useful. Right now, I have the choice of looking at 6.4.* docs\nor raw SGML :-(.\n\nIf you don't want to change our distribution practices to the extent\nof having separate source-code and doc tarfiles, then it'd at least be\na good idea to regenerate the derived docs as part of the nightly\nsnapshot-building run, so that the snapshots contain up-to-date derived\nfiles rather than historical artifacts...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 May 1999 13:31:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Release date and docs "
},
{
"msg_contents": "On Thu, 27 May 1999, Bruce Momjian wrote:\n\n> > > Are you going to generate HISTORY from (sgml) for this release? If so, I\n> > > will skip updating that.\n> > \n> > Yes. So, I'm going to be out of town from this evening through the\n> > holiday weekend. And I would need a few days (4?; could shrink it a\n> > bit by taking a day from work) to prepare postscript hardcopy for a\n> > release.\n> > \n> > Are we on track to release June 1? If so, I'd like to derail it by a\n> > few days to get the hardcopy docs prepared. If not, I won't need to\n> > take the fall for asking for a slip ;)\n> \n> No one has requested any kind of slip from June 1. I may if we can't\n> get these final items done on the open list. My major item is to fix\n> the psql \\h, man pages, and docs for the new locking parameters from\n> Vadim. That will have to be done before the release, and I haven't done\n> them yet. We also need regression tests from numeric. I would say we\n> should target June 4 or June 7 for the release, depending on whether we\n> need the weekend. We are certainly in good shape for this release,\n> though.\n\nLet's make it June 7th...I prefer a Monday release to a Friday one, and if\nThomas needs a little more time for the docs, and you have a couple of\nthings you want dealt with, take the extra couple of days...\n\nLet's lock her down for June 7th though...Vince?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 28 May 1999 09:58:47 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Release date and docs"
},
{
"msg_contents": "On Fri, 28 May 1999, The Hermit Hacker wrote:\n\n> Let's lock her down for June 7th though...Vince?\n\nAnnouncement on web page? I think I have all code and docs turned in,\nI didn't miss something did I?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 28 May 1999 12:51:52 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Release date and docs"
},
{
"msg_contents": "> On Fri, 28 May 1999, The Hermit Hacker wrote:\n> \n> > Let's lock her down for June 7th though...Vince?\n> \n> Announcement on web page? I think I have all code and docs turned in,\n> I didn't miss something did I?\n> \n\nYes, let's get something on the web page.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 28 May 1999 12:52:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Release date and docs"
},
{
"msg_contents": "On Fri, 28 May 1999, Vince Vielhaber wrote:\n\n> On Fri, 28 May 1999, The Hermit Hacker wrote:\n> \n> > Let's lock her down for June 7th though...Vince?\n> \n> Announcement on web page? I think I have all code and docs turned in,\n> I didn't miss something did I?\n\nJust announcement on web page :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 28 May 1999 13:56:43 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Release date and docs"
},
{
"msg_contents": "> > An alternative which I would support but am not yet as satisfied with\n> > would be to decouple the hardcopy docs from the release package.\n> Totally apart from any schedule considerations, I think it would be good\n> if the derived forms of the docs (the .ps files and tarred .html files\n> in pgsql/doc) were decoupled from the source distribution. In\n> particular, remove those files from the CVS archives and distribute\n> them as a separate tarball rather than as part of the source tarballs.\n> This'd be good on general principles (derived files should not be in\n> CVS) and it'd also reduce the size of snapshot tarballs by a couple of\n> meg, which is a useful savings. Since the derived docs are always a\n> version behind during the runup to a new release, I don't see much\n> value in forcing people to download 'em.\n\nThese are good points.\n\n> A further improvement, which oughta be pretty easy if the doc prep tools\n> are installed at hub.org, is to produce a nightly tarball of the derived\n> docs *generated from the currently checked-in sources*. As someone who\n> doesn't have the doc prep tools installed locally, I know I would find\n> that very useful. Right now, I have the choice of looking at 6.4.* docs\n> or raw SGML :-(.\n\nReally? We've been doing exactly as you suggest for several months now\n:)\n\nLook in ftp://postgresql.org/pub/doc/*.tar.gz for snapshot html, and\nthe web site docs are just untarred versions of the same thing. These\nare all generated on a nightly basis directly from the CVS tree. We\nshould probably split those off into a developer's area rather than\nhave those be the *only* copy of web site docs, but it was a start.\n\nAlso, on hub.org ~thomas/CURRENT/docbuild is the script for the\nnightly cron job, and my tree is set up to allow \"committers\" to use\nit. 
If you use it from my tree, be sure to have your umask set to \"2\"\nto allow group mods.\n\nIt's pretty nice having this run daily, because I get the cron log of\nthe run and can see when something new breaks the build from sgml.\n\n> If you don't want to change our distribution practices to the extent\n> of having separate source-code and doc tarfiles, then it'd at least be\n> a good idea to regenerate the derived docs as part of the nightly\n> snapshot-building run, so that the snapshots contain up-to-date derived\n> files rather than historical artifacts...\n\nGood idea, but we don't want to do CVS updates on those big tar files.\n\nWhat do other folks think??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 01 Jun 1999 14:23:51 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Release date and docs"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Really? We've been doing exactly as you suggest for several months now\n> :)\n\n> Look in ftp://postgresql.org/pub/doc/*.tar.gz for snapshot html, and\n> the web site docs are just untarred versions of the same thing. These\n> are all generated on a nightly basis directly from the CVS tree.\n\nCool, I didn't know about that. Thanks!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jun 1999 10:39:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Release date and docs "
},
{
"msg_contents": "> It's pretty nice having this run daily, because I get the cron log of\n> the run and can see when something new breaks the build from sgml.\n> \n> > If you don't want to change our distribution practices to the extent\n> > of having separate source-code and doc tarfiles, then it'd at least be\n> > a good idea to regenerate the derived docs as part of the nightly\n> > snapshot-building run, so that the snapshots contain up-to-date derived\n> > files rather than historical artifacts...\n> \n> Good idea, but we don't want to do CVS updates on those big tar files.\n> \n> What do other folks think??\n\nI like the current setup.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 1 Jun 1999 10:46:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Release date and docs"
}
] |
[
{
"msg_contents": "OK, now pg_dump dumps ACL's by default. I have added a new -x option to\nskip them, and print a warning message when the old -z option is used.\n\nI have updated the sgml docs that mentioned used of the -z switch.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 27 May 1999 12:23:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump ACL's"
}
] |
[
{
"msg_contents": "I am having some problems w/ LO in postgres 6.5.snapshot (date marked\n5/27). Here is the problem:\n\nI am doing a program that will search through ~250M of text in LO\nformat. The search function seems to be running out of ram as I get a\n'NOTICE: ShmemAlloc: out of memory' error after the program runs for a\nbit. From running 'free', I can see that I am not using any memory in\nmy swap space yet, so it does not really seem to be running out of\nmemory. Postmaster does constantly grow even though I am not\ngenerating any information that should make it grow at all. When I\nhave commented out the lo_open and lo_close function calls, everything\nis ok so I am guessing that there is some kind of a leak in the lo_open\nand lo_close functions if not in the back end in postmaster. Come take\na look at the code if you please:\n\nhttp://x.cwru.edu/~bap/search_4.c\n\n- Brandon\n\n------------------------------------------------------\nSmith Computer Lab Administrator,\nCase Western Reserve University\n [email protected]\n 216 - 368 - 5066\n http://cwrulug.cwru.edu\n------------------------------------------------------\n\nPGP Public Key Fingerprint: 1477 2DCF 8A4F CA2C 8B1F 6DFE 3B7C FDFB\n\n\n",
"msg_date": "Thu, 27 May 1999 15:25:36 -0400",
"msg_from": "\"Brandon Palmer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problems w/ LO"
},
{
"msg_contents": "\nOn 27-May-99 Brandon Palmer wrote:\n> I am having some problems w/ LO in postgres 6.5.snapshot (date marked\n> 5/27). Here is the problem:\n> \n> I am doing a program that will search through ~250M of text in LO\n> format. The search function seems to be running out of ram as I get a\n> 'NOTICE: ShmemAlloc: out of memory' error after the program runs for a\n> bit. From running 'free', I can see that I am not using any memory in\n> my swap space yet, so it does not really seem to be running out of\n> memory. Postmaster does constantly grow even though I am not\n> generating any information that should make it grow at all. When I\n> have commented out the lo_open and lo_close function calls, everything\n> is ok so I am guessing that there is some kind of a leak in the lo_open\n> and lo_close functions if not in the back end in postmaster. Come take\n> a look at the code if you please:\n> \n> http://x.cwru.edu/~bap/search_4.c\n\nWhat are you running it on? What kind of limits do you have in your\nshell (man limits in FreeBSD).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Thu, 27 May 1999 15:44:10 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Problems w/ LO"
},
{
"msg_contents": ">I am having some problems w/ LO in postgres 6.5.snapshot (date marked\n>5/27). Here is the problem:\n\nSeems 6.5 has a problem with LOs.\n\nSorry, but I don't have time right now to track this problem since I\nhave another one that has higher priority.\n\nI've been looking into the \"stuck spin lock\" problem under high\nload. Unless it being solved, PostgreSQL would not be usable in the\n\"real world.\"\n\nQuestion to hackers: Why does s_lock_stuck() call abort()? Shouldn't\nbe elog(ERROR) or elog(FATAL)?\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 28 May 1999 11:38:46 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems w/ LO "
},
{
"msg_contents": "> I am having some problems w/ LO in postgres 6.5.snapshot (date marked\n> 5/27). Here is the problem:\n> \n> I am doing a program that will search through ~250M of text in LO\n> format. The search function seems to be running out of ram as I get a\n> 'NOTICE: ShmemAlloc: out of memory' error after the program runs for a\n> bit. From running 'free', I can see that I am not using any memory in\n> my swap space yet, so it does not really seem to be running out of\n> memory. Postmaster does constantly grow even though I am not\n> generating any information that should make it grow at all. When I\n> have commented out the lo_open and lo_close function calls, everything\n> is ok so I am guessing that there is some kind of a leak in the lo_open\n> and lo_close functions if not in the back end in postmaster. Come take\n> a look at the code if you please:\n> \n> http://x.cwru.edu/~bap/search_4.c\n\nI have took look at your code. There are some minor errors in it, but\nthey should not cause 'NOTICE: ShmemAlloc: out of memory' anyway. I\ncouldn't run your program since I don't have test data. So I made a\nsmall test program to make sure if the problem caused by LO (It's\nstolen from test/examples/testlo.c). In the program, ~4k LO is read\nfor 10000 times in a transaction. The backend process size became a\nlittle bit bigger, but I couldn't see any problem you mentioned.\n\nI have attached my test program and your program (modified so that it\ndoes not use LO calls). 
Can you try them and report back what happens?\n---\nTatsuo Ishii\n\n---------------------------------------------------------------\n#include <stdio.h>\n#include <stdlib.h>\n\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <unistd.h>\n\n#include \"libpq-fe.h\"\n#include \"libpq/libpq-fs.h\"\n\n#define BUFSIZE\t\t\t1024\n\nint\nmain(int argc, char **argv)\n{\n PGconn\t *conn;\n PGresult *res;\n int\t\t\tlobj_fd;\n int\t\t\tnbytes;\n int i;\n char buf[BUFSIZE];\n\n conn = PQsetdb(NULL, NULL, NULL, NULL, \"test\");\n\n /* check to see that the backend connection was successfully made */\n if (PQstatus(conn) == CONNECTION_BAD)\n {\n fprintf(stderr, \"%s\", PQerrorMessage(conn));\n exit(0);\n }\n\n res = PQexec(conn, \"begin\");\n PQclear(res);\n\n for (i=0;i<10000;i++) {\n\n lobj_fd = lo_open(conn, 20225, INV_READ);\n if (lobj_fd < 0)\n {\n\tfprintf(stderr, \"can't open large object\");\n\texit(0);\n }\n printf(\"start read\\n\");\n while ((nbytes = lo_read(conn, lobj_fd, buf, BUFSIZE)) > 0) {\n printf(\"read %d\\n\",nbytes);\n }\n lo_close(conn, lobj_fd);\n }\n\n res = PQexec(conn, \"end\");\n PQclear(res);\n\n PQfinish(conn);\n exit(0);\n}\n---------------------------------------------------------------\n#include <stdio.h>\n#include \"libpq-fe.h\"\n#include <stdlib.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <unistd.h>\n#include \"libpq/libpq-fs.h\"\n#include <string.h>\n\n#define BUFSIZE 1024\n\nint\tlobj_fd;\nchar\t*buf;\nint\tnbytes;\n\nbuf = (char *) malloc (1024);\n\nint print_lo(PGconn*, char*, int);\n\nint print_lo(PGconn *conn, char *search_for, int in_oid)\n{\n\treturn(1);\n\n\tlobj_fd = lo_open(conn, in_oid, INV_READ);\n\n while ((nbytes = lo_read(conn, lobj_fd, buf, BUFSIZE)) > 0)\n {\n if(strstr(buf,search_for))\n {\n lo_close(conn, lobj_fd);\n return 1;\n }\n }\n\n lo_close(conn, lobj_fd);\n\t\n return 0;\n}\n\nint\nmain(int argc, char **argv)\n{\n\tchar *search_1,\n\t\t*search_2;\n\tchar 
*_insert;\n\tint i;\n\tint nFields;\n\n\tPGconn *conn;\n\tPGresult *res;\n\n\t_insert = (char *) malloc (1024);\n\tsearch_1 = (char *) malloc (1024);\n\tsearch_2 = (char *) malloc (1024);\n\n\tsearch_1 = argv[1];\n\tsearch_2 = argv[2];\n\n\tconn = PQsetdb(NULL, NULL, NULL, NULL, \"lkb_alpha2\");\n\n\tres = PQexec(conn, \"BEGIN\");\n\tPQclear(res);\n\t\n\tres = PQexec(conn, \"CREATE TEMP TABLE __a (finds INT4)\");\n\tres = PQexec(conn, \"CREATE TEMP TABLE __b (finds INT4)\");\n\n res = PQexec(conn, \"SELECT did from master_table\");\n\n nFields = PQnfields(res);\n\n for (i = 0; i < PQntuples(res); i++)\n {\n\t if(print_lo(conn, search_1, atoi(PQgetvalue(res, i, 0))))\n {\n\t\t\tprintf(\"+\");\n\t\t\tfflush(stdout);\n\t\t\tsprintf(_insert, \"INSERT INTO __a VALUES (%i)\", atoi(PQgetvalue(res, i, 0)));\n\t\t\tPQexec (conn, _insert);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tprintf(\".\");\n\t\t\tfflush(stdout);\n\t\t}\n }\n \n\tprintf(\"\\n\\n\");\n\n res = PQexec(conn, \"SELECT finds from __a\");\n\n\tfor (i = 0; i < PQntuples(res); i++)\n {\n\t\tif(print_lo(conn, search_2, atoi(PQgetvalue(res, i, 0))))\n {\n\t\t\tprintf(\"+\");\n\t\t\tfflush(stdout);\n\t\t\tsprintf(_insert, \"INSERT INTO __b VALUES (%i)\", atoi(PQgetvalue(res, i, 0)));\n\t\t\tPQexec (conn, _insert);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tprintf(\".\");\n\t\t\tfflush(stdout);\n\t\t}\n }\n\n\tres = PQexec(conn, \"SELECT finds FROM __b\");\n\n nFields = PQnfields(res);\n\n\tfor(i = 0; i < PQntuples(res); i++)\n\t{\n\t\tprintf(\"\\n\\nMatch: %i\", atoi(PQgetvalue(res, i, 0)));\n\t}\n\n\tprintf(\"\\n\\n\");\n\n\n\tres = PQexec(conn, \"END\");\n\tPQclear(res);\n\n\tPQfinish(conn);\n\n\texit(0);\n}\n",
"msg_date": "Sat, 29 May 1999 11:20:02 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems w/ LO "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I've been looking into the \"stuck spin lock\" problem under high\n> load. Unless it being solved, PostgreSQL would not be usable in the\n> \"real world.\"\n\n> Question to hackers: Why does s_lock_stuck() call abort()? Shouldn't\n> be elog(ERROR) or elog(FATAL)?\n\nI think that is probably the right thing. elog(ERROR) will not do\nanything to release the stuck spinlock, and IIRC not even elog(FATAL)\nwill. The only way out is to clobber all the backends and reinitialize\nshared memory. The postmaster will not do that unless a backend dies\nwithout making an exit report --- which means doing abort().\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 15:39:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: s_lock_stuck (was Problems w/ LO)"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I couldn't run your program since I don't have test data. So I made a\n> small test program to make sure if the problem caused by LO (It's\n> stolen from test/examples/testlo.c). In the program, ~4k LO is read\n> for 10000 times in a transaction. The backend process size became a\n> little bit bigger, but I couldn't see any problem you mentioned.\n\nI tried the same thing, except I simply put a loop around the begin/end\ntransaction part of testlo.c so that it would create and access many\nlarge objects in a single backend process. With today's sources I do\nnot see a 'ShmemAlloc: out of memory' error even after several thousand\niterations. (But I do not know if this test would have triggered one\nbefore...)\n\nWhat I do see is a significant backend memory leak --- several kilobytes\nper cycle.\n\nI think the problem here is that inv_create is done with the palloc\nmemory context set to the private memory context created by lo_open\n... and this memory context is never cleaned out as long as the backend\nsurvives. So whatever junk data might get palloc'd and not freed during\nthe index creation step will just hang around indefinitely. And that\ncode is far from leak-free.\n\nWhat I propose doing about it is modifying lo_commit to destroy\nlo_open's private memory context. This will mean going back to the\nold semantics wherein large object descriptors are not valid across\ntransactions. But I think that's the safest thing anyway. We can\ndetect the case where someone tries to use a stale LO handle if we\nzero out the LO \"cookies\" array as a side-effect of lo_commit.\n\nComments? Objections?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 17:25:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems w/ LO "
},
{
"msg_contents": "> I think the problem here is that inv_create is done with the palloc\n> memory context set to the private memory context created by lo_open\n> ... and this memory context is never cleaned out as long as the backend\n> survives. So whatever junk data might get palloc'd and not freed during\n> the index creation step will just hang around indefinitely. And that\n> code is far from leak-free.\n> \n> What I propose doing about it is modifying lo_commit to destroy\n> lo_open's private memory context. This will mean going back to the\n> old semantics wherein large object descriptors are not valid across\n> transactions. But I think that's the safest thing anyway. We can\n> detect the case where someone tries to use a stale LO handle if we\n> zero out the LO \"cookies\" array as a side-effect of lo_commit.\n\nMakes sense.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 17:38:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems w/ LO"
},
{
"msg_contents": ">Tatsuo Ishii <[email protected]> writes:\n>> I couldn't run your program since I don't have test data. So I made a\n>> small test program to make sure if the problem caused by LO (It's\n>> stolen from test/examples/testlo.c). In the program, ~4k LO is read\n>> for 10000 times in a transaction. The backend process size became a\n>> little bit bigger, but I couldn't see any problem you mentioned.\n>\n>I tried the same thing, except I simply put a loop around the begin/end\n>transaction part of testlo.c so that it would create and access many\n>large objects in a single backend process. With today's sources I do\n>not see a 'ShmemAlloc: out of memory' error even after several thousand\n>iterations. (But I do not know if this test would have triggered one\n>before...)\n>\n>What I do see is a significant backend memory leak --- several kilobytes\n>per cycle.\n>\n>I think the problem here is that inv_create is done with the palloc\n>memory context set to the private memory context created by lo_open\n>... and this memory context is never cleaned out as long as the backend\n>survives. So whatever junk data might get palloc'd and not freed during\n>the index creation step will just hang around indefinitely. And that\n>code is far from leak-free.\n>\n>What I propose doing about it is modifying lo_commit to destroy\n>lo_open's private memory context. This will mean going back to the\n>old semantics wherein large object descriptors are not valid across\n>transactions. But I think that's the safest thing anyway. We can\n>detect the case where someone tries to use a stale LO handle if we\n>zero out the LO \"cookies\" array as a side-effect of lo_commit.\n>\n>Comments? Objections?\n\nThen why should we use the private memory context if all lo operations \nmust be in a transaction?\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 01 Jun 1999 16:33:45 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems w/ LO "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> What I propose doing about it is modifying lo_commit to destroy\n>> lo_open's private memory context. This will mean going back to the\n>> old semantics wherein large object descriptors are not valid across\n>> transactions. But I think that's the safest thing anyway.\n\n> Then why should we use the private memory context if all lo operations \n> must be in a transaction?\n\nRight now, we could dispense with the private context. But I think\nit's best to leave it there for future flexibility. For example, I was\nthinking about flushing the context only if no LOs remain open (easily\nchecked since lo_commit scans the cookies array anyway); that would\nallow cross-transaction LO handles without imposing a permanent memory\nleak. The trouble with that --- and this is a bug that was there anyway\n--- is that you need some way of cleaning up LO handles that are opened\nduring an aborted transaction. They might be pointing at an LO relation\nthat doesn't exist anymore. (And even if it does, the semantics of xact\nabort are supposed to be that all side effects are undone; opening an LO\nhandle would be such a side effect.)\n\nAs things now stand, LO handles are always closed at end of transaction\nregardless of whether it was commit or abort, so there is no bug.\n\nWe could think about someday adding the bookkeeping needed to keep track\nof LO handles opened during the current xact versus ones already open,\nand thereby allow them to live across xact boundaries without risking\nthe bug. But that'd be a New Feature so it's not getting done for 6.5.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jun 1999 10:18:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems w/ LO "
},
{
"msg_contents": "> > Then why should we use the private memory context if all lo operations \n> > must be in a transaction?\n> \n> Right now, we could dispense with the private context. But I think\n> it's best to leave it there for future flexibility. For example, I was\n> thinking about flushing the context only if no LOs remain open (easily\n> checked since lo_commit scans the cookies array anyway); that would\n> allow cross-transaction LO handles without imposing a permanent memory\n> leak. The trouble with that --- and this is a bug that was there anyway\n> --- is that you need some way of cleaning up LO handles that are opened\n> during an aborted transaction. They might be pointing at an LO relation\n> that doesn't exist anymore. (And even if it does, the semantics of xact\n> abort are supposed to be that all side effects are undone; opening an LO\n> handle would be such a side effect.)\n> \n> As things now stand, LO handles are always closed at end of transaction\n> regardless of whether it was commit or abort, so there is no bug.\n> \n> We could think about someday adding the bookkeeping needed to keep track\n> of LO handles opened during the current xact versus ones already open,\n> and thereby allow them to live across xact boundaries without risking\n> the bug. But that'd be a New Feature so it's not getting done for 6.5.\n\nNow I understand your point. Thank you for your detailed explanations!\n---\nTatsuo Ishii\n",
"msg_date": "Tue, 01 Jun 1999 23:49:30 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems w/ LO "
}
] |
[
{
"msg_contents": "I've been able to get today's snapshot of PostgreSQL 6.5 beta to run\non an SGI running Irix 6.4. The following steps must be taken.\n\n1.Add the following block to src/Makefile.shlib\n\nifeq ($(PORTNAME), irix5)\n install-shlib-dep := install-shlib\n shlib := lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)\n LDFLAGS_SL := -shared\n CFLAGS += $(CFLAGS_SL)\nendif\n\n2. Make src/Makefile.custome with\n\nCUSTOM_CC = cc -32\nLD += -32\nMK_NO_LORDER = 1\n\n3. Modify the install script to always copy, never move. The following install will work:\n\n#!/bin/ksh\n# Stripping is ignored in this version of install.\n\ncmd=/bin/cp\nstrip=\"\"\nchmod=\"/bin/chmod 755\"\nuser=`/usr/bin/whoami`\nif [ $user = \"root\" ] \nthen\n chown=\"/bin/chown bin\"\n chgrp=\"/bin/chgrp bin\"\nelse\n chown=\"/bin/chown $user\"\n set -A groups `/usr/bin/groups`\n chgrp=\"/bin/chgrp ${groups[0]}\"\nfi\nwhile :\ndo\n case $1 in\n -s ) strip=\"\"\n ;;\n -c ) cmd=\"/bin/cp\"\n ;;\n -m ) chmod=\"/bin/chmod $2\"\n shift\n ;;\n -o ) chown=\"/bin/chown $2\"\n shift\n ;;\n -g ) chgrp=\"/bin/chgrp $2\"\n shift\n ;;\n * ) break\n ;;\n esac\n shift\ndone\n\ncase \"$2\" in\n\"\") echo \"install: no destination specified\"\n exit 1\n ;;\n.|\"$1\") echo \"install: can't move $1 onto itself\"\n exit 1\n ;;\nesac\ncase \"$3\" in\n\"\") ;;\n*) echo \"install: too many files specified -> $*\"\n exit 1\n ;;\nesac\nif [ -d $2 ]\nthen file=$2/$1\nelse file=$2\nfi\n/bin/rm -f $file\n$cmd $1 $file\n[ $strip ] && $strip $file\n$chown $file\n$chgrp $file\n$chmod $file\n\n4. Configure nsl off and make sure the correct install program is\nused. The following shell statements will do this. 
\n\ncd src\nrm config.cache\ncat >config.cache <<EOF\nac_cv_lib_nsl_main=${ac_cv_lib_nsl_main='no'}\nEOF\ngmake clean\nINSTALL=\"your install program as above\" ./configure --prefix=/pg/pgsql --without-CXX ...\n\n========================================================================\n\nI haven't yet gotten libpq++ to build correctly and one regression test,\nmisc.out, fails due to a segmentation error in the copying of binary\ndata. The traceback is\n\ndbx ../../../bin/postgres (wd: /pg/pgsql/data/base/regression)\nuse /pg/pgsql/src/backend/commands\n(dbx) where\n> 0 CopyFrom(0x10147450, 0x1, 0x0, 0xfb4d6f4) [\"copy.c\":809, 0x48e1f0]\n 1 DoCopy(0x1013a5d0, 0x1, 0x0, 0x1) [\"copy.c\":304, 0x48c760]\n 2 ProcessUtility(0x1013a630, 0x3, 0x0, 0x1013ab78) [\"utility.c\":227, 0x5598f8]\n 3 pg_exec_query_dest(0x7ffee364, 0x3, 0x0, 0x1013ab78) [\"postgres.c\":718, 0x556eb8]\n 4 pg_exec_query(0x7ffee364, 0x1014e1d0, 0x0, 0x1013ab78) [\"postgres.c\":654, 0x556d34]\n 5 PostgresMain(0x8, 0x7fff28f0, 0x7, 0x7fff2e84) [\"postgres.c\":1649, 0x5586f4]\n 6 DoBackend(0x100e2b10, 0x1014e1d0, 0x0, 0x1013ab78) [\"postmaster.c\":1611, 0x525a6c]\n 7 BackendStartup(0x100e2b10, 0x1014e1d0, 0x0, 0x1013ab78) [\"postmaster.c\":1356, 0x525260]\n 8 ServerLoop(0x1013ab88, 0x1014e1d0, 0x0, 0x1013ab78) [\"postmaster.c\":806, 0x524060]\n 9 PostmasterMain(0x7, 0x7fff2e84, 0x0, 0x1013ab78) [\"postmaster.c\":599, 0x5237cc]\n 10 main(0x7, 0x7fff2e84, 0x0, 0x1013ab78) [\"main.c\":93, 0x4cc92c]\n 11 __istart() [\"crt1tinit.s\":13, 0x42a490]\n(dbx) l\n>* 809 ptr = att_addlength(ptr, attr[i]->attlen, ptr);\n 810 }\n 811 }\n 812 }\n 813 }\n 814 if (done)\n 815 continue;\n 816 \n 817 /*\n 818 * Does it have any sence ? - vadim 12/14/96\n\nSincerely,\nBob\n\n+----------------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | Phone: 609 737 6383 |\n| President, Congenomics, Inc. | Fax: 609 737 7528 |\n| 114 W. 
Franklin Ave, Suite K1,10 | email: [email protected] |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+----------------------------------+------------------------------------+\n",
"msg_date": "Thu, 27 May 1999 16:59:48 -0400 (EDT)",
"msg_from": "Robert Bruccoleri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Port for SGI Irix using today's snapshot."
}
] |
[
{
"msg_contents": "I just gave an incorrect answer on pgsql-interfaces concerning the\nfollowing bug, which has been around for quite a while:\n\nregression=> create table bug1 (f1 int28 primary key);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'bug1_pkey' for table 'bug1'\nERROR: Can't find a default operator class for type 22.\n\n-- That's fine, since we have no index support for int28. But:\n\nregression=> create table bug1 (f1 int28);\nERROR: Relation 'bug1' already exists\n\n-- Oops.\n\nregression=> drop table bug1;\nERROR: Relation 'bug1' does not exist\n\n-- Double oops.\n\n\nI'm pretty sure I recall a discussion to the effect that CREATE TABLE\nwas failing in this case because pgsql/data/base/dbname/bug1 had already\nbeen created and wasn't deleted at transaction abort. That may have\nbeen the case in older versions of Postgres, but we seem to have fixed\nthat problem: with current sources the database file *is* removed at\ntransaction abort. Unfortunately the bug still persists :-(.\n\nSome quick tracing indicates that the reason the second CREATE TABLE\nfails is that there's still an entry for bug1 in the system cache: the\nsearch in RelnameFindRelid(),\n tuple = SearchSysCacheTuple(RELNAME,\n PointerGetDatum(relname),\n 0, 0, 0);\nis finding an entry! (If you quit the backend and start a new one,\nthings go back to normal, since the new backend's relcache doesn't\nhave the bogus entry.)\n\nSo, apparently the real problem is that the relname cache is not cleaned\nof bogus entries inserted during a failed transaction. This strikes me\nas a fairly serious problem, especially if it applies to all the\nSysCache tables. That could lead to all kinds of erroneous behavior\nafter an aborted transaction. I think this is a \"must fix\" issue...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 May 1999 17:34:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ye olde \"relation doesn't quite exist\" problem"
},
{
"msg_contents": "> I'm pretty sure I recall a discussion to the effect that CREATE TABLE\n> was failing in this case because pgsql/data/base/dbname/bug1 had already\n> been created and wasn't deleted at transaction abort. That may have\n> been the case in older versions of Postgres, but we seem to have fixed\n> that problem: with current sources the database file *is* removed at\n> transaction abort. Unfortunately the bug still persists :-(.\n> \n> Some quick tracing indicates that the reason the second CREATE TABLE\n> fails is that there's still an entry for bug1 in the system cache: the\n> search in RelnameFindRelid(),\n> tuple = SearchSysCacheTuple(RELNAME,\n> PointerGetDatum(relname),\n> 0, 0, 0);\n> is finding an entry! (If you quit the backend and start a new one,\n> things go back to normal, since the new backend's relcache doesn't\n> have the bogus entry.)\n> \n> So, apparently the real problem is that the relname cache is not cleaned\n> of bogus entries inserted during a failed transaction. This strikes me\n> as a fairly serious problem, especially if it applies to all the\n> SysCache tables. That could lead to all kinds of erroneous behavior\n> after an aborted transaction. I think this is a \"must fix\" issue...\n\nOK, let me give two ideas here. First, we could create a linked list of\nall cache additions that happen inside a transaction. If the\ntransaction aborts, we can invalidate all the cache entries in the list.\nSecond, we could just invalidate the entire cache on a transaction\nabort. Third, we could somehow invalidate the cache on transaction\nabort \"only\" if there was some system table modification in the\ntransaction. The third one seems a little harder. Because the linked\nlist could get large, we could do a linked list, and if it gets too\nlarge, do an full invalidation. 
Also, there may be a way to spin\nthrough the cache and remove all entries marked as part of the aborted\ntransaction.\n\nSeems like this is not something for 6.5.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 28 May 1999 11:05:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Ye olde \"relation doesn't quite exist\" problem"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, let me give two ideas here. First, we could create a linked list of\n> all cache additions that happen inside a transaction. If the\n> transaction aborts, we can invalidate all the cache entries in the list.\n> Second, we could just invalidate the entire cache on a transaction\n> abort. Third, we could somehow invalidate the cache on transaction\n> abort \"only\" if there was some system table modification in the\n> transaction. The third one seems a little harder.\n\nYes, the second one was the quick-and-dirty answer that occurred to me.\nThat would favor apps that seldom incur errors (no extra overhead to\nkeep track of cache changes), but would be bad news for those that\noften incur errors (unnecessary cache reloads).\n\nIs there room in the SysCaches for the transaction ID of the last\ntransaction to modify each entry? That would provide an easy and\ninexpensive way of finding the ones to zap when the current xact is\naborted, I would think: abort would just scan all the caches looking\nfor entries with the current xact ID, and invalidate only those entries.\nThe cost in the no-error case would just be storing an additional\nfield whenever an entry is modified; seems cheap enough. However,\nif there are a lot of different places in the code that can create/\nmodify a cache entry, this could be a fair amount of work (and it'd\ncarry the risk of missing some places...).\n\n> Seems like this not something for 6.5.\n\nI think we really ought to do *something*. I'd settle for the\nbrute-force blow-away-all-the-caches answer for now, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 28 May 1999 14:33:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Ye olde \"relation doesn't quite exist\" problem "
},
{
"msg_contents": "> Is there room in the SysCaches for the transaction ID of the last\n> transaction to modify each entry? That would provide an easy and\n> inexpensive way of finding the ones to zap when the current xact is\n> aborted, I would think: abort would just scan all the caches looking\n> for entries with the current xact ID, and invalidate only those entries.\n> The cost in the no-error case would just be storing an additional\n> field whenever an entry is modified; seems cheap enough. However,\n> if there are a lot of different places in the code that can create/\n> modify a cache entry, this could be a fair amount of work (and it'd\n> carry the risk of missing some places...).\n\nYes, I think we could put it in, though it may have to sequential scan\nto remove the entries.\n\n> \n> > Seems like this not something for 6.5.\n> \n> I think we really ought to do *something*. I'd settle for the\n> brute-force blow-away-all-the-caches answer for now, though.\n\nOK. I wonder if there are any problems with that. I do that in heap.c:\n\n /*\n * This is heavy-handed, but appears necessary bjm 1999/02/01\n * SystemCacheRelationFlushed(relid) is not enough either.\n */\n RelationForgetRelation(relid);\n ResetSystemCache();\n\nas part of a temp table creation to remove any non-temp table entry in\nthe cache. I could not find another way, and because the temp table\ncreation doesn't cause problems, this could probably be used in\ntransaction abort too.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 28 May 1999 14:41:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Ye olde \"relation doesn't quite exist\" problem"
},
{
"msg_contents": "I have dealt with this bug:\n\n> test=> create table bug1 (f1 int28 primary key);\n> ERROR: Can't find a default operator class for type 22.\n> -- That's expected, since we have no index support for int28. But now:\n> test=> create table bug1 (f1 int28);\n> ERROR: Relation 'bug1' already exists\n\nIt is not real clear to me why the existing syscache invalidation\nmechanism (CatalogCacheIdInvalidate() etc) fails to handle this case,\nbut it doesn't. Perhaps it is because the underlying pg_class tuple\nnever actually makes it to \"confirmed good\" status, so the SI code\nfigures it can ignore it.\n\nI think the correct place to handle the problem is in\nSystemCacheRelationFlushed() in catcache.c. That routine is called by\nRelationFlushRelation() (which does the same task for the relcache).\nUnfortunately, it was only handling one aspect of the cache-update\nproblem: it was cleaning out the cache associated with a system table\nwhen the *system table's* relcache entry was flushed. It didn't scan\nthe cache contents to see if any of the records are associated with a\nnon-system table that's being flushed.\n\nFor the moment, I have made it call ResetSystemCache() --- that is, just\nflush *all* the cache entries. Scanning the individual entries to find\nthe ones referencing the given relID would require knowing exactly which\ncolumn to look in for each kind of system cache, which is more knowledge\nthan catcache.c actually has. Eventually we could improve it.\n\nThis means it is no longer necessary for heap.c or index.c to call\nResetSystemCache() when handling a temp table --- their calls to\nRelationForgetRelation are sufficient. I have applied those changes\nas well.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 22:20:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Ye olde \"relation doesn't quite exist\" problem "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I think the correct place to handle the problem is in\n> SystemCacheRelationFlushed() in catcache.c. That routine is called by\n> RelationFlushRelation() (which does the same task for the relcache).\n> Unfortunately, it was only handling one aspect of the cache-update\n> problem: it was cleaning out the cache associated with a system table\n> when the *system table's* relcache entry was flushed. It didn't scan\n> the cache contents to see if any of the records are associated with a\n> non-system table that's being flushed.\n> \n> For the moment, I have made it call ResetSystemCache() --- that is, just\n> flush *all* the cache entries. Scanning the individual entries to find\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nIsn't is tooooo bad for performance ?!\n\n> the ones referencing the given relID would require knowing exactly which\n> column to look in for each kind of system cache, which is more knowledge\n> than catcache.c actually has. Eventually we could improve it.\n> \n> This means it is no longer necessary for heap.c or index.c to call\n> ResetSystemCache() when handling a temp table --- their calls to\n> RelationForgetRelation are sufficient. I have applied those changes\n> as well.\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 11:10:55 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Ye olde \"relation doesn't quite exist\" problem"
},
{
"msg_contents": "> For the moment, I have made it call ResetSystemCache() --- that is, just\n> flush *all* the cache entries. Scanning the individual entries to find\n> the ones referencing the given relID would require knowing exactly which\n> column to look in for each kind of system cache, which is more knowledge\n> than catcache.c actually has. Eventually we could improve it.\n> \n> This means it is no longer necessary for heap.c or index.c to call\n> ResetSystemCache() when handling a temp table --- their calls to\n> RelationForgetRelation are sufficient. I have applied those changes\n> as well.\n\nThanks. I am a little confused. I thought you just flushed only on\nelog()/abort. How does the new code work.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 23:13:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Ye olde \"relation doesn't quite exist\" problem"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > For the moment, I have made it call ResetSystemCache() --- that is, just\n> > flush *all* the cache entries. Scanning the individual entries to find\n> > the ones referencing the given relID would require knowing exactly which\n> > column to look in for each kind of system cache, which is more knowledge\n> > than catcache.c actually has. Eventually we could improve it.\n> >\n> > This means it is no longer necessary for heap.c or index.c to call\n> > ResetSystemCache() when handling a temp table --- their calls to\n> > RelationForgetRelation are sufficient. I have applied those changes\n> > as well.\n> \n> Thanks. I am a little confused. I thought you just flushed only on\n ^^^^^^^^^^^^^^^^^^^^\n> elog()/abort. How does the new code work.\n ^^^^^^^^^^^^\nIt seems as more right thing to do.\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 11:20:58 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Ye olde \"relation doesn't quite exist\" problem"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Bruce Momjian wrote:\n>> Thanks. I am a little confused. I thought you just flushed only on\n> ^^^^^^^^^^^^^^^^^^^^\n>> elog()/abort. How does the new code work.\n> ^^^^^^^^^^^^\n> It seems as more right thing to do.\n\nWhat I just committed does the cache flush whenever\nRelationFlushRelation is called --- in particular, elog/abort will\ncause it to happen if there are any created-in-current-transaction\nrelations to be disposed of. But otherwise, no flush.\n\nThe obvious question about that is \"what about modifications to\ncacheable tuples that are not triggered by a relation creation?\"\nI think that those cases are OK because they are covered by the\nshared-invalidation code. At least, we have no bug reports to\nprove the contrary...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 23:34:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Ye olde \"relation doesn't quite exist\" problem "
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Tom Lane wrote:\n>> For the moment, I have made it call ResetSystemCache() --- that is, just\n>> flush *all* the cache entries. Scanning the individual entries to find\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> Isn't it tooooo bad for performance ?!\n\nIt's not ideal, by any means. But I don't know how to fix it better\nright now, and we've got a release to ship ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jun 1999 10:08:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Ye olde \"relation doesn't quite exist\" problem "
},
{
"msg_contents": "\nAdded to TODO:\n\n* elog() flushes cache, try invalidating just entries from current xact\n\n\n> Bruce Momjian <[email protected]> writes:\n> > OK, let me give two ideas here. First, we could create a linked list of\n> > all cache additions that happen inside a transaction. If the\n> > transaction aborts, we can invalidate all the cache entries in the list.\n> > Second, we could just invalidate the entire cache on a transaction\n> > abort. Third, we could somehow invalidate the cache on transaction\n> > abort \"only\" if there was some system table modification in the\n> > transaction. The third one seems a little harder.\n> \n> Yes, the second one was the quick-and-dirty answer that occurred to me.\n> That would favor apps that seldom incur errors (no extra overhead to\n> keep track of cache changes), but would be bad news for those that\n> often incur errors (unnecessary cache reloads).\n> \n> Is there room in the SysCaches for the transaction ID of the last\n> transaction to modify each entry? That would provide an easy and\n> inexpensive way of finding the ones to zap when the current xact is\n> aborted, I would think: abort would just scan all the caches looking\n> for entries with the current xact ID, and invalidate only those entries.\n> The cost in the no-error case would just be storing an additional\n> field whenever an entry is modified; seems cheap enough. However,\n> if there are a lot of different places in the code that can create/\n> modify a cache entry, this could be a fair amount of work (and it'd\n> carry the risk of missing some places...).\n> \n> > Seems like this not something for 6.5.\n> \n> I think we really ought to do *something*. 
I'd settle for the\n> brute-force blow-away-all-the-caches answer for now, though.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 18:05:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Ye olde \"relation doesn't quite exist\" problem"
}
] |
[
{
"msg_contents": "Snapshot of a few hours ago on 3.2 FreeBSD. The trigger regression test\n(and a few others) fail. From looking at it, the trigger regression test\nfails because refint fails.\n\nIf one simply tries to use the stuff in contrib/spi, the failure is pretty\neasy to\nsee -- the example in contrib/spi/refint.example fails:\nCREATE TRIGGER CT BEFORE INSERT OR UPDATE ON C FOR EACH ROW\nEXECUTE PROCEDURE\ncheck_primary_key ('REFC', 'A', 'ID');\nCREATE\n\n-- Now try\n\nINSERT INTO A VALUES (10);\nINSERT 18567 1\nINSERT INTO A VALUES (20);\nINSERT 18568 1\nINSERT INTO A VALUES (30);\nINSERT 18569 1\nINSERT INTO A VALUES (40);\nINSERT 18570 1\nINSERT INTO A VALUES (50);\nINSERT 18571 1\n\nINSERT INTO B VALUES (1); -- invalid reference\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\n\n\n",
"msg_date": "Thu, 27 May 1999 18:34:50 -0400",
"msg_from": "\"Nat Howard\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "refint (& others?) on current snapshot"
},
{
"msg_contents": "\nI assume this is fixed in 6.5.\n\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Snapshot of a few hours ago on 3.2 FreeBSD. The trigger regression test\n> (and a few others) fail. From looking at it, the trigger regression test\n> fails because refint fails.\n> \n> If one simply tries to use the stuff in contrib/spi, the failure is pretty\n> easy to\n> see -- the example in contrib/spi/refint.example fails:\n> CREATE TRIGGER CT BEFORE INSERT OR UPDATE ON C FOR EACH ROW\n> EXECUTE PROCEDURE\n> check_primary_key ('REFC', 'A', 'ID');\n> CREATE\n> \n> -- Now try\n> \n> INSERT INTO A VALUES (10);\n> INSERT 18567 1\n> INSERT INTO A VALUES (20);\n> INSERT 18568 1\n> INSERT INTO A VALUES (30);\n> INSERT 18569 1\n> INSERT INTO A VALUES (40);\n> INSERT 18570 1\n> INSERT INTO A VALUES (50);\n> INSERT 18571 1\n> \n> INSERT INTO B VALUES (1); -- invalid reference\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n> \n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 17:49:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] refint (& others?) on current snapshot"
}
] |
[
{
"msg_contents": "Is it possible to do a nonblocking lock? That is, \nI want several clients to execute,\n\n begin\n if table A is locked\n then\n go around doing stuff on other tables\n else\n lock A and do stuff on A\n endif\n\nthe problem is, if I use a normal lock, then \nafter one client has locked and is doing stuff on A\nthe other one will block and thus it won't be able\nto go around doing stuff on other tables. Is it\npossible to do a nonblocking lock that will just\nfail if the table is locked already? \n\n\nNOTE: I tried using PQrequestCancel but it won't\ncancel the request. It still blocks for as long\nas the lock lasts. The only way around I've found so \nfar is to use PQreset. That's crude but works. But \nit leaves a dangling postmaster process that lives\nuntil the original lock is freed. Any other ideas? \n\nThanks a lot \n\nPablo Funes\nBrandeis University\[email protected]\n",
"msg_date": "Thu, 27 May 1999 18:42:11 -0400 (EDT)",
"msg_from": "Pablo Funes <[email protected]>",
"msg_from_op": true,
"msg_subject": "nonblocking lock? "
},
{
"msg_contents": "> Is it possible to do a nonblocking lock? That is, \n> I want several clients to execute,\n> \n> begin\n> if table A is locked\n> then\n> go around doing stuff on other tables\n> else\n> lock A and do stuff on A\n> endif\n> \n> the problem is, if I use normal lock, then \n> after one client has locked and is doing stuff on A\n> the other one will block and thus it won't be able\n> to go around doing stuff on other tables. Is it\n> possible to do a nonblocking lock that will just\n> fail if the table is locked already? \n\nTry with user locks. You can find the code in contrib/userlocks.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Wed, 2 Jun 1999 12:23:47 +0200 (MEST)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] nonblocking lock?"
},
{
"msg_contents": "> > Is it possible to do a nonblocking lock? That is, \n> > I want several clients to execute,\n> > \n> > begin\n> > if table A is locked\n> > then\n> > go around doing stuff on other tables\n> > else\n> > lock A and do stuff on A\n> > endif\n> > \n> > the problem is, if I use normal lock, then \n> > after one client has locked and is doing stuff on A\n> > the other one will block and thus it won't be able\n> > to go around doing stuff on other tables. Is it\n> > possible to do a nonblocking lock that will just\n> > fail if the table is locked already? \n> \n> Try with user locks. You can find the code in contrib/userlocks.\n\nYes, this is the proper PostgreSQL solution.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jun 1999 11:33:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] nonblocking lock?"
}
] |
[
{
"msg_contents": "Here is a draft of a proposed article for an online computer magazine,\nperhaps the Daemon News. They have already expressed their interest in\nthe article.\n\nI am interested in any comments, good or bad.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n\n\n\n\n\n\n\n\nThe History of PostgreSQL Development\n\n\nDraft\n\nPostgreSQL is the most advanced\nopen-source database server. It is Object-Relational (ORDBMS), and\nsupported by a team of Internet developers. PostgreSQL began as Ingres,\ndeveloped at the University of California at Berkeley. The Ingres\ncode was taken and enhanced by Ingres Corporation, which produced\none of the first commercially successful relational database servers. \n(Ingres Corp. was later purchased by Computer Associates.) The Ingres\ncode was taken by Michael Stonebraker as part of a Berkeley project to\ndevelop an object-relational database server called Postgres. The\nPostgres code was taken by Illustra and developed into a commercial product. \n(Illustra was later purchased by Informix and integrated into Informix's\nUniversal Server.) Several graduate students added SQL capabilities\nto Postgres, and called it Postgres95. The graduate students left\nBerkeley, but the code was maintained by one of the graduate students,\nJolly Chen, and had an active mailing list.\nIn the summer of 1996, it became clear that the demand for an open-source\nSQL database server was great, and a team should be formed to continue\ndevelopment. Marc G. Fournier, in Toronto, Canada, offered to host\nthe mailing list, and provide a server to host the source tree. 
The\n1,000 mailing list subscribers were moved to the new list, and a server\nwas configured, giving a few people login accounts to apply patches to\nthe source tree using CVS.\nJolly Chen had stated, \"This project needs a few people with lots of\ntime, not many people with a little time.\" With 250,000 lines of\nC code, we understood what he meant. In the early days, there were\nfour major people involved: Marc, Thomas Lockhart in Pasadena, California,\nVadim Mikheev in Krasnoyarsk, Russia, and myself. We all had\nfull-time jobs, so we were doing this in our spare time. It certainly\nwas a challenge.\nOur first goal was to scour the old mailing list, evaluating patches\nthat had been posted to fix various problems. The system was quite\nfragile then, and not easily understood. During the first six months\nof development, there was fear that some patch would break the system,\nand we would never be able to correct the problem. Many problem reports\nhad us scratching our heads, trying to figure out not only what was wrong,\nbut how the system even performed many functions.\nWe inherited a huge installed base. A typical bug report was,\n\"When I do this, it crashes the database backend.\" We had a whole\nlist of them. It became clear that some organization was needed. \nMost bug reports required significant research to fix, and many were duplicates,\nso our TODO list\nreported every buggy SQL query. It helped us identify our bugs, and\nmade users aware of them too, cutting down on duplicate bug reports. \nWe had many eager developers, but the learning curve in understanding how\nthe backend worked was significant. Many developers got involved\nin the edges of the source code, like language interfaces or database tools,\nwhere things were easier to understand. Other developers focused\non specific problem queries, trying to locate the source of the bug. \nIt was amazing to see that many bugs were fixed with just one line of C\ncode. 
Postgres had evolved in an academic environment, and had not\nbeen exposed to the full spectrum of real-world queries. During that\ntime, there was talk of adding features, but the instability of the system\nmade bug fixing our major focus.\nWe changed our name from Postgres95 to PostgreSQL. It is a mouthful,\nbut touts our SQL capabilities. We started distributing our source\ntree using sup, which allowed people to keep up-to-date copies of\nthe development tree without downloading a whole tarball. We later\nswitched to remote CVS.\nReleases were every 3-5 months. This consisted of 2-3 months of\ndevelopment, one month of beta testing, a major release, and a few weeks\nto issue subreleases to correct serious bugs. We were never tempted\nto do a more aggressive schedule with more releases. A database server\nis not like a word processor or a game, where you can easily restart it\nif there is a problem. Databases are multi-user, and lock user data\ninside our servers, so we have to be very careful that released software\nis as reliable as possible.\nDevelopment of source code of this scale and complexity is not for the\nnovice. We had trouble getting developers interested in a project\nwith such a steep learning curve. However, our civilized atmosphere,\nand our improved reliability and performance, finally helped attract the\nexperienced talent we needed.\nGetting our developers the knowledge they needed to assist with PostgreSQL\nwas clearly a priority. We had a TODO list that outlined what needed\nto be done, but with 250,000 lines of code, taking on any TODO item was\na major project. We realized developer education would pay major\nbenefits in helping people get started. We wrote a flowchart\nof the backend modules, outlining the purpose of each. We wrote a\ndevelopers\nFAQ, to describe some of the common questions/troubles of PostgreSQL\ndevelopers. 
With this, developers became productive much quicker.\nThe source code we inherited from Berkeley was very modular, but suffered\nfrom bit rot, and some Berkeley coders hadn't understood the proper way\nto handle certain tasks. Their coding styles were also quite varied. \nWe wrote a tool to format/indent the entire source tree in a consistent\nmanner. We wrote a script to find functions that could be marked\nas static, or never-called functions that could be removed completely. \nThese are run just before each release. A release checklist reminds\nus of the things that have to be changed for each release.\nAs we gained knowledge of the code, we became able to perform more complicated\nfixes and feature additions. We started to redesign poorly structured\ncode. We moved into a mode where each release had major features,\ninstead of just fixes for previous bugs. We improved SQL conformance,\nadded subselects, improved locking, and added major missing SQL functionality.\nThe Usenet discussion group archives started touting us. In the\nprevious year, we had searched for PostgreSQL, and found that many people\nwere recommending other databases, even though we were addressing user concerns\nas rapidly as possible. One year later, Usenet clearly recommended\nus to users who needed transaction support, complex queries, commercial-grade\nSQL support, complex data types, and reliability. Other databases\nwere recommended when speed was the overriding concern. This more\nclearly portrayed our strengths. RedHat's shipment of PostgreSQL\nas part of their Linux distribution quickly multiplied our user base.\nEvery release is a major improvement over the last. Our upcoming\n6.5 release marks the development team's final mastery of the source code\nwe inherited from Berkeley. Finally, every code module is understood\nby at least one development team member. We are now easily\nadding major features, thanks to the increasing size and experience of\nour world-wide development\nteam. 
Like most open-source projects, we don't know how many\npeople are using our software, but our increased functionality, visibility\nand mailing list traffic clearly point to continued growth for PostgreSQL.",
"msg_date": "Fri, 28 May 1999 00:45:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proposed article on PostgreSQL development"
}
] |
[
{
"msg_contents": "SELECT * FROM test WHERE test IN (SELECT * FROM test) fails with strange error\nWhen creating a table with either type inet or type cidr as a primary, unique\n key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\nFix function pointer calls to take Datum args for char and int2 args (egcs)\nRegression test for new Numeric type\nLarge Object memory problems\nrefint problems\ninvalidate cache on aborted transaction\nspinlock stuck problem\nbenchmark performance problem\n\nMake psql \\help, man pages, and sgml reflect changes in grammar\nMarkup sql.sgml, Stefan's intro to SQL\nGenerate Admin, User, Programmer hardcopy postscript\nGenerate INSTALL and HISTORY from sgml sources.\nUpdate ref/lock.sgml, ref/set.sgml to reflect MVCC and locking changes.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 28 May 1999 00:58:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Open 6.5 items"
},
{
"msg_contents": ">spinlock stuck problem\n\nThis time I have tested on another slower/less memory machine. Seems\nthings are getting worse. I got:\n\n\tLockAcquire: xid table corrupted\n\nThis comes from:\n\n \t/*\n \t * Find or create an xid entry with this tag\n \t */\n \tresult = (XIDLookupEnt *) hash_search(xidTable, (Pointer) &item,\n\n \t HASH_ENTER, &found);\n \tif (!result)\n \t{\n \t\telog(NOTICE, \"LockAcquire: xid table corrupted\");\n \t\treturn STATUS_ERROR;\n \t}\n\nAs you can see the acquired master lock is never released, and all\nbackends get stuck. (of course, the corrupted xid table is a problem too\n).\n\nAnother error was:\n\n\tout of free buffers: time to abort !\n\nI will do more testing...\n---\nTatsuo Ishii\n\n",
"msg_date": "Fri, 28 May 1999 14:20:22 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> \tLockAcquire: xid table corrupted\n\n> This comes from:\n\n> \t/*\n> \t * Find or create an xid entry with this tag\n> \t */\n> \tresult = (XIDLookupEnt *) hash_search(xidTable, (Pointer) &item,\n\n> \t HASH_ENTER, &found);\n> \tif (!result)\n> \t{\n> \t\telog(NOTICE, \"LockAcquire: xid table corrupted\");\n> \t\treturn STATUS_ERROR;\n> \t}\n\n> As you can see the aquired master lock never released, and all\n> backends get stucked. (of course, corrupted xid table is a problem too\n\nActually, corrupted xid table is *the* problem --- whatever happens\nafter that is just collateral damage. (The elog should likely be\nelog(FATAL) not NOTICE...)\n\nIf I recall the dynahash.c code correctly, a null return value\nindicates either damage to the structure of the table (ie someone\nstomped on memory that didn't belong to them) or running out of memory\nto add entries to the table. The latter should be impossible if we\nsized shared memory correctly. Perhaps the table size estimation code\nhas been obsoleted by recent changes?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 28 May 1999 02:07:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "Thus spake Bruce Momjian\n> When creating a table with either type inet or type cidr as a primary,unique\n> key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\n\nSo have we decided that this is still to be fixed? If so, it's an easy fix\nbut we have to decide which of the following is true.\n\n 198.68.123.0/24 < 198.68.123.0/27\n 198.68.123.0/24 > 198.68.123.0/27\n\nMaybe deciding that should be the TODO item. :-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 28 May 1999 03:21:37 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> If I recall the dynahash.c code correctly, a null return value\n> indicates either damage to the structure of the table (ie someone\n> stomped on memory that didn't belong to them) or running out of memory\n> to add entries to the table. The latter should be impossible if we\n\nQuite different cases and should result in different reactions.\nIf the structure is corrupted then only abort() is the proper thing.\nIf running out of memory then elog(ERROR) is enough.\n\n> sized shared memory correctly. Perhaps the table size estimation code\n> has been obsoleted by recent changes?\n\nlock.h:\n\n/* ----------------------\n * The following defines are used to estimate how much shared \n * memory the lock manager is going to require.\n * See LockShmemSize() in lock.c.\n *\n * NLOCKS_PER_XACT - The number of unique locks acquired in a transaction \n * NLOCKENTS - The maximum number of lock entries in the lock table.\n * ----------------------\n */\n#define NLOCKS_PER_XACT 40\n ^^\nIsn't it too low?\n\n#define NLOCKENTS(maxBackends) (NLOCKS_PER_XACT*(maxBackends))\n\nAnd now - LockShmemSize() in lock.c:\n\n /* lockHash table */\n size += hash_estimate_size(NLOCKENTS(maxBackends),\n ^^^^^^^^^^^^^^^^^^^^^^\n SHMEM_LOCKTAB_KEYSIZE,\n SHMEM_LOCKTAB_DATASIZE);\n\n /* xidHash table */\n size += hash_estimate_size(maxBackends,\n ^^^^^^^^^^^\n SHMEM_XIDTAB_KEYSIZE,\n SHMEM_XIDTAB_DATASIZE);\n\nWhy is just maxBackends here? NLOCKENTS should be used too\n(each transaction lock requires its own xidhash entry).\n\nVadim\n",
"msg_date": "Fri, 28 May 1999 18:31:05 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n>> If I recall the dynahash.c code correctly, a null return value\n>> indicates either damage to the structure of the table (ie someone\n>> stomped on memory that didn't belong to them) or running out of memory\n>> to add entries to the table. The latter should be impossible if we\n\n> Quite different cases and should result in different reactions.\n\nI agree; will see about cleaning up hash_search's call convention after\n6.5 is done. Actually, maybe I should do it now? I'm not convinced yet\nwhether the reports we're seeing are due to memory clobber or running\nout of space... fixing this may be the easiest way to find out.\n\n> #define NLOCKS_PER_XACT 40\n> ^^\n> Isn't it too low?\n\nYou tell me ... that was the number that was in the 6.4 code, but I\nhave no idea if it's right or not. (Does MVCC require more locks\nthan the old stuff?) What is a good upper bound on the number\nof concurrently existing locks?\n\n> /* xidHash table */\n> size += hash_estimate_size(maxBackends,\n> ^^^^^^^^^^^\n> SHMEM_XIDTAB_KEYSIZE,\n> SHMEM_XIDTAB_DATASIZE);\n\n> Why is just maxBackends here? NLOCKENTS should be used too\n> (each transaction lock requires its own xidhash entry).\n\nShould it be NLOCKENTS(maxBackends) xid entries, or do you mean\nNLOCKENTS(maxBackends) + maxBackends? Feel free to stick in any\nestimates that you like better --- what's there now is an interpretation\nof what the 6.4 code was trying to do (but it was sufficiently buggy and\nunreadable that it was probably coming out with different numbers in\nthe end...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 28 May 1999 10:10:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> but we have to decide which of the following is true.\n\n> 198.68.123.0/24 < 198.68.123.0/27\n> 198.68.123.0/24 > 198.68.123.0/27\n\nI'd say the former, on the same principle that 'abc' < 'abcd'.\nThink of the addresses as being bit strings of the specified length,\nand compare them the same way character strings are compared.\n\nBut if Vixie's got a different opinion, I defer to him...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 28 May 1999 10:22:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Vadim Mikheev <[email protected]> writes:\n> >> If I recall the dynahash.c code correctly, a null return value\n> >> indicates either damage to the structure of the table (ie someone\n> >> stomped on memory that didn't belong to them) or running out of memory\n> >> to add entries to the table. The latter should be impossible if we\n> \n> > Quite different cases and should result in different reactions.\n> \n> I agree; will see about cleaning up hash_search's call convention after\n> 6.5 is done. Actually, maybe I should do it now? I'm not convinced yet\n> whether the reports we're seeing are due to memory clobber or running\n> out of space... fixing this may be the easiest way to find out.\n\nImho, we have to fix it in some way before 6.5.\nEither by changing dynahash.c (to return 0x1 if the table is\ncorrupted and 0x0 if out of space) or by changing\nelog(NOTICE) to elog(ERROR).\n\n> \n> > #define NLOCKS_PER_XACT 40\n> > ^^\n> > Isn't it too low?\n> \n> You tell me ... that was the number that was in the 6.4 code, but I\n> have no idea if it's right or not. (Does MVCC require more locks\n> than the old stuff?) What is a good upper bound on the number\n> of concurrently existing locks?\n\nProbably yes, because writers can continue to work and lock\nother tables instead of sleeping on the first lock due to a concurrent\nselect. I'll change it to 64, but this should be a configurable\nthing.\n\n> \n> > /* xidHash table */\n> > size += hash_estimate_size(maxBackends,\n> > ^^^^^^^^^^^\n> > SHMEM_XIDTAB_KEYSIZE,\n> > SHMEM_XIDTAB_DATASIZE);\n> \n> > Why is just maxBackends here? NLOCKENTS should be used too\n> > (each transaction lock requires its own xidhash entry).\n> \n> Should it be NLOCKENTS(maxBackends) xid entries, or do you mean\n> NLOCKENTS(maxBackends) + maxBackends? 
Feel free to stick in any\n> estimates that you like better --- what's there now is an interpretation\n> of what the 6.4 code was trying to do (but it was sufficiently buggy and\n> unreadable that it was probably coming out with different numbers in\n> the end...)\n\nJust NLOCKENTS(maxBackends) - I'll change it now.\n\nVadim\n",
"msg_date": "Sat, 29 May 1999 13:51:13 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Thus spake Tom Lane\n> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > but we have to decide which of the following is true.\n> \n> > 198.68.123.0/24 < 198.68.123.0/27\n> > 198.68.123.0/24 > 198.68.123.0/27\n> \n> I'd say the former, on the same principle that 'abc' < 'abcd'.\n\nAnd, in fact, that's what happens if you use the operators. The only\nplace they are equal is when sorting them so they can't be used as\nprimary keys. I guess there is no argument about the sorting order\nif we think they should be sorted. There is still the question of\nwhether or not they should be sorted. There seems to be tacit agreement\nbut could we have a little more discussion. The question is, when inet\nor cidr is used as the primary key on a table, should they be considered\nequal. In fact, think about the question separately as we may want a\ndifferent behaviour for each. Here is my breakdown of the question.\n\nFor inet type, the value specifies primarily, I think, the host but\nalso carries information about its place on the network. Given an inet\ntype you can extract the host, broadcast, netmask and even the cidr\nthat it is part of. So, 198.68.123.0/24 and 198.68.123.0/27 really\nrefer to the same host but on different networks. Since a host can only\nbe on one network, there is an argument that they can't both be used\nas the primary key in the same table.\n\nA cidr type is primarily a network. In fact, some valid inet values\naren't even valid cidr. So, the question is, if one network is part\nof another then should it be possible to have both as a primary key?\n\nOf course, both of these beg the real question, should either of these\ntypes be used as a primary key, but that is a database design question.\n\n> Think of the addresses as being bit strings of the specified length,\n> and compare them the same way character strings are compared.\n\nNot sure that that clarifies it but we do have the code to order them\nin any case. 
We just need to decide whether we want to.\n\n> But if Vixie's got a different opinion, I defer to him...\n\nPaul's code orders them without regard to netmask which implies \"no\"\nas the answer to the question but his original code only referred to\nwhat we eventually called the cidr type. The question would still\nbe open for the inet type anyway.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 29 May 1999 08:07:42 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> And, in fact, that's what happens if you use the operators. The only\n> place they are equal is when sorting them so they can't be used as\n> primary keys.\n\nHuh? Indexes and operators are the same thing --- or more specifically,\nindexes rely on operators to compare keys. I don't see how it's even\n*possible* that an index would think that two keys are equal when the\nunderlying = operator says they are not.\n\nA little experimentation shows that's indeed what's happening, though.\nWeird. Is this a deliberate effect, and if so how did you achieve it?\nIt looks like what could be a serious bug to me.\n\n> I guess there is no argument about the sorting order\n> if we think they should be sorted. There is still the question of\n> whether or not they should be sorted. There seems to be tacit sgreement\n> but could we have a little more discussion. The question is, when inet\n> or cidr is used as the primary key on a table, should they be considered\n> equal. In fact, think about the question separately as we may want a\n> different behaviour for each.\n\nI'd argue that plain indexing ought not try to do anything especially\nsubtle --- in particular it ought not vary from the behavior of the\ncomparison operators for the type. If someone wants a table wherein you\ncan't enter two spellings of the same hostname, the right way would be\nto construct a unique functional index using a function that reduces the\nINET type into the simpler form. A good analogy might be a text field\nwhere you don't want any two entries to be equal on a case-insensitive\nbasis. You don't up and change the behavior of indexing to be\ncase-insensitive, you say\n\tCREATE UNIQUE INDEX foo_f1_key ON foo (lower(f1) text_ops);\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 May 1999 11:28:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "I wrote:\n> A little experimentation shows that's indeed what's happening, though.\n> Weird. Is this a deliberate effect, and if so how did you achieve it?\n\nOh, I see it: the network_cmp function is deliberately inconsistent with\nthe regular comparison functions on network values.\n\nThis is *very bad*. Indexes depend on both the operators and the cmp\nsupport function. You cannot have inconsistent behavior between these\nfunctions, or indexing will misbehave. Do I need to gin up an example\nwhere it fails?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 May 1999 11:47:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "Hello all,\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Vadim Mikheev\n> Sent: Saturday, May 29, 1999 2:51 PM\n> To: Tom Lane\n> Cc: [email protected]; PostgreSQL-development\n> Subject: Re: [HACKERS] Open 6.5 items\n>\n>\n> Tom Lane wrote:\n> >\n> > Vadim Mikheev <[email protected]> writes:\n> > >> If I recall the dynahash.c code correctly, a null return value\n> > >> indicates either damage to the structure of the table (ie someone\n> > >> stomped on memory that didn't belong to them) or running out\n> of memory\n> > >> to add entries to the table. The latter should be impossible if we\n> >\n> > > Quite different cases and should result in different reactions.\n> >\n> > I agree; will see about cleaning up hash_search's call convention after\n> > 6.5 is done. Actually, maybe I should do it now? I'm not convinced yet\n> > whether the reports we're seeing are due to memory clobber or running\n> > out of space... fixing this may be the easiest way to find out.\n>\n> Imho, we have to fix it in some way before 6.5\n> Either by changing dynahash.c (to return 0x1 if table is\n> corrupted and 0x0 if out of space) or by changing\n> elog(NOTICE) to elog(ERROR).\n>\n\nAnother case exists which causes a stuck spinlock abort.\n\n    status = WaitOnLock(lockmethod, lock, lockmode);\n\n    /*\n     * Check the xid entry status, in case something in the ipc\n     * communication doesn't work correctly.\n     */\n    if (!((result->nHolding > 0) && (result->holders[lockmode] > 0)))\n    {\n        XID_PRINT_AUX(\"LockAcquire: INCONSISTENT \", result);\n        LOCK_PRINT_AUX(\"LockAcquire: INCONSISTENT \", lock, lockmode);\n        /* Should we retry ? */\n        return FALSE;\n\nThis case returns without releasing LockMgrLock and doesn't even call\nelog().\nAs far as I see, different entries in xidHash have the same key when the above\ncase occurs. Moreover xidHash has been in an abnormal state since the\nnumber of xidHash entries exceeded 256.\n\nIs this bug solved by the change maxBackends->NLOCKENTS(maxBackends)\nby Vadim or the hash change by Tom ?\n\n\nAs for my test case, xidHash is filled with XactLockTable entries which have\nbeen acquired by XactLockTableWait().\nCould those entries be released immediately after they are acquired ?\n\nThanks.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 31 May 1999 09:33:25 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n> Sent: Friday, May 28, 1999 1:58 PM\n> To: PostgreSQL-development\n> Subject: [HACKERS] Open 6.5 items\n>\n>\n> SELECT * FROM test WHERE test IN (SELECT * FROM test) fails with\n> strange error\n> When creating a table with either type inet or type cidr as a\n> primary,unique\n> key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\n> Fix function pointer calls to take Datum args for char and int2 args(ecgs)\n> Regression test for new Numeric type\n> Large Object memory problems\n> refint problems\n> invalidate cache on aborted transaction\n> spinlock stuck problem\n> benchmark performance problem\n>\n> Make psql \\help, man pages, and sgml reflect changes in grammar\n> Markup sql.sgml, Stefan's intro to SQL\n> Generate Admin, User, Programmer hardcopy postscript\n> Generate INSTALL and HISTORY from sgml sources.\n> Update ref/lock.sgml, ref/set.sgml to reflect MVCC and locking changes.\n>\n\nWhat about mdtruncate() for multi-segments relation ?\nAFAIK,it has not been solved yet.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Mon, 31 May 1999 10:36:45 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> As far as I see,different entries in xidHash have a same key when above\n> case occurs. Moreover xidHash has been in abnormal state since the\n> number of xidHash entries exceeded 256.\n> \n> Is this bug solved by change maxBackends->NLOCKENTS(maxBackends)\n> by Vadim or the change about hash by Tom ?\n\nShould be fixed now.\n\n> \n> As for my test case,xidHash is filled with XactLockTable entries which have\n> been acquired by XactLockTableWait().\n> Could those entries be released immediately after they are acquired ?\n\nOps. Thanks! Must be released. \n\nVadim\n",
"msg_date": "Mon, 31 May 1999 09:41:55 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> > Make psql \\help, man pages, and sgml reflect changes in grammar\n> > Markup sql.sgml, Stefan's intro to SQL\n> > Generate Admin, User, Programmer hardcopy postscript\n> > Generate INSTALL and HISTORY from sgml sources.\n> > Update ref/lock.sgml, ref/set.sgml to reflect MVCC and locking changes.\n> >\n> \n> What about mdtruncate() for multi-segments relation ?\n> AFAIK,it has not been solved yet.\n> \n\nI thought we decided that file descriptors are kept by backends, and are\nstill accessible while new backends don't see the files. Correct?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 30 May 1999 22:15:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Monday, May 31, 1999 11:15 AM\n> To: Hiroshi Inoue\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] Open 6.5 items\n> \n> \n> > > Make psql \\help, man pages, and sgml reflect changes in grammar\n> > > Markup sql.sgml, Stefan's intro to SQL\n> > > Generate Admin, User, Programmer hardcopy postscript\n> > > Generate INSTALL and HISTORY from sgml sources.\n> > > Update ref/lock.sgml, ref/set.sgml to reflect MVCC and \n> locking changes.\n> > >\n> > \n> > What about mdtruncate() for multi-segments relation ?\n> > AFAIK,it has not been solved yet.\n> > \n> \n> I thought we decided that file descriptors are kept by backends, and are\n> still accessable while new backends don't see the files. Correct?\n>\n\nYes, other backends could write to unlinked files which would \nvanish before long.\nI think it's more secure to truncate useless segments to size 0 \nthan unlinking the segments though vacuum would never remove \nuseless segments.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 31 May 1999 11:48:39 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> > I thought we decided that file descriptors are kept by backends, and are\n> > still accessable while new backends don't see the files. Correct?\n> >\n> \n> Yes,other backends could write to unliked files which would be \n> vanished before long.\n> I think it's more secure to truncate useless segments to size 0 \n> than unlinking the segments though vacuum would never remove \n> useless segments.\n\nIf you truncate, other backends will see the data gone, and will be\nwriting into the middle of an empty file. Better to remove.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 30 May 1999 23:40:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> \n> > > I thought we decided that file descriptors are kept by \n> backends, and are\n> > > still accessable while new backends don't see the files. Correct?\n> > >\n> > \n> > Yes,other backends could write to unliked files which would be \n> > vanished before long.\n> > I think it's more secure to truncate useless segments to size 0 \n> > than unlinking the segments though vacuum would never remove \n> > useless segments.\n> \n> If you truncate, other backends will see the data gone, and will be\n> writing into the middle of an empty file. Better to remove.\n>\n\nI couldn't explain more because of my poor English,sorry.\n\nBut my test case usually causes backend abort.\nMy test case is\n\tWhile 1 or more sessions frequently insert/update a table,\n\tvacuum the table.\n\nAfter vacuum, those sessions abort with message \n\tERROR: cannot open segment .. of relation ...\n\nThis ERROR finally causes spinlock freeze as I reported in a posting\n[HACKERS] spinlock freeze ?(Re: INSERT/UPDATE waiting (another \nexample)). \n\nComments ?\n\nThanks.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 31 May 1999 14:59:33 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> I couldn't explain more because of my poor English,sorry.\n> \n> But my test case usually causes backend abort.\n> My test case is\n> \tWhile 1 or more sessions frequently insert/update a table,\n> \tvacuum the table.\n> \n> After vacuum, those sessions abort with message \n> \tERROR: cannot open segment .. of relation ...\n> \n> This ERROR finally causes spinlock freeze as I reported in a posting\n> [HACKERS] spinlock freeze ?(Re: INSERT/UPDATE waiting (another \n> example)). \n> \n> Comments ?\n\nOK, I buy that. How will truncate fix things? Isn't that going to be\nstrange too. Hard to imagine how we are going to modify these things. \nI am now leaning to the truncate option, especially considering that\nusually only the last segment is going to be truncated.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 02:14:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": ">Tom Lane wrote:\n>> \n>> Vadim Mikheev <[email protected]> writes:\n>> >> If I recall the dynahash.c code correctly, a null return value\n>> >> indicates either damage to the structure of the table (ie someone\n>> >> stomped on memory that didn't belong to them) or running out of memory\n>> >> to add entries to the table. The latter should be impossible if we\n>> \n>> > Quite different cases and should result in different reactions.\n>> \n>> I agree; will see about cleaning up hash_search's call convention after\n>> 6.5 is done. Actually, maybe I should do it now? I'm not convinced yet\n>> whether the reports we're seeing are due to memory clobber or running\n>> out of space... fixing this may be the easiest way to find out.\n>\n>Imho, we have to fix it in some way before 6.5\n>Either by changing dynahash.c (to return 0x1 if table is\n>corrupted and 0x0 if out of space) or by changing\n>elog(NOTICE) to elog(ERROR).\n>\n>> \n>> > #define NLOCKS_PER_XACT 40\n>> > ^^\n>> > Isn't it too low?\n>> \n>> You tell me ... that was the number that was in the 6.4 code, but I\n>> have no idea if it's right or not. (Does MVCC require more locks\n>> than the old stuff?) What is a good upper bound on the number\n>> of concurrently existing locks?\n>\n>Probably yes, because of writers can continue to work and lock\n>other tables instead of sleeping of first lock due to concurrent\n>select. I'll change it to 64, but this should be configurable\n>thing.\n>\n>> \n>> > /* xidHash table */\n>> > size += hash_estimate_size(maxBackends,\n>> > ^^^^^^^^^^^\n>> > SHMEM_XIDTAB_KEYSIZE,\n>> > SHMEM_XIDTAB_DATASIZE);\n>> \n>> > Why just maxBackends is here? NLOCKENTS should be used too\n>> > (each transaction lock requieres own xidhash entry).\n>> \n>> Should it be NLOCKENTS(maxBackends) xid entries, or do you mean\n>> NLOCKENTS(maxBackends) + maxBackends? 
Feel free to stick in any\n>> estimates that you like better --- what's there now is an interpretation\n>> of what the 6.4 code was trying to do (but it was sufficiently buggy and\n>> unreadable that it was probably coming out with different numbers in\n>> the end...)\n>\n>Just NLOCKENTS(maxBackends) - I'll change it now.\n\nI have just done cvs update and saw your changes. I tried the same\ntesting as I did before (64 concurrent connections, and each\nconnection executes 100 transactions), but it failed again.\n\n(1) without -B 1024, it failed: out of free buffers: time to abort!\n\n(2) with -B 1024, it went into stuck spin lock\n\nSo I looked into sources a little bit, and made a minor change to\ninclude/storage/lock.h:\n\n#define INIT_TABLE_SIZE 100\n\nto:\n\n#define INIT_TABLE_SIZE 4096\n\nthen restarted postmaster with -B 1024 (this will prevent\nout-of-free-buffers problem, I guess). Now everything seems to work\ngreat!\n\nI suspect that huge INIT_TABLE_SIZE prevented dynamic expanding the\nhash tables and seems there's something wrong in the routines\nresponsible for that.\n\nComments?\n--\nTatsuo Ishii\n\n",
"msg_date": "Mon, 31 May 1999 17:24:46 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> I have just done cvs update and saw your changes. I tried the same\n> testing as I did before (64 conccurrent connections, and each\n> connection excutes 100 transactions), but it failed again.\n> \n> (1) without -B 1024, it failed: out of free buffers: time to abort!\n> \n> (2) with -B 1024, it went into stuck spin lock\n> \n> So I looked into sources a little bit, and made a minor change to\n> include/storage/lock.h:\n> \n> #define INIT_TABLE_SIZE 100\n> \n> to:\n> \n> #define INIT_TABLE_SIZE 4096\n> \n> then restarted postmaster with -B 1024 (this will prevent\n> out-of-free-buffers problem, I guess). Now everything seems to work\n> great!\n> \n> I suspect that huge INIT_TABLE_SIZE prevented dynamic expanding the\n> hash tables and seems there's something wrong in the routines\n> responsible for that.\n\nSeems like that -:(\n\nIf we'll not fix expand hash code before 6.5 release then\nI would recommend to don't use INIT_TABLE_SIZE in\n\n lockMethodTable->lockHash = (HTAB *) ShmemInitHash(shmemName,\n INIT_TABLE_SIZE, MAX_TABLE_SIZE,\n &info, hash_flags);\n\nand\n\n lockMethodTable->xidHash = (HTAB *) ShmemInitHash(shmemName,\n INIT_TABLE_SIZE, MAX_TABLE_SIZE,\n &info, hash_flags);\n\nbut use NLOCKENTS(maxBackends) instead.\n\nVadim\n",
"msg_date": "Mon, 31 May 1999 17:33:52 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n>> As for my test case,xidHash is filled with XactLockTable entries which have\n>> been acquired by XactLockTableWait().\n>> Could those entries be released immediately after they are acquired ?\n\n> Ops. Thanks! Must be released. \n\nDoes this account for the \"ShmemAlloc: out of memory\" errors we've been\nseeing? I spent a good deal of time yesterday grovelling through all\nthe calls to ShmemAlloc, and concluded that (unless there is a memory\nstomp somewhere) it has to be caused by one of the shared hashtables\ngrowing well beyond its size estimate.\n\nI did find that the PROC structures are not counted in the initial\nsizing of the shared memory block. This is no problem at the default\nlimit of 32 backends, but could get to be an issue for hundreds of\nbackends. I will add that item to the size estimate today.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 11:03:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n>> I suspect that huge INIT_TABLE_SIZE prevented dynamic expanding the\n>> hash tables and seems there's something wrong in the routines\n>> responsible for that.\n\n> Seems like that -:(\n\nOK, as the last guy to touch dynahash.c I suppose this is my\nbailiwick... I will look into it today.\n\n> If we'll not fix expand hash code before 6.5 release then\n> I would recommend to don't use INIT_TABLE_SIZE in\n\nIf we can't figure out what's really wrong then that might be\na good kluge solution to get 6.5 out the door. But I'd rather\ntry to fix it right first.\n\nCan anyone send me a test case script that reproduces the problem?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 11:22:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "I wrote:\n>>> I suspect that huge INIT_TABLE_SIZE prevented dynamic expanding the\n>>> hash tables and seems there's something wrong in the routines\n>>> responsible for that.\n\n> OK, as the last guy to touch dynahash.c I suppose this is my\n> bailiwick... I will look into it today.\n\nIt's amazing how much easier it is to see a bug when you know it must be\nthere ;-).\n\nI discovered that the hashtable expansion routine would mess up in the\ncase that all the records in the bucket being split ended up in the new\nbucket rather than the old. In that case it forgot to clear the old\nbucket's chain header, with the result that all the records appeared to\nbe in both buckets at once. This would not be a big problem until and\nunless the first record in the chain got deleted --- it would only be\ncorrectly removed from the new bucket, leaving the old bucket's chain\nheader pointing at a now-free record (and failing to link to any records\nthat had been added to the shared chain on its behalf in the meanwhile).\nDisaster ensues.\n\nAn actual failure via this path seems somewhat improbable, but that\nmay just explain why we hadn't seen it happen very much before...\n\nI have committed a fix in dynahash.c. Hiroshi and Tatsuo, would you\nplease grab latest sources and see whether the problems you are\nobserving are fixed?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 13:13:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I have just done cvs update and saw your changes. I tried the same\n> testing as I did before (64 conccurrent connections, and each\n> connection excutes 100 transactions), but it failed again.\n>\n> (1) without -B 1024, it failed: out of free buffers: time to abort!\n\nRight now, the postmaster will let you set any combination of -B and -N\nyou please. But it seems obvious that there is some minimum number of\nbuffers per backend below which things aren't going to work very well.\nI wonder whether the postmaster startup code ought to enforce a minimum\nratio, say -B at least twice -N ? I have no idea what an appropriate\nlimit would be, however. Vadim, do you have any thoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 14:03:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "I said:\n> I did find that the PROC structures are not counted in the initial\n> sizing of the shared memory block.\n\nEr ... yes they are ... never mind ...\n\nAt this point I am not going to make any further changes to the\nshared-mem code unless Hiroshi and Tatsuo report that there are still\nproblems.\n\nI would still like to alter the calling conventions for hash_search\nand hash_seq, which are ugly and also dependent on static state\nvariables. But as far as I can tell right now, those are just code\nbeautification items rather than repairs for currently-existing bugs.\nSo it seems best not to risk breaking anything with so little testing\ntime left in the 6.5 cycle. I will put them on my to-do-for-6.6 list\ninstead.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 15:01:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Vadim Mikheev <[email protected]> writes:\n> >> As for my test case,xidHash is filled with XactLockTable entries which have\n> >> been acquired by XactLockTableWait().\n> >> Could those entries be released immediately after they are acquired ?\n> \n> > Ops. Thanks! Must be released.\n> \n> Does this account for the \"ShmemAlloc: out of memory\" errors we've been\n> seeing? I spent a good deal of time yesterday grovelling through all\n\nYes.\n\nVadim\n",
"msg_date": "Tue, 01 Jun 1999 10:12:41 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Tatsuo Ishii <[email protected]> writes:\n> > I have just done cvs update and saw your changes. I tried the same\n> > testing as I did before (64 conccurrent connections, and each\n> > connection excutes 100 transactions), but it failed again.\n> >\n> > (1) without -B 1024, it failed: out of free buffers: time to abort!\n> \n> Right now, the postmaster will let you set any combination of -B and -N\n> you please. But it seems obvious that there is some minimum number of\n> buffers per backend below which things aren't going to work very well.\n> I wonder whether the postmaster startup code ought to enforce a minimum\n> ratio, say -B at least twice -N ? I have no idea what an appropriate\n ^^^^^^^^^^^^^^\nIt's enough for select from single table using index, so it's\nprobably good ratio.\n\n> limit would be, however. Vadim, do you have any thoughts?\n\nVadim\n",
"msg_date": "Tue, 01 Jun 1999 10:29:07 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Tom,\n\n>It's amazing how much easier it is to see a bug when you know it must be\n>there ;-).\n>\n>I discovered that the hashtable expansion routine would mess up in the\n>case that all the records in the bucket being split ended up in the new\n>bucket rather than the old. In that case it forgot to clear the old\n>bucket's chain header, with the result that all the records appeared to\n>be in both buckets at once. This would not be a big problem until and\n>unless the first record in the chain got deleted --- it would only be\n>correctly removed from the new bucket, leaving the old bucket's chain\n>header pointing at a now-free record (and failing to link to any records\n>that had been added to the shared chain on its behalf in the meanwhile).\n>Disaster ensues.\n>\n>An actual failure via this path seems somewhat improbable, but that\n>may just explain why we hadn't seen it happen very much before...\n>\n>I have committed a fix in dynahash.c. Hiroshi and Tatsuo, would you\n>please grab latest sources and see whether the problems you are\n>observing are fixed?\n\nBingo! Your fix seems to solve the problem! Now 64 concurrent\ntransactions ran 100 transactions each without any problem. Thanks.\n\nBTW, the script I'm using for the heavy load testing is written in\nJava(not written by me). Do you want to try it?\n\n>> (1) without -B 1024, it failed: out of free buffers: time to abort!\n>\n>Right now, the postmaster will let you set any combination of -B and -N\n>you please. But it seems obvious that there is some minimum number of\n>buffers per backend below which things aren't going to work very well.\n>I wonder whether the postmaster startup code ought to enforce a minimum\n>ratio, say -B at least twice -N ? I have no idea what an appropriate\n>limit would be, however. Vadim, do you have any thoughts?\n\nJust a few questions.\n\n o I observed the backend processes grew ~10MB with -B 1024. 
Is this\nnormal?\n\no Is it possible to let the backend wait for free buffers in case of\ninsufficient shared buffers? (with reasonable retries, of course)\n--\nTatsuo Ishii\n\n",
"msg_date": "Tue, 01 Jun 1999 15:57:46 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> Just a few questions.\n> \n> o I observed the backend processes grew ~10MB with -B 1024. Is this\n> normal?\n\nBackend attaches to 1024*8K + other shmem, so probably\nps takes it into account.\n\n> o Is it possible to let the backend wait for free buffers in case of\n> insufficient shared buffers? (with reasonable retries, of course)\n\nYes, but not now.\n\nVadim\n",
"msg_date": "Tue, 01 Jun 1999 15:40:45 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": ">> Just a few questions.\n>> \n>> o I observed the backend processes grew ~10MB with -B 1024. Is this\n>> normal?\n>\n>Backend attaches to 1024*8K + other shmem, so probably\n>ps takes it into account.\n\nOh, I see.\n\n>> o Is it possible to let the backend wait for free buffers in case of\n>> insufficient shared buffers? (with reasonable retries, of course)\n>\n>Yes, but not now.\n\nAgreed.\n--\nTatsuo Ishii\n\n",
"msg_date": "Tue, 01 Jun 1999 16:44:30 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Tuesday, June 01, 1999 2:14 AM\n> To: Hiroshi Inoue; [email protected]\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] Open 6.5 items \n> \n> \n> I wrote:\n> >>> I suspect that huge INIT_TABLE_SIZE prevented dynamic expanding the\n> >>> hash tables and seems there's something wrong in the routines\n> >>> responsible for that.\n> \n> > OK, as the last guy to touch dynahash.c I suppose this is my\n> > bailiwick... I will look into it today.\n> \n> It's amazing how much easier it is to see a bug when you know it must be\n> there ;-).\n>\n\n[snip] \n\n> \n> I have committed a fix in dynahash.c. Hiroshi and Tatsuo, would you\n> please grab latest sources and see whether the problems you are\n> observing are fixed?\n>\n\nIt works fine.\nThe number of xidHash entries exceeded 600 but the spinlock error didn't \noccur.\n\nHowever, when I did vacuum while testing I got the following error \nmessage.\n\tERROR: Child itemid marked as unused\n\nTransactionId-s of tuples in an update chain may be out of order.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Wed, 2 Jun 1999 10:24:57 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n> Sent: Monday, May 31, 1999 3:15 PM\n> To: Hiroshi Inoue\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] Open 6.5 items\n> \n> \n> > I couldn't explain more because of my poor English,sorry.\n> > \n> > But my test case usually causes backend abort.\n> > My test case is\n> > \tWhile 1 or more sessions frequently insert/update a table,\n> > \tvacuum the table.\n> > \n> > After vacuum, those sessions abort with message \n> > \tERROR: cannot open segment .. of relation ...\n> > \n> > This ERROR finally causes spinlock freeze as I reported in a posting\n> > [HACKERS] spinlock freeze ?(Re: INSERT/UPDATE waiting (another \n> > example)). \n> > \n> > Comments ?\n> \n> OK, I buy that. How will truncate fix things? Isn't that going to be\n> strange too. Hard to imagine how we are going to modify these things. \n> I am now leaning to the truncate option, especially considering that\n> usually only the last segment is going to be truncated.\n>\n\nI made a patch on trial.\n\n1) Useless segments are never removed by my implementation \n because I call FileTruncate() instead of File(Name)Unlink().\n2) mdfd_lstbcnt member of MdfdVec was abused in mdnblocks().\n I am maintaining the value of mdfd_lstbcnt unwillingly.\n Is it preferable to get rid of mdfd_lstbcnt completely ?\n\nI'm not sure that this patch has no problem.\nPlease check and comment on my patch.\n\nI don't have > 1G disk space.\nCould someone run under > 1G environment ?\n \nAs Ole Gjerde mentioned,current implementation by his old \npatch is not right. 
His new patch seems right if vacuum is \nexecuted alone.\nPlease run vacuum while other concurrent sessions are \nreading or writing,if you would see the difference.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n*** storage/smgr/md.c.orig\tWed May 26 16:05:02 1999\n--- storage/smgr/md.c\tWed Jun 2 15:35:35 1999\n***************\n*** 674,684 ****\n \tsegno = 0;\n \tfor (;;)\n \t{\n! \t\tif (v->mdfd_lstbcnt == RELSEG_SIZE\n! \t\t\t|| (nblocks = _mdnblocks(v->mdfd_vfd, BLCKSZ)) == RELSEG_SIZE)\n \t\t{\n- \n- \t\t\tv->mdfd_lstbcnt = RELSEG_SIZE;\n \t\t\tsegno++;\n \n \t\t\tif (v->mdfd_chain == (MdfdVec *) NULL)\n--- 674,685 ----\n \tsegno = 0;\n \tfor (;;)\n \t{\n! \t\tnblocks = _mdnblocks(v->mdfd_vfd, BLCKSZ);\n! \t\tif (nblocks > RELSEG_SIZE)\n! \t\t\telog(FATAL, \"segment too big in mdnblocks!\");\n! \t\tv->mdfd_lstbcnt = nblocks;\n! \t\tif (nblocks == RELSEG_SIZE)\n \t\t{\n \t\t\tsegno++;\n \n \t\t\tif (v->mdfd_chain == (MdfdVec *) NULL)\n***************\n*** 714,745 ****\n \tint\t\t\tcurnblk,\n \t\t\t\ti,\n \t\t\t\toldsegno,\n! \t\t\t\tnewsegno;\n! \tchar\t\tfname[NAMEDATALEN];\n! \tchar\t\ttname[NAMEDATALEN + 10];\n \n \tcurnblk = mdnblocks(reln);\n \toldsegno = curnblk / RELSEG_SIZE;\n \tnewsegno = nblocks / RELSEG_SIZE;\n \n- \tStrNCpy(fname, RelationGetRelationName(reln)->data, NAMEDATALEN);\n- \n- \tif (newsegno < oldsegno)\n- \t{\n- \t\tfor (i = (newsegno + 1);; i++)\n- \t\t{\n- \t\t\tsprintf(tname, \"%s.%d\", fname, i);\n- \t\t\tif (FileNameUnlink(tname) < 0)\n- \t\t\t\tbreak;\n- \t\t}\n- \t}\n #endif\n \n \tfd = RelationGetFile(reln);\n \tv = &Md_fdvec[fd];\n \n \tif (FileTruncate(v->mdfd_vfd, nblocks * BLCKSZ) < 0)\n \t\treturn -1;\n \n \treturn nblocks;\n \n--- 715,766 ----\n \tint\t\t\tcurnblk,\n \t\t\t\ti,\n \t\t\t\toldsegno,\n! \t\t\t\tnewsegno,\n! \t\t\t\tlastsegblocks;\n! 
\tMdfdVec\t\t\t**varray;\n \n \tcurnblk = mdnblocks(reln);\n+ \tif (nblocks > curnblk)\n+ \t\treturn -1;\n \toldsegno = curnblk / RELSEG_SIZE;\n \tnewsegno = nblocks / RELSEG_SIZE;\n \n #endif\n \n \tfd = RelationGetFile(reln);\n \tv = &Md_fdvec[fd];\n \n+ #ifndef LET_OS_MANAGE_FILESIZE\n+ \tvarray = (MdfdVec **)palloc((oldsegno + 1) * sizeof(MdfdVec *));\n+ \tfor (i = 0; i <= oldsegno; i++)\n+ \t{\n+ \t\tif (!v)\n+ \t\t\telog(ERROR,\"segment isn't open in mdtruncate!\");\n+ \t\tvarray[i] = v;\n+ \t\tv = v->mdfd_chain;\n+ \t}\n+ \tfor (i = oldsegno; i > newsegno; i--)\n+ \t{\n+ \t\tv = varray[i];\n+ \t\tif (FileTruncate(v->mdfd_vfd, 0) < 0)\n+ \t\t{\n+ \t\t\tpfree(varray);\n+ \t\t\treturn -1;\n+ \t\t}\n+ \t\tv->mdfd_lstbcnt = 0;\n+ \t}\n+ \t/* Calculate the # of blocks in the last segment */\n+ \tlastsegblocks = nblocks - (newsegno * RELSEG_SIZE);\n+ \tv = varray[newsegno];\n+ \tpfree(varray);\n+ \tif (FileTruncate(v->mdfd_vfd, lastsegblocks * BLCKSZ) < 0)\n+ \t\treturn -1;\n+ \tv->mdfd_lstbcnt = lastsegblocks;\n+ #else\n \tif (FileTruncate(v->mdfd_vfd, nblocks * BLCKSZ) < 0)\n \t\treturn -1;\n+ \tv->mdfd_lstbcnt = nblocks;\n+ #endif\n \n \treturn nblocks;\n \n\n\n",
"msg_date": "Wed, 2 Jun 1999 17:48:22 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> However,when I did vacuum while testing I got the following error\n> message.\n> ERROR: Child itemid marked as unused\n> \n> TransactionId-s of tuples in update chain may be out of order.\n\nI see... Need 1-2 days to fix this -:(\n\nVadim\n",
"msg_date": "Wed, 02 Jun 1999 17:57:03 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> However,when I did vacuum while testing I got the following error\n> message.\n> ERROR: Child itemid marked as unused\n> \n> TransactionId-s of tuples in update chain may be out of order.\n\n1. Fix and explanation in xact.c:CommitTransaction():\n\n RecordTransactionCommit();\n\n /*\n * Let others know about no transaction in progress by me.\n * Note that this must be done _before_ releasing locks we hold\n * and SpinAcquire(ShmemIndexLock) is required - or bad (too high)\n * XmaxRecent value might be used by vacuum: UPDATE with xid 0 is\n * blocked by xid 1' UPDATE, xid 1 is doing commit while xid 2\n * gets snapshot - if xid 2' GetSnapshotData sees xid 1 as running\n * then it must see xid 0 as running as well or XmaxRecent = 1\n * might be used by concurrent vacuum causing\n * ERROR: Child itemid marked as unused\n * This bug was reported by Hiroshi Inoue and I was able to reproduce\n * it with 3 sessions and gdb. - vadim 06/03/99\n */\n if (MyProc != (PROC *) NULL)\n {\n SpinAcquire(ShmemIndexLock);\n MyProc->xid = InvalidTransactionId;\n MyProc->xmin = InvalidTransactionId;\n SpinRelease(ShmemIndexLock);\n }\n\n2. It was possible to get two versions of the same row from\n select. Fixed by moving MyProc->xid assignment from\n StartTransaction() inside GetNewTransactionId().\n\nThanks, Hiroshi! And please run your tests - I used just \n3 sessions and gdb.\n\nVadim\n",
"msg_date": "Thu, 03 Jun 1999 21:44:58 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n>> ERROR: Child itemid marked as unused\n> [ is fixed ]\n\nGreat! Vadim (also Hiroshi and Tatsuo), how many bugs remain on your\nmust-fix-for-6.5 lists? I was just wondering over in the \"Freezing\ndocs\" thread whether we had any problems severe enough to justify\ndelaying the release. It sounds like at least one such problem is\ngone...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 10:59:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Vadim Mikheev <[email protected]> writes:\n> >> ERROR: Child itemid marked as unused\n> > [ is fixed ]\n> \n> Great! Vadim (also Hiroshi and Tatsuo), how many bugs remain on your\n> must-fix-for-6.5 lists? I was just wondering over in the \"Freezing\n> docs\" thread whether we had any problems severe enough to justify\n> delaying the release. It sounds like at least one such problem is\n> gone...\n\nNo one in mine.\n\nThere are still some bad things, but they are old:\n\n1. elog(NOTICE) in lock manager when locking was not succeeded.\n Hope that our recent changes will reduce possibility of this.\n\nHiroshi wrote:\n\n2. \n> spinlock io_in_progress_lock of a buffer page is not\n> released by operations called by elog() such as\n> ProcReleaseSpins(),ResetBufferPool() etc.\n\nI tried to fix this before 6.4 but without success (don't\nremember why).\n\n3.\n> It seems elog(FATAL) doesn't release allocated buffer pages.\n> It's OK ?\n> AFAIC elog(FATAL) causes proc_exit(0) and proc_exit() doesn't\n> call ResetBufferPool().\n\nSeems to me that elog(FATAL) should call siglongjmp(Warn_restart, 1),\nlike elog(ERROR), but force exit in tcop main loop after\nAbortCurrentTransaction(). AbortTransaction() does pretty nice\nthings like RelationPurgeLocalRelation(false) and DestroyNoNameRels()...\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 01:24:28 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n>> It seems elog(FATAL) doesn't release allocated buffer pages.\n>> It's OK ?\n>> AFAIC elog(FATAL) causes proc_exit(0) and proc_exit() doesn't\n>> call ResetBufferPool().\n\n> Seems to me that elog(FATAL) should call siglongjmp(Warn_restart, 1),\n> like elog(ERROR), but force exit in tcop main loop after\n> AbortCurrentTransaction(). AbortTransaction() does pretty nice\n> things like RelationPurgeLocalRelation(false) and DestroyNoNameRels()...\n\nSeems reasonable to me. It seems to me that elog(FATAL) means \"this\nbackend is too messed up to continue, but I think the rest of the\nbackends can keep going.\" So we need to clean up our allocated\nresources before quitting. abort() is for the cases where we think\nshared memory may be corrupted and everyone must bail out. We might\nneed to revisit the uses of each routine and make sure that different\nerror conditions are properly classified.\n\nOf course, if things *are* messed up then trying to AbortTransaction\nmight make it worse. How bad is that risk?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 13:53:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Error exits (Re: [HACKERS] Open 6.5 items)"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Vadim Mikheev <[email protected]> writes:\n> >> It seems elog(FATAL) doesn't release allocated buffer pages.\n> >> It's OK ?\n> >> AFAIC elog(FATAL) causes proc_exit(0) and proc_exit() doesn't\n> >> call ResetBufferPool().\n> \n> > Seems to me that elog(FATAL) should call siglongjmp(Warn_restart, 1),\n> > like elog(ERROR), but force exit in tcop main loop after\n> > AbortCurrentTransaction(). AbortTransaction() does pretty nice\n> > things like RelationPurgeLocalRelation(false) and DestroyNoNameRels()...\n> \n> Seems reasonable to me. It seems to me that elog(FATAL) means \"this\n> backend is too messed up to continue, but I think the rest of the\n> backends can keep going.\" So we need to clean up our allocated\n> resources before quitting. abort() is for the cases where we think\n> shared memory may be corrupted and everyone must bail out. We might\n> need to revisit the uses of each routine and make sure that different\n> error conditions are properly classified.\n> \n> Of course, if things *are* messed up then trying to AbortTransaction\n> might make it worse. How bad is that risk?\n\nDon't know. I think that we have no time to fix this in 6.5, -:(\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 01:59:20 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Error exits (Re: [HACKERS] Open 6.5 items)"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Vadim\n> Mikheev\n> Sent: Thursday, June 03, 1999 10:45 PM\n> To: Hiroshi Inoue\n> Cc: Tom Lane; [email protected]; PostgreSQL-development\n> Subject: Re: [HACKERS] Open 6.5 items\n> \n> \n> Hiroshi Inoue wrote:\n> > \n> > However,when I did vacuum while testing I got the following error\n> > message.\n> > ERROR: Child itemid marked as unused\n> > \n> > TransactionId-s of tuples in update chain may be out of order.\n> \n> 1. Fix and explanation in xact.c:CommitTransaction():\n> \n> RecordTransactionCommit();\n> \n> /*\n> * Let others know about no transaction in progress by me.\n> * Note that this must be done _before_ releasing locks we hold\n> * and SpinAcquire(ShmemIndexLock) is required - or bad (too high)\n> * XmaxRecent value might be used by vacuum: UPDATE with xid 0 is\n> * blocked by xid 1' UPDATE, xid 1 is doing commit while xid 2\n> * gets snapshot - if xid 2' GetSnapshotData sees xid 1 as running\n> * then it must see xid 0 as running as well or XmaxRecent = 1\n> * might be used by concurrent vacuum causing\n> * ERROR: Child itemid marked as unused\n> * This bug was reported by Hiroshi Inoue and I was able to reproduce\n> * it with 3 sessions and gdb. - vadim 06/03/99\n> */\n> if (MyProc != (PROC *) NULL)\n> {\n> SpinAcquire(ShmemIndexLock);\n> MyProc->xid = InvalidTransactionId;\n> MyProc->xmin = InvalidTransactionId;\n> SpinRelease(ShmemIndexLock);\n> }\n> \n> 2. It was possible to get two versions of the same row from\n> select. Fixed by moving MyProc->xid assignment from\n> StartTransaction() inside GetNewTransactionId().\n> \n> Thanks, Hiroshi! 
And please run your tests - I used just \n> 3 sessions and gdb.\n>\n\nUnfortunately,the error still occurs(I changed xact.c as above \nby hand OK ?).\n\nIt seems there are cases that tuples are updated by older \ntransactions than their xmin-s and only some tuples in the middle \nof update chain may be deleted.\n\nI have no idea to fix this now.\nIt's OK for me to leave this unsolved because those cases would \nrarely occur.\n\nThanks.\n\nHiroshi Inoue\[email protected] \n\n",
"msg_date": "Fri, 4 Jun 1999 17:21:13 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Vadim\n> Mikheev\n> Sent: Friday, June 04, 1999 2:24 AM\n> To: Tom Lane\n> Cc: Hiroshi Inoue; [email protected]; PostgreSQL-development\n> Subject: Re: [HACKERS] Open 6.5 items\n> \n> Tom Lane wrote:\n> > \n> > Vadim Mikheev <[email protected]> writes:\n> > >> ERROR: Child itemid marked as unused\n> > > [ is fixed ]\n> > \n> > Great! Vadim (also Hiroshi and Tatsuo), how many bugs remain on your\n> > must-fix-for-6.5 lists? I was just wondering over in the \"Freezing\n> > docs\" thread whether we had any problems severe enough to justify\n> > delaying the release. It sounds like at least one such problem is\n> > gone...\n> \n> No one in mine.\n>\n \n[snip]\n\n> \n> Hiroshi wrote:\n> \n> 2. \n> > spinlock io_in_progress_lock of a buffer page is not\n> > released by operations called by elog() such as\n> > ProcReleaseSpins(),ResetBufferPool() etc.\n> \n> I tried to fix this before 6.4 but without success (don't\n> remember why).\n>\n\nThis is not in my must-fix-for-6.5 lists. \nFor the present this is caused by other bugs not by bufmgr/smgr itself \nand an easy fix may introduce other bugs.\n\n\nAnd on segmented relations.\n\nOle Gjerde who provided the patch for current implementation of \nmdtruncate() sayz.\n\"First, please reverse my patch to mdtruncate() in md.c as soon as\n possible. It does not work properly in some cases.\"\n\nI also recommend to reverse his patch to mdtruncate().\n\nThough we could not shrink segmented relations by old implementation \nthe result by vacuum would never be inconsistent(?).\n\nI think we don't have enough time to fix this.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 4 Jun 1999 17:22:00 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> >\n> > 2. It was possible to get two versions of the same row from\n> > select. Fixed by moving MyProc->xid assignment from\n> > StartTransaction() inside GetNewTransactionId().\n> >\n> > Thanks, Hiroshi! And please run your tests - I used just\n> > 3 sessions and gdb.\n> >\n> \n> Unfortunately,the error still occurs(I changed xact.c as above\n> by hand OK ?).\n\nDid you add 2. changes too?\nAlso, I made some changes in shmem.c:GetSnapshotData() but seems\nthat they are not relevant. Could you post me your xact.c and\nvarsup.c or grab current sources?\n\n> It seems there are cases that tuples are updated by older\n> transactions than their xmin-s and only some tuples in the middle\n> of update chain may be deleted.\n> \n> I have no idea to fix this now.\n> It's OK for me to leave this unsolved because those cases would\n> rarely occur.\n\nI would like to see it fixed anyway.\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 16:46:40 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Vadim\n> Mikheev\n> Sent: Friday, June 04, 1999 5:47 PM\n> To: Hiroshi Inoue\n> Cc: Tom Lane; [email protected]; PostgreSQL-development\n> Subject: Re: [HACKERS] Open 6.5 items\n> \n> \n> Hiroshi Inoue wrote:\n> > \n> > >\n> > > 2. It was possible to get two versions of the same row from\n> > > select. Fixed by moving MyProc->xid assignment from\n> > > StartTransaction() inside GetNewTransactionId().\n> > >\n> > > Thanks, Hiroshi! And please run your tests - I used just\n> > > 3 sessions and gdb.\n> > >\n> > \n> > Unfortunately,the error still occurs(I changed xact.c as above\n> > by hand OK ?).\n> \n> Did you add 2. changes too?\n> Also, I made some changes in shmem.c:GetSnapshotData() but seems\n> that they are not relevant. Could you post me your xact.c and\n> varsup.c or grab current sources?\n>\n\nOK I would grab current sources and retry.\n\nThanks.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Fri, 4 Jun 1999 17:56:48 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> > \n> > Hiroshi wrote:\n> > \n> > 2. \n> > > spinlock io_in_progress_lock of a buffer page is not\n> > > released by operations called by elog() such as\n> > > ProcReleaseSpins(),ResetBufferPool() etc.\n> > \n> > I tried to fix this before 6.4 but without success (don't\n> > remember why).\n> >\n> \n> This is not in my must-fix-for-6.5 lists. \n> For the present this is caused by other bugs not by bufmgr/smgr itself \n> and an easy fix may introduce other bugs.\n> \n> \n> And on segmented relations.\n> \n> Ole Gjerde who provided the patch for current implementation of \n> mdtruncate() sayz.\n> \"First, please reverse my patch to mdtruncate() in md.c as soon as\n> possible. It does not work properly in some cases.\"\n> \n> I also recommend to reverse his patch to mdtruncate().\n> \n> Though we could not shrink segmented relations by old implementation \n> the result by vacuum would never be inconsistent(?).\n> \n> I think we don't have enough time to fix this.\n\nSo what do we put in its place when we reverse out the patch?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 4 Jun 1999 10:37:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Tom Lane wrote:\n>> Right now, the postmaster will let you set any combination of -B and -N\n>> you please. But it seems obvious that there is some minimum number of\n>> buffers per backend below which things aren't going to work very well.\n>> I wonder whether the postmaster startup code ought to enforce a minimum\n>> ratio, say -B at least twice -N ? I have no idea what an appropriate\n> ^^^^^^^^^^^^^^\n> It's enough for select from single table using index, so it's\n> probably good ratio.\n\nI've added a check in postmaster.c to require -B to be at least twice\n-N, per this discussion. It also enforces a minimum -B of 16 no matter\nhow small -N is. I pulled that number out of the air --- anyone have an\nidea whether some other number would be better?\n\nThe current default values of -B 64, -N 32 are not affected. However,\nsince the -N default is easily configurable, I wonder whether we should\nmove the default -B value into config.h as well...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jun 1999 17:51:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Friday, June 04, 1999 11:37 PM\n> To: Hiroshi Inoue\n> Cc: Vadim Mikheev; Tom Lane; [email protected]; PostgreSQL-development\n> Subject: Re: [HACKERS] Open 6.5 items\n> \n> \n> > > \n> > > Hiroshi wrote:\n> > > \n> > \n> > And on segmented relations.\n> > \n> > Ole Gjerde who provided the patch for current implementation of \n> > mdtruncate() sayz.\n> > \"First, please reverse my patch to mdtruncate() in md.c as soon as\n> > possible. It does not work properly in some cases.\"\n> > \n> > I also recommend to reverse his patch to mdtruncate().\n> > \n> > Though we could not shrink segmented relations by old implementation \n> > the result by vacuum would never be inconsistent(?).\n> > \n> > I think we don't have enough time to fix this.\n> \n> So what do we put in its place when we reverse out the patch?\n>\n\nFuture TODO items ?\n\nAs far as I see,there's no consensus of opinion whether we would \nremove useless segments(I also think it's preferable if possible) or \nwe would only truncate the segments(as my trial patch does).\n\nOnly Bruce and Ole objected to my opinion and no one agreed \nwith me.\nHow do other people who would use segmented relations think ? \n \nThanks.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Mon, 7 Jun 1999 10:00:52 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> Future TODO items ?\n> \n> As far as I see,there's no consensus of opinion whether we would \n> remove useless segments(I also think it's preferable if possible) or \n> we would only truncate the segments(as my trial patch does).\n> \n> Only Bruce and Ole objected to my opinion and no one agreed \n> with me.\n> How do other people who would use segmented relations think ? \n> \n\nI liked unlinking because it allowed old backends to still see the\nsegments if they still have open file descriptors, and new backends can\nsee there is no file there. That seemed nice, but you clearly\ndemostrated it caused major problems. Maybe truncation is the answer. \nI don't know, but we need to resolve this for 6.5. I can't imagine us\nfocusing on this like we have in the past few weeks. Let's just figure\nout an answer. I am on IRC now if someone can get on to discuss this. I\nwill even phone someone in US or Canada to discuss it.\n\nWhat is it on the backend that causes some backend to think there is\nanother segment. Does it just go off the end of the max segment size\nand try to open another, or do we store the number of segments\nsomewhere. I thought it was the former in sgml() area. I honestly don't\ncare if the segment files stay around if that is going to be a reliable\nsolution.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 6 Jun 1999 21:33:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "On Sun, 6 Jun 1999, Bruce Momjian wrote:\n\n> > Future TODO items ?\n> > \n> > As far as I see,there's no consensus of opinion whether we would \n> > remove useless segments(I also think it's preferable if possible) or \n> > we would only truncate the segments(as my trial patch does).\n> > \n> > Only Bruce and Ole objected to my opinion and no one agreed \n> > with me.\n> > How do other people who would use segmented relations think ? \n> > \n> \n> I liked unlinking because it allowed old backends to still see the\n> segments if they still have open file descriptors, and new backends can\n> see there is no file there. That seemed nice, but you clearly\n> demostrated it caused major problems. Maybe truncation is the answer. \n> I don't know, but we need to resolve this for 6.5. I can't imagine us\n> focusing on this like we have in the past few weeks. Let's just figure\n> out an answer. I am on IRC now if someone can get on to discuss this. I\n> will even phone someone in US or Canada to discuss it.\n> \n> What is it on the backend that causes some backend to think there is\n> another segment. Does it just go off the end of the max segment size\n> and try to open another, or do we store the number of segments\n> somewhere. I thought it was the former in sgml() area. I honestly don't\n> care if the segment files stay around if that is going to be a reliable\n> solution.\n\nOther then the inode being used, what is wrong with a zero-length segment\nfile?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 7 Jun 1999 02:42:26 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> > What is it on the backend that causes some backend to think there is\n> > another segment. Does it just go off the end of the max segment size\n> > and try to open another, or do we store the number of segments\n> > somewhere. I thought it was the former in sgml() area. I honestly don't\n> > care if the segment files stay around if that is going to be a reliable\n> > solution.\n> \n> Other then the inode being used, what is wrong with a zero-length segment\n> file?\n\nNothing is wrong with it. I just thought it would be more reliable to\nunlink it, but now am considering I was wrong.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Jun 1999 09:20:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "On Mon, 7 Jun 1999, Bruce Momjian wrote:\n\n> > > What is it on the backend that causes some backend to think there is\n> > > another segment. Does it just go off the end of the max segment size\n> > > and try to open another, or do we store the number of segments\n> > > somewhere. I thought it was the former in sgml() area. I honestly don't\n> > > care if the segment files stay around if that is going to be a reliable\n> > > solution.\n> > \n> > Other then the inode being used, what is wrong with a zero-length segment\n> > file?\n> \n> Nothing is wrong with it. I just thought it would be more reliable to\n> unlink it, but now am considering I was wrong.\n\nJust a thought, but if you left it zero length, the dba could use it as a\nmeans for estimating disk space requirements? :) buff.0 buff.1 is zero\nlenght, but buff.2 isn't, we know that we've filled 2x1gig buffers plus a\nlittle bit, so can allocate space accordingly? :)\n\nI'm groping here, help me out ... :)\n\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 7 Jun 1999 11:15:51 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> > Nothing is wrong with it. I just thought it would be more reliable to\n> > unlink it, but now am considering I was wrong.\n> \n> Just a thought, but if you left it zero length, the dba could use it as a\n> means for estimating disk space requirements? :) buff.0 buff.1 is zero\n> lenght, but buff.2 isn't, we know that we've filled 2x1gig buffers plus a\n> little bit, so can allocate space accordingly? :)\n> \n> I'm groping here, help me out ... :)\n> \n\nOne reason to do truncate is that if it is a symbolic link to another\ndriver, that link will stay, while unlink will not, and will recreate on\non the same drive.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Jun 1999 10:30:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "On Wed, 2 Jun 1999, Hiroshi Inoue wrote:\n> I made a patch on trial.\n> 1) Useless segments are never removed by my implementation \n> because I call FileTruncate() instead of File(Name)Unlink().\n> I'm not sure that this patch has no problem.\n> Please check and comment on my patch.\n\nI have tried it, and it seems to work.\n\n> As Ole Gjerde mentioned,current implementation by his old \n> patch is not right. His new patch seems right if vacuum is \n> executed alone.\n\nYes, my first patch was horribly wrong :)\nThe second one, as you mention, only works right if no reading or writing\nis going on.\n\nI'll talk more about the new patch in later emails.\n\nOle Gjerde\n\n\n",
"msg_date": "Mon, 7 Jun 1999 12:25:46 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "On Sun, 6 Jun 1999, Bruce Momjian wrote:\n\n> I liked unlinking because it allowed old backends to still see the\n> segments if they still have open file descriptors, and new backends can\n> see there is no file there. That seemed nice, but you clearly\n> demostrated it caused major problems. Maybe truncation is the answer. \n> I don't know, but we need to resolve this for 6.5. I can't imagine us\n> focusing on this like we have in the past few weeks. Let's just figure\n> out an answer. I am on IRC now if someone can get on to discuss this. I\n> will even phone someone in US or Canada to discuss it.\n\nPersonally, I think the right thing is to unlink the unused segments. For\nthe most part keeping them around is not going to cause any problems, but\nI can't really think of any good reasons to keep them around. Keeping the\ndatabase directories clean is a good thing in my opinion.\n\n> What is it on the backend that causes some backend to think there is\n> another segment. Does it just go off the end of the max segment size\n> and try to open another, or do we store the number of segments\n> somewhere. I thought it was the former in sgml() area. I honestly don't\n> care if the segment files stay around if that is going to be a reliable\n> solution.\n\nThe new patch from Hiroshi Inoue <[email protected]> works. I believe it is\na reliable solution, I just don't agree it's the right one. That is\nprobably just a matter of opinion however. As his patch doesn't have any\nimmediate problems, so I vote for that to be included in 6.5.\n\nThanks,\nOle Gjerde\n\n\n\n\n",
"msg_date": "Mon, 7 Jun 1999 12:33:58 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "\n\nFolks, do we have anything to revisit here?\n\n\n> Tatsuo Ishii wrote:\n> > \n> > I have just done cvs update and saw your changes. I tried the same\n> > testing as I did before (64 conccurrent connections, and each\n> > connection excutes 100 transactions), but it failed again.\n> > \n> > (1) without -B 1024, it failed: out of free buffers: time to abort!\n> > \n> > (2) with -B 1024, it went into stuck spin lock\n> > \n> > So I looked into sources a little bit, and made a minor change to\n> > include/storage/lock.h:\n> > \n> > #define INIT_TABLE_SIZE 100\n> > \n> > to:\n> > \n> > #define INIT_TABLE_SIZE 4096\n> > \n> > then restarted postmaster with -B 1024 (this will prevent\n> > out-of-free-buffers problem, I guess). Now everything seems to work\n> > great!\n> > \n> > I suspect that huge INIT_TABLE_SIZE prevented dynamic expanding the\n> > hash tables and seems there's something wrong in the routines\n> > responsible for that.\n> \n> Seems like that -:(\n> \n> If we'll not fix expand hash code before 6.5 release then\n> I would recommend to don't use INIT_TABLE_SIZE in\n> \n> lockMethodTable->lockHash = (HTAB *) ShmemInitHash(shmemName,\n> INIT_TABLE_SIZE, MAX_TABLE_SIZE,\n> &info, hash_flags);\n> \n> and\n> \n> lockMethodTable->xidHash = (HTAB *) ShmemInitHash(shmemName,\n> INIT_TABLE_SIZE, MAX_TABLE_SIZE,\n> &info, hash_flags);\n> \n> but use NLOCKENTS(maxBackends) instead.\n> \n> Vadim\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 19:20:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Folks, do we have anything to revisit here?\n\nI believe it is fixed --- I went back and found some more bugs in\ndynahash.c after seeing Tatsuo's report ;-)\n\n\t\t\tregards, tom lane\n\n\n>> Tatsuo Ishii wrote:\n>>>> \n>>>> I have just done cvs update and saw your changes. I tried the same\n>>>> testing as I did before (64 conccurrent connections, and each\n>>>> connection excutes 100 transactions), but it failed again.\n>>>> \n>>>> (1) without -B 1024, it failed: out of free buffers: time to abort!\n>>>> \n>>>> (2) with -B 1024, it went into stuck spin lock\n>>>> \n>>>> So I looked into sources a little bit, and made a minor change to\n>>>> include/storage/lock.h:\n>>>> \n>>>> #define INIT_TABLE_SIZE 100\n>>>> \n>>>> to:\n>>>> \n>>>> #define INIT_TABLE_SIZE 4096\n>>>> \n>>>> then restarted postmaster with -B 1024 (this will prevent\n>>>> out-of-free-buffers problem, I guess). Now everything seems to work\n>>>> great!\n>>>> \n>>>> I suspect that huge INIT_TABLE_SIZE prevented dynamic expanding the\n>>>> hash tables and seems there's something wrong in the routines\n>>>> responsible for that.\n>> \n>> Seems like that -:(\n>> \n>> If we'll not fix expand hash code before 6.5 release then\n>> I would recommend to don't use INIT_TABLE_SIZE in\n>> \nlockMethodTable-> lockHash = (HTAB *) ShmemInitHash(shmemName,\n>> INIT_TABLE_SIZE, MAX_TABLE_SIZE,\n>> &info, hash_flags);\n>> \n>> and\n>> \nlockMethodTable-> xidHash = (HTAB *) ShmemInitHash(shmemName,\n>> INIT_TABLE_SIZE, MAX_TABLE_SIZE,\n>> &info, hash_flags);\n>> \n>> but use NLOCKENTS(maxBackends) instead.\n>> \n>> Vadim\n",
"msg_date": "Wed, 07 Jul 1999 19:52:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
}
] |
[
{
"msg_contents": "I have a problem with pg_dump with CVS snapshot 19990526:\ndavid=> \\c test\nconnecting to new database: test\ntest=> \\d\nDatabase = test\n +------------------+----------------------------------+----------+\n | Owner | Relation | Type |\n +------------------+----------------------------------+----------+\n | david | tst | table |\n | david | xinx35274 | index |\n +------------------+----------------------------------+----------+\n\ntest=> select * from tst;\nentry\n-----\n35274\n(1 row)\n\ntest=> \\d tst\nTable = tst\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| entry | oid | 4 |\n+----------------------------------+----------------------------------+-------+\nEOF\n\nBut pg_dump prints:\n$ pg_dump test\nCREATE TABLE \"tst\" (\n\t\"entry\" oid);\nCOPY \"tst\" FROM stdin;\n35274\n\\.\nfailed sanity check, table xinv35274 was not found\n===============================================================\nBut xinv35274 exists:\n\ndavid=> \\c test \nconnecting to new database: test\ntest=> select * from pg_class where relname like 'xin%';\nrelname |reltype|relowner|relam|relpages|reltuples|relhasindex|relisshared|relkind|relnatts|relchecks|reltriggers|relukeys|relfkeys|relrefs|relhaspkey|relhasrules|relacl\n---------+-------+--------+-----+--------+---------+-----------+-----------+-------+--------+---------+-----------+--------+--------+-------+----------+-----------+------\nxinv35274| 0| 501| 0| 0| 0|t |f |l | 2| 0| 0| 0| 0| 0|f |f | \nxinx35274| 0| 501| 403| 2| 2048|f |f |i | 1| 0| 0| 0| 0| 0|f |f | \n(2 rows)\n\nI think that this is a problem (especially if I need a backup).\n\n-- \n* David Sauer, student of Czech Technical University\n* electronic mail: [email protected] (mime compatible)\n",
"msg_date": "28 May 1999 13:21:05 +0200",
"msg_from": "David Sauer <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump doesn't work well with large object ..."
},
{
"msg_contents": "David Sauer <[email protected]> writes:\n> I have problem with pg_dump with CVS snapshot 19990526:\n> failed sanity check, table xinv35274 was not found\n\nI have fixed this to the extent that pg_dump ignores large objects,\nas it is documented to do. (It was doing that just fine, but it\nfailed to ignore the indexes on the large objects :-(.)\n\nOf course what you'd really like is for a pg_dump script to save\nand restore large objects along with everything else. But there\nseem to be several big problems to be solved before that can happen.\nThe worst is that a large object's OID will likely be recorded in\nat least one other table in the database, and pg_dump is not nearly\nsmart enough to find and update those references...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 00:46:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump doesn't work well with large object ... "
}
] |
[
{
"msg_contents": "I have added this article to our documentation page.\n\nYou will need to use:\n\n\twww.postgresql.org/index.html\n\nto see it now until the mirrors get it. Note the presence of index.html\nto force it not to use a mirror.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 28 May 1999 11:14:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Development History article"
}
] |
[
{
"msg_contents": "i got no response on the interfaces list -- maybe someone here has \nexperienced this too??\n\n(msaccess97 / win98, 6.40.0005 odbc driver, postgres 6.4.2, linux mandrake \n5.3) i wrote earlier about getting illegal page faults every time i CLOSE a \ncertain form (it lets me use it just fine!) that is used to ADD RECORDS \nonly. well, i have created an ultra-basic form linked to the same database \nas the crashing form -- no vba underlying it, no fancy formats, nothing \nexcept the fields of the tables dropped in with all default settings. it \ncrashes too! gotta be in the table or ??. i am not linking the form to a \nquery, but directly to the table. i have another form linked to the same \ntable to edit records only -- it limits you to one record only -- works \nbeautifully. and yet another form showing only limited info from the same \ntable, but multiple records (never all), works fine too. i do have \ndeclare/fetch on. the table is huge (>45k records, 14,983,168 bytes). if \ni try to open the table directly in datasheet view, the odbc driver fails \n-- no explanation, but at least no illegal page fault and i get clammed out \nof access! i figure that is because of the size of the table. i have \ncache set to 100, maxvarchar = 550, maxlongvarchar=15000. in the table i \nhave one field varchar(300) and another varchar(550). a few \"checks\" \n(state char(2) not null check (state<>' ')). -- ok that's all i know that \nwould be suspicious. but the edit forms work fine. i also have several \nother add-new-record forms set up in the same way that use other tables -- \nthey work fine. 
any ideas???\n\nhere is the win98 message -- these things never do me any good, but maybe \nthey reveal something!?\nMSACCESS caused an invalid page fault in\nmodule PSQLODBC.DLL at 0177:100076a4.\nRegisters:\nEAX=1bfc5000 CS=0177 EIP=100076a4 EFLGS=00010216\nEBX=02855d7a SS=017f ESP=0062cbf0 EBP=ffffff99\nECX=00000001 DS=017f ESI=00002328 FS=0eff\nEDX=00002328 ES=017f EDI=1bfc2cda GS=0000\nBytes at CS:EIP:\n80 38 0d 75 21 8b 7c 24 18 b9 ff ff ff ff 2b c0\nStack dump:\n02855c4a 02855d7a ffffff99 00000130 1bfc5000 10006df9 1bfc2cd8 02855d7a \nffffff99 fffffffd 02855660 02855b84 022206e0 00000000 1bfc2cd8 1002000c\n\n\n\nalso, here is the log file -- it never shows a problem that i can see. . .\n\nthanks in advance for helping! this system is oh so close to being in \nuse!!!\n\n\njt\n\nbegin 600 psqlodbc_4294848641.log\nM8V]N;CTT-#@R-C<T-\"P@4U%,1')I=F5R0V]N;F5C=\"@@:6XI/2=$4DE615(]\nM>U!O<W1G<F5344Q].T1!5$%\"05-%/7-E<G9I8V5W;W)K<SM315)615(]9W)E\nM96YM86X[4$]25#TU-#,R.U)%041/3DQ9/3 [4%)/5$]#3TP].T9!2T5/241)\nM3D1%6#TP.U-(3U=/241#3TQ534X],#M23U=615)324].24Y'/3 [4TA/5U-9\nM4U1%351!0DQ%4STP.T-/3DY3151424Y'4ST[)RP@9D1R:79E<D-O;7!L971I\nM;VX],PT*1VQO8F%L($]P=&EO;G,Z(%9E<G-I;VX])S V+C0P+C P,#4G+\"!F\nM971C:#TQ,# L('-O8VME=#TT,#DV+\"!U;FMN;W=N7W-I>F5S/3 L(&UA>%]V\nM87)C:&%R7W-I>F4]-34P+\"!M87A?;&]N9W9A<F-H87)?<VEZ93TQ-3 P, T*\nM(\" @(\" @(\" @(\" @(\" @(&1I<V%B;&5?;W!T:6UI>F5R/3 L(&MS<6\\],2P@\nM=6YI<75E7VEN9&5X/3$L('5S95]D96-L87)E9F5T8V@],0T*(\" @(\" @(\" @\nM(\" @(\" @('1E>'1?87-?;&]N9W9A<F-H87(],2P@=6YK;F]W;G-?87-?;&]N\nM9W9A<F-H87(],\"P@8F]O;'-?87-?8VAA<CTQ#0H@(\" @(\" @(\" @(\" @(\" 
@\nM97AT<F%?<WES=&%B;&5?<')E9FEX97,])V1D7SLG+\"!C;VYN7W-E='1I;F=S\nM/2<G#0IC;VYN/30T.#(V-S0T+\"!Q=65R>3TG(\"<-\"F-O;FX]-#0X,C8W-#0L\nM('%U97)Y/2=S970@1&%T95-T>6QE('1O(\"=)4T\\G)PT*8V]N;CTT-#@R-C<T\nM-\"P@<75E<GD])W-E=\"!K<W%O('1O(\"=/3B<G#0IC;VYN/30T.#(V-S0T+\"!Q\nM=65R>3TG0D5'24XG#0IC;VYN/30T.#(V-S0T+\"!Q=65R>3TG9&5C;&%R92!3\nM44Q?0U52,#)!0S,Y,S@@8W5R<V]R(&9O<B!S96QE8W0@;VED(&9R;VT@<&=?\nM='EP92!W:&5R92!T>7!N86UE/2=L;R<G#0IC;VYN/30T.#(V-S0T+\"!Q=65R\nM>3TG9F5T8V@@,3 P(&EN(%-13%]#55(P,D%#,SDS.\"<-\"B @(\"!;(&9E=&-H\nM960@,\"!R;W=S(%T-\"F-O;FX]-#0X,C8W-#0L('%U97)Y/2=C;&]S92!344Q?\nM0U52,#)!0S,Y,S@G#0IC;VYN/30T.#(V-S0T+\"!Q=65R>3TG14Y$)PT*8V]N\nM;CTT-#@R-C<T-\"P@4U%,1')I=F5R0V]N;F5C=\"AO=70I/2=$4DE615(]>U!O\nM<W1G<F5344Q].T1!5$%\"05-%/7-E<G9I8V5W;W)K<SM315)615(]9W)E96YM\nM86X[4$]25#TU-#,R.U5)1#UK:7)K<&%J=#M05T0].U)%041/3DQ9/3 [4%)/\nM5$]#3TP]-BXT.T9!2T5/241)3D1%6#TP.U-(3U=/241#3TQ534X],#M23U=6\nM15)324].24Y'/3 [4TA/5U-94U1%351!0DQ%4STP.T-/3DY3151424Y'4STG\nM#0IC;VYN/30T.#(V-S0T+\"!Q=65R>3TG0D5'24XG#0IC;VYN/30T.#(V-S0T\nM+\"!Q=65R>3TG9&5C;&%R92!344Q?0U52,#)!0S,Y,S@@8W5R<V]R(&9O<B!3\nM14Q%0U0@0V]N9FEG+\"!N5F%L=64@1E)/32!-4WES0V]N9B<-\"D524D]2(&9R\nM;VT@8F%C:V5N9\"!D=7)I;F<@<V5N9%]Q=65R>3H@)T524D]2.B @;7-Y<V-O\nM;F8Z(%1A8FQE(&1O97,@;F]T(&5X:7-T+B<-\"F-O;FX]-#0X,C8W-#0L('%U\nM97)Y/2=!0D]25\"<-\"E-4051%345.5\"!%4E)/4CH@9G5N8SU30U]E>&5C=71E\nM+\"!D97-C/2<G+\"!E<G)N=6T],2P@97)R;7-G/2=%<G)O<B!W:&EL92!E>&5C\nM=71I;F<@=&AE('%U97)Y)PT*(\" @(\" @(\" @(\" @(\" @(\" M+2TM+2TM+2TM\nM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM\nM+2TM+2T-\"B @(\" @(\" @(\" @(\" @(\" @:&1B8STT-#@R-C<T-\"P@<W1M=#TT\nM-#@T,3(W,BP@<F5S=6QT/3 -\"B @(\" @(\" @(\" @(\" @(\" @;6%N=6%L7W)E\nM<W5L=#TP+\"!P<F5P87)E/3 L(&EN=&5R;F%L/3 -\"B @(\" @(\" @(\" @(\" @\nM(\" @8FEN9&EN9W,],\"P@8FEN9&EN9W-?86QL;V-A=&5D/3 -\"B @(\" @(\" @\nM(\" @(\" @(\" @<&%R86UE=&5R<STP+\"!P87)A;65T97)S7V%L;&]C871E9#TP\nM#0H@(\" @(\" @(\" @(\" @(\" @('-T871E;65N=%]T>7!E/3 
L('-T871E;65N\nM=#TG4T5,14-4($-O;F9I9RP@;E9A;'5E($923TT@35-Y<T-O;F8G#0H@(\" @\nM(\" @(\" @(\" @(\" @('-T;71?=VET:%]P87)A;7,])V1E8VQA<F4@4U%,7T-5\nM4C R04,S.3,X(&-U<G-O<B!F;W(@4T5,14-4($-O;F9I9RP@;E9A;'5E($92\nM3TT@35-Y<T-O;F8G#0H@(\" @(\" @(\" @(\" @(\" @(&1A=&%?871?97AE8STM\nM,2P@8W5R<F5N=%]E>&5C7W!A<F%M/2TQ+\"!P=71?9&%T83TP#0H@(\" @(\" @\nM(\" @(\" @(\" @(&-U<G)4=7!L93TM,2P@8W5R<F5N=%]C;VP]+3$L(&QO8FI?\nM9F0]+3$-\"B @(\" @(\" @(\" @(\" @(\" @;6%X4F]W<STP+\"!R;W=S971?<VEZ\nM93TQ+\"!K97ES971?<VEZ93TP+\"!C=7)S;W)?='EP93TP+\"!S8W)O;&Q?8V]N\nM8W5R<F5N8WD],0T*(\" @(\" @(\" @(\" @(\" @(\"!C=7)S;W)?;F%M93TG4U%,\nM7T-54C R04,S.3,X)PT*(\" @(\" @(\" @(\" @(\" @(\" M+2TM+2TM+2TM+2TM\nM+2TM45)E<W5L=\"!);F9O(\"TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM\nM+2T-\"D-/3DX@15)23U(Z(&9U;F,]4T-?97AE8W5T92P@9&5S8STG)RP@97)R\nM;G5M/3$Q,\"P@97)R;7-G/2=%4E)/4CH@(&US>7-C;VYF.B!486)L92!D;V5S\nM(&YO=\"!E>&ES=\"XG#0H@(\" @(\" @(\" @(\" M+2TM+2TM+2TM+2TM+2TM+2TM\nM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2TM+2T-\"B @\nM(\" @(\" @(\" @(&AE;G8]-#4X-S4S,S(L(&-O;FX]-#0X,C8W-#0L('-T871U\nM<STQ+\"!N=6U?<W1M=',],38-\"B @(\" @(\" @(\" @('-O8VL]-#4X-S4S-#@L\nM('-T;71S/30U.#<U,S@X+\"!L;V)J7W1Y<&4]+3DY.0T*(\" @(\" @(\" @(\" @\nM+2TM+2TM+2TM+2TM+2TM+2!3;V-K970@26YF;R M+2TM+2TM+2TM+2TM+2TM\nM+2TM+2TM+2TM+2TM+2TM#0H@(\" @(\" @(\" @(\"!S;V-K970],C(T+\"!R979E\nM<G-E/3 L(&5R<F]R;G5M8F5R/3 L(&5R<F]R;7-G/2<H;G5L;\"DG#0H@(\" @\nM(\" @(\" @(\"!B=69F97)?:6X]-#0X,S,P-S(L(&)U9F9E<E]O=70]-#0X,S<Q\nM-S(-\"B @(\" @(\" @(\" @(&)U9F9E<E]F:6QL961?:6X],RP@8G5F9F5R7V9I\nM;&QE9%]O=70],\"P@8G5F9F5R7W)E861?:6X],@T*8V]N;CTT-#@R-C<T-\"P@\nM<75E<GD])T)%1TE.)PT*8V]N;CTT-#@R-C<T-\"P@<75E<GD])V1E8VQA<F4@\nM4U%,7T-54C R04,S.3,X(&-U<G-O<B!F;W(@4T5,14-4(\")P;6%S=&5R(BXB\nM<&%R=&YU;2(@1E)/32 B<&UA<W1E<B(@)PT*8V]N;CTT-#@R-C<T-\"P@<75E\nM<GD])V9E=&-H(#$P,\"!I;B!344Q?0U52,#)!0S,Y,S@G#0H@(\" @6R!F971C\nM:&5D(#$P,\"!R;W=S(%T-\"F-O;FX]-#0X,C8W-#0L('%U97)Y/2=F971C:\" Q\nM,# @:6X@4U%,7T-54C R04,S.3,X)PT*(\" @(%L@9F5T8VAE9\" Q,# 
@<F]W\nM<R!=#0IC;VYN/30T.3$Q-C$R+\"!344Q$<FEV97)#;VYN96-T*\"!I;BD])T12\nM259%4CU[4&]S=&=R95-13'T[54E$/6MI<FMP86IT.U!71#T[1$%404)!4T4]\nM<V5R=FEC97=O<FMS.U-%4E9%4CUG<F5E;FUA;CM03U)4/34T,S([4D5!1$].\nM3%D],#M04D]43T-/3#T[1D%+14])1$E.1$58/3 [4TA/5T])1$-/3%5-3CTP\nM.U)/5U9%4E-)3TY)3D<],#M32$]74UE35$5-5$%\"3$53/3 [0T].3E-%5%1)\nM3D=3/3LG+\"!F1')I=F5R0V]M<&QE=&EO;CTP#0I';&]B86P@3W!T:6]N<SH@\nM5F5R<VEO;CTG,#8N-# N,# P-2<L(&9E=&-H/3$P,\"P@<V]C:V5T/30P.38L\nM('5N:VYO=VY?<VEZ97,],\"P@;6%X7W9A<F-H87)?<VEZ93TU-3 L(&UA>%]L\nM;VYG=F%R8VAA<E]S:7IE/3$U,# P#0H@(\" @(\" @(\" @(\" @(\" @9&ES86)L\nM95]O<'1I;6EZ97(],\"P@:W-Q;STQ+\"!U;FEQ=65?:6YD97@],2P@=7-E7V1E\nM8VQA<F5F971C:#TQ#0H@(\" @(\" @(\" @(\" @(\" @=&5X=%]A<U]L;VYG=F%R\nM8VAA<CTQ+\"!U;FMN;W=N<U]A<U]L;VYG=F%R8VAA<CTP+\"!B;V]L<U]A<U]C\nM:&%R/3$-\"B @(\" @(\" @(\" @(\" @(\"!E>'1R85]S>7-T86)L95]P<F5F:7AE\nM<STG9&1?.R<L(&-O;FY?<V5T=&EN9W,])R<-\"F-O;FX]-#0Y,3$V,3(L('%U\nM97)Y/2<@)PT*8V]N;CTT-#DQ,38Q,BP@<75E<GD])W-E=\"!$871E4W1Y;&4@\nM=&\\@)TE33R<G#0IC;VYN/30T.3$Q-C$R+\"!Q=65R>3TG<V5T(&MS<6\\@=&\\@\nM)T].)R<-\"F-O;FX]-#0Y,3$V,3(L('%U97)Y/2=\"14=)3B<-\"F-O;FX]-#0Y\nM,3$V,3(L('%U97)Y/2=D96-L87)E(%-13%]#55(P,D%$.#4R.\"!C=7)S;W(@\nM9F]R('-E;&5C=\"!O:60@9G)O;2!P9U]T>7!E('=H97)E('1Y<&YA;64])VQO\nM)R<-\"F-O;FX]-#0Y,3$V,3(L('%U97)Y/2=F971C:\" Q,# @:6X@4U%,7T-5\nM4C R040X-3(X)PT*(\" @(%L@9F5T8VAE9\" P(')O=W,@70T*8V]N;CTT-#DQ\nM,38Q,BP@<75E<GD])V-L;W-E(%-13%]#55(P,D%$.#4R.\"<-\"F-O;FX]-#0Y\nM,3$V,3(L('%U97)Y/2=%3D0G#0IC;VYN/30T.3$Q-C$R+\"!344Q$<FEV97)#\nM;VYN96-T*&]U=\"D])T12259%4CU[4&]S=&=R95-13'T[1$%404)!4T4]<V5R\nM=FEC97=O<FMS.U-%4E9%4CUG<F5E;FUA;CM03U)4/34T,S([54E$/6MI<FMP\nM86IT.U!71#T[4D5!1$].3%D],#M04D]43T-/3#TV+C0[1D%+14])1$E.1$58\nM/3 [4TA/5T])1$-/3%5-3CTP.U)/5U9%4E-)3TY)3D<],#M32$]74UE35$5-\nM5$%\"3$53/3 [0T].3E-%5%1)3D=3/2<-\"F-O;FX]-#0Y,3$V,3(L('%U97)Y\nM/2=\"14=)3B<-\"F-O;FX]-#0Y,3$V,3(L('%U97)Y/2=D96-L87)E(%-13%]#\nM55(P,D%$.#4R.\"!C=7)S;W(@9F]R(%-%3$5#5\" 
B<&%R=&YU;2(L(G!D97-C\nM(BPB<'5N:71C;W-T(BPB<'5N:71P<F,B+\")P;F]T97,B+\")V;G5M(BPB:6YA\nM8W0B(\"!&4D]-(\")P;6%S=&5R(B @5TA%4D4@(G!A<G1N=6TB(#T@)U!44# T\nM-R<@3U(@(G!A<G1N=6TB(#T@)U!44# T.\"<@3U(@(G!A<G1N=6TB(#T@)U!5\nM,# P,2<@3U(@(G!A<G1N=6TB(#T@)U!7,# S,\"<@3U(@(G!A<G1N=6TB(#T@\nM)U!7,# S,2<@3U(@(G!A<G1N=6TB(#T@)U!7,# S,B<@3U(@(G!A<G1N=6TB\nM(#T@)U!7,# S,R<@3U(@(G!A<G1N=6TB(#T@)U!7,# S-\"<@3U(@(G!A<G1N\nM=6TB(#T@)U!7,# S-2<@3U(@(G!A<G1N=6TB(#T@)U!7,# S-B<G#0IC;VYN\nM/30T.3$Q-C$R+\"!Q=65R>3TG9F5T8V@@,3 P(&EN(%-13%]#55(P,D%$.#4R\nM.\"<-\"B @(\"!;(&9E=&-H960@,3 @<F]W<R!=#0IC;VYN/30T.3$Q-C$R+\"!Q\nM=65R>3TG8VQO<V4@4U%,7T-54C R040X-3(X)PT*8V]N;CTT-#DQ,38Q,BP@\nM<75E<GD])T5.1\"<-\"F-O;FX]-#0Y,3$V,3(L('%U97)Y/2=\"14=)3B<-\"F-O\nM;FX]-#0Y,3$V,3(L('%U97)Y/2=D96-L87)E(%-13%]#55(P,D%$.#4R.\"!C\nM=7)S;W(@9F]R(%-%3$5#5\" B<&%R=&YU;2(L(G!D97-C(BPB<'5N:71C;W-T\nM(BPB<'5N:71P<F,B+\")P;F]T97,B+\")V;G5M(BPB:6YA8W0B(\"!&4D]-(\")P\nM;6%S=&5R(B @5TA%4D4@(G!A<G1N=6TB(#T@)U!7,# S-R<@3U(@(G!A<G1N\nM=6TB(#T@)U!7,# S.\"<@3U(@(G!A<G1N=6TB(#T@)U!7,# S.2<@3U(@(G!A\nM<G1N=6TB(#T@)U!7,# T,\"<@3U(@(G!A<G1N=6TB(#T@)U!7,# T,2<@3U(@\nM(G!A<G1N=6TB(#T@)U!7,# T,B<@3U(@(G!A<G1N=6TB(#T@)U!7,# T,R<@\nM3U(@(G!A<G1N=6TB(#T@)U!7,# T-\"<@3U(@(G!A<G1N=6TB(#T@)U!7,# T\nM-2<@3U(@(G!A<G1N=6TB(#T@)U!7,# T-B<G#0IC;VYN/30T.3$Q-C$R+\"!Q\nM=65R>3TG9F5T8V@@,3 P(&EN(%-13%]#55(P,D%$.#4R.\"<-\"B @(\"!;(&9E\nM=&-H960@,3 @<F]W<R!=#0IC;VYN/30T.3$Q-C$R+\"!Q=65R>3TG8VQO<V4@\nM4U%,7T-54C R040X-3(X)PT*8V]N;CTT-#DQ,38Q,BP@<75E<GD])T5.1\"<-\nM\"F-O;FX]-#0Y,3$V,3(L('%U97)Y/2=\"14=)3B<-\"F-O;FX]-#0Y,3$V,3(L\nM('%U97)Y/2=D96-L87)E(%-13%]#55(P,D%$.#4R.\"!C=7)S;W(@9F]R(%-%\nM3$5#5\" B<&%R=&YU;2(L(G!D97-C(BPB<'5N:71C;W-T(BPB<'5N:71P<F,B\nM+\")P;F]T97,B+\")V;G5M(BPB:6YA8W0B(\"!&4D]-(\")P;6%S=&5R(B @5TA%\nM4D4@(G!A<G1N=6TB(#T@)U!7,# T-R<@3U(@(G!A<G1N=6TB(#T@)U!7,# T\nM.\"<@3U(@(G!A<G1N=6TB(#T@)U!7,# T.2<@3U(@(G!A<G1N=6TB(#T@)U!7\nM,# U,\"<@3U(@(G!A<G1N=6TB(#T@)U!7,# U,2<@3U(@(G!A<G1N=6TB(#T@\nM)U!7,# U,B<@3U(@(G!A<G1N=6TB(#T@)U!7,# 
U,R<@3U(@(G!A<G1N=6TB\nM(#T@)U!7,# U-\"<@3U(@(G!A<G1N=6TB(#T@)U!7,# U-2<@3U(@(G!A<G1N\nM=6TB(#T@)U!7,# U-B<G#0IC;VYN/30T.3$Q-C$R+\"!Q=65R>3TG9F5T8V@@\nM,3 P(&EN(%-13%]#55(P,D%$.#4R.\"<-\"B @(\"!;(&9E=&-H960@,3 @<F]W\nM<R!=#0IC;VYN/30T.3$Q-C$R+\"!Q=65R>3TG8VQO<V4@4U%,7T-54C R040X\nM-3(X)PT*8V]N;CTT-#DQ,38Q,BP@<75E<GD])T5.1\"<-\"F-O;FX]-#0X,C8W\nM-#0L('%U97)Y/2=C;&]S92!344Q?0U52,#)!0S,Y,S@G#0IC;VYN/30T.#(V\nM-S0T+\"!Q=65R>3TG14Y$)PT*8V]N;CTT-#DQ,38Q,BP@4U%,1&ES8V]N;F5C\nA= T*8V]N;CTT-#@R-C<T-\"P@4U%,1&ES8V]N;F5C= T*\n`\nend\n\n",
"msg_date": "Fri, 28 May 1999 11:16:14 -0400",
"msg_from": "JT Kirkpatrick <[email protected]>",
"msg_from_op": true,
"msg_subject": "illegal page faults. . ."
}
] |
[
{
"msg_contents": "Now, it is on the documentation web page. Are there any areas I should\nhave mentioned that I have forgotten? Does anyone have years for the\ndevelopment of Ingres, Postgres, and Postgres95?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 28 May 1999 13:02:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL article"
}
] |
[
{
"msg_contents": "I've been looking into Mergl's \"update\" performance problem. With\ncurrent sources, on a sequential-scan update of about 10,000 out of\n1,000,000 records, I observe 33712 read() calls and 34107 write() calls.\nThe table occupies 33334 disk blocks, so the number of reads looks about\nright -- but the number of writes is at least a factor of 3 higher than\nit should be!\n\nIt looks to me like something is broken such that bufmgr.c *always*\nthinks that a buffer is dirty (and needs written out) when it is\nreleased.\n\nPoking around for the cause, I find that heapgettup() calls\nSetBufferCommitInfoNeedsSave() for every single tuple read from the\ntable:\n\n 7.14 42.15 1000055/1000055 heap_getnext [9]\n[10] 18.8 7.14 42.15 1000055 heapgettup [10]\n 1.53 30.10 1000020/1000020 HeapTupleSatisfiesSnapshot [11]\n 1.68 3.27 1000055/1000055 RelationGetBufferWithBuffer [50]\n 4.31 0.00 2066832/4129472 LockBuffer [45]\n 0.25 0.56 33361/33698 ReleaseAndReadBuffer [76]\n 0.44 0.00 1000000/1000000 SetBufferCommitInfoNeedsSave [92]\n 0.01 0.00 33371/33371 nextpage [240]\n 0.00 0.00 10/2033992 ReleaseBuffer [46]\n 0.00 0.00 45/201 HeapTupleSatisfiesNow [647]\n 0.00 0.00 5/9 nocachegetattr [730]\n\nThis could only be from the call to SetBufferCommitInfoNeedsSave in\nthe HeapTupleSatisfies macro. If I'm reading the code correctly,\nthat means that HeapTupleSatisfiesSnapshot() always changes the\nt_infomask field of the tuple.\n\nI don't understand this code well enough to fix it, but I assert that\nit's broken. Most of these tuples are *not* being modified, and there\nis no reason to have to rewrite the buffer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 28 May 1999 14:06:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance problem partially identified"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> It looks to me like something is broken such that bufmgr.c *always*\n> thinks that a buffer is dirty (and needs written out) when it is\n> released.\n\nThat could also explain why the performance increases quite noticeably \neven for _select_ queries when you specify \"no fsync\" for backend. \n(I haven't checked it lately, but it was the case about a year ago)\n\n> Poking around for the cause, I find that heapgettup() calls\n> SetBufferCommitInfoNeedsSave() for every single tuple read from the\n> table:\n...\n> I don't understand this code well enough to fix it, but I assert that\n> it's broken. \n\nMore likely this is a \"quick fix - will look at it later\" for something\nelse, probably an execution path that fails to call \nSetBufferCommitInfoNeedsSave() when needed. \n\nOr it could just be code added to check whether this fixes things,\nwhich was then forgotten and left in.\n\n> Most of these tuples are *not* being modified, and there\n> is no reason to have to rewrite the buffer.\n\nIt can be quite a lot of work to find all the places that can modify the\ntuple, even with some special tools.\n\nIs there any tool that can report writes to areas set read only \n(similar to malloc/free debuggers ?)\n\nIf there is then we can replace SetBufferCommitInfoNeedsSave()\nwith a macro that does both SetBufferCommitInfoNeedsSave() and allows \nwriting, so that we can automate the checking. \n\n-----------------\nHannu\n",
"msg_date": "Sat, 29 May 1999 00:19:36 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Performance problem partially identified"
},
{
"msg_contents": "Further info:\n\nI think this behavior may be state-dependent. The first update after\nloading the table rewrites all the blocks, but subsequent ones do not.\nIs it just a matter of marking recently-written tuples as confirmed\ngood? If so, then there is not a problem (or at least, not as big\na problem as I thought).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 28 May 1999 19:08:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance problem partially identified "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Poking around for the cause, I find that heapgettup() calls\n> SetBufferCommitInfoNeedsSave() for every single tuple read from the\n> table:\n> \n> 7.14 42.15 1000055/1000055 heap_getnext [9]\n> [10] 18.8 7.14 42.15 1000055 heapgettup [10]\n> 1.53 30.10 1000020/1000020 HeapTupleSatisfiesSnapshot [11]\n> 1.68 3.27 1000055/1000055 RelationGetBufferWithBuffer [50]\n> 4.31 0.00 2066832/4129472 LockBuffer [45]\n> 0.25 0.56 33361/33698 ReleaseAndReadBuffer [76]\n> 0.44 0.00 1000000/1000000 SetBufferCommitInfoNeedsSave [92]\n> 0.01 0.00 33371/33371 nextpage [240]\n> 0.00 0.00 10/2033992 ReleaseBuffer [46]\n> 0.00 0.00 45/201 HeapTupleSatisfiesNow [647]\n> 0.00 0.00 5/9 nocachegetattr [730]\n> \n> This could only be from the call to SetBufferCommitInfoNeedsSave in\n> the HeapTupleSatisfies macro. If I'm reading the code correctly,\n> that means that HeapTupleSatisfiesSnapshot() always changes the\n ^^^^^^^^^^^^^^\n> t_infomask field of the tuple.\n\nNot always, but only if HEAP_XMIN_COMMITTED/HEAP_XMAX_COMMITTED\nare not set. Run vacuum before the update and SetBufferCommitInfoNeedsSave\nwill not be called. This func is just to avoid pg_log lookup\nwithout vacuum.\n\nVadim\n",
"msg_date": "Sat, 29 May 1999 12:55:28 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Performance problem partially identified"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> Tom Lane wrote:\n> >\n> > It looks to me like something is broken such that bufmgr.c *always*\n> > thinks that a buffer is dirty (and needs written out) when it is\n> > released.\n> \n> That could also explain why the performance increases quite noticeably\n> even for _select_ queries when you specify \"no fsync\" for backend.\n> (I haven't checked it lately, but it was the case about a year ago)\n\nEven selects try to update t_infomask to avoid the pg_log lookup\nfor other queries.\n\nVadim\n",
"msg_date": "Sat, 29 May 1999 12:58:23 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Performance problem partially identified"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Further info:\n> \n> I think this behavior may be state-dependent. The first update after\n> loading the table rewrites all the blocks, but subsequent ones do not.\n> Is it just a matter of marking recently-written tuples as confirmed\n> good? If so, then there is not a problem (or at least, not as big\n> a problem as I thought).\n\nExactly!\n\nOne have to run vacuum after loading a big amount of data.\n\nVadim\n",
"msg_date": "Sat, 29 May 1999 13:10:16 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Performance problem partially identified"
},
{
"msg_contents": "> > This could only be from the call to SetBufferCommitInfoNeedsSave in\n> > the HeapTupleSatisfies macro. If I'm reading the code correctly,\n> > that means that HeapTupleSatisfiesSnapshot() always changes the\n> ^^^^^^^^^^^^^^\n> > t_infomask field of the tuple.\n> \n> Not always, but only if HEAP_XMIN_COMMITTED/HEAP_XMAX_COMMITTED\n> are not set. Run vacuum before the update and SetBufferCommitInfoNeedsSave\n> will not be called. This func is just to avoid pg_log lookup\n> without vacuum.\n> \n\nYes, we store the transaction status in the tuple on first access so we\ndon't have to look it up again.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 May 1999 01:24:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Performance problem partially identified"
}
] |
[
{
"msg_contents": "Tom Lane wrote:\n> \n> Update of /usr/local/cvsroot/pgsql/src/backend/storage/buffer\n> In directory hub.org:/tmp/cvs-serv67287\n> \n> Modified Files:\n> bufmgr.c\n> Log Message:\n> Missing semicolons in non-HAS_TEST_AND_SET code paths :-(\n\nThanks, Tom!\nI must say that I never tested non-Has_TEST_AND_SET case.\n\nVadim\n",
"msg_date": "Sat, 29 May 1999 13:23:07 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [COMMITTERS] 'pgsql/src/backend/storage/buffer bufmgr.c'"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n>> bufmgr.c\n>> Log Message:\n>> Missing semicolons in non-HAS_TEST_AND_SET code paths :-(\n\n> Thanks, Tom!\n> I must say that I never tested non-Has_TEST_AND_SET case.\n\nI haven't either --- but those two places stuck out like sore thumbs\nafter the pgindent run...\n\nThis suggests that none of the beta-testing group uses a machine that\ndoesn't have TEST_AND_SET support. I suppose that's good news about the\ncoverage of s_lock.h, but it makes me worry that the non-TEST_AND_SET\ncode hasn't even been compiled, let alone exercised. Someone ought to\nbuild and test a copy with TEST_AND_SET deliberately removed from the\nport.h file.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 May 1999 11:03:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [COMMITTERS] 'pgsql/src/backend/storage/buffer\n\tbufmgr.c'"
},
{
"msg_contents": "On Sat, 29 May 1999, Tom Lane wrote:\n\n> Vadim Mikheev <[email protected]> writes:\n> >> bufmgr.c\n> >> Log Message:\n> >> Missing semicolons in non-HAS_TEST_AND_SET code paths :-(\n> \n> > Thanks, Tom!\n> > I must say that I never tested non-Has_TEST_AND_SET case.\n> \n> I haven't either --- but those two places stuck out like sore thumbs\n> after the pgindent run...\n> \n> This suggests that none of the beta-testing group uses a machine that\n> doesn't have TEST_AND_SET support. I suppose that's good news about the\n> coverage of s_lock.h, but it makes me worry that the non-TEST_AND_SET\n> code hasn't even been compiled, let alone exercised. Someone ought to\n> build and test a copy with TEST_AND_SET deliberately removed from the\n> port.h file.\n\nMight this not indicate that that code is, in fact, useless? Designed for\nolder OSs that didn't have appropriate support?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 29 May 1999 19:44:09 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [COMMITTERS] 'pgsql/src/backend/storage/buffer\n\tbufmgr.c'"
},
{
"msg_contents": "At 3:44 PM -0700 5/29/99, The Hermit Hacker wrote:\n>On Sat, 29 May 1999, Tom Lane wrote:\n>> This suggests that none of the beta-testing group uses a machine that\n>> doesn't have TEST_AND_SET support. I suppose that's good news about the\n>> coverage of s_lock.h, but it makes me worry that the non-TEST_AND_SET\n>> code hasn't even been compiled, let alone exercised. Someone ought to\n>> build and test a copy with TEST_AND_SET deliberately removed from the\n>> port.h file.\n>\n>Might this not indicate that that code is, in fact, useless? Designed for\n>older OSs that didn't have appropriate support?\n\nNo, absolutely not!\n\nIf anyone wants to port to a new architecture they shouldn't have to learn\nassembly language just to get started. They should be able to make things\njust work using semaphores, and then go back and add the TAS routines to\nspeed things up later.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n",
"msg_date": "Sat, 29 May 1999 16:39:11 -0700",
"msg_from": "\"Henry B. Hotz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/storage/bufferbufmgr.c'"
}
] |
[
{
"msg_contents": "I have gotten the OK from Daemon News to publish the article I posted a\nfew days ago. The original is on our web site. Any comments before I\nrelease it? I will wait a few days.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 May 1999 14:07:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Daemon News article"
},
{
"msg_contents": "At 11:07 AM -0700 5/29/99, Bruce Momjian wrote:\n>I have gotten the OK from Daemon News to publish the article I posted a\n>few days ago. The original is our web site. Any comments before I\n>release it. I will wait a few days.\n\nThe quote from Jolly Chen does not seem consistent with Eric Raymond's\nanalysis in \"The Cathedral and the Bazaar.\" Since the latter is pretty\nwell known, I would expect it to provoke disagreement unless it can be\njustified or explained a bit more.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n",
"msg_date": "Sat, 29 May 1999 15:01:11 -0700",
"msg_from": "\"Henry B. Hotz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Daemon News article"
},
{
"msg_contents": "On Sat, 29 May 1999, Henry B. Hotz wrote:\n\n> The quote from Jolly Chen does not seem consistent with Eric Raymond's\n> analysis in \"The Cathedral and the Bazzar.\" Since the latter is pretty\n> well known, I would expect it to provoke disagreement unless it can be\n> justified or explained a bit more.\n\nAssuming that one finds the analysis in TCAB to be correct, which a lot\nof people don't. There's no need to launch into an anti-ESR war because\nof these, but there's certainly no reason not to voice honest opinions\njust because Eric disagrees with them.\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n\"A pint of sweat will save a gallon of blood.\" -- George S. Patton\n\n",
"msg_date": "Sat, 29 May 1999 18:15:49 -0400 (EDT)",
"msg_from": "Todd Graham Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Daemon News article"
},
{
"msg_contents": "> At 11:07 AM -0700 5/29/99, Bruce Momjian wrote:\n> >I have gotten the OK from Daemon News to publish the article I posted a\n> >few days ago. The original is our web site. Any comments before I\n> >release it. I will wait a few days.\n> \n> The quote from Jolly Chen does not seem consistent with Eric Raymond's\n> analysis in \"The Cathedral and the Bazzar.\" Since the latter is pretty\n> well known, I would expect it to provoke disagreement unless it can be\n> justified or explained a bit more.\n\nI totally agree that it is inconsistent, but I have never felt Rayond\nwas 100% correct. While I don't believe developers should have a\n'holier than thow' attitude (and I don't think we have), most patches\nfrom people who did not take the time to figure out how things worked\nwere a disaster if applied. If we don't have reliability, we have\nnothing in the db world, and a bazzar is not reliable.\n\nPerhaps I should add some text in there to address this. Good point.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 May 1999 18:28:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Daemon News article"
},
{
"msg_contents": "At 3:15 PM -0700 5/29/99, Todd Graham Lewis wrote:\n>On Sat, 29 May 1999, Henry B. Hotz wrote:\n>\n>> The quote from Jolly Chen does not seem consistent with Eric Raymond's\n>> analysis in \"The Cathedral and the Bazzar.\" Since the latter is pretty\n>> well known, I would expect it to provoke disagreement unless it can be\n>> justified or explained a bit more.\n>\n>Assuming that one finds the analysis in TCAB to be correct, which a lot\n>of people don't. There's no need to launch into an anti-ESR war because\n>of these, but there's certainly no reason not to voice honest opinions\n>just because Eric disagrees with them.\n>\n\nMy focus was wrong, sorry. The real point is that there is an obvious\ninconsistancy with a current hot viewpoint. If one wants to publish a\nconflicting viewpoint it should probably be explained more or else the\nresulting discussion will generate more heat than light.\n\nI suspect that if you got Jolly and Eric in a real conversation about what\nthey really meant then they might not disagree as much as it first\nappeared. Eric himself points out that the need to make something that\nactually works requires some kind of coordination and filtering of patches.\nIf the coordination is too rigid then either the project languishes (*BSD\nvice Linux) or a new project splits off (egcs vice gcc). (Does MySQL vice\nmSQL constitute another example in this space?) If the coordination is\ntoo free then the reliability of the product suffers. (Can't think of a\ngood example, but then the good examples are probably projects that have\ndied and been forgotten.)\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n",
"msg_date": "Sat, 29 May 1999 15:49:57 -0700",
"msg_from": "\"Henry B. Hotz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Daemon News article"
},
{
"msg_contents": "> On Sat, 29 May 1999, Henry B. Hotz wrote:\n> \n> > The quote from Jolly Chen does not seem consistent with Eric Raymond's\n> > analysis in \"The Cathedral and the Bazzar.\" Since the latter is pretty\n> > well known, I would expect it to provoke disagreement unless it can be\n> > justified or explained a bit more.\n> \n> Assuming that one finds the analysis in TCAB to be correct, which a lot\n> of people don't. There's no need to launch into an anti-ESR war because\n> of these, but there's certainly no reason not to voice honest opinions\n> just because Eric disagrees with them.\n\nLet me address this again.\n\nPeople have asked why we don't follow the Linux model, where development\nis continuous(no beta) and stable releases are odd (or even?). The\nreason is that this type of development model has a tendency to follow a\n\"it works, it's broken, it works, it's broken again\" style, that spends\nlots of time trying to clean up improper fixes that were applied in\nprevious releases. This reminds me of gcc development, where the\ncompilers were riddled with bugs and certain releases where ok, but\nlater ones were not, and there was no way to tell them apart. You got\nto wondering whether they were going forward or backward in their\ndevelopment.\n\nNow, I know we have done that sometimes, but that certainly would happen\nmuch more without a development/beta/final release cycle, where\nexperienced developers review all patches that get applied. The\ndevelopment finally becomes stable, but there is a tremendous amount of\nwasted energy getting there.\n\nNow, I will also say that I have seen software companies that do this\ntoo, and they burden their customers with \"beta of the week\" while it is\nlabeled as an official release. 
They cram features into that software\nmuch faster that way, but is it worth the grief the customer endures?\n\nNow, if people want to disagree with me on this, let's discuss it.\n\nAlso, I need advise on whether I should bring up such a hot issue in the\npaper, or just leave the statement by Jolly as a \"given\" for our group.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 May 1999 18:59:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Daemon News article"
},
{
"msg_contents": "At 3:28 PM -0700 5/29/99, Bruce Momjian wrote:\n>> The quote from Jolly Chen does not seem consistent with Eric Raymond's\n\n>\n>I totally agree that it is inconsistent, but I have never felt Rayond\n>was 100% correct. While I don't believe developers should have a\n>'holier than thow' attitude (and I don't think we have), most patches\n>from people who did not take the time to figure out how things worked\n>were a disaster if applied. If we don't have reliability, we have\n>nothing in the db world, and a bazzar is not reliable.\n>\n\nI don't think Eric is claiming that a bazzar is ideal, just that there are\nenormous advantages to going ahead and releasing code which isn't quite\ndone. Once you have a good framework set up an awful lot of people can\nhelp with the detail debugging. A really good test case is 90% of a\ncomplete fix.\n\n25 years ago I observed that part of the IBM monopoly was based on how\n*bad* their stuff was. It was so difficult to get things working that by\nthe time you did you were afraid to do it over with another company. And\nyou were kind of proud of the fact that you *did* get it working in the end\nafter all.\n\nIn my opinion, Microsoft has done a similarly masterful job of making\nthings just good enough that the competition wasn't obviously better, while\nmaking their stuff bad enough to maximize the required custommer\ncommitment. This paragraph is intended as a side observation, not flame\nbait.\n\nPerhaps the point to make in the paper is just that we have chosen a\nparticular development cycle/philosophy. It doesn't happen to coincide\nprecisely with Eric Raymond's recommendations, but we're not exactly a\ncathedral either.\n\n---------------\n\nResponding to what Bruce wrote while I was writing this note: I don't\nentirely agree with the comment about gcc. Before egcs they generally had\na \"stable\" release, e.g. 
2.5.8, or 2.6.3, or 2.7.2.x that was being widely\nused while there was also a \"current\" release like 2.6.1, or 2.7.1, or\n2.8.0. In their case there was also an internal-only alpha/beta version\nwhich we never saw. Coming from this background I found it difficult to\nevaluate the stability of various Postgres versions because there was no\nparallel maintenance of a \"stable patched\" version independent of the\n\"development\" version.\n\nI still feel it is unfair to expect all developers to \"switch gears\"\nbetween just fixing bugs and just developing new stuff in order to conform\nto a single release cycle. Some people are better at one than the other.\nLikewise I think our users would welcome a choice between a known stable\nversion and a more featureful current version. I'm not sure our beta\nversions quite provide this kind of distinction.\n\nBut at this point I am commenting randomly on actual deveopment policy\nrather than on the article.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n",
"msg_date": "Sat, 29 May 1999 16:31:57 -0700",
"msg_from": "\"Henry B. Hotz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Daemon News article"
},
{
"msg_contents": "> Responding to what Bruce wrote while I was writing this note: I don't\n> entirely agree with the comment about gcc. Before egcs they generally had\n> a \"stable\" release, e.g. 2.5.8, or 2.6.3, or 2.7.2.x that was being widely\n> used while there was also a \"current\" release like 2.6.1, or 2.7.1, or\n> 2.8.0. In their case there was also an internal-only alpha/beta version\n> which we never saw. Coming from this background I found it difficult to\n> evaluate the stability of various Postgres versions because there was no\n> parallel maintenance of a \"stable patched\" version independent of the\n> \"development\" version.\n\nI am thinking of the earlier 2.x releases, and even the 1.x releases. \nIt was very fluid for quite a while.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 May 1999 21:37:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Daemon News article"
},
{
"msg_contents": "OK, I can't resist adding my two cents worth ...\n\n\"Henry B. Hotz\" <[email protected]> writes:\n> I don't think Eric is claiming that a bazzar is ideal, just that there are\n> enormous advantages to going ahead and releasing code which isn't quite\n> done. Once you have a good framework set up an awful lot of people can\n> help with the detail debugging.\n\nActually, I think we are closer to the bazaar model than you say; we\njust don't use some of the terminology that has been popularized by\nLinux etc. For example, we *do* release current code --- anyone can\npull the current sources from the CVS server, or grab a nightly\nsnapshot. And we do accept patches from anyone, subject to review by\none or more of the \"inner circle\"; I doubt that Linus allows world\nwrite access on his kernel sources either ;-).\n\nThere is a difference in emphasis, which I think comes from the agreed\nneed for *all* Postgres releases to be as stable as we can make them.\nBut that's really not much more than a difference in naming conventions.\nPostgres major releases (6.4, 6.5, etc) seem to me to correspond to\nthe start of a \"stable version\" series in the Linux scheme, whereas the\ncurrent sources are always the equivalent of the \"unstable version\".\nWe don't normally make very many releases in a \"stable version\" series,\nbut that's partially due to having a strong emphasis on getting it right\nbefore the major release. (Also, I believe that one focus of the new\ncommercial-support effort will be on improving maintenance of past\nreleases, ie, back-patching more bugs.)\n\nI'll close by saying that both Jolly and Eric are right, and that what\nis really working well for Postgres is a core group of people with a\nheavy commitment (Marc, Bruce, Vadim, Thomas) and a much larger group\nof people with smaller amounts of time to contribute. I don't think\nthat's so much different from what other open-source projects are doing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 30 May 1999 11:02:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Daemon News article "
},
{
"msg_contents": "> I'll close by saying that both Jolly and Eric are right, and that what\n> is really working well for Postgres is a core group of people with a\n> heavy commitment (Marc, Bruce, Vadim, Thomas) and a much larger group\n> of people with smaller amounts of time to contribute. I don't think\n> that's so much different from what other open-source projects are doing.\n\nTom, I totally agree with you, though I will say you are moving into\nthat first group pretty rapidly. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 30 May 1999 15:25:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Daemon News article"
},
{
"msg_contents": "On Sun, 30 May 1999, Tom Lane wrote:\n\n> OK, I can't resist adding my two cents worth ...\n> \n> \"Henry B. Hotz\" <[email protected]> writes:\n> > I don't think Eric is claiming that a bazzar is ideal, just that there are\n> > enormous advantages to going ahead and releasing code which isn't quite\n> > done. Once you have a good framework set up an awful lot of people can\n> > help with the detail debugging.\n> \n> Actually, I think we are closer to the bazaar model than you say; we\n> just don't use some of the terminology that has been popularized by\n> Linux etc. For example, we *do* release current code --- anyone can\n> pull the current sources from the CVS server, or grab a nightly\n> snapshot. And we do accept patches from anyone, subject to review by\n> one or more of the \"inner circle\"; I doubt that Linus allows world\n> write access on his kernel sources either ;-).\n> \n> There is a difference in emphasis, which I think comes from the agreed\n> need for *all* Postgres releases to be as stable as we can make them.\n> But that's really not much more than a difference in naming conventions.\n> Postgres major releases (6.4, 6.5, etc) seem to me to correspond to\n> the start of a \"stable version\" series in the Linux scheme, whereas the\n> current sources are always the equivalent of the \"unstable version\".\n> We don't normally make very many releases in a \"stable version\" series,\n> but that's partially due to having a strong emphasis on getting it right\n> before the major release. (Also, I believe that one focus of the new\n> commercial-support effort will be on improving maintenance of past\n> releases, ie, back-patching more bugs.)\n\nWhich pretty much sums up the *BSD model of development vs the Linux one\n:)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 30 May 1999 21:05:43 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Daemon News article "
}
] |
[
{
"msg_contents": "Looks like Stonebraker has left Informix, and started a new company,\nCohera(http://www.cohera.com/), that is commercializing Mariposa, which\nwas a distributed database system developed at Berkeley from Postgres95.\nMariposa never really got completed at Berkeley. It was more of a proof\nof concept. Here is an article about it:\n\n http://www.zdnet.com/intweek/stories/news/0,4164,2233210,00.html\n\nThe article is dated March, 1999. The Berkeley page at\nhttp://db.cs.berkeley.edu/source.html mentions, \"Mariposa has been\ncommercialized by Cohera Corp.\"\n\nYou know, if the guy was smart, he would use PostgreSQL, and with our\nBSD license, there is nothing we can do to stop him.\n\nThere is also a company called MariposaTech at\nhttp://www.mariposatech.com/. Not sure what they do. Seems they do\nrouters.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 May 1999 14:56:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Mariposa commericalized by Stonebraker"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Looks like Stonebraker has left Informix, and started a new company,\n> Cohera(http://www.cohera.com/), that is commercializing Mariposa, which\n> was a distributed database system developed at Berkeley from Postgres95.\n> Mariposa never really got completed at Berkeley. It was more of a proof\n> of concept. Here is an article about it:\n> \n> http://www.zdnet.com/intweek/stories/news/0,4164,2233210,00.html\n> \n> The article is dated March, 1999. The Berkeley page at\n> http://db.cs.berkeley.edu/source.html mentions, \"Mariposa has been\n> commercialized by Cohera Corp.\"\n> \n> You know, if the guy was smart, he would use PostgreSQL, and with our\n> BSD license, there is nothing we can do to stop him.\n\nHow about removing OIDS from Postgres ;)\n\nAFAIK Mariposa relies heavyly on OIDs (and possibly time-travel?) to\nmanage \nthe distribution of database. In fact it has double-length oids, where\none \ndword is instance id of the DB instance where it originated and the\nother \nis our traditional oid.\n\nFor PostgreSQL, if I'm not mistaken, there have been plans to also\nremove \nOIDS in addition to already removed time-travel, both of them with a \npre-text that they are inefficiently implemented and can be later put \nback in a better way.\n\n---------------\n Hannu Krosing\n",
"msg_date": "Sun, 30 May 1999 10:06:59 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Mariposa commericalized by Stonebraker"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> >\n> > You know, if the guy was smart, he would use PostgreSQL, and with our\n> > BSD license, there is nothing we can do to stop him.\n> \n> How about removing OIDS from Postgres ;)\n\n-:))\n\n> For PostgreSQL, if I'm not mistaken, there have been plans to also\n> remove OIDS in addition to already removed time-travel, both of them \n> with a pre-text that they are inefficiently implemented and can be \n> later put back in a better way.\n\nMy plan was (is) to make them optional.\n\nVadim\n",
"msg_date": "Sun, 30 May 1999 20:24:31 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Mariposa commericalized by Stonebraker"
}
] |
[
{
"msg_contents": "I have gotten some dates from the old Ingres and Postgres source code. \nInteresting how old these are:\n\n PostgreSQL is the most advanced open-source database server. It is\n Object-Relational(ORDBMS), and is supported by a team of Internet\n developers. PostgreSQL began as Ingres, developed at the University\n of California at Berkeley(1977-1985). The Ingres code was taken and\n enhanced by Ingres Corporation, which produced one of the first\n commercially successful relational database servers. (Ingres Corp.\n was later purchased by Computer Associates.) The Ingres code was\n taken by Michael Stonebraker as part of a Berkeley project to develop\n an object-relational database server called Postgres(1986-1994). The\n Postgres code was taken by Illustra and developed into a commercial\n product. (Illustra was later purchased by Informix and integrated\n into Informix's Universal Server.) Several graduate students added\n SQL capabilities to Postgres, and called it Postgres95(1995). The\n graduate students left Berkeley, but the code was maintained by one of\n the graduate students, Jolly Chen, and had an active mailing list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 May 1999 15:46:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "History of PostgreSQL"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> I have gotten some dates from the old Ingres and Postgres source code.\n> Interesting how old these are:\n> \n> PostgreSQL is the most advanced open-source database server. It is\n> Object-Relational(ORDBMS), and is supported by a team of Internet\n> developers. PostgreSQL began as Ingres, developed at the University\n> of California at Berkeley(1977-1985). The Ingres code was taken and\n> enhanced by Ingres Corporation, which produced one of the first\n> commercially successful relational database servers. (Ingres Corp.\n> was later purchased by Computer Associates.) The Ingres code was\n> taken by Michael Stonebraker as part of a Berkeley project to develop\n> an object-relational database server called Postgres(1986-1994). The\n> Postgres code was taken by Illustra and developed into a commercial\n> product. (Illustra was later purchased by Informix and integrated\n> into Informix's Universal Server.) Several graduate students added\n> SQL capabilities to Postgres, and called it Postgres95(1995). The\n> graduate students left Berkeley, but the code was maintained by one of\n> the graduate students, Jolly Chen, and had an active mailing list.\n\nhttp://www-are.berkeley.edu:80/mason/computing/help/manuals/postgres/c0102.htm\n\n Postgres95\n\n In 1994, Andrew Yu and Jolly Chen added a SQL language interpreter to Postgres, and the code was\n ^^^^^^^^^\nHe should be mentioned as well...\n\n subsequently released to the Web to find its own way in the world. Postgres95 was a public-domain,\n open source descendant of this original Berkeley code.\n\nYou can find more about Ingres, Postgres and Postgres'95 at\nhttp://search.berkeley.edu/.\n\nBTW, what's the birthday of our project?\nAndrew/Jolly stoped development ~ May 1996.\nMark, can you remember/find when you posted your historic\nmessage to Postgres'95 mailing list?\n\nVadim\n",
"msg_date": "Sun, 30 May 1999 20:11:21 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] History of PostgreSQL"
},
{
"msg_contents": "> http://www-are.berkeley.edu:80/mason/computing/help/manuals/postgres/c0102.htm\n> \n> Postgres95\n> \n> In 1994, Andrew Yu and Jolly Chen added a SQL language interpreter to Postgres, and the code was\n> ^^^^^^^^^\n> He should be mentioned as well...\n> \n> subsequently released to the Web to find its own way in the world. Postgres95 was a public-domain,\n> open source descendant of this original Berkeley code.\n\nGee, I didn't see that. Thanks. The new paragraph reads:\n\nThe Postgres code was taken by Illustra and developed into a commercial\nproduct. (Illustra was later purchased by Informix and integrated into\nInformix's Universal Server.) Two Berkeley graduate students, Jolly\nChen and Andrew Yu, added SQL capabilities to Postgres, and called it\nPostgres95(1994-1995). They left Berkeley, but Jolly continued\nmaintaining Postgres95, which had an active mailing list.\n\n> \n> You can find more about Ingres, Postgres and Postgres'95 at\n> http://search.berkeley.edu/.\n\nI used:\n\n\thttp://s2k-ftp.CS.Berkeley.EDU:8000/\n\nChoose 'database systems'. Also inside that tree is:\n\n\thttp://db.cs.berkeley.edu/source.html\n\nWhich mentions Postgres, Postgres95, and Mariposa, and Ingres. I have\nasked him to add us to the \"Other Databases\" page Paul maitians.\n\n> \n> BTW, what's the birthday of our project?\n> Andrew/Jolly stoped development ~ May 1996.\n> Mark, can you remember/find when you posted your historic\n> message to Postgres'95 mailing list?\n\nI may have that somewhere, though the start of development was at the\nbeginning of July, perhaps July 3rd.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 30 May 1999 15:43:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] History of PostgreSQL"
},
{
"msg_contents": "On Sun, 30 May 1999, Bruce Momjian wrote:\n\n> > BTW, what's the birthday of our project?\n> > Andrew/Jolly stoped development ~ May 1996.\n> > Mark, can you remember/find when you posted your historic\n> > message to Postgres'95 mailing list?\n> \n> I may have that somewhere, though the start of development was at the\n> beginning of July, perhaps July 3rd.\n\nThe furthest back I have is sometime in '97 ... I didn't really start\ngettign into saving my 'sent-mail' logs until then :(\n\nThe oldest files in CVS is:\n\n287027 2 -r-xr-xr-x 1 scrappy pgsql 585 Sep\n28 1994 /usr/local/cvsroot/CVSROOT/Attic/avail-actions,v\n\nGeez, has it been almost 4years now??\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 30 May 1999 21:24:19 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] History of PostgreSQL"
},
{
"msg_contents": "Thus spake The Hermit Hacker\n> The oldest files in CVS is:\n> \n> 287027 2 -r-xr-xr-x 1 scrappy pgsql 585 Sep 28 1994 ...\n> \n> Geez, has it been almost 4years now??\n\nEr, can someone check any math related work that Scrappy has done? :-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sun, 30 May 1999 21:25:51 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] History of PostgreSQL"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> On Sun, 30 May 1999, Bruce Momjian wrote:\n> \n> > > BTW, what's the birthday of our project?\n> > > Andrew/Jolly stoped development ~ May 1996.\n> > > Mark, can you remember/find when you posted your historic\n> > > message to Postgres'95 mailing list?\n> >\n> > I may have that somewhere, though the start of development was at the\n> > beginning of July, perhaps July 3rd.\n> \n> The furthest back I have is sometime in '97 ... I didn't really start\n> gettign into saving my 'sent-mail' logs until then :(\n> \n> The oldest files in CVS is:\n> \n> 287027 2 -r-xr-xr-x 1 scrappy pgsql 585 Sep\n> 28 1994 /usr/local/cvsroot/CVSROOT/Attic/avail-actions,v\n> \n> Geez, has it been almost 4years now??\n\nNo, your message was posted in June 1996 -:)\n\nVadim\n",
"msg_date": "Mon, 31 May 1999 09:52:05 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] History of PostgreSQL"
},
{
"msg_contents": "> Thus spake The Hermit Hacker\n> > The oldest files in CVS is:\n> > \n> > 287027 2 -r-xr-xr-x 1 scrappy pgsql 585 Sep 28 1994 ...\n> > \n> > Geez, has it been almost 4years now??\n> \n> Er, can someone check any math related work that Scrappy has done? :-)\n\nSystem time must have been messed up that day. CVS started July 1996.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 30 May 1999 22:13:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] History of PostgreSQL"
},
{
"msg_contents": "> > The furthest back I have is sometime in '97 ... I didn't really start\n> > gettign into saving my 'sent-mail' logs until then :(\n> > \n> > The oldest files in CVS is:\n> > \n> > 287027 2 -r-xr-xr-x 1 scrappy pgsql 585 Sep\n> > 28 1994 /usr/local/cvsroot/CVSROOT/Attic/avail-actions,v\n> > \n> > Geez, has it been almost 4years now??\n> \n> No, your message was posted in June 1996 -:)\n> \n> Vadim\n> \n\nI had kept it, but somehow deleted it when going through the old\npostgres95 mailing list archives.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 30 May 1999 22:16:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] History of PostgreSQL"
},
{
"msg_contents": "On Sun, 30 May 1999, D'Arcy J.M. Cain wrote:\n\n> Thus spake The Hermit Hacker\n> > The oldest files in CVS is:\n> > \n> > 287027 2 -r-xr-xr-x 1 scrappy pgsql 585 Sep 28 1994 ...\n> > \n> > Geez, has it been almost 4years now??\n> \n> Er, can someone check any math related work that Scrappy has done? :-)\n\nSorry, was going by a previous file that I had found, which was Oct '95\n... will go back to sleep now...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 31 May 1999 09:19:13 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] History of PostgreSQL"
},
{
"msg_contents": "On Sun, 30 May 1999, Bruce Momjian wrote:\n\n> > Thus spake The Hermit Hacker\n> > > The oldest files in CVS is:\n> > > \n> > > 287027 2 -r-xr-xr-x 1 scrappy pgsql 585 Sep 28 1994 ...\n> > > \n> > > Geez, has it been almost 4years now??\n> > \n> > Er, can someone check any math related work that Scrappy has done? :-)\n> \n> System time must have been messed up that day. CVS started July 1996.\n\nAh, good, I was getting worried that I'd blocked out more of my past 10\nyears then I thought :) I couldn't account for the '94->96 time\nframe...:)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 31 May 1999 09:20:24 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] History of PostgreSQL"
},
{
"msg_contents": "I am still getting feedback from people on the hackers list, so I will\nwait a few more days to send it to Daemon News.\n\nOn the issue of \"Bazaar vs. PostgreSQL\", judging from the discussion we\nhave had, I think that may be a good topic for a separate followup\narticle.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 1 Jun 1999 00:51:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] History of PostgreSQL"
}
] |
[
{
"msg_contents": "It seems some sites don't understand shtml is html:\n\n\thttp://postgresql.nextpath.com/docs/faq-english.shtml\n\nThis displays as text, not html, which looks terrible. I have just\nrenamed all the *.shtml files to *.html in html/docs. When the mirrors\nsync up, that should fix the problem. Most of the FAQ stuff comes\nthrough here anyway, and I have changed my naming. If someone\naccidentally puts an shtml file in there, it will not be seen. The new\nnames are *.html.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 29 May 1999 16:36:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "shtml file names"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nSorry this has taken me so long to get back to you. Just to refresh\neveryones memory, I was the one who was having problems with postgres'\nbackends just hanging around in waiting, not doing anything. Tom Lane sent\nme a patch to fix this for 6.4.2.\n\nWe didn't just install the patch on our live system and run it, as we were\nworried about breaking something, so we spent a lot of time thrashing it\naround, trying to reproduce the problem to check if it had been fixed\n(this is why its taken me a while to do this). We captured a few hundred\nsessions that our CGI's have with the database, including begin..commit\npairs and everything, in order to accurately simulate a heavy load on the\ndbms. We tried this program, keeping about 40-50 connections going the\nwhole time, and we could not get the waiting problem to occur with even\nthe normal 6.4.2 so it was not possible to test if the patch had fixed our\nparticular problem.\n\nSo that was disappointing, we figured because we were hammering it so hard\nthat it would fail quickly and could use this as a good test program. The\n6.4.2 patched version ran fine as well, so this was good. It seems that\nthe problem was caused by very rare circumstances which we just couldn't\nreproduce during testing.\n\nOne thing we did notice is that when we tried to open more than say 50\nbackends, we would get the following:\n\nInitPostgres\nIpcSemaphoreCreate: semget failed (No space left on device) key=5432017,\nnum=16, permission=600\nproc_exit(3) [#0] \n\nShortly after, we would get:\n\nFATAL: s_lock(18001065) at spin.c:125, stuck spinlock. Aborting.\n\n\nOur FreeBSD machine was not setup for a huge number of semaphores, so the\nsemget was failing. That was fair enough, but then postmaster would die\nafterwards with the spinlock error. 
I saw a post by Hiroshi Inoue with the\nfollowing:\n\n>Hi all,\n>\n>ProcReleaseSpins() does nothing unless MyProc is set.\n>So both elog(ERROR/FATAL) and proc_exit(0) before \n>InitProcess() don't release spinlocks.\n>\n>Comments ?\n>\n>Hiroshi Inoue\n>[email protected]\n\nI would have to agree with him here, i'm not familiar with postgres\ninternals but it looks like when semget fails, the backend doesn't clean\nup the resources it already owns. I'm not sure if this is fixed, as I\ncan't always read the hackers list, but I thought I'd mention this in case\nsomeone found it interesting.\n\n\nWe tried the same massive number of connections test with 6.5 and it\nrefuses to accept the connection after a while, which is good. I'm reading\nthrough archives about MaxBackendId now, so I'm going to play with that.\n\n\n\nSo anyways, we installed the 6.4.2 patch a few days ago, and it seems to\nbe running ok. I haven't seen any cases where we get processes waiting for\nnothing, (yet anyway - i'll have to wait and see for a few days). However,\nnow we are getting the stuck spinlock errors due to too many backends\nbeing open, which I'm trying to prevent now so hopefully these two\nproblems will both go away now.\n\n\nNow that I've learned more about the stuck spinlock problem, I realise\nthat when I emailed the first time, it was not just one problem, but two\nor three at the same time which were making it harder to nail down what\nthe problem was. We will watch it over the week.\n\nWe have also been doing some testing with the latest 6.5 from the other\nday, to check that certain problems we've bumped into have been fixed. We\ncan't run it live, but we'll try to run our testing programs on it as a\nbest approximation to help flush out any bugs that might be left.\n\n\nThanks for your help everyone, I hope that this has been helpful for\neveryone else as well. 
I'm really looking forward to 6.5 :)\n\n\nbye,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Sun, 30 May 1999 23:36:25 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backends waiting, spinlocks, shared mem patches"
},
{
"msg_contents": "Wayne Piekarski <[email protected]> writes:\n> Sorry this has taken me so long to get back to you.\n\nThanks for reporting back, Wayne.\n\n> One thing we did notice is that when we tried to open more than say 50\n> backends, we would get the following:\n> InitPostgres\n> IpcSemaphoreCreate: semget failed (No space left on device) key=5432017,\n> num=16, permission=600\n> proc_exit(3) [#0] \n> Shortly after, we would get:\n> FATAL: s_lock(18001065) at spin.c:125, stuck spinlock. Aborting.\n\nYes, 6.4.* does not cope gracefully at all with running out of kernel\nsemaphores. This is \"fixed\" in 6.5 by the brute-force approach of\ngrabbing all the semaphores we could want at postmaster startup, rather\nthan trying to allocate them on-the-fly during backend startup. Either\nway, you want your kernel to be able to provide one semaphore per\npotential backend.\n\n> We tried the same massive number of connections test with 6.5 and it\n> refuses to accept the connection after a while, which is good. I'm reading\n> through archives about MaxBackendId now, so I'm going to play with that.\n\nIn 6.5 you just need to set the postmaster's -N switch.\n\n> We have also been doing some testing with the latest 6.5 from the other\n> day, to check that certain problems we've bumped into have been fixed. We\n> can't run it live, but we'll try to run our testing programs on it as a\n> best approximation to help flush out any bugs that might be left.\n\nOK, please let us know ASAP if you spot problems... we are shooting for\nformal 6.5 release one week from today...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 10:56:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backends waiting, spinlocks, shared mem patches "
},
{
"msg_contents": "Hi,\n\n> Yes, 6.4.* does not cope gracefully at all with running out of kernel\n> semaphores. This is \"fixed\" in 6.5 by the brute-force approach of\n> grabbing all the semaphores we could want at postmaster startup, rather\n> than trying to allocate them on-the-fly during backend startup. Either\n> way, you want your kernel to be able to provide one semaphore per\n> potential backend.\n\nRight now, every so often we have a problem where all of a sudden the\nbackends will just start piling up, we exceed 50-60 backends, and then the\nthing fails. The wierd part is that some times it happens during times of\nthe day which are very quiet and I wouldn't expect there to be that many\ntasks being done. I'm thinking something is getting jammed up in Postgres\nand then this occurs [more about this later] We get the spinlock fail\nmessage and then we just restart, so it does \"recover\" in a way, although\nit would be better if it didn't die. At least I understand what is\nhappening here ..... \n\n> > We have also been doing some testing with the latest 6.5 from the other\n> > day, to check that certain problems we've bumped into have been fixed. We\n> > can't run it live, but we'll try to run our testing programs on it as a\n> > best approximation to help flush out any bugs that might be left.\n> \n> OK, please let us know ASAP if you spot problems... we are shooting for\n> formal 6.5 release one week from today...\n\nOk, well the past two days or so, we've still had the backends waiting\nproblem like before, even though we installed the 6.4.2 shared memory\npatches. (ie, lots of backends waiting for nothing to happen - some kind\nof lock is getting left around by a backend) It has been running better\nthan it was before, but we still get one problem or two per day, which\nisn't very good. 
This time, when we kill all the waiting backends, new\nbackends will still jam anyways, so we kill and restart the whole thing.\nThe problem appears to have changed from what it was before, where we\ncould selectively kill off backends and eventually it would start working\nagain.\n\nUnfortunately, this is not the kind of thing I can reproduce with a\ntesting program, and so I can't try it against 6.5 - but it still exists\nin 6.4.2 so unless someones made more changes related to this area, there\nmight be a chance it is still in 6.5 - although the locking code has been\nchanged a lot maybe not?\n\nIs there anything I can do, like enable some extra debugging code,\n#define, (I've tried turning on a few of the locking defines but they\nwaiting for, so I or someone else can have a look and see if the problem\ncan be spotted? I can get it to happen one or twice per day, but I can\nonly test against 6.4.2 and it can't adversely affect the performance. \n\nOne thing I thought is this problem could still be related to the\nspinlock/semget problem. ie, too many backends start up, something fails\nand dies off, but leaves a semaphore laying around, and so from then\nonwards, all the backends are waiting for this semaphore to go when it is\nstill hanging around, causing problems ... The postmaster code fails to\ndetect the stuck spinlock and so it looks like a different problem? Hope\nthat made sense?\n\nthanks,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Thu, 3 Jun 1999 15:11:06 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Backends waiting, spinlocks, shared mem patches"
},
{
"msg_contents": "Wayne Piekarski <[email protected]> writes:\n> Unfortunately, this is not the kind of thing I can reproduce with a\n> testing program, and so I can't try it against 6.5 - but it still exists\n> in 6.4.2 so unless someones made more changes related to this area, there\n> might be a chance it is still in 6.5 - although the locking code has been\n> changed a lot maybe not?\n\nI honestly don't know what to tell you here. There have been a huge\nnumber of changes and bugfixes between 6.4.2 and 6.5, but there's really\nno way to guess from your report whether any of them will cure your\nproblem (or, perhaps, make it worse :-(). I wish you could run 6.5-\ncurrent for a while under your live load and see how it fares. But\nI understand your reluctance to do that.\n\n> Is there anything I can do, like enable some extra debugging code,\n\nThere is some debug logging code in the lockmanager, but it produces\na huge volume of log output when turned on, and I for one am not\nqualified to decipher it (perhaps one of the other list members can\noffer more help). What I'd suggest first is trying to verify that\nit *is* a lock problem. Attaching to some of the hung backends with\ngdb and dumping their call stacks with \"bt\" could be very illuminating.\nEspecially if you compile the backend with -g first.\n\n> One thing I thought is this problem could still be related to the\n> spinlock/semget problem. ie, too many backends start up, something fails\n> and dies off, but leaves a semaphore laying around, and so from then\n> onwards, all the backends are waiting for this semaphore to go when it is\n> still hanging around, causing problems ...\n\nIIRC, 6.4.* will absolutely *not* recover from running out of kernel\nsemaphores or backend process slots. This is fixed in 6.5, and I think\nsomeone posted a patch for 6.4 that covers the essentials, but I do\nnot recall the details.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 02:22:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backends waiting, spinlocks, shared mem patches "
}
] |
[
{
"msg_contents": "For work-related reasons I recently had to install egcs 1.1.2 here.\nI thought I'd try Postgres with it, since we have at least two reports\nof problems seen only with egcs:\n\t1. The business about char and short parameters to functions\n\t called through fmgr;\n\t2. Oleg's report of instability seen only with egcs and -O.\n\nThe upshot is that I found a few minor glitches in the configure\nscript, and cleaned up two or three insignificant warnings that\negcs generates but gcc doesn't. I was *not* able to duplicate\nany instability using either -O2 or -O3. The regression tests all\npass, and Oleg's fifteen-way join test is happy too.\n\nThis is on an HP-PA box, which is not the same as the PowerPC that\nproblem #1 was reported on, but is likewise a RISC architecture.\nSo I was hopeful I would see that problem here.\n\nSo, now what? From talking to Bruce off-list, I know that he's not\neager to make changes as extensive as problem #1 appears to require\non the strength of just one unconfirmed trouble report. I think I\nhave to agree...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 30 May 1999 11:47:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "egcs experimentation results"
},
{
"msg_contents": "On Sun, 30 May 1999, Tom Lane wrote:\n\n> Date: Sun, 30 May 1999 11:47:15 -0400\n> From: Tom Lane <[email protected]>\n> To: [email protected]\n> Subject: [HACKERS] egcs experimentation results\n> \n> For work-related reasons I recently had to install egcs 1.1.2 here.\n> I thought I'd try Postgres with it, since we have at least two reports\n> of problems seen only with egcs:\n> \t1. The business about char and short parameters to functions\n> \t called through fmgr;\n> \t2. Oleg's report of instability seen only with egcs and -O.\n\nTom,\n\nthe problem seems gone away after cleaning computer box with vacuum\nmachine and replacing cooler :-! This could be goind to FAQ :-)\n\n> \n> The upshot is that I found a few minor glitches in the configure\n> script, and cleaned up two or three insignificant warnings that\n> egcs generates but gcc doesn't. I was *not* able to duplicate\n> any instability using either -O2 or -O3. The regression tests all\n> pass, and Oleg's fifteen-way join test is happy too.\n> \n\nI just run 60 tables join and it took 17 minutes\non my P200, 64Mb RAM and postgres compiled with -O2 -mpentium\nI don't know how it's fast but it works ! Just explain requires\n15:30 minutes.\n\n\tOleg\n\n> This is on an HP-PA box, which is not the same as the PowerPC that\n> problem #1 was reported on, but is likewise a RISC architecture.\n> So I was hopeful I would see that problem here.\n> \n> So, now what? From talking to Bruce off-list, I know that he's not\n> eager to make changes as extensive as problem #1 appears to require\n> on the strength of just one unconfirmed trouble report. I think I\n> have to agree...\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sun, 30 May 1999 22:46:31 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] egcs experimentation results"
},
{
"msg_contents": "\nOn 30-May-99 Oleg Bartunov wrote:\n> On Sun, 30 May 1999, Tom Lane wrote:\n> \n>> Date: Sun, 30 May 1999 11:47:15 -0400\n>> From: Tom Lane <[email protected]>\n>> To: [email protected]\n>> Subject: [HACKERS] egcs experimentation results\n>> \n>> For work-related reasons I recently had to install egcs 1.1.2 here.\n>> I thought I'd try Postgres with it, since we have at least two reports\n>> of problems seen only with egcs:\n>> 1. The business about char and short parameters to functions\n>> called through fmgr;\n>> 2. Oleg's report of instability seen only with egcs and -O.\n> \n> Tom,\n> \n> the problem seems gone away after cleaning computer box with vacuum\n> machine and replacing cooler :-! This could be goind to FAQ :-)\n> \n>\n\nI use egcs on FreeBSD and Solaris x86 about a year and have no problems at all.\n\nThere was some compilation problems of libpg++, but it seems to be fixed.\n\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Mon, 31 May 1999 11:34:59 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] egcs experimentation results"
}
] |
[
{
"msg_contents": "Cool! :-)\n\n> \n> > Pablo Funes <[email protected]> writes:\n> > > Perhaps in a future version will PQrequestCancel be able to terminate\n> > > a waiting-for-lock state?\n> > \n> > Seems like a reasonable suggestion. It's too late to consider this for\n> > 6.5 (we were supposed to freeze the feature list quite a while back)\n> > but I support putting it on the TODO list for a future release.\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> Added:\n> \n> \t* Allow PQrequestCancel() to terminate when in waiting-for-lock state\n> \n",
"msg_date": "Mon, 31 May 1999 20:10:52 +2000 (EDT)",
"msg_from": "Pablo Funes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "Please take a sec to read this question. I've posted\nit several times but got no comments at all. Thanx, Pablo. \n\n---\n\nForwarded message:\n>From pablo Thu May 27 18:42:11 1999\nSubject: nonblocking lock? \nTo: [email protected]\nDate: Thu, 27 May 1999 18:42:11 -0400 (EDT)\nContent-Type: text\nContent-Length: 890 \n\nIs it possible to do a nonblocking lock? That is, \nI want several clients to execute,\n\n begin\n if table A is locked\n then\n go around doing stuff on other tables\n else\n lock A and do stuff on A that takes a long time\n endif\n\nthe problem is, if I use normal lock, then \nafter one client has locked and is doing stuff on A\nthe other one will block and thus it won't be able\nto go around doing stuff on other tables. Is it\npossible to do a nonblocking lock that will just\nfail if the table is locked already? \n\n\nNOTE: I tried using PQrequestCancel but it won't\ncancel the request. It still blocks for as long\nas the lock lasts. The only way around I've found so \nfar is to use PQreset. That's crude but works. But \nit leaves a dangling postmaster process that lives\nuntil the orignal lock is freed. Any other ideas? \n\nThanks a lot \n\nPablo Funes\nBrandeis University\[email protected]\n",
"msg_date": "Mon, 31 May 1999 13:08:32 -0400 (EDT)",
"msg_from": "Pablo Funes <[email protected]>",
"msg_from_op": true,
"msg_subject": "please?"
},
{
"msg_contents": "Pablo Funes <[email protected]> writes:\n> Is it possible to do a nonblocking lock?\n\nThere is no way to do that in 6.4. I am not sure whether the MVCC\nadditions in 6.5 provide a way to do it or not (Vadim?).\n\n> NOTE: I tried using PQrequestCancel but it won't\n> cancel the request. It still blocks for as long\n> as the lock lasts. The only way around I've found so \n> far is to use PQreset. That's crude but works.\n\nNot really --- what PQreset is really doing is disconnecting your\nclient from its original backend and starting a new backend. The\nold backend is still there trying to get the lock; it won't notice\nthat you've disconnected from it until after it acquires the lock.\nObviously, this approach doesn't scale up very well... you'll soon\nrun out of backend processes.\n\nA possible approach is for your clients to maintain more than one\nbackend connection, and use one of the backends to do the stuff \nthat might block while using another one to do the stuff that won't.\nThis would take a little more bookkeeping in the client but it seems\nlike a logically cleaner way to think about it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 13:35:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please? "
},
{
"msg_contents": "> A possible approach is for your clients to maintain more than one\n> backend connection, and use one of the backends to do the stuff \n> that might block while using another one to do the stuff that won't.\n> This would take a little more bookkeeping in the client but it seems\n> like a logically cleaner way to think about it.\n\nOr you could do it outside of the database using a Unix filesystem lock\nfile. There are symantics for no-blocking lock stuff in flock():\n\n #define LOCK_SH 0x01 /* shared file lock */\n #define LOCK_EX 0x02 /* exclusive file lock */\n #define LOCK_NB 0x04 /* don't block when locking */\n #define LOCK_UN 0x08 /* unlock file */\n\nI don't know of any SQL databases that allow non-blocking lock requests.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 13:43:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
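Bruce's flock() flags map directly onto the try-lock pattern Pablo is after. A minimal sketch, assuming a Unix-like system with `flock()` semantics (the helper name and lock-file path are illustrative, not from the thread), using Python's `fcntl` wrapper rather than raw C:

```python
import fcntl
import os

def try_lock_nb(path):
    """Attempt an exclusive flock() without blocking.

    Returns an open fd holding the lock on success (caller must
    unlock/close it), or None if another open file description
    already holds the lock.
    """
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        # LOCK_EX | LOCK_NB: fail immediately instead of waiting
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except BlockingIOError:
        os.close(fd)
        return None
```

A client that gets `None` back can go off and "do stuff on other tables" and retry later; and since the kernel drops the lock when the holding process dies, there is no stale-lock cleanup to worry about. The catch, as noted later in the thread, is that all clients must share a filesystem.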
{
"msg_contents": "Bruce Momjian wrote:\n> \n> I don't know of any SQL databases that allow non-blocking lock requests.\n> \n\nOracle OCI has oopt() and Informix Online has dirty read that do the trick for\nme.\n--------\nRegards\nTheo\n",
"msg_date": "Mon, 31 May 1999 20:31:48 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "First, thanks all for the feedback and good luck with the new\nrelease!\n\n> \n> > A possible approach is for your clients to maintain more than one\n> > backend connection, and use one of the backends to do the stuff \n> > that might block while using another one to do the stuff that won't.\n\nYes. Same effect as PQreset() if the code is to be ran only once, but\na lot better if inside a loop!. \n\n> Or you could do it outside of the database using a Unix filesystem lock\n> file. There are symantics for no-blocking lock stuff in flock():\n> \n> #define LOCK_SH 0x01 /* shared file lock */\n> #define LOCK_EX 0x02 /* exclusive file lock */\n> #define LOCK_NB 0x04 /* don't block when locking */\n> #define LOCK_UN 0x08 /* unlock file */\n\nExactly what's wanted in this case. The unix flock() locks a file or,\nif already locked, either waits or fails depending on what you\nrequested. The lock is released by either an unlock operation or the\ndeath of the locking process. It would solve my problem, except it \nrequires all clients to share a filesystem. \n\n> I don't know of any SQL databases that allow non-blocking lock requests.\n\nI'm not very familiar with full-scale SQL but seems odd not to have\nsuch things. I guess from the language point of view there ought to be\na way to know when an item is unavailable/undefined (it's been locked \nfor writing), if you don't want to wait a long time to get a value.\n\nImagine I go to the store at 11am and can't buy soap - because\nthe price of soap is unknown because it's a heavy trading day for soap \nat the ny stock exchange. Even if the shop's definition may not allow for\nsoap to be sold before the stock market closes and the final price is\nknown, I shouldn't be forced to wait there doing nothing. I can do \nother shopping around and come back later for my soap! ;-)\n\nRegards,\n\nPablo\n",
"msg_date": "Mon, 31 May 1999 14:35:24 -0400 (EDT)",
"msg_from": "Pablo Funes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "> First, thanks all for the feedback and good luck with the new\n> release!\n> \n> > \n> > > A possible approach is for your clients to maintain more than one\n> > > backend connection, and use one of the backends to do the stuff \n> > > that might block while using another one to do the stuff that won't.\n> \n> Yes. Same effect as PQreset() if the code is to be ran only once, but\n> a lot better if inside a loop!. \n> \n> > Or you could do it outside of the database using a Unix filesystem lock\n> > file. There are symantics for no-blocking lock stuff in flock():\n> > \n> > #define LOCK_SH 0x01 /* shared file lock */\n> > #define LOCK_EX 0x02 /* exclusive file lock */\n> > #define LOCK_NB 0x04 /* don't block when locking */\n> > #define LOCK_UN 0x08 /* unlock file */\n> \n> Exactly what's wanted in this case. The unix flock() locks a file or,\n> if already locked, either waits or fails depending on what you\n> requested. The lock is released by either an unlock operation or the\n> death of the locking process. It would solve my problem, except it \n> requires all clients to share a filesystem. \n\nSharing file systems. Good point. You could have a table you use to\nlock. Lock the table, view the value, possibly modify, and unlock. \nThis does not handle the case where someone died and did not remove\ntheir entry from the lock table. \n\n> \n> > I don't know of any SQL databases that allow non-blocking lock requests.\n> \n> I'm not very familiar with full-scale SQL but seems odd not to have\n> such things. I guess from the language point of view there ought to be\n> a way to know when an item is unavailable/undefined (it's been locked \n> for writing), if you don't want to wait a long time to get a value.\n> \n> Imagine I go to the store at 11am and can't buy soap - because\n> the price of soap is unknown because it's a heavy trading day for soap \n> at the ny stock exchange. 
Even if the shop's definition may not allow for\n> soap to be sold before the stock market closes and the final price is\n> known, I shouldn't be forced to wait there doing nothing. I can do \n> other shopping around and come back later for my soap! ;-)\n\nYes, I can see why having such a facility would be nice.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 14:43:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > I don't know of any SQL databases that allow non-blocking lock requests.\n> > \n> \n> Oracle OCI has oopt() and Informix Online has dirty read that do the trick for\n> me.\n> --------\n\nPlease give me more information. How does dirty read fix the problem?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 14:45:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > > I don't know of any SQL databases that allow non-blocking lock requests.\n> > >\n> >\n> > Oracle OCI has oopt() and Informix Online has dirty read that do the trick for\n> > me.\n> > --------\n> \n> Please give me more information. How does dirty read fix the problem?\n\nIt allows me to read uncommited records without blocking.\n\n--------\nRegards\nTheo\n",
"msg_date": "Mon, 31 May 1999 20:58:08 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "> > > Oracle OCI has oopt() and Informix Online has dirty read that do the trick for\n> > > me.\n> > > --------\n> > \n> > Please give me more information. How does dirty read fix the problem?\n> \n> It allows me to read uncommited records without blocking.\n\nI suppose it somehow lets you know whether the read was dirty or\nclean...\n",
"msg_date": "Mon, 31 May 1999 15:05:53 -0400 (EDT)",
"msg_from": "Pablo Funes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > Bruce Momjian wrote:\n> > > >\n> > > > I don't know of any SQL databases that allow non-blocking lock requests.\n> > > >\n> > >\n> > > Oracle OCI has oopt() and Informix Online has dirty read that do the trick for\n> > > me.\n> > > --------\n> > \n> > Please give me more information. How does dirty read fix the problem?\n> \n> It allows me to read uncommited records without blocking.\n\nYes, but that does not solve his problem. He wants a single lock, and\nwants to test the lock, and immediately return if the lock fails.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 15:10:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "> > > > Oracle OCI has oopt() and Informix Online has dirty read that do the trick for\n> > > > me.\n> > > > --------\n> > > \n> > > Please give me more information. How does dirty read fix the problem?\n> > \n> > It allows me to read uncommited records without blocking.\n> \n> I suppose it somehow lets you know whether the read was dirty or\n> clean...\n\nNot that I am aware of. Never heard that of Informix.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 15:27:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "> > It allows me to read uncommited records without blocking.\n> \n> Yes, but that does not solve his problem. He wants a single lock, and\n> wants to test the lock, and immediately return if the lock fails.\n> \n\nIf you know the read was dirty, you know there was somebody else\nlocking/writing the table or record, it's locked, you failed to lock. \n\nOf course you should be able to aquire the lock in the same atomic\noperation... \n\n\n...Pablo\n\n",
"msg_date": "Mon, 31 May 1999 15:27:58 -0400 (EDT)",
"msg_from": "Pablo Funes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Sharing file systems. Good point. You could have a table you use to\n> lock. Lock the table, view the value, possibly modify, and unlock.\n> This does not handle the case where someone died and did not remove\n> their entry from the lock table.\n\nYou can always write the modification time to the table as well and if \nit's \"too old\", then try to override it.\n\n-------\n Hannu\n",
"msg_date": "Mon, 31 May 1999 22:51:06 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > > > Oracle OCI has oopt() and Informix Online has dirty read that do the trick for\n> > > > > me.\n> > > > > --------\n> > > >\n> > > > Please give me more information. How does dirty read fix the problem?\n> > >\n> > > It allows me to read uncommited records without blocking.\n> >\n> > I suppose it somehow lets you know whether the read was dirty or\n> > clean...\n> \n> Not that I am aware of. Never heard that of Informix.\n\nI also cheat. I use a 3 buffer approach, compare fields and see if a record\nhas\nchanged before I do an update.\n--------\nRegards\nTheo\n",
"msg_date": "Mon, 31 May 1999 22:01:30 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > > > > Oracle OCI has oopt() and Informix Online has dirty read that do the trick for\n> > > > > > me.\n> > > > > > --------\n> > > > >\n> > > > > Please give me more information. How does dirty read fix the problem?\n> > > >\n> > > > It allows me to read uncommited records without blocking.\n> > >\n> > > I suppose it somehow lets you know whether the read was dirty or\n> > > clean...\n> > \n> > Not that I am aware of. Never heard that of Informix.\n> \n> I also cheat. I use a 3 buffer approach, compare fields and see if a record\n> has\n> changed before I do an update.\n\nOh, now we get the full picture. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 16:12:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > Sharing file systems. Good point. You could have a table you use to\n> > lock. Lock the table, view the value, possibly modify, and unlock.\n> > This does not handle the case where someone died and did not remove\n> > their entry from the lock table.\n> \n> You can always write the modification time to the table as well and if \n> it's \"too old\", then try to override it.\n> \n\nAssuming you can set a reasonable \"too old\" time. \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 19:28:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "> \n> > Bruce Momjian wrote:\n> > > Sharing file systems. Good point. You could have a table you use to\n> > > lock. Lock the table, view the value, possibly modify, and unlock.\n> > > This does not handle the case where someone died and did not remove\n> > > their entry from the lock table.\n> > \n> > You can always write the modification time to the table as well and if \n> > it's \"too old\", then try to override it.\n> > \n> \n> Assuming you can set a reasonable \"too old\" time. \n> \n\nThere may be many partial workarounds, depending on the\napplication, but there seems to be no robust way to have \na failed lock right now. Perhaps in a future version will \nPQrequestCancel be able to terminate a waiting-for-lock \nstate? \n\nPablo\n\n",
"msg_date": "Mon, 31 May 1999 19:40:53 -0400 (EDT)",
"msg_from": "Pablo Funes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "Pablo Funes <[email protected]> writes:\n> Perhaps in a future version will PQrequestCancel be able to terminate\n> a waiting-for-lock state?\n\nSeems like a reasonable suggestion. It's too late to consider this for\n6.5 (we were supposed to freeze the feature list quite a while back)\nbut I support putting it on the TODO list for a future release.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 20:05:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please? "
},
{
"msg_contents": "> Pablo Funes <[email protected]> writes:\n> > Perhaps in a future version will PQrequestCancel be able to terminate\n> > a waiting-for-lock state?\n> \n> Seems like a reasonable suggestion. It's too late to consider this for\n> 6.5 (we were supposed to freeze the feature list quite a while back)\n> but I support putting it on the TODO list for a future release.\n> \n> \t\t\tregards, tom lane\n> \n\nAdded:\n\n\t* Allow PQrequestCancel() to terminate when in waiting-for-lock state\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 20:07:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Pablo Funes <[email protected]> writes:\n> > Is it possible to do a nonblocking lock?\n> \n> There is no way to do that in 6.4. I am not sure whether the MVCC\n> additions in 6.5 provide a way to do it or not (Vadim?).\n\nI want to have it in later versions.\n\nAt the moment try to use contrib/userlock/\n\n> \n> > NOTE: I tried using PQrequestCancel but it won't\n> > cancel the request. It still blocks for as long\n\nAnd this is bug that should be fixed... after 6.5\n\nVadim\n",
"msg_date": "Tue, 01 Jun 1999 09:56:32 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "Theo Kramer wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > > Bruce Momjian wrote:\n> > > >\n> > > > I don't know of any SQL databases that allow non-blocking lock requests.\n> > > >\n> > >\n> > > Oracle OCI has oopt() and Informix Online has dirty read that do the trick for\n> > > me.\n> > > --------\n> >\n> > Please give me more information. How does dirty read fix the problem?\n> \n> It allows me to read uncommited records without blocking.\n\nI plan to implement it in 6.6\n\nVadim\n",
"msg_date": "Tue, 01 Jun 1999 10:10:51 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "Vadim Mikheev wrote:\n> > It allows me to read uncommited records without blocking.\n> \n> I plan to implement it in 6.6\n\nThat's the best thing I've heard so far. I will then be able to use\npostgres for my interactive applications :-).\n--------\nRegards\nTheo\n",
"msg_date": "Tue, 01 Jun 1999 08:41:53 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "Theo Kramer wrote:\n> \n> Vadim Mikheev wrote:\n> > > It allows me to read uncommited records without blocking.\n> >\n> > I plan to implement it in 6.6\n> \n> That's the best thing I've heard so far. I will then be able to use\n> postgres for my interactive applications :-).\n\nHow about savepoints? -:)\nAnd implicit savepoint before executing a query, like one in Oracle?\n\nVadim\n",
"msg_date": "Tue, 01 Jun 1999 16:03:35 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "Vadim Mikheev wrote:\n> \n> How about savepoints? -:)\n> And implicit savepoint before executing a query, like one in Oracle?\n\nThat would be the cherry on the top.\n--------\nRegards\nTheo\n",
"msg_date": "Tue, 01 Jun 1999 10:13:44 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "> > There is no way to do that in 6.4. I am not sure whether the MVCC\n> > additions in 6.5 provide a way to do it or not (Vadim?).\n> \n> I want to have it in later versions.\n> \n> At the moment try to use contrib/userlock/\n> \n\nAHA! It looks like this solves my problem, at least for now,\nuntil an official way to do nonblocking locs shows up on a \nfuture release. \nHere's what contrib/userlock/user_locks.doc says: \n\n select some_fields, user_write_lock_oid(oid) from table where id='key';\n\n Now if the returned user_write_lock_oid field is 1 you have acquired an\n user lock on the oid of the selected tuple and can now do some long operation\n on it, like let the data being edited by the user.\n\n If it is 0 it means that the lock has been already acquired by some other\n process and you should not use that item until the other has finished.\n\n [...]\n\n update table set some_fields where id='key';\n select user_write_unlock_oid(oid) from table where id='key';\n\n [...]\n\n This could also be done by setting a flag in the record itself but in\n this case you have the overhead of the updates to the records and there\n could be some locks not released if the backend or the application crashes\n before resetting the lock flag.\n\n It could also be done with a begin/end block but in this case the entire\n table would be locked by postgres and it is not acceptable to do this for\n a long period because other transactions would block completely.\n\n",
"msg_date": "Tue, 1 Jun 1999 12:09:36 -0400 (EDT)",
"msg_from": "Pablo Funes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] please?"
},
{
"msg_contents": "\nAdded to TODO:\n\n\t* PQrequestCancel() be able to terminate backend waiting for lock\n\n> Pablo Funes <[email protected]> writes:\n> > Perhaps in a future version will PQrequestCancel be able to terminate\n> > a waiting-for-lock state?\n> \n> Seems like a reasonable suggestion. It's too late to consider this for\n> 6.5 (we were supposed to freeze the feature list quite a while back)\n> but I support putting it on the TODO list for a future release.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 19:22:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please?"
}
]
[
{
"msg_contents": "subscribe\n\n\n",
"msg_date": "Mon, 31 May 1999 11:34:27 +1000",
"msg_from": "Colin McCormack <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
}
]
[
{
"msg_contents": "Hi,\n\nMy apologies for posting the subscribe to the list (I hate it when I do that.)\n\nAnyway, I wrote a very quick, very small Swig tcl8 interface to libpq, to get \nfacilities for asynchronous query and cancellation. It seems to work, it's \nvery light, and here it is:\n\n\tftp://field.medicine.adelaide.edu.au/pub/Outgoing/libtclpq.tgz\n\nPlease let me know what you think.\n\nColin.\n\n\n",
"msg_date": "Mon, 31 May 1999 11:44:47 +1000",
"msg_from": "Colin McCormack <[email protected]>",
"msg_from_op": true,
"msg_subject": "Announcement - SWIG based tcl interface (with asynch query)"
}
]
[
{
"msg_contents": "Hi,\n\nMy apologies for posting the subscribe to the list (I hate it when I do that.)\n\nAnyway, I wrote a very quick, very small Swig tcl8 interface to libpq, to get \nfacilities for asynchronous query and cancellation. It seems to work, it's \nvery light, and here it is:\n\n\tftp://field.medicine.adelaide.edu.au/pub/Outgoing/libtclpq.tgz\n\nPlease let me know what you think.\n\nColin.\n\n\n\n\n",
"msg_date": "Mon, 31 May 1999 12:07:13 +1000",
"msg_from": "Colin McCormack <[email protected]>",
"msg_from_op": true,
"msg_subject": "Announcement - SWIG based tcl interface (with asynch query)"
}
]
[
{
"msg_contents": "Hi,\n\nI wrote a very quick, very small Swig tcl8 interface to libpq, to get \nfacilities for asynchronous query and cancellation. It seems to work, it's \nvery light, and here it is:\n\n\tftp://field.medicine.adelaide.edu.au/pub/Outgoing/libtclpq.tgz\n\nPlease let me know what you think.\n\nColin.\n\n\n\n\n\n\n",
"msg_date": "Mon, 31 May 1999 12:09:35 +1000",
"msg_from": "Colin McCormack <[email protected]>",
"msg_from_op": true,
"msg_subject": "Announcement - SWIG based tcl interface (with asynch query)"
}
]
[
{
"msg_contents": "discovery=> select count(*) from publications;\ncount\n-----\n 0\n(1 row)\n\n Does 1 rows is a correct result ?\n\n\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 31 May 1999 11:41:49 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "select count(*) from <empty table>: 0 rows or 1 rows ?"
},
{
"msg_contents": "Thus spake Oleg Bartunov\n> discovery=> select count(*) from publications;\n> count\n> -----\n> 0\n> (1 row)\n> \n> Does 1 rows is a correct result ?\n\nAbsolutely. \"SELECT COUNT(*)...\" always returns exactly one row (well,\nassuming that the table exists, of course) so this is correct. A return\nof zero rows would imply that no information was returned but you did\nget back information here, the fact that the table is empty.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 31 May 1999 08:29:55 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] select count(*) from <empty table>: 0 rows or 1 rows ?"
},
{
"msg_contents": "> discovery=> select count(*) from publications;\n> count\n> -----\n> 0\n> (1 row)\n> \n> Does 1 rows is a correct result ?\n> \n> \tOleg\n\nOne row is returned, and that row is zero. It is correct.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 12:11:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] select count(*) from <empty table>: 0 rows or 1 rows ?"
}
]
[
{
"msg_contents": "The message is a bit off. I had a database working fine on one system\nunder 6.3.x. The software I had written to use it worked WONDERFULLY as\nwell. I then brought up a new machine using RedHat 6.0. I installed\nthe 6.4.2 that came with it. I recompiled my app on that machine. I\ndid a pgdump of the database (cannot remember command line options...\ndata and schema I remember that much). I then loaded it into a newly\ncreated database on the new machine.\n\nI get the error above (or something VERY similar...). I have posted a\nbug report, but I want to know what the heck the message really means. \nWhat is type 0x45? Does this mean my app has a problem or? I have\nsearched deja.com, I have used postgresql.org's search engines. No\nhelp.\n\nThe database is kind of on the private side, but if it must be shared,\nit can be. A slightly older version of the code (should be current in a\nfew days) is the webgen tool at http://nueorder.netpedia.net.\n\nI really need to get this cleared up. Any help, any pointers, any info\non what this all means would be GREATLY appreciated. Thank you,\n\nTrever Adams\[email protected]\n",
"msg_date": "Mon, 31 May 1999 01:59:20 -0600",
"msg_from": "Trever Adams <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backend sent 0x45 type while idle"
}
]
[
{
"msg_contents": "The message is a bit off. I had a database working fine on one system\nunder 6.3.x. The software I had written to use it worked WONDERFULLY as\nwell. I then brought up a new machine using RedHat 6.0. I installed\nthe 6.4.2 that came with it. I recompiled my app on that machine. I\ndid a pgdump of the database (cannot remember command line options...\ndata and schema I remember that much). I then loaded it into a newly\ncreated database on the new machine.\n\nI get the error above (or something VERY similar...). I have posted a\nbug report, but I want to know what the heck the message really means. \nWhat is type 0x45? Does this mean my app has a problem or? I have\nsearched deja.com, I have used postgresql.org's search engines. No\nhelp.\n\nThe database is kind of on the private side, but if it must be shared,\nit can be. A slightly older version of the code (should be current in a\nfew days) is the webgen tool at http://nueorder.netpedia.net.\n\nI really need to get this cleared up. Any help, any pointers, any info\non what this all means would be GREATLY appreciated. Thank you,\n\nTrever Adams\[email protected]\n",
"msg_date": "Mon, 31 May 1999 02:06:23 -0600",
"msg_from": "Trever Adams <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backend sent 0x45 type while idle"
},
{
"msg_contents": "Trever Adams <[email protected]> writes:\n> did a pgdump of the database (cannot remember command line options...\n> data and schema I remember that much). I then loaded it into a newly\n> created database on the new machine.\n\n> I get the error above (or something VERY similar...).\n\nWhen exactly? While trying to reload the pg_dump script, or during\nsubsequent usage of the database?\n\n> I have posted a\n> bug report, but I want to know what the heck the message really means. \n> What is type 0x45? Does this mean my app has a problem or?\n\nIt's probably a symptom of a backend bug :-(. It means that libpq\nwasn't expecting a backend message when it got one. 0x45 = 'E' which\nwould be the start of an Error message, which ordinarily shouldn't be\nemitted except in response to a frontend query. Try looking in the\npostmaster log --- the error message should be logged there as well.\nKnowing what the backend is trying to tell us would be helpful...\n\nAnother possibility is that the frontend and backend got out of sync,\nwhich is particularly likely during COPY commands --- does your pg_dump\nscript use COPY or INSERT to reload data into tables?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 11:12:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend sent 0x45 type while idle "
},
{
"msg_contents": "Probably means you have the old binaries in your path somewhere.\n\n\n> The message is a bit off. I had a database working fine on one system\n> under 6.3.x. The software I had written to use it worked WONDERFULLY as\n> well. I then brought up a new machine using RedHat 6.0. I installed\n> the 6.4.2 that came with it. I recompiled my app on that machine. I\n> did a pgdump of the database (cannot remember command line options...\n> data and schema I remember that much). I then loaded it into a newly\n> created database on the new machine.\n> \n> I get the error above (or something VERY similar...). I have posted a\n> bug report, but I want to know what the heck the message really means. \n> What is type 0x45? Does this mean my app has a problem or? I have\n> searched deja.com, I have used postgresql.org's search engines. No\n> help.\n> \n> The database is kind of on the private side, but if it must be shared,\n> it can be. A slightly older version of the code (should be current in a\n> few days) is the webgen tool at http://nueorder.netpedia.net.\n> \n> I really need to get this cleared up. Any help, any pointers, any info\n> on what this all means would be GREATLY appreciated. Thank you,\n> \n> Trever Adams\n> [email protected]\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 12:12:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend sent 0x45 type while idle"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Trever Adams <[email protected]> writes:\n> > did a pgdump of the database (cannot remember command line options...\n> > data and schema I remember that much). I then loaded it into a newly\n> > created database on the new machine.\n> \n> > I get the error above (or something VERY similar...).\n> \n> When exactly? While trying to reload the pg_dump script, or during\n> subsequent usage of the database?\n\nAh, sorry. I could have down a query, but the machine went down. It is\nmy program that gets this. It is after the first query I believe.\n\n> > I have posted a\n> > bug report, but I want to know what the heck the message really means.\n> > What is type 0x45? Does this mean my app has a problem or?\n> \n> It's probably a symptom of a backend bug :-(. It means that libpq\n> wasn't expecting a backend message when it got one. 0x45 = 'E' which\n> would be the start of an Error message, which ordinarily shouldn't be\n> emitted except in response to a frontend query. Try looking in the\n> postmaster log --- the error message should be logged there as well.\n> Knowing what the backend is trying to tell us would be helpful...\n\nOk, this is going to sound very dumb: Where is this log kept? Is it\nkept through syslogd? If so, I apparently have it turned off somewhere.\n \n> Another possibility is that the frontend and backend got out of sync,\n> which is particularly likely during COPY commands --- does your pg_dump\n> script use COPY or INSERT to reload data into tables?\n> \n> regards, tom lane\n\nAgain, it isn't using pg_dump or pgsql. Just my program. I do insert\nand update from my program. The rest is all select. No copy. I will\nget a good ltrace/strace, modify some private data that WILL show in it,\nand then post it to you.\n\nThanks for the help,\nTrever\n",
"msg_date": "Mon, 31 May 1999 13:23:03 -0600",
"msg_from": "Trever Adams <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Backend sent 0x45 type while idle"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Probably means you have the old binaries in your path somewhere.\n> \n\nNo. As I said this new box is completely installed fresh from RedHat\n6.0. Even my programs are recompiled on this box. The only thing old\non the entire system is some configs I have laying around in user\ndirectories that I Am using to update the new style configs for various\nprograms.\n\nTrever\n",
"msg_date": "Mon, 31 May 1999 13:24:22 -0600",
"msg_from": "Trever Adams <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Backend sent 0x45 type while idle"
},
{
"msg_contents": "Trever Adams <[email protected]> writes:\n>> Try looking in the\n>> postmaster log --- the error message should be logged there as well.\n>> Knowing what the backend is trying to tell us would be helpful...\n\n> Ok, this is going to sound very dumb: Where is this log kept? Is it\n> kept through syslogd? If so, I apparently have it turned off\n> somewhere.\n\nWith the default configuration of Postgres, this logfile is just the\npostmaster's stderr output --- that should be getting put into a file\nsomewhere, if you are using recommended procedures for starting the\npostmaster. ~postgres/server.log is the usual place.\n\nI think it is possible to redirect the postmaster log to syslogd, but\nyou have to specifically configure things that way to make it happen.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 16:00:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend sent 0x45 type while idle "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Trever Adams <[email protected]> writes:\n> >> Try looking in the\n> >> postmaster log --- the error message should be logged there as well.\n> >> Knowing what the backend is trying to tell us would be helpful...\n> \n> > Ok, this is going to sound very dumb: Where is this log kept? Is it\n> > kept through syslogd? If so, I apparently have it turned off\n> > somewhere.\n> \n> With the default configuration of Postgres, this logfile is just the\n> postmaster's stderr output --- that should be getting put into a file\n> somewhere, if you are using recommended procedures for starting the\n> postmaster. ~postgres/server.log is the usual place.\n> \n> I think it is possible to redirect the postmaster log to syslogd, but\n> you have to specifically configure things that way to make it happen.\n> \n> regards, tom lane\n\nIt seems libpq is crazy. I was incorrect. The password and user name\nare indeed bogus. At least the password was. It seems that libpq for\n6.4.2 doesn't return the correct return code for invalid login (compared\nto 6.3). I am going to be checking further into this, but it may be a\nday or two.\n\nUnfortunately, the brakes on my vehicle have failed and I am rebuilding\nthe system for one wheel to fix the problem. I will post my findings as\nsoon as possible.\n\nTrever\n",
"msg_date": "Fri, 04 Jun 1999 04:33:31 -0600",
"msg_from": "Trever Adams <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Backend sent 0x45 type while idle"
},
{
"msg_contents": "Trever Adams <[email protected]> writes:\n> It seems that libpq for 6.4.2 doesn't return the correct return code\n> for invalid login (compared to 6.3).\n\nHard to believe ... not only do I recall checking that for 6.4, but\nif it *were* broken you would not be the first one to discover it.\n\nI am guessing there is some other contributing factor in your situation.\nNot sure what.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jun 1999 09:48:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Backend sent 0x45 type while idle "
}
]
[
{
"msg_contents": "> > I have tested current snapshot (from CVS) to compile and \n> run on Windows NT.\n> > \n> > It compiles mostly OK. The only problem is with linking the \n> libpq++, but it\n> > can be a general problem:\n> > \n> > pgcursordb.o: In function `_8PgCursorRC12PgConnectionPCc':\n> > \n> /usr/src/pgsql.test/src/interfaces/libpq++/pgcursordb.cc:37: undefined\n> > reference\n> > to `PgTransaction::PgTransaction(PgConnection const &)'\n> \n> Interesting. I wonder if any other platforms or compilers are also \n> showing this... I'll submit the patch later today.\n\nBecause it is still here, I have looked at this and I think this is the\nproblem:\nin file pgtransdb.h there is declared constructor PgTransaction(const\nPgConnection&)', but there is no implementation in pgtransdb.cc and possibly\nin higher layers of the \"call stack\" (pgDatabase,....)\n\nThe solution can be to remove the constructor pgCursor(const PgConnection&,\nconst char* cursor) from pgcursordb.h (and .cc).\n\n\t\t\tDan\n",
"msg_date": "Mon, 31 May 1999 14:44:31 +0200",
"msg_from": "Horak Daniel <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] report for Win32 port"
}
]
[
{
"msg_contents": "Hi,\n\nplease add this lines to template/.similar:\ni386-pc-cygwin=cygwin32\ni486-pc-cygwin=cygwin32\ni586-pc-cygwin=cygwin32\ni686-pc-cygwin=cygwin32\n\nThey enable the template autodetection for the Cygwin port.\n\n\t\t\tDan\n",
"msg_date": "Mon, 31 May 1999 17:18:55 +0200",
"msg_from": "Horak Daniel <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] report for Win32 port"
},
{
"msg_contents": "Done.\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hi,\n> \n> please add this lines to template/.similar:\n> i386-pc-cygwin=cygwin32\n> i486-pc-cygwin=cygwin32\n> i586-pc-cygwin=cygwin32\n> i686-pc-cygwin=cygwin32\n> \n> They enable the template autodetection for the Cygwin port.\n> \n> \t\t\tDan\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 12:18:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] report for Win32 port"
}
]
[
{
"msg_contents": "\nI'm pretty sure there were PostgreSQL releases before Postgres'95.\nThe 95 was kind of a joke on Linux Torvald's announcement of\na closed Linux95, which (of course) was a jab at Microsoft Win95.\n\nI seem to recall that Postgres95 was a blip in the numbering, but\nI'm not sure where it fit. Between 5.something and 6.1? Not sure.\n\nIt has been a while. \n\n-- cary\n",
"msg_date": "Mon, 31 May 1999 11:45:07 -0400 (EDT)",
"msg_from": "\"Cary O'Brien\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] History of PostgreSQL"
}
]
[
{
"msg_contents": "I'm constructing a new type \"ip4\" as a unified replacement to inet and\ncidr,\nto hopefully relieve some of the confusion involving those types.\nWould anyone be interested?\n\nMark\n",
"msg_date": "Mon, 31 May 1999 11:46:56 -0400",
"msg_from": "Mark Volpe <[email protected]>",
"msg_from_op": true,
"msg_subject": "New IP address datatype"
},
{
"msg_contents": "> I'm constructing a new type \"ip4\" as a unified replacement to inet and\n> cidr,\n> to hopefully relieve some of the confusion involving those types.\n> Would anyone be interested?\n> \n> Mark\n> \n> \n\nBut they are the same, except for output, right? We discussed the\nhaving a unified type, but could not figure out how to output things\nproperly. I recommend you see the huge discussion on the hackers list\nabout these types in the October/November 1998 timeframe.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 12:20:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New IP address datatype"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I recommend you see the huge discussion on the hackers list\n> about these types in the October/November 1998 timeframe.\n\nYup ... and note that the existing types were designed partly on the\nadvice of Paul Vixie, who knows a thing or three about IP addressing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 13:21:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New IP address datatype "
},
{
"msg_contents": "On Mon, 31 May 1999, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > I recommend you see the huge discussion on the hackers list\n> > about these types in the October/November 1998 timeframe.\n> \n> Yup ... and note that the existing types were designed partly on the\n> advice of Paul Vixie, who knows a thing or three about IP addressing.\n\nHave to agree here...what we have now was prompted, and, in large part,\ndesigned by Paul Vixie, and *that* was after some major discussions on the\nlists concerning how to implement.\n\nI think there would have to be some very strong arguments for changing it\nnow, as well as opening discussions with Paul on this...in alot of ways,\nits his arena...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 31 May 1999 17:04:40 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New IP address datatype "
},
{
"msg_contents": "Thus spake Mark Volpe\n> I'm constructing a new type \"ip4\" as a unified replacement to inet and\n> cidr,\n> to hopefully relieve some of the confusion involving those types.\n> Would anyone be interested?\n\nYikes! Please be very careful. We went through a lot of work to get\nit right. The fact that there are two types was a bit of a compromise\nto get what everyone wanted into the system. Note that the underlying\nroutines are exactly the same anyway. The difference is all in the\ninput and output and pretty minor at that but the differences are\nessential.\n\nIf you are talking about the recent discussions, we do have some issues\nto resolve but making one type won't clarify the situation. I think\nwe are pretty sure about what to do. Someone just needs to find time\nto do it. \n\nIf you found the dual types confusing, maybe the problem is in the\ndocumentation. I am assuming from your offer that you have spent some\ntime studying the type and understand the point of both so perhaps\nyou can attack the documentation instead.\n\nOh, and if ip4 means IPv4, that's a step backwards. The current types\nare designed to be easily extended to handle IPv6 in the same types.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 31 May 1999 19:51:06 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New IP address datatype"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > I'm constructing a new type \"ip4\" as a unified replacement to inet and\n> > cidr,\n> But they are the same, except for output, right? We discussed the\n\nAnd input. Some values that are valid for inet are not valid cidr.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 31 May 1999 19:52:19 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New IP address datatype"
},
{
"msg_contents": "Thus spake Tom Lane\n> Yup ... and note that the existing types were designed partly on the\n> advice of Paul Vixie, who knows a thing or three about IP addressing.\n\nSpeaking of which, I wonder what Paul would say about the primary key\ndiscussion. Maybe I'll drop him a note.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 31 May 1999 19:53:33 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New IP address datatype"
},
{
"msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> Speaking of which, I wonder what Paul would say about the primary key\n> discussion. Maybe I'll drop him a note.\n\nGood thought, if he's not reading the mailing list anymore (which seems\nlikely given the volume...).\n\nI still assert that indexes need to behave the same as the comparison\noperators --- but maybe the comparison operators ought to behave\ndifferently for INET and CIDR types? It seems reasonable that\nthe netmask should be ignored when comparing one, but not the other...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 20:14:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New IP address datatype "
}
] |
[
{
"msg_contents": "I've been looking at those discussions -- my idea is to simplify\nthe ip network types ( and operators ) a little:\n\nHosts are specified as '134.67.131.10' or '134.67.131.10/32' and\ndisplay 134.67.131.10.\n\nSubnets are specified as '134.67.131.0/24', '134.67.131/24', or\njust '134.67.131', but they would display '134.67.131.0/24'.\n\nThere would be no provision for storing a host/netmask in the\nsame structure; it seems confusing to me anyway since you could\nput the netmask in a seperate column.\n\n\nThanks,\nMark\n\n( Sorry, I meant to post to the list the first time )\nBruce Momjian wrote:\n> \n> But they are the same, except for output, right? We discussed the\n> having a unified type, but could not figure out how to output things\n> properly. I recommend you see the huge discussion on the hackers list\n> about these types in the October/November 1998 timeframe.\n>\n",
"msg_date": "Mon, 31 May 1999 14:08:44 -0400",
"msg_from": "Mark Volpe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New IP address datatype"
},
{
"msg_contents": "Thus spake Mark Volpe\n> I've been looking at those discussions -- my idea is to simplify\n> the ip network types ( and operators ) a little:\n> \n> Hosts are specified as '134.67.131.10' or '134.67.131.10/32' and\n> display 134.67.131.10.\n\ndarcy=> \\d x\nTable = x\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| i | inet | var |\n| c | cidr | var |\n+----------------------------------+----------------------------------+-------+\ndarcy=> insert into x values ('134.67.131.0/24', '134.67.131.0/24');\nINSERT 34272 1\ndarcy=> insert into x values ('134.67.131/24', '134.67.131/24'); \nINSERT 34273 1\ndarcy=> insert into x values ('134.67.131', '134.67.131'); \nERROR: could not parse \"134.67.131\"\ndarcy=> insert into x values ('134.67.131.0', '134.67.131');\nINSERT 34274 1\n\nNote how 134.67.131 is a valid cidr but not a valid inet. Now look\nhow they display.\n\ndarcy=> select * from x;\ni |c \n---------------+-------------\n134.67.131.0/24|134.67.131/24\n134.67.131.0/24|134.67.131/24\n134.67.131.0 |134.67.131/24\n(3 rows)\n\nAs inet types, all octets are displayed. In the last case, it assumes\na host and displays accordingly. Note that while cidr will accept the\nold classfull syntax, it displays using proper cidr format.\n\n> Subnets are specified as '134.67.131.0/24', '134.67.131/24', or\n> just '134.67.131', but they would display '134.67.131.0/24'.\n\nAs an inet type. As a cidr type they should display as above. You\nseem to be confusing two concepts.\n\n> There would be no provision for storing a host/netmask in the\n> same structure; it seems confusing to me anyway since you could\n> put the netmask in a seperate column.\n\nYou could and, if all you want to store is a netmask, you could store\nthe number of bits in an int. 
If, however, you want to track a network\n(cidr) or a host with all its network information (inet) then they\nshould be in one type. Hosts can be stored in inet simply by leaving\noff the bits part.\n\nSo, if you combine the types, will the new type act like a cidr or an inet?\nPersonally, I wouldn't kick if a third type (host) was created just to\nallow for a type that doesn't allow network information to be included.\nDifferent input (just doesn't allow the slash) and everything else like\nthe inet type.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 31 May 1999 20:08:37 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New IP address datatype"
},
{
"msg_contents": "> Thus spake Mark Volpe\n>> Hosts are specified as '134.67.131.10' or '134.67.131.10/32' and\n>> display 134.67.131.10.\n\nHmm. This suggests that the example given in the recent discussion\nabout primary keys is bogus: 198.68.123.0/24 is never equal to\n198.68.123.0/27, because they represent networks of different sizes.\nIf you were talking about host addresses, then the netmask would be\n/32 in both cases, and so the issue doesn't arise.\n\nI'm back to the opinion that netmask does matter in comparisons and in\nindexes ... but I'd sure like to hear what Vixie has to say about it.\n\nBTW, if we did want to make INET and CIDR have different behavior in\ncomparisons and indexes, that would mean having two sets of operators\nlisted in the system catalogs. We cannot add that as a post-6.5 patch\nbecause it would require an initdb, which is one of the things we don't\ndo between major releases. If it's wrong (I'm not convinced) we must\neither fix it this week or live with it till 6.6 ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 20:34:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New IP address datatype "
},
{
"msg_contents": "On Mon, 31 May 1999, Tom Lane wrote:\n\n> BTW, if we did want to make INET and CIDR have different behavior in\n> comparisons and indexes, that would mean having two sets of operators\n> listed in the system catalogs. We cannot add that as a post-6.5 patch\n> because it would require an initdb, which is one of the things we don't\n> do between major releases. If it's wrong (I'm not convinced) we must\n> either fix it this week or live with it till 6.6 ...\n\nLive with it until 6.6...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 31 May 1999 23:24:29 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New IP address datatype "
},
{
"msg_contents": "Thus spake Tom Lane\n> > Thus spake Mark Volpe\n> >> Hosts are specified as '134.67.131.10' or '134.67.131.10/32' and\n> >> display 134.67.131.10.\n> \n> Hmm. This suggests that the example given in the recent discussion\n> about primary keys is bogus: 198.68.123.0/24 is never equal to\n> 198.68.123.0/27, because they represent networks of different sizes.\n\nI don't think it's so clear cut. For INET, the two addresses refer\nto the same host but contradict each other in network details. The\nINET type is primarily a host type with optional network information\nadded. One might even argue that 198.68.123.1/24 and 198.68.123.2/27\nshould not be allowed to coexist but that's probably going too far.\n\nFor the CIDR type, they refer to two different networks but they overlap.\nThe argument is that as a primary key they partially conflict so they\nshouldn't be allowed to coexist.\n\n> If you were talking about host addresses, then the netmask would be\n> /32 in both cases, and so the issue doesn't arise.\n\nRight. For the INET type the netbits defaults to /32 so it can be used\nfor hosts transparently.\n\n> I'm back to the opinion that netmask does matter in comparisons and in\n> indexes ... but I'd sure like to hear what Vixie has to say about it.\n\nI have asked him.\n\n> BTW, if we did want to make INET and CIDR have different behavior in\n> comparisons and indexes, that would mean having two sets of operators\n> listed in the system catalogs. We cannot add that as a post-6.5 patch\n> because it would require an initdb, which is one of the things we don't\n> do between major releases. If it's wrong (I'm not convinced) we must\n> either fix it this week or live with it till 6.6 ...\n\nAt this point I doubt we want to start mucking with catalogues and new\noperators. Fixing it to be consistent is probably doable.\n\nAnd since I will never use either type as a primary key, I can live\nwith either decision. :-)\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 1 Jun 1999 08:10:58 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New IP address datatype"
},
{
"msg_contents": "> Thus spake Tom Lane\n> > > Thus spake Mark Volpe\n> > >> Hosts are specified as '134.67.131.10' or '134.67.131.10/32' and\n> > >> display 134.67.131.10.\n> > \n> > Hmm. This suggests that the example given in the recent discussion\n> > about primary keys is bogus: 198.68.123.0/24 is never equal to\n> > 198.68.123.0/27, because they represent networks of different sizes.\n> \n> I don't think it's so clear cut. For INET, the two addresses refer\n> to the same host but contradict each other in network details. The\n> INET type is primarily a host type with optional network information\n> added. One might even argue that 198.68.123.1/24 and 198.68.123.2/27\n> should not be allowed to coexist but that's probably going too far.\n> \n> For the CIDR type, they refer to two different networks but they overlap.\n> The argument is that as a primary key they partially conflict so they\n> shouldn't be allowed to coexist.\n> \n> > If you were talking about host addresses, then the netmask would be\n> > /32 in both cases, and so the issue doesn't arise.\n> \n> Right. For the INET type the netbits defaults to /32 so it can be used\n> for hosts transparently.\n> \n> > I'm back to the opinion that netmask does matter in comparisons and in\n> > indexes ... but I'd sure like to hear what Vixie has to say about it.\n> \n> I have asked him.\n> \n> > BTW, if we did want to make INET and CIDR have different behavior in\n> > comparisons and indexes, that would mean having two sets of operators\n> > listed in the system catalogs. We cannot add that as a post-6.5 patch\n> > because it would require an initdb, which is one of the things we don't\n> > do between major releases. If it's wrong (I'm not convinced) we must\n> > either fix it this week or live with it till 6.6 ...\n> \n> At this point I doubt we want to start mucking with catalogues and new\n> operators. 
Fixing it to be consistent is probably doable.\n> \n> And since I will never use either type as a primary key, I can live\n> with either decision. :-)\n\nOK, but let's make a decision.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 1 Jun 1999 10:38:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New IP address datatype"
},
{
"msg_contents": "\"D'Arcy J.M. Cain\" wrote:\n> \n> Thus spake Mark Volpe\n> > I've been looking at those discussions -- my idea is to simplify\n> > the ip network types ( and operators ) a little:\n> >\n> > Hosts are specified as '134.67.131.10' or '134.67.131.10/32' and\n> > display 134.67.131.10.\n> \n\nActually I was talking about the behavior of my \"unified\" type :)\n\nIf I have:\n\nCREATE TABLE x ( i ip4 );\nINSERT INTO x VALUES('10.20.30.40');\nINSERT INTO x VALUES('10.20.30');\nINSERT INTO x VALUES('10.20');\nINSERT INTO x VALUES('10.20.30/20');\n\nI would have:\n\nSELECT * FROM x;\ni \n-------------\n10.20.30.40 \n10.20.30.0/24\n10.20.0.0/16 \n10.20.16.0/20\n\nIn most applications ( e.g., IP and network registration )\nyou would require that there be no overlapping address space,\nso the above table would be illegal in a unique index. I thought\nabout creating two different operator sets, but that means if\nyou commit to one in a btree, using the other one always requires\na Seq Scan ( am I right here? ). So I used one and as a result,\nthe '=' operator checks if its two operands overlap ( I also\nhave operators for reading and coercing the masks ). Our group\nuses this sort of thing and it works pretty well.\nThanks for your comments.\n\nMark\n",
"msg_date": "Tue, 01 Jun 1999 13:13:36 -0400",
"msg_from": "Mark Volpe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New IP address datatype"
}
] |
[
{
"msg_contents": "\nBruce Momjian <[email protected]>\n> \n> > Bruce Momjian wrote:\n> > > \n> > > I don't know of any SQL databases that allow non-blocking lock requests.\n> > > \n> > \n> > Oracle OCI has oopt() and Informix Online has dirty read that do the trick for\n> > me.\n> > --------\n> \n> Please give me more information. How does dirty read fix the problem?\n> \n\nThis all sounds like the Oracle NOWAIT option.\n\nSession1.\n\n SQL> select value from sys_param where name = 'ASG_SYSTEM_DESC' for update;\n\n VALUE\n ------------------------------\n tpwmamda\n\n SQL> \n\nSession2.\n\n SQL> select value from sys_param where name = 'ASG_SYSTEM_DESC' for update nowait;\n ERROR:\n ORA-00054: resource busy and acquire with NOWAIT specified\n\n\n\n no rows selected\n\n SQL> \n\nNo idea how to implement it though!!\n\n",
"msg_date": "Mon, 31 May 1999 21:44:24 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] please?"
}
] |
[
{
"msg_contents": ">I tried the same thing, except I simply put a loop around the begin/end\n>transaction part of testlo.c so that it would create and access many\n>large objects in a single backend process. With today's sources I do\n>not see a 'ShmemAlloc: out of memory' error even after several thousand\n>iterations. (But I do not know if this test would have triggered one\n>before...)\n\nWas something changed in the LO? I will take it down again and try\nwith my data set again (~250M) and let yall know what happens. I have\nsolved my problem by taking the transaction out of the program, so I\nthink that the error is in there somewhere. Like I said, I will try\nit and post later with my results...\n\n- Brandon\n\n\n------------------------------------------------------\nSmith Computer Lab Administrator,\nCase Western Reserve University\n [email protected]\n 216 - 368 - 5066\n http://cwrulug.cwru.edu\n------------------------------------------------------\n\nPGP Public Key Fingerprint: 1477 2DCF 8A4F CA2C 8B1F 6DFE 3B7C FDFB\n\n\n",
"msg_date": "Mon, 31 May 1999 19:02:18 -0400",
"msg_from": "\"Brandon Palmer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Problems w/ LO"
}
] |
[
{
"msg_contents": "\nWhich list are the cvs changes posted to? I see that Tom Lane made\nchanges to just about all of the libpq++ source files but I have no\nidea what was done.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 31 May 1999 20:19:31 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "which list?"
},
{
"msg_contents": "> \n> Which list are the cvs changes posted to? I see that Tom Lane made\n> changes to just about all of the libpq++ source files but I have no\n> idea what was done.\n\nOh, it's a secret.\n\nTo the committers list, I think.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 20:28:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] which list?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Which list are the cvs changes posted to? I see that Tom Lane made\n>> changes to just about all of the libpq++ source files but I have no\n>> idea what was done.\n\n> Oh, it's a secret.\n> To the committers list, I think.\n\nRight. You can subscribe to either committers or committers-digest\n(I use the latter). Also, don't forget that you can use \"cvs log\"\nto examine the log file for any particular file you are concerned\nabout.\n\nFor example:\n\n$ cvs log pgtransdb.cc | more\n\nRCS file: /usr/local/cvsroot/pgsql/src/interfaces/libpq++/pgtransdb.cc,v\nWorking file: pgtransdb.cc\nhead: 1.3\nbranch:\nlocks: strict\naccess list:\nsymbolic names:\n REL6_4: 1.1.0.2\n release-6-3: 1.1\nkeyword substitution: kv\ntotal revisions: 3; selected revisions: 3\ndescription:\n----------------------------\nrevision 1.3\ndate: 1999/05/30 15:17:58; author: tgl; state: Exp; lines: +4 -4\nReplace static rcsid[] strings by IDENTIFICATION comments in\nfile headers, to conform to established Postgres coding style and avoid\nwarnings from gcc.\n----------------------------\n\nOr even more directly, use \"cvs diff\" to see the differences between\nany two revisions:\n\n$ cvs diff -c -r 1.4 -r 1.5 pglobject.cc\nIndex: pglobject.cc\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/interfaces/libpq++/pglobject.cc,v\nretrieving revision 1.4\nretrieving revision 1.5\ndiff -c -r1.4 -r1.5\n*** pglobject.cc\t1999/05/23 01:04:03\t1.4\n--- pglobject.cc\t1999/05/30 15:17:58\t1.5\n***************\n*** 9,14 ****\n--- 9,16 ----\n *\n * Copyright (c) 1994, Regents of the University of California\n *\n+ * IDENTIFICATION\n+ *\t $Header: /usr/local/cvsroot/pgsql/src/interfaces/libpq++/pglobject.cc,v 1.5 1999/05/30 15:17:58 tgl Exp $\n *\n *-------------------------------------------------------------------------\n */\n***************\n*** 18,25 ****\n }\n \n #include 
\"pglobject.h\"\n- \n- static char rcsid[] = \"$Id: pglobject.cc,v 1.4 1999/05/23 01:04:03 momjian Exp $\";\n \n // ****************************************************************\n //\n--- 20,25 ----\n\nSee the cvs manual for these and many more useful features...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 May 1999 20:45:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] which list? "
},
{
"msg_contents": "\nOn 01-Jun-99 Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n>>> Which list are the cvs changes posted to? I see that Tom Lane made\n>>> changes to just about all of the libpq++ source files but I have no\n>>> idea what was done.\n> \n>> Oh, it's a secret.\n>> To the committers list, I think.\n> \n> Right. You can subscribe to either committers or committers-digest\n> (I use the latter). Also, don't forget that you can use \"cvs log\"\n> to examine the log file for any particular file you are concerned\n> about.\n> \n> For example:\n> \n> $ cvs log pgtransdb.cc | more\n\nUsing cvsup that doesn't work very well. I'll have to configure cvs to\nlook at hub. From hub I apparently aren't configured/not allowed to see\nthat part of the tree.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 31 May 1999 20:57:26 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] which list?"
},
{
"msg_contents": "\nOn 01-Jun-99 Vince Vielhaber wrote:\n> \n> On 01-Jun-99 Tom Lane wrote:\n>> Bruce Momjian <[email protected]> writes:\n>>>> Which list are the cvs changes posted to? I see that Tom Lane made\n>>>> changes to just about all of the libpq++ source files but I have no\n>>>> idea what was done.\n>> \n>>> Oh, it's a secret.\n>>> To the committers list, I think.\n>> \n>> Right. You can subscribe to either committers or committers-digest\n>> (I use the latter). Also, don't forget that you can use \"cvs log\"\n>> to examine the log file for any particular file you are concerned\n>> about.\n>> \n>> For example:\n>> \n>> $ cvs log pgtransdb.cc | more\n> \n> Using cvsup that doesn't work very well. I'll have to configure cvs to\n> look at hub. From hub I apparently aren't configured/not allowed to see\n> that part of the tree.\n\nResponding to my own post (it's ok as long as I don't argue with myself :)\n\nWhile trying to catch up on a few older items, I got to a little thing\nthat Hal sent me a couple weeks ago and solves that dilemma. \n\nhttp://www.postgresql.org/cgi/cvswebtest.cgi\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 31 May 1999 21:22:57 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] which list?"
},
{
"msg_contents": "\nYou should have access to the complete pgsql source tree from hub...do you\nget an error if you try?\n\nOn Mon, 31 May 1999, Vince Vielhaber wrote:\n\n> \n> On 01-Jun-99 Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> >>> Which list are the cvs changes posted to? I see that Tom Lane made\n> >>> changes to just about all of the libpq++ source files but I have no\n> >>> idea what was done.\n> > \n> >> Oh, it's a secret.\n> >> To the committers list, I think.\n> > \n> > Right. You can subscribe to either committers or committers-digest\n> > (I use the latter). Also, don't forget that you can use \"cvs log\"\n> > to examine the log file for any particular file you are concerned\n> > about.\n> > \n> > For example:\n> > \n> > $ cvs log pgtransdb.cc | more\n> \n> Using cvsup that doesn't work very well. I'll have to configure cvs to\n> look at hub. From hub I apparently aren't configured/not allowed to see\n> that part of the tree.\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n> # include <std/disclaimers.h> TEAM-OS2\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 31 May 1999 23:25:59 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] which list?"
},
{
"msg_contents": "Vince Vielhaber <[email protected]> writes:\n>> For example:\n>> $ cvs log pgtransdb.cc | more\n\n> Using cvsup that doesn't work very well.\n\nHmm ... I've got to think you've got cvsup misconfigured somehow.\nIt works great for me with a plain cvs setup. cvs log and similar\noperations have to contact hub.org to work, but that's no big\nproblem for me. I believe the advantage of cvsup is that you have\nall the same info stored locally, which is cool if you don't mind\nexpending the disk space. So it *should* Just Work.\n\nIf there's some critical bit of configuration info that's missing\nfrom the new Postgres docs about CVS (doc/src/sgml/cvs.sgml),\nplease let Thomas know when you figure it out...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jun 1999 01:51:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] which list? "
},
{
"msg_contents": "> > Also, don't forget that you can use \"cvs log\"\n> > to examine the log file for any particular file\n> > you are concerned about.\n> > For example:\n> > $ cvs log pgtransdb.cc | more\n> Using cvsup that doesn't work very well.\n\nJust a reminder: imho it makes no sense to run cvsup in any mode other\nthan fetching the entire cvs repository. Then you can do any and all\ncvs reading operations locally. The new cvs docs show an example which\ndoes this, but the sample cvsup configuration file posted at\npostgresql.org still only shows the \"fetch current checkout tree\"\nexample which is far less useful.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 01 Jun 1999 16:22:54 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] which list?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Just a reminder: imho it makes no sense to run cvsup in any mode other\n> than fetching the entire cvs repository. Then you can do any and all\n> cvs reading operations locally. The new cvs docs show an example which\n> does this, but the sample cvsup configuration file posted at\n> postgresql.org still only shows the \"fetch current checkout tree\"\n> example which is far less useful.\n\nRight ... if you want to store only the current tree locally, you might\nas well forget cvsup and just use bare cvs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jun 1999 12:23:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] which list? "
},
{
"msg_contents": "On Tue, 1 Jun 1999, Thomas Lockhart wrote:\n\n> > > Also, don't forget that you can use \"cvs log\"\n> > > to examine the log file for any particular file\n> > > you are concerned about.\n> > > For example:\n> > > $ cvs log pgtransdb.cc | more\n> > Using cvsup that doesn't work very well.\n> \n> Just a reminder: imho it makes no sense to run cvsup in any mode other\n> than fetching the entire cvs repository. Then you can do any and all\n> cvs reading operations locally. The new cvs docs show an example which\n> does this, but the sample cvsup configuration file posted at\n> postgresql.org still only shows the \"fetch current checkout tree\"\n> example which is far less useful.\n\nYep, I read your FAQ this morning and made some changes to my cvsup \nfile to get the entire tree. Had I done that sooner I'd have saved \nmyself some extra work when I wiped out an sgml file I'd been working\non! Tonite I should be set up the rest of the way.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 1 Jun 1999 12:26:57 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] which list?"
}
] |
[
{
"msg_contents": "Is there any interest in having an IRC meeting on #postgresql at some\ntime, to discuss any current issues, or is the mailing list sufficient.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 31 May 1999 20:45:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "IRC meeting"
},
{
"msg_contents": "On Mon, 31 May 1999, Bruce Momjian wrote:\n\n> Is there any interest in having an IRC meeting on #postgresql at some\n> time, to discuss any current issues, or is the mailing list sufficient.\n\nMailing list, IMHO, is better...it means its easier for ppl to post more\ndetailed responses then time generally permits in IRC. IRC is good for\n'quickies' but that's about it :(\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 31 May 1999 23:27:04 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] IRC meeting"
}
] |
[
{
"msg_contents": "\n> I don't know of any SQL databases that allow non-blocking lock requests.\n> \nInformix has all kinds of non blocking locks:\n\treturn \"record locked\" at once\n\treturn \"record locked\" after a specified timeout\n\twait for the lock indefinitely\n\nTo supply this behavior it has the following statements:\n\tset lock mode to not wait;\t-- return immediately with error \n\t\t\t\t\t-- if record already locked\n\tset lock mode to wait 10; \t-- wait at max 10 seconds\n\tset lock mode to wait;\t\t-- wait indefinitely\n\nDirty read isolation has actually nothing to do with the wanted feature.\n\nAndreas\n",
"msg_date": "Tue, 1 Jun 1999 10:03:10 +0200 ",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] please? (non blocking lock)"
},
{
"msg_contents": "> \n> > I don't know of any SQL databases that allow non-blocking lock requests.\n> > \n> Informix has all kinds of non blocking locks:\n> \treturn \"record locked\" at once\n> \treturn \"record locked\" after a specified timeout\n> \twait for the lock indefinitely\n> \n> To supply this behavior it has the following statements:\n> \tset lock mode to not wait;\t-- return immediately with error \n> \t\t\t\t\t-- if record already locked\n> \tset lock mode to wait 10; \t-- wait at max 10 seconds\n> \tset lock mode to wait;\t\t-- wait indefinitely\n> \n> Dirty read isolation has actually nothing to do with the wanted feature.\n\nOh, that's nice. I never looked at those commands before.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 1 Jun 1999 10:32:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] please? (non blocking lock)"
}
] |
[
{
"msg_contents": "[forwarded to hackers list]\n\n>> Bingo! Your fix seems to solve the problem! Now 64 concurrent\n>> transactions ran 100 transactions each without any problem. Thanks.\n ~~~~~~~~~~~~users\n>> \n>> BTW, the script I'm using for the heavy load testing is written in\n>> Java(not written by me). Do you want to try it?\n>\n>I am doing some benchmarks and would really appreciate if you could\n>let me have your Java routines.\n\nIt's \"JDBCBench\" available from:\n\nhttp://www.worldserver.com/mm.mysql/performance/\n\nSeems it is originally made for MySQL, can be used with PostgreSQL and\nother commercial dbms including Oracle, however.\n\nI like it since it:\n\no automatically creates test data\no simulates heavy loads with the specified number of users and\n transactions per user from the command line\n\nI noticed minor bugs with JDBCBench 1.0. Also I added begin/end so\nthat the set of operations are performed in a transaction. Here are\ndiffs: (please make sure that the file is in Unix format. seems the\noriginal file is in DOS format.)\n\n*** JDBCBench.java.orig\tTue Jun 1 17:31:11 1999\n--- JDBCBench.java\tTue Jun 1 17:32:04 1999\n***************\n*** 18,24 ****\n public final static int TELLER = 0;\n public final static int BRANCH = 1;\n public final static int ACCOUNT = 2;\n! \n \n \n private Connection Conn = null;\n--- 18,24 ----\n public final static int TELLER = 0;\n public final static int BRANCH = 1;\n public final static int ACCOUNT = 2;\n! static String DBUrl = \"\";\n \n \n private Connection Conn = null;\n***************\n*** 40,46 ****\n public static void main(String[] Args)\n {\n String DriverName = \"\";\n! String DBUrl = \"\";\n boolean initialize_dataset = false;\n \n for (int i = 0; i < Args.length; i++) {\n--- 40,46 ----\n public static void main(String[] Args)\n {\n String DriverName = \"\";\n! 
\n boolean initialize_dataset = false;\n \n for (int i = 0; i < Args.length; i++) {\n***************\n*** 286,291 ****\n--- 286,299 ----\n \n public void run()\n {\n+ \t Connection myC = null;\n+ \t try {\n+ \t myC = DriverManager.getConnection(DBUrl);\n+ \t }\n+ \t catch (Exception E) {\n+ \t System.out.println(E.getMessage());\n+ \t E.printStackTrace();\n+ \t }\n while (ntrans-- > 0) {\n \n int account = JDBCBench.getRandomID(ACCOUNT);\n***************\n*** 293,299 ****\n int teller = JDBCBench.getRandomID(TELLER);\n int delta = JDBCBench.getRandomInt(0,1000);\n \n! doOne(account, branch, teller, delta);\n incrementTransactionCount();\n }\n reportDone();\n--- 301,307 ----\n int teller = JDBCBench.getRandomID(TELLER);\n int delta = JDBCBench.getRandomInt(0,1000);\n \n! doOne(myC, account, branch, teller, delta);\n incrementTransactionCount();\n }\n reportDone();\n***************\n*** 303,320 ****\n * doOne() - Executes a single TPC BM B transaction.\n */\n \n! int doOne(int bid, int tid, int aid, int delta)\n {\n try {\n! Statement Stmt = Conn.createStatement();\n \n! String Query = \"UPDATE accounts \";\n Query+= \"SET Abalance = Abalance + \" + delta + \" \";\n Query+= \"WHERE Aid = \" + aid;\n \n Stmt.executeUpdate(Query);\n Stmt.clearWarnings();\n! \n Query = \"SELECT Abalance \";\n Query+= \"FROM accounts \";\n Query+= \"WHERE Aid = \" + aid;\n--- 311,334 ----\n * doOne() - Executes a single TPC BM B transaction.\n */\n \n! int doOne(Connection myC, int aid, int bid, int tid, int delta)\n {\n+ \t int aBalance = 0;\n try {\n! \t String Query;\n! Statement Stmt = myC.createStatement();\n \n! Stmt.executeUpdate(\"begin\");\n! Stmt.clearWarnings();\n! \n! Query = \"UPDATE accounts \";\n Query+= \"SET Abalance = Abalance + \" + delta + \" \";\n Query+= \"WHERE Aid = \" + aid;\n \n Stmt.executeUpdate(Query);\n Stmt.clearWarnings();\n! \n! 
\n Query = \"SELECT Abalance \";\n Query+= \"FROM accounts \";\n Query+= \"WHERE Aid = \" + aid;\n***************\n*** 322,333 ****\n ResultSet RS = Stmt.executeQuery(Query);\n Stmt.clearWarnings();\n \n! int aBalance = 0;\n \n while (RS.next()) {\n aBalance = RS.getInt(1);\n }\n! \n Query = \"UPDATE tellers \";\n Query+= \"SET Tbalance = Tbalance + \" + delta + \" \";\n Query+= \"WHERE Tid = \" + tid;\n--- 336,348 ----\n ResultSet RS = Stmt.executeQuery(Query);\n Stmt.clearWarnings();\n \n! aBalance = 0;\n \n+ \n while (RS.next()) {\n aBalance = RS.getInt(1);\n }\n! \n Query = \"UPDATE tellers \";\n Query+= \"SET Tbalance = Tbalance + \" + delta + \" \";\n Query+= \"WHERE Tid = \" + tid;\n***************\n*** 350,355 ****\n--- 365,373 ----\n Query+= delta + \")\";\n \n Stmt.executeUpdate(Query);\n+ Stmt.clearWarnings();\n+ \n+ Stmt.executeUpdate(\"end\");\n Stmt.clearWarnings();\n \n return aBalance;\n",
"msg_date": "Tue, 01 Jun 1999 17:47:37 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
}
] |
[
{
"msg_contents": "typedef struct LTAG\n{\n Oid relId;\n Oid dbId;\n union\n {\n BlockNumber blkno;\n TransactionId xid;\n } objId;\n>\n> Added:\n> /*\n> * offnum should be part of objId.tupleId above, but would increase\n> * sizeof(LOCKTAG) and so moved here; currently used by userlocks only.\n> */ \n> OffsetNumber offnum;\n uint16 lockmethod; /* needed by userlocks */\n} LOCKTAG;\n\nUser locks are ready for 6.5 release...\n\nVadim\n",
"msg_date": "Tue, 01 Jun 1999 17:39:15 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "LOCKTAG updated -> gmake clean is required"
}
] |
[
{
"msg_contents": "I just did a CVS update on the current version of Postgres.\n\nI loaded in my database, and then I tried to dump the database. \nI got this error....\n\ngetTypes(): SELECT failed. Explanation from backend: 'ERROR: nodeRead:\nBad type 0\n'.\n\n--\nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Tue, 01 Jun 1999 21:22:48 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] LIMITS"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> getTypes(): SELECT failed. Explanation from backend: 'ERROR: nodeRead:\n> Bad type 0\n> '.\n\nDid you do a full recompile and initdb?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jun 1999 10:33:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LIMITS "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > getTypes(): SELECT failed. Explanation from backend: 'ERROR: nodeRead:\n> > Bad type 0\n> > '.\n> \n> Did you do a full recompile and initdb?\n\nI did a full compile, but I didn't do an initdb. I was upgrading from a\n6.5 beta of about a month ago to the latest CVS. Should it be necessary?\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Wed, 02 Jun 1999 00:42:50 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] LIMITS"
},
{
"msg_contents": "> Tom Lane wrote:\n> > \n> > Chris Bitmead <[email protected]> writes:\n> > > getTypes(): SELECT failed. Explanation from backend: 'ERROR: nodeRead:\n> > > Bad type 0\n> > > '.\n> > \n> > Did you do a full recompile and initdb?\n> \n> I did a full compile, but I didn't do an initdb. I was upgrading from a\n> 6.5 beta of about a month ago to the latest CVS. Should it be necessary?\n\nYou bet. Technically, we don't like to change the database during\nbeta's, but for 6.5beta, we have had to several times.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 1 Jun 1999 11:15:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LIMITS"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n>> Did you do a full recompile and initdb?\n\n> I did a full compile, but I didn't do an initdb. I was upgrading from a\n> 6.5 beta of about a month ago to the latest CVS. Should it be necessary?\n\nYes, I recall someone (Jan?) changed a couple of node types recently.\nThat affects the stored representation of rules among other things.\n\nIt's considered courteous to mention it in the hackers list when you\ndo something that requires a full recompile and/or initdb, but a quick\nnote is likely to be all the notice there is for such changes on the\ncurrent sources.\n\nIf you're not paying close attention to pghackers traffic, the safest\napproach is make distclean, rebuild, initdb every time you pull current\nsources. I do that routinely, even though I pull sources every few\ndays. Machine time is cheap; wasted debugging effort is not.\n\n\nMemo to hackers: it might be nice to have some sort of \"INITDB serial\nnumber\" value somewhere that could be bumped anytime someone makes an\ninitdb-forcing change; then the postmaster could refuse to start up\nif you are trying to run it against an incompatible database. As far\nas I know we do this at the granularity of major releases, but it'd be\neven more useful with a finer-grained serial number...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jun 1999 11:17:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LIMITS "
},
{
"msg_contents": "Chris Bitmead wrote:\n\n>\n> Tom Lane wrote:\n> >\n> > Chris Bitmead <[email protected]> writes:\n> > > getTypes(): SELECT failed. Explanation from backend: 'ERROR: nodeRead:\n> > > Bad type 0\n> > > '.\n> >\n> > Did you do a full recompile and initdb?\n>\n> I did a full compile, but I didn't do an initdb. I was upgrading from a\n> 6.5 beta of about a month ago to the latest CVS. Should it be necessary?\n\n I think we shouldn't call anything BETA until it is released.\n The current CVS tree has ALPHA state.\n\n Until the official release (when Marc rolls the tarball),\n development can cause all kind of changes, including schema\n changes to system catalogs, print strings for\n parsetrees/plans etc. Those changes require an initdb run\n because the db files aren't binary compatible any more or the\n corresponding node read functions aren't able to get back the\n right trees from the string representations found in the\n catalogs.\n\n Until Marc officially releases BETA, you should allways\n compile clean and run initdb after cvs updates. It's not the\n first time you've got trapped by this.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 1 Jun 1999 17:39:12 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LIMITS"
},
{
"msg_contents": "> Chris Bitmead wrote:\n> \n> >\n> > Tom Lane wrote:\n> > >\n> > > Chris Bitmead <[email protected]> writes:\n> > > > getTypes(): SELECT failed. Explanation from backend: 'ERROR: nodeRead:\n> > > > Bad type 0\n> > > > '.\n> > >\n> > > Did you do a full recompile and initdb?\n> >\n> > I did a full compile, but I didn't do an initdb. I was upgrading from a\n> > 6.5 beta of about a month ago to the latest CVS. Should it be necessary?\n> \n> I think we shouldn't call anything BETA until it is released.\n> The current CVS tree has ALPHA state.\n> \n> Until the official release (when Marc rolls the tarball),\n> development can cause all kind of changes, including schema\n> changes to system catalogs, print strings for\n> parsetrees/plans etc. Those changes require an initdb run\n> because the db files aren't binary compatible any more or the\n> corresponding node read functions aren't able to get back the\n> right trees from the string representations found in the\n> catalogs.\n> \n> Until Marc officially releases BETA, you should allways\n> compile clean and run initdb after cvs updates. It's not the\n> first time you've got trapped by this.\n\nBut we have been in beta officially since at least May 1. Why is this\nnot beta?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 1 Jun 1999 11:53:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LIMITS"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Until Marc officially releases BETA, you should allways\n>> compile clean and run initdb after cvs updates. It's not the\n>> first time you've got trapped by this.\n\n> But we have been in beta officially since at least May 1. Why is this\n> not beta?\n\nI think Jan's point is that what we call a beta is not as stable as\nwhat other people call a beta, and that calling the test releases\nalpha releases would convey a more accurate impression of their\nstability. I don't agree, but he's got a tenable position.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jun 1999 12:30:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LIMITS "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Until Marc officially releases BETA, you should allways\n> >> compile clean and run initdb after cvs updates. It's not the\n> >> first time you've got trapped by this.\n> \n> > But we have been in beta officially since at least May 1. Why is this\n> > not beta?\n> \n> I think Jan's point is that what we call a beta is not as stable as\n> what other people call a beta, and that calling the test releases\n> alpha releases would convey a more accurate impression of their\n> stability. I don't agree, but he's got a tenable position.\n\nI don't think our betas are less stable, but the dump/reload requirement\nis clearly not something for a beta release. We _usually_ get through a\nbeta release without such changes, but the MVCC stuff has required it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 1 Jun 1999 12:33:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LIMITS"
},
{
"msg_contents": ">\n> > Chris Bitmead wrote:\n> >\n> > >\n> > > Tom Lane wrote:\n> > > >\n> > > > Chris Bitmead <[email protected]> writes:\n> > > > > getTypes(): SELECT failed. Explanation from backend: 'ERROR: nodeRead:\n> > > > > Bad type 0\n> > > > > '.\n> > > >\n> > > > Did you do a full recompile and initdb?\n> > >\n> > > I did a full compile, but I didn't do an initdb. I was upgrading from a\n> > > 6.5 beta of about a month ago to the latest CVS. Should it be necessary?\n> >\n> > I think we shouldn't call anything BETA until it is released.\n> > The current CVS tree has ALPHA state.\n> >\n> > Until the official release (when Marc rolls the tarball),\n> > development can cause all kind of changes, including schema\n> > changes to system catalogs, print strings for\n> > parsetrees/plans etc. Those changes require an initdb run\n> > because the db files aren't binary compatible any more or the\n> > corresponding node read functions aren't able to get back the\n> > right trees from the string representations found in the\n> > catalogs.\n> >\n> > Until Marc officially releases BETA, you should allways\n> > compile clean and run initdb after cvs updates. It's not the\n> > first time you've got trapped by this.\n>\n> But we have been in beta officially since at least May 1. Why is this\n> not beta?\n\n If fact it is - right buddy - but some {loo|u}sers think\n \"BETA\" is something ready for use with the risk of having to\n install some bugfixes later. But using our BETA might require\n to dump/reload and that's not simply installing a fix.\n\n It all has to do with how we handle our BETA phase. I know, I\n was myself one of those who caused an initdb during this. It\n was required for one of our TODO's for v6.5.\n\n In the future, at the moment we want to declare current CVS\n beeing BETA, we should identify all those TODO items that\n potentially require an initdb and decide upon them if they\n have to go into the next release or if they cause a BETA\n delay. 
After we declared BETA, any TODO item that requires an\n initdb must by default go into the next release. Closed shop!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 1 Jun 1999 19:27:25 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LIMITS"
},
{
"msg_contents": "> If fact it is - right buddy - but some {loo|u}sers think\n> \"BETA\" is something ready for use with the risk of having to\n> install some bugfixes later. But using our BETA might require\n> to dump/reload and that's not simply installing a fix.\n> \n> It all has to do with how we handle our BETA phase. I know, I\n> was myself one of those who caused an initdb during this. It\n> was required for one of our TODO's for v6.5.\n> \n> In the future, at the moment we want to declare current CVS\n> beeing BETA, we should identify all those TODO items that\n> potentially require an initdb and decide upon them if they\n> have to go into the next release or if they cause a BETA\n> delay. After we declared BETA, any TODO item that requires an\n> initdb must by default go into the next release. Closed shop!\n\nWe usually discourage any initdb changes in beta, but we have had so\nmany required ones, it we didn't make any big deal about it in 6.5.\n\nI belive earlier releases have not required dump/reload in beta. I know\nit has happened only a few times in three years. 6.5 did it a lot,\npartially because we now understand so much more, and are mucking/fixing\nso much more detailed code.\n\nIt is clearly more than an alpha, were we expect serious breakage. We\nhave a list of clearly-defined bugs for the release. Maybe we call it\nbeta-light. Also, we require beta people to be on the hackers list, so\nthey can know of dump/reload, so it is sort of a subscriber beta.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 1 Jun 1999 13:37:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LIMITS"
}
] |
[
{
"msg_contents": "Zalman Stern <[email protected]> writes:\n> Here are the two diffs that up the \"name size\" from 32 characters to 256\n> characters. (Once I get bit, I try to fix things real good so I don't get\n> bit again :-))\n> -----\n> diff postgresql-6.4.2/src/include/postgres_ext.h postgres-build/src/include/postgres_ext.h\n> 34c34\n> < #define NAMEDATALEN 32\n> ---\n>> #define NAMEDATALEN 256\n> 37c37\n> < #define OIDNAMELEN 36\n> ---\n>> #define OIDNAMELEN 260\n> -----\n> diff postgresql-6.4.2/src/include/storage/buf_internals.h postgres-build/src/include/storage/buf_internals.h\n> 87c87\n> < #define PADDED_SBUFDESC_SIZE 128\n> ---\n>> #define PADDED_SBUFDESC_SIZE 1024\n> -----\n\nIt'd probably be worthwhile to move NAMEDATALEN to config.h and make the\nother two symbols be computed off NAMEDATALEN. Any objections if I\nsneak that change into 6.5, or is it too close to being a \"new feature\"?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jun 1999 10:07:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] Column name's length "
},
{
"msg_contents": "> Zalman Stern <[email protected]> writes:\n> > Here are the two diffs that up the \"name size\" from 32 characters to 256\n> > characters. (Once I get bit, I try to fix things real good so I don't get\n> > bit again :-))\n> > -----\n> > diff postgresql-6.4.2/src/include/postgres_ext.h postgres-build/src/include/postgres_ext.h\n> > 34c34\n> > < #define NAMEDATALEN 32\n> > ---\n> >> #define NAMEDATALEN 256\n> > 37c37\n> > < #define OIDNAMELEN 36\n> > ---\n> >> #define OIDNAMELEN 260\n> > -----\n> > diff postgresql-6.4.2/src/include/storage/buf_internals.h postgres-build/src/include/storage/buf_internals.h\n> > 87c87\n> > < #define PADDED_SBUFDESC_SIZE 128\n> > ---\n> >> #define PADDED_SBUFDESC_SIZE 1024\n> > -----\n> \n> It'd probably be worthwhile to move NAMEDATALEN to config.h and make the\n> other two symbols be computed off NAMEDATALEN. Any objections if I\n> sneak that change into 6.5, or is it too close to being a \"new feature\"?\n\nDon't they have to be visible to outside apps, so it is in postgres_ext.h?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 1 Jun 1999 10:45:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> It'd probably be worthwhile to move NAMEDATALEN to config.h and make the\n>> other two symbols be computed off NAMEDATALEN. Any objections if I\n>> sneak that change into 6.5, or is it too close to being a \"new feature\"?\n\n> Don't they have to be visible to outside apps, so it is in postgres_ext.h?\n\nGood point --- I was thinking that postgres_ext.h includes config.h,\nbut I see it ain't so. You're right, those definitions must stay where\nthey are.\n\nStill, I wonder why OIDNAMELEN isn't just defined as\n(NAMEDATALEN+sizeof(Oid)) rather than putting a comment to that effect.\nI will check the uses and see if that is a safe change or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jun 1999 11:25:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> It'd probably be worthwhile to move NAMEDATALEN to config.h and make the\n> >> other two symbols be computed off NAMEDATALEN. Any objections if I\n> >> sneak that change into 6.5, or is it too close to being a \"new feature\"?\n> \n> > Don't they have to be visible to outside apps, so it is in postgres_ext.h?\n> \n> Good point --- I was thinking that postgres_ext.h includes config.h,\n> but I see it ain't so. You're right, those definitions must stay where\n> they are.\n> \n> Still, I wonder why OIDNAMELEN isn't just defined as\n> (NAMEDATALEN+sizeof(Oid)) rather than putting a comment to that effect.\n> I will check the uses and see if that is a safe change or not.\n\nYes, probably should be changed. The old code did some fancy sed with\nit, so maybe it had to be a real number back then, or perhaps initdb\npulls it from the file. Not sure.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 1 Jun 1999 11:49:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length"
},
{
"msg_contents": "On Tue, 1 Jun 1999, Bruce Momjian wrote:\n\n> > Zalman Stern <[email protected]> writes:\n> > > Here are the two diffs that up the \"name size\" from 32 characters to 256\n> > > characters. (Once I get bit, I try to fix things real good so I don't get\n> > > bit again :-))\n> > > -----\n> > > diff postgresql-6.4.2/src/include/postgres_ext.h postgres-build/src/include/postgres_ext.h\n> > > 34c34\n> > > < #define NAMEDATALEN 32\n> > > ---\n> > >> #define NAMEDATALEN 256\n> > > 37c37\n> > > < #define OIDNAMELEN 36\n> > > ---\n> > >> #define OIDNAMELEN 260\n> > > -----\n> > > diff postgresql-6.4.2/src/include/storage/buf_internals.h postgres-build/src/include/storage/buf_internals.h\n> > > 87c87\n> > > < #define PADDED_SBUFDESC_SIZE 128\n> > > ---\n> > >> #define PADDED_SBUFDESC_SIZE 1024\n> > > -----\n> > \n> > It'd probably be worthwhile to move NAMEDATALEN to config.h and make the\n> > other two symbols be computed off NAMEDATALEN. Any objections if I\n> > sneak that change into 6.5, or is it too close to being a \"new feature\"?\n> \n> Don't they have to be visible to outside apps, so it is in postgres_ext.h?\n\nWe would need to have some way of getting at it from the client - the\nDatabaseMetaData method getColumnNameLength() would need to know about\nthis, and we can't refer to the C header files.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Tue, 1 Jun 1999 21:06:06 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length"
},
{
"msg_contents": "We've seen this table-name-plus-column-name-too-long problem before,\nand I'm sure we're going to keep hearing about it until we fix it\nsomehow. Messing with NAMEDATALEN is probably not a very useful\nanswer for the average user, given the compatibility problems it\ncreates.\n\nHow about something like this: if the code finds that the names are\ntoo long when forming an implicit index name, it truncates the names\nto fit, and you are OK as long as the truncated name is unique.\nFor example\n\n\tcreate table averylongtablename (averylongfieldname serial);\n\nwould truncate the input names to produce something like\n\n\taverylongtable_averylongfie_key\n\taverylongtable_averylongfie_seq\n\nand you'd only get a failure if those indexes/sequences already existed.\n(Truncating both names as shown above, not just the field name,\nshould reduce the probability of collisions.)\n\nYou could even imagine trying a few different possibilities in order\nto find an unused name, but that worries me. I'd rather that it were\ncompletely predictable what name would be used for a given key, and if\nit depends on what already exists then it wouldn't be so predictable.\nBut there's nothing unpredictable about truncation to fit a known\nlength.\n\nThis is obviously not a 100% solution, since there's a risk of name\ncollisions (averylongfieldname1 and averylongfieldname2) but it's\nprobably a 95% solution, and it wouldn't take much work or risk.\n\nComments? Objections? I think I could argue that this is a bug fix\nand deserves to be slipped into 6.5 ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jun 1999 17:52:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length "
},
{
"msg_contents": "> How about something like this: if the code finds that the names are\n> too long when forming an implicit index name, it truncates the names\n> to fit, and you are OK as long as the truncated name is unique.\n> For example\n> \n> \tcreate table averylongtablename (averylongfieldname serial);\n> \n> would truncate the input names to produce something like\n> \n> \taverylongtable_averylongfie_key\n> \taverylongtable_averylongfie_seq\n> \n> and you'd only get a failure if those indexes/sequences already existed.\n> (Truncating both names as shown above, not just the field name,\n> should reduce the probability of collisions.)\n\nThis only partially solves the problem and can introduce bugs into code\nwhich is only reading from a database. When someone is setting up the\ndatabase to work on the system, they'll in theory get a failure so they\nknow it won't work. This really isn't true for our software though because\nwe have functions which dynamically query a table to see what columns it\nhas. In theory two queries for different longnames can resolve to the same\ncolumn name.\n\nIt is also a backwards compatibility hassle if you ever want to increase\nthe number of significant characters in the name. This is because the\nexisting database only knows the first 32 characters and *must* ignore\nanything after that in lookups. You would have to keep track of which names\nare \"old style\" and which are new. Why set yourself up like that?\n\n-Z-\n",
"msg_date": "Tue, 1 Jun 1999 16:34:59 -0700 (PDT)",
"msg_from": "Zalman Stern <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length"
},
{
"msg_contents": "> How about something like this: if the code finds that the names are\n> too long when forming an implicit index name, it truncates the names\n> to fit, and you are OK as long as the truncated name is unique.\n> Comments? Objections? I think I could argue that this is a bug fix\n> and deserves to be slipped into 6.5 ;-)\n\nI understand some folks think this is a problem, but have been\nreluctant to include a \"randomizer\" in the created index name since it\nwould make the index name less clearly predictable. May as well use\nsomething like \"idx_<procid>_<timestamp>\" or somesuch...\n\nNo real objection though, other than aesthetics. And those only count\nfor so much...\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 02 Jun 1999 04:25:56 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length"
},
{
"msg_contents": "> This is obviously not a 100% solution, since there's a risk of name\n> collisions (averylongfieldname1 and averylongfieldname2) but it's\n> probably a 95% solution, and it wouldn't take much work or risk.\n> \n> Comments? Objections? I think I could argue that this is a bug fix\n> and deserves to be slipped into 6.5 ;-)\n\nTrying to slip it in as a bug fix. Sounds like me, Tom. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jun 1999 00:33:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length"
},
{
"msg_contents": "Zalman Stern <[email protected]> writes:\n>> How about something like this: if the code finds that the names are\n>> too long when forming an implicit index name, it truncates the names\n>> to fit, and you are OK as long as the truncated name is unique.\n\n> This only partially solves the problem and can introduce bugs into code\n> which is only reading from a database. When someone is setting up the\n> database to work on the system, they'll in theory get a failure so they\n> know it won't work. This really isn't true for our software though because\n> we have functions which dynamically query a table to see what columns it\n> has. In theory two queries for different longnames can resolve to the same\n> column name.\n\nUm, no, I don't think this has anything to do with whether you can\ndistinguish the names of different columns in a table.\n\nWhat we are talking about is the names generated for indexes and\nsequences that are needed to implement PRIMARY KEY and SERIAL column\nattributes. Ideally these names are completely invisible to an SQL\napplication --- there's certainly no direct need for the app to know\nabout them. We could eliminate the whole issue if we generated names\nalong the lines of \"pg_pkey_idx_48812091\". But when you are looking at\nthe system catalogs it is useful to be able to tell what's what by eye.\nSo we compromise by generating names that include the table and column\nname for which we're creating an implicit index or sequence.\n\nThe problem is that this implementation-detail-that-should-be-invisible\n*is* visible to an SQL application, because it restricts the SQL app's\nchoice of table and column names. We need to avoid that restriction,\nor at least reduce it as much as we can. I'm willing to sacrifice\na little bit of SQL naming freedom to preserve readability of the names\ngenerated behind the scenes, but putting a hard limit on name length\nis too much sacrifice. (This is more a question of designer's taste\nthan anything else --- you're certainly free to argue for a different\ntradeoff point. But it is a tradeoff; there's no perfect solution.)\n\n> It is also a backwards compatibility hassle if you ever want to increase\n> the number of significant characters in the name. This is because the\n> existing database only knows the first 32 characters and *must* ignore\n> anything after that in lookups. You would have to keep track of which names\n> are \"old style\" and which are new. Why set yourself up like that?\n\nNo, because the SQL app should never need to know these names at all.\nIf they change, it won't affect app code. Increasing NAMEDATALEN\ncould not cause two table+field names to conflict where they did not\nconflict before, so I see no risk there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jun 1999 01:32:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length "
},
{
"msg_contents": "I misunderstood the context quite a bit. I would consider gluing the entire\nfull length name together and using 8 bytes or so for a strong hash of the\nfullname. If there is another lookup path to get to the correct index name,\nthen one can just increment the hash until the name is unique. Or whatever.\n\n-Z-\n",
"msg_date": "Tue, 1 Jun 1999 23:17:54 -0700 (PDT)",
"msg_from": "Zalman Stern <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length"
},
{
"msg_contents": ">\n> > How about something like this: if the code finds that the names are\n> > too long when forming an implicit index name, it truncates the names\n> > to fit, and you are OK as long as the truncated name is unique.\n> > Comments? Objections? I think I could argue that this is a bug fix\n> > and deserves to be slipped into 6.5 ;-)\n>\n> I understand some folks think this is a problem, but have been\n> reluctant to include a \"randomizer\" in the created index name since it\n> would make the index name less clearly predictable. May as well use\n> something like \"idx_<procid>_<timestamp>\" or somesuch...\n>\n> No real objection though, other than aesthetics. And those only count\n> for so much...\n\n I've been wondering for some time why at all to build the\n index and sequence names from those table/fieldnames. Only to\n make them guessable?\n\n What about building them from the tables OID plus the column\n numbers. That way, auto created sequences could also be\n automatically removed on a DROP TABLE because the system can\n \"guess\" them.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 2 Jun 1999 10:51:50 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> >\n> > I understand some folks think this is a problem, but have been\n> > reluctant to include a \"randomizer\" in the created index name since it\n> > would make the index name less clearly predictable. May as well use\n> > something like \"idx_<procid>_<timestamp>\" or somesuch...\n> >\n> > No real objection though, other than aesthetics. And those only count\n> > for so much...\n> \n> I've been wondering for some time why at all to build the\n\nAnd me -:)\n\n> index and sequence names from those table/fieldnames. Only to\n> make them guessable?\n> \n> What about building them from the tables OID plus the column\n> numbers. That way, auto created sequences could also be\n> automatically removed on a DROP TABLE because the system can\n> \"guess\" them.\n\nActually, we should use names not allowed in CREATE statements!\nSo I would use \"pg_\" prefix...\n\nVadim\n",
"msg_date": "Wed, 02 Jun 1999 17:24:07 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length"
},
{
"msg_contents": ">\n> Jan Wieck wrote:\n> >\n> > >\n> > > I understand some folks think this is a problem, but have been\n> > > reluctant to include a \"randomizer\" in the created index name since it\n> > > would make the index name less clearly predictable. May as well use\n> > > something like \"idx_<procid>_<timestamp>\" or somesuch...\n> > >\n> > > No real objection though, other than aesthetics. And those only count\n> > > for so much...\n> >\n> > I've been wondering for some time why at all to build the\n>\n> And me -:)\n>\n> > index and sequence names from those table/fieldnames. Only to\n> > make them guessable?\n> >\n> > What about building them from the tables OID plus the column\n> > numbers. That way, auto created sequences could also be\n> > automatically removed on a DROP TABLE because the system can\n> > \"guess\" them.\n>\n> Actually, we should use names not allowed in CREATE statements!\n> So I would use \"pg_\" prefix...\n\n This would implicitly deny the user from dropping the created\n index for a unique constraint :-) Same for the sequences -\n what's good because they are used in the default clauses for\n the serial field and dropping the sequence would corrupt the\n table though.\n\n I like it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 2 Jun 1999 12:05:48 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n>> Actually, we should use names not allowed in CREATE statements!\n>> So I would use \"pg_\" prefix...\n\n> This would implicitly deny the user from dropping the created\n> index for a unique constraint :-) Same for the sequences -\n> what's good because they are used in the default clauses for\n> the serial field and dropping the sequence would corrupt the\n> table though.\n\nWell, it's only good if the system will get rid of the objects when\nthe user drops the owning table. This is true for indexes but AFAIK\nit is not yet true for sequences. So if we go with pg_ prefix now,\nthere will be *no* way short of superuser privilege to get rid of the\nsequence object for a deleted table that had a serial field.\n\nAlso, this will break pg_dump, which will have no good way to restore\nthe state of a serial sequence object. (CREATE SEQUENCE pg_xxx will\nfail, no?)\n\n> I like it.\n\nPerhaps eventually we should wind up using names like \"pg_pkey_8381292\"\nbut I think this ought to wait until the system retains an explicit\nrepresentation of the relationship between these indexes/sequences and\nthe owning table, and until we think through the consequences for\npg_dump. For now we had better stick to unprivileged names.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jun 1999 09:16:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length "
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> What about building them from the tables OID plus the column\n> numbers.\n\nThe parser doesn't know what OID will be assigned to the table at the\ntime it builds the names for the derived objects. I suppose we could\npostpone the creation of these names until after the table OID is known,\nbut that looks like a rather large and risky change to be making at this\nstage of the release cycle...\n\nAt this point I like Zalman's idea, which if I understood it properly\nwent like this:\n\n1. If table and column name are short enough, use \"table_column_key\"\n etc (so, no change in the cases that the system accepts now).\n\n2. Otherwise, truncate table and/or column name to fit, leaving room for\n a few extra characters that are made from a hash of the removed\n characters. The result would look something like \"tab_col_5927_key\".\n\nThis still isn't a 100% solution, but it's probably a 99.5% solution\nwhere the simple truncation idea would be maybe 95%. Not sure that\nthe additional coverage is worth making the names harder to predict\nfor a person, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jun 1999 09:41:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length "
},
{
"msg_contents": "Tom Lane wrote:\n\n> > I like it.\n>\n> Perhaps eventually we should wind up using names like \"pg_pkey_8381292\"\n> but I think this ought to wait until the system retains an explicit\n> representation of the relationship between these indexes/sequences and\n> the owning table, and until we think through the consequences for\n> pg_dump. For now we had better stick to unprivileged names.\n\n Of course! I didn't meant to do anything on it for v6.5.\n Implementing automatic sequence deletion if they got created\n due to serial fields is definitely feature. And I agree that\n all the odds and ends have to get discussed down first.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 2 Jun 1999 16:34:05 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length"
},
{
"msg_contents": "> >\n> > > How about something like this: if the code finds that the names are\n> > > too long when forming an implicit index name, it truncates the names\n> > > to fit, and you are OK as long as the truncated name is unique.\n> > > Comments? Objections? I think I could argue that this is a bug fix\n> > > and deserves to be slipped into 6.5 ;-)\n> >\n> > I understand some folks think this is a problem, but have been\n> > reluctant to include a \"randomizer\" in the created index name since it\n> > would make the index name less clearly predictable. May as well use\n> > something like \"idx_<procid>_<timestamp>\" or somesuch...\n> >\n> > No real objection though, other than aesthetics. And those only count\n> > for so much...\n> \n> I've been wondering for some time why at all to build the\n> index and sequence names from those table/fieldnames. Only to\n> make them guessable?\n> \n> What about building them from the tables OID plus the column\n> numbers. That way, auto created sequences could also be\n> automatically removed on a DROP TABLE because the system can\n> \"guess\" them.\n\nAnother idea would be to truncate table and column names equally to fit\nin NAMEDATALEN, then if that is not unique, start replacing the last\nletters of the string with number until it is unique:\n\n\ttabnamecolname\n\ttabnamecolnam1\n\ttabnamecolnam2\n\ttabnamecolna32\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jun 1999 11:19:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length"
},
{
"msg_contents": "Just a suggestion: use an printably-encoded version of\nmd5 or sha, which are cryptographic hash algorithms.\n\nIt will make the name completely predictable:\nif(too_long(name)) {\n\tname = md5(name);\n}\n\nIt will be *very* unlikely that there are any collisions.\n\nOf course, a person won't say \"gee, party_address_relation_code_types_seq is\ntoo\nlong, I guess that will turn out to be d4420a3105e98e3e2e12c5c73019db59\".\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Thomas Lockhart\n> Sent: Wednesday, June 02, 1999 12:26 AM\n> To: Tom Lane\n> Cc: [email protected]; Zalman Stern\n> Subject: Re: [HACKERS] Re: [SQL] Column name's length\n>\n>\n> > How about something like this: if the code finds that the names are\n> > too long when forming an implicit index name, it truncates the names\n> > to fit, and you are OK as long as the truncated name is unique.\n> > Comments? Objections? I think I could argue that this is a bug fix\n> > and deserves to be slipped into 6.5 ;-)\n>\n> I understand some folks think this is a problem, but have been\n> reluctant to include a \"randomizer\" in the created index name since it\n> would make the index name less clearly predictable. May as well use\n> something like \"idx_<procid>_<timestamp>\" or somesuch...\n>\n> No real objection though, other than aesthetics. And those only count\n> for so much...\n>\n> - Tom\n>\n> --\n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n>\n>\n\n",
"msg_date": "Wed, 2 Jun 1999 15:23:38 -0400",
"msg_from": "\"Nat Howard\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: [SQL] Column name's length"
},
{
"msg_contents": "At 09:16 2/06/99 -0400, you wrote:\n>\n>Well, it's only good if the system will get rid of the objects when\n>the user drops the owning table. This is true for indexes but AFAIK\n>it is not yet true for sequences. So if we go with pg_ prefix now,\n>there will be *no* way short of superuser privilege to get rid of the\n>sequence object for a deleted table that had a serial field.\n>\n>Also, this will break pg_dump, which will have no good way to restore\n>the state of a serial sequence object. (CREATE SEQUENCE pg_xxx will\n>fail, no?)\n\nI know I'm probably out of my depth here, but couldn't pg_dump ignore everything with a pg_* prefix? It can (safely?) assume any 'system' structures will be created as a result of some other user-based definition it is dumping? \n\n[If you beat me about the head, I'll shut up]\n\nPhilip Warner.\n\nP.S. I also like the idea of creating the 'system' structures with readily and reliably identifiable names, since it potentially gives the option of the user choosing to 'hide' them. As a user with about 20000 blobs to load, the output of a \\d is pretty cumbersome.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 03 Jun 1999 13:00:57 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n>> Also, this will break pg_dump, which will have no good way to restore\n>> the state of a serial sequence object. (CREATE SEQUENCE pg_xxx will\n>> fail, no?)\n\n> I know I'm probably out of my depth here, but couldn't pg_dump ignore\n> everything with a pg_* prefix?\n\nIt does, for the most part. The trouble is that if we rename SERIAL\nsequences to pg_xxx, and pg_dump then ignores them, then dump and\nreload will fail to restore the next-serial-number state of a SERIAL\ncolumn. (Actually, given no other code changes, the serial column\nwould fail entirely because its underlying sequence wouldn't be\nrecreated at all. I was pointing out that it's not even *possible*\nfor pg_dump to restore the sequence's state if the sequence is given\na protected name.)\n\n> As a user with about 20000 blobs to load, the output of a \\d is pretty\n> cumbersome.\n\nHmm, I suppose \\d ought to ignore xinv relations ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jun 1999 23:36:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length "
},
{
"msg_contents": ">> Still, I wonder why OIDNAMELEN isn't just defined as\n>> (NAMEDATALEN+sizeof(Oid)) rather than putting a comment to that effect.\n>> I will check the uses and see if that is a safe change or not.\n\n> Yes, probably should be changed. The old code did some fancy sed with\n> it, so maybe it had to be a real number back then, or perhaps initdb\n> pulls it from the file. Not sure.\n\nThere was indeed a script pulling it from the file ... but it turns out\nthe value wasn't actually being *used* anywhere! So I just removed\nOIDNAMELEN entirely.\n\nPeter Mount pointed out that the Java interface code has 32 hardwired as\na constant for name length, and there may be similar problems in other\nnon-C interfaces that can't conveniently use the NAMEDATALEN constant\nfrom postgres_ext.h. Another problem is that some of psql's formats for\nsystem table display have hardwired column widths. So there is still\nwork to do if you want to alter NAMEDATALEN.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jun 1999 17:46:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [SQL] Column name's length "
}
] |
[
{
"msg_contents": "Does postgres support the use of variables as in the example below?\n\n\nupdate table1 set column1=@variable\nwhere column2='text4column2'\nand @variable=(select column3 from table2 where column4='text4column4');\n\nIf not, how do I execute a statement like this in postgres?\n\nPlease respond as soon as possible as I am working on a time sensitive\nproject that depends on this.\nThank you and have a great day",
"msg_date": "Tue, 01 Jun 1999 10:18:06 -0500",
"msg_from": "Keala Jacobs <[email protected]>",
"msg_from_op": true,
"msg_subject": "using variables with postgres"
}
] |
[
{
"msg_contents": "Does postgres support the use of variables as in the example below?\n\nupdate table1 set column1=@variable\nwhere column2='text4column2'\nand @variable=(select column3 from table2 where column4='text4column4');\n\nIf not, how do I execute a statement like this in postgres?",
"msg_date": "Tue, 01 Jun 1999 15:14:05 -0500",
"msg_from": "Keala Jacobs <[email protected]>",
"msg_from_op": true,
"msg_subject": "variables in psql"
},
{
"msg_contents": "Keala Jacobs <[email protected]> writes:\n> Does postgres support the use of variables as in the example below?\n\n> update table1 set column1=@variable\n> where column2='text4column2'\n> and @variable=(select column3 from table2 where column4='text4column4');\n\n> If not, how do I execute a statement like this in postgres?\n\nThat isn't a particularly compelling example, since it looks like what\nyou mean is the same as\n\nupdate table1 set column1 = table2.column3\nwhere table1.column2 = 'text4column2' and table2.column4 = 'text4column4';\n\nI have seen examples where it'd be nice to have an SQL variable that\ncould hold a value from one statement to the next; currently you have\nto do that with a temporary table, which seems rather cumbersome...\n\nBTW, I think this'd be more appropriate in pgsql-sql.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jun 1999 17:26:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] variables in psql "
}
] |
[
{
"msg_contents": "I heard back from Paul Vixie and he says that it should be possible to\nstore two values in a unique index if they differ only in netbits. I\nam sending a patch in.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 1 Jun 1999 21:30:16 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "INET and CIDR comparisons"
},
{
"msg_contents": "At 09:30 PM 6/1/99 -0400, D'Arcy\" \"J.M.\" Cain wrote:\n>I heard back from Paul Vixie and he says that it should be possible to\n>store two values in a unique index if they differ only in netbits. I\n>am sending a patch in.\n\nI've stayed out of this discussion, but Paul makes a lot of\nsense. They're NOT the same networks...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Tue, 01 Jun 1999 19:40:36 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INET and CIDR comparisons"
},
{
"msg_contents": "On Tue, 1 Jun 1999, D'Arcy J.M. Cain wrote:\n\n> I heard back from Paul Vixie and he says that it should be possible to\n> store two values in a unique index if they differ only in netbits. \n\nThis is definitely true. E.g., we have an in-house database which records\ninternet address delegations wherein a.b.c.0/22 can be and often is the\nparent of a.b.c.0/23. They should definitely be recignized as unique\nfrom each other.\n\n> I am sending a patch in.\n\nGroovy. I'll be pestering the present developer of that database to\nuniquify the index. 8^)\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n\"A pint of sweat will save a gallon of blood.\" -- George S. Patton\n\n",
"msg_date": "Tue, 1 Jun 1999 22:56:59 -0400 (EDT)",
"msg_from": "Todd Graham Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INET and CIDR comparisons"
},
{
"msg_contents": "> I heard back from Paul Vixie and he says that it should be possible to\n> store two values in a unique index if they differ only in netbits. I\n> am sending a patch in.\n\nThanks. One more item I can remove from Open Items list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 1 Jun 1999 23:34:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INET and CIDR comparisons"
},
{
"msg_contents": "Thus spake Don Baccus\n> At 09:30 PM 6/1/99 -0400, D'Arcy\" \"J.M.\" Cain wrote:\n> >I heard back from Paul Vixie and he says that it should be possible to\n> >store two values in a unique index if they differ only in netbits. I\n> >am sending a patch in.\n> \n> I've stayed out of this discussion, but Paul makes a lot of\n> sense. They're NOT the same networks...\n\nAgreed. My only point was that using the fields was probably a bad\nidea anyway and, if you did, allowing both then made even less sense\nbut that's more of design issue.\n\nIn any case, if someone wants the protection, they can always add a unique\nindex on host(field).\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 2 Jun 1999 18:18:33 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] INET and CIDR comparisons"
}
] |
[
{
"msg_contents": "Hi,\n\nLooking at a previous bug report I noticed a strange behaviour\nin rule creation and display.\n\n\npostgres=> CREATE RULE rule1 AS ON UPDATE TO test1 DO INSERT INTO test2 SELECT * FROM\npostgres-> test1 WHERE oid=current.oid;\nERROR: current: Table does not exist.\n\nAbove we do not recognise \"current\" as a special case.\n\nIf I substitute \"old\" for \"current\" the definition is accepted.\n\npostgres=> CREATE RULE rule1 AS ON UPDATE TO test1 DO INSERT INTO test2 SELECT * FROM\npostgres-> test1 WHERE oid=old.oid;\nCREATE\n\nThings get spooky when pg_rules shows the keyword \"current\" where I said \"old\".\n\npostgres=> select * from pg_rules where rulename like '%rule1%';\ntablename|rulename|definition \n---------+--------+------------------------------------------------------------------------------------------------------------------\n-----------------------------------------\ntest1 |rule1 |CREATE RULE \"rule1\" AS ON UPDATE TO \"test1\" DO INSERT INTO \"test2\" (\"field1\", \"field2\") SELECT \"field1\", \"field2\" \nFROM \"test1\" WHERE \"oid\" = current.\"oid\";\n(1 row)\n\nIt could be that just the parser and rule decoder are out of step?\n\nI'm not sure which is correct now \"old\" or \"current\", anyone care to comment?\n\nKeith.\n\n",
"msg_date": "Wed, 2 Jun 1999 10:33:00 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Rules puzzle with \"current\" keyword."
},
{
"msg_contents": ">\n> Hi,\n>\n> Looking at a previous bug report I noticed a strange behaviour\n> in rule creation and display.\n>\n>\n> postgres=> CREATE RULE rule1 AS ON UPDATE TO test1 DO INSERT INTO test2 SELECT * FROM\n> postgres-> test1 WHERE oid=current.oid;\n> ERROR: current: Table does not exist.\n>\n> Above we do not recognise \"current\" as a special case.\n>\n> If I substitute \"old\" for \"current\" the definition is accepted.\n>\n> postgres=> CREATE RULE rule1 AS ON UPDATE TO test1 DO INSERT INTO test2 SELECT * FROM\n> postgres-> test1 WHERE oid=old.oid;\n> CREATE\n>\n> Things get spooky when pg_rules shows the keyword \"current\" where I said \"old\".\n>\n> postgres=> select * from pg_rules where rulename like '%rule1%';\n> tablename|rulename|definition\n> ---------+--------+------------------------------------------------------------------------------------------------------------------\n> -----------------------------------------\n> test1 |rule1 |CREATE RULE \"rule1\" AS ON UPDATE TO \"test1\" DO INSERT INTO \"test2\" (\"field1\", \"field2\") SELECT \"field1\", \"field2\"\n> FROM \"test1\" WHERE \"oid\" = current.\"oid\";\n> (1 row)\n>\n> It could be that just the parser and rule decoder are out of step?\n>\n> I'm not sure which is correct now \"old\" or \"current\", anyone care to comment?\n\n Sure - I'm the one who added OLD to v6.4 and removed CURRENT\n from v6.5. I think it was announced in the release notes for\n v6.4 that CURRENT will disappear in v6.5.\n\n Seems I missed that change myself in the utilities that make\n up pg_views and pg_rules.\n\n Anyway - OLD is the correct keyword in your case. I'll take\n a look at it - thanks.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 2 Jun 1999 12:11:27 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Rules puzzle with \"current\" keyword."
},
{
"msg_contents": "> > It could be that just the parser and rule decoder are out of step?\n> >\n> > I'm not sure which is correct now \"old\" or \"current\", anyone care to comment?\n> \n> Anyway - OLD is the correct keyword in your case. I'll take\n> a look at it - thanks.\n\n Fixed\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n",
"msg_date": "Wed, 2 Jun 1999 13:49:11 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Rules puzzle with \"current\" keyword."
}
] |
[
{
"msg_contents": "Hello there,\n\nTrying to enable syslog support with 6.4.2 on AIX4.[12] (gcc 2.8.1), I\nhave added:\n* in Makefile.custom:\n\tCFLAGS+=-DUSE_SYSLOG -D_XOPEN_EXTENDED_SOURCE\n(the -D_XOPEN_EXTENDED_SOURCE is for AIX's <syslog.h>\n* and in trace.c:\n*** trace.c.distrib Wed Jun 2 11:45:34 1999\n--- trace.c Wed Jun 2 10:01:18 1999\n***************\n*** 21,26 ****\n--- 21,27 ----\n \n #ifdef USE_SYSLOG\n #include <syslog.h>\n+ int openlog_done = 0;\n #endif\n \n #include \"postgres.h\"\n\n\nSeems to work fine.\nCould the -D_XOPEN_EXTENDED_SOURCE be a problem somehow ?\n\n\nTIA\n-- \n\n Thierry Holtzer\n\n E-mail : [email protected]\n ----------------------------------------------------------\n Dptmt Informatique d'Entreprise\n CERAM Tel : +33 4 9395 4545\n Rue Dostoievski - BP 085 Fax : +33 4 9365 4524\n 06 902 Sophia-Antipolis Cedex - France\n",
"msg_date": "Wed, 02 Jun 1999 11:49:58 +0200",
"msg_from": "Thierry Holtzer <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.4.2/AIX: syslog support seems alright ?"
},
{
"msg_contents": "> Hello there,\n> \n> Trying to enable syslog support with 6.4.2 on AIX4.[12] (gcc 2.8.1), I\n> have added:\n> * in Makefile.custom:\n\nSeems this is already fixed in the current tree:\n\t\n\tvoid\n\twrite_syslog(int level, char *line)\n\t{ static int openlog_done = 0;\n\t\n\t if (UseSyslog >= 1) {\n\t if (!openlog_done)\n\t {\n\t openlog_done = 1;\n\t openlog(PG_LOG_IDENT, LOG_PID | LOG_NDELAY, PG_LOG_FACILITY);\n\t }\n\t\n\n> \tCFLAGS+=-DUSE_SYSLOG -D_XOPEN_EXTENDED_SOURCE\n> (the -D_XOPEN_EXTENDED_SOURCE is for AIX's <syslog.h>\n> * and in trace.c:\n> *** trace.c.distrib Wed Jun 2 11:45:34 1999\n> --- trace.c Wed Jun 2 10:01:18 1999\n> ***************\n> *** 21,26 ****\n> --- 21,27 ----\n> \n> #ifdef USE_SYSLOG\n> #include <syslog.h>\n> + int openlog_done = 0;\n> #endif\n> \n> #include \"postgres.h\"\n> \n> \n> Seems to work fine.\n> Could the -D_XOPEN_EXTENDED_SOURCE be a problem somehow ?\n> \n> \n> TIA\n> -- \n> \n> Thierry Holtzer\n> \n> E-mail : [email protected]\n> ----------------------------------------------------------\n> Dptmt Informatique d'Entreprise\n> CERAM Tel : +33 4 9395 4545\n> Rue Dostoievski - BP 085 Fax : +33 4 9365 4524\n> 06 902 Sophia-Antipolis Cedex - France\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jun 1999 11:31:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.4.2/AIX: syslog support seems alright ?"
}
] |
[
{
"msg_contents": "\n>From the CVS version of a day or two ago, I'm getting errors on the\nfollowing queries, which worked from a snapshot from about a month ago.\nOne it is rejecting apparently valid syntax. One the backend is\ncrashing.\n\n\n\nSELECT category.oid, category.title FROM category*, urllink WHERE\nurllink.category=category.oid AND category.category IS NULL GROUP BY\ncategory.title, category.oid;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\n\nSELECT category.oid, category.title FROM category*, urllink WHERE\nurllink.category=category.oid AND category.category IS NULL UNION SELECT\nc1.oid, c1.title FROM category* c1, category* c2, urllink WHERE\nc1.category IS NULL AND urllink.category = c2.oid and c1.oid =\nc2.category GROUP BY c1.title UNION SELECT c1.oid, c1.title FROM\ncategory* c1, story WHERE story.category = c1.oid ORDER BY\ncategory.title;\nERROR: Illegal use of aggregates or non-group column in target list\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Thu, 03 Jun 1999 00:40:04 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re:ORDER BY"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n>> From the CVS version of a day or two ago, I'm getting errors on the\n> following queries, which worked from a snapshot from about a month ago.\n> One it is rejecting apparently valid syntax. One the backend is\n> crashing.\n\n> SELECT category.oid, category.title FROM category*, urllink WHERE\n> urllink.category=category.oid AND category.category IS NULL GROUP BY\n> category.title, category.oid;\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n\nYeah, I see that too. Will look into it.\n\n> SELECT category.oid, category.title FROM category*, urllink WHERE\n> urllink.category=category.oid AND category.category IS NULL UNION SELECT\n> c1.oid, c1.title FROM category* c1, category* c2, urllink WHERE\n> c1.category IS NULL AND urllink.category = c2.oid and c1.oid =\n> c2.category GROUP BY c1.title UNION SELECT c1.oid, c1.title FROM\n> category* c1, story WHERE story.category = c1.oid ORDER BY\n> category.title;\n> ERROR: Illegal use of aggregates or non-group column in target list\n\nThis one is OK: notice you have\n\tSELECT c1.oid, c1.title FROM ... GROUP BY c1.title\nYou can't select an ungrouped column in a select with GROUP BY.\nUntil recently, the system failed to notice this error unless there\nwas an aggregate function somewhere in the query --- but it catches\nit now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jun 1999 19:35:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re:ORDER BY "
}
] |
[
{
"msg_contents": " Hi World! \n\n Does backend О©╫reates pid file?\n\n Postgres 6.5 backend (current CVS) stop answering query\n\n >>> Jun 2 00:12:32 mail popper[17585]: PgSQL:ROLLBACK failed: PQsendQuery()\n >>> There is no connection to the backend. \n\n every 300 000 cursor allocation so I wish to restart it every 100 000\n is there a way to do it simple than \n kill `ps ax | awk .....`\n\n Thank you!\n\nPS:\n 6.4 works without problems with the same code, but I need more flexible\nlocking on our new mail server.\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Wed, 02 Jun 1999 18:56:55 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "PID of backend"
},
{
"msg_contents": "On Wed, 2 Jun 1999, Dmitry Samersoff wrote:\n\n> Date: Wed, 02 Jun 1999 18:56:55 +0400 (MSD)\n> From: Dmitry Samersoff <[email protected]>\n> To: [email protected]\n> Subject: [HACKERS] PID of backend\n> \n> Hi World! \n> \n> Does backend О©╫reates pid file?\n> \n> Postgres 6.5 backend (current CVS) stop answering query\n> \n> >>> Jun 2 00:12:32 mail popper[17585]: PgSQL:ROLLBACK failed: PQsendQuery()\n> >>> There is no connection to the backend. \n> \n> every 300 000 cursor allocation so I wish to restart it every 100 000\n> is there a way to do it simple than \n> kill `ps ax | awk .....`\n\nHave you tried pidof postmaster ?\nkill `pidof postmaster` | ....\n\n\n\tOleg\n\n> \n> Thank you!\n> \n> PS:\n> 6.4 works without problems with the same code, but I need more flexible\n> locking on our new mail server.\n> \n> \n> ---\n> Dmitry Samersoff, [email protected], ICQ:3161705\n> http://devnull.wplus.net\n> * There will come soft rains ...\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 2 Jun 1999 19:05:05 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PID of backend"
},
{
"msg_contents": "\nOn 02-Jun-99 Oleg Bartunov wrote:\n> On Wed, 2 Jun 1999, Dmitry Samersoff wrote:\n> \n>> Date: Wed, 02 Jun 1999 18:56:55 +0400 (MSD)\n>> From: Dmitry Samersoff <[email protected]>\n>> To: [email protected]\n>> Subject: [HACKERS] PID of backend\n>> \n>> Hi World! \n>> \n>> Does backend О©╫reates pid file?\n>> \n>> Postgres 6.5 backend (current CVS) stop answering query\n>> \n>> >>> Jun 2 00:12:32 mail popper[17585]: PgSQL:ROLLBACK failed:\n>> >>> PQsendQuery()\n>> >>> There is no connection to the backend. \n>> \n>> every 300 000 cursor allocation so I wish to restart it every 100 000\n>> is there a way to do it simple than \n>> kill `ps ax | awk .....`\n> \n> Have you tried pidof postmaster ?\n> kill `pidof postmaster` | ....\n\n I have no pidof on my computer but it's exactly the same solution.\nI need something usable inside program other than scanning process table.\n\n The best one - add code to create postgres.5432.pid \nto backend startup.\n\nIt also is good way to automatically unlink socket file after \nbackend crush.\n (read pidfile, check existence of process and remove file if no process\nfound)\n \n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Wed, 02 Jun 1999 19:52:43 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PID of backend"
},
{
"msg_contents": ">>> Postgres 6.5 backend (current CVS) stop answering query\n>>> \n>>>>>>>> Jun 2 00:12:32 mail popper[17585]: PgSQL:ROLLBACK failed:\n>>>>>>>> PQsendQuery()\n>>>>>>>> There is no connection to the backend. \n>>> \n>>> every 300 000 cursor allocation so I wish to restart it every 100 000\n>>> is there a way to do it simple than \n>>> kill `ps ax | awk .....`\n\nWhy in the world do you want to use kill at all? If you want to get\nrid of your current backend, just close the connection. I really doubt\nthat killing the postmaster is necessary or appropriate.\n\n(Of course the real answer is to find a way to avoid the memory leak that\nI suppose you are running into. But you haven't given us enough info\nto offer any advice in that direction.)\n\n> I need something usable inside program other than scanning process table.\n\nThere is a libpq function that will tell you the PID of the currently\nconnected backend: PQbackendPID. But it's not usually good for much\nexcept debugging purposes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jun 1999 13:32:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PID of backend "
},
{
"msg_contents": "\nOn 02-Jun-99 Tom Lane wrote:\n>>>> Postgres 6.5 backend (current CVS) stop answering query\n>>>> \n>>>>>>>>> Jun 2 00:12:32 mail popper[17585]: PgSQL:ROLLBACK failed:\n>>>>>>>>> PQsendQuery()\n>>>>>>>>> There is no connection to the backend. \n>>>> \n>>>> every 300 000 cursor allocation so I wish to restart it every 100 000\n>>>> is there a way to do it simple than \n>>>> kill `ps ax | awk .....`\n> \n> Why in the world do you want to use kill at all? If you want to get\n> rid of your current backend, just close the connection. I really doubt\n> that killing the postmaster is necessary or appropriate.\n> \n> (Of course the real answer is to find a way to avoid the memory leak that\n> I suppose you are running into. But you haven't given us enough info\n> to offer any advice in that direction.)\n> \n>> I need something usable inside program other than scanning process table.\n> \n> There is a libpq function that will tell you the PID of the currently\n> connected backend: PQbackendPID. But it's not usually good for much\n> except debugging purposes.\n\nThanks ! It's exactly what I need. (But it s'seems not documented ?) \n\nI need to restart backend because (as written above) \nevery 300 000 \"open cursor\" query completly loose it's mind.\n\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Wed, 02 Jun 1999 22:09:24 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PID of backend"
},
{
"msg_contents": "[Charset KOI8-R unsupported, filtering to ASCII...]\n> \n> On 02-Jun-99 Tom Lane wrote:\n> >>>> Postgres 6.5 backend (current CVS) stop answering query\n> >>>> \n> >>>>>>>>> Jun 2 00:12:32 mail popper[17585]: PgSQL:ROLLBACK failed:\n> >>>>>>>>> PQsendQuery()\n> >>>>>>>>> There is no connection to the backend. \n> >>>> \n> >>>> every 300 000 cursor allocation so I wish to restart it every 100 000\n> >>>> is there a way to do it simple than \n> >>>> kill `ps ax | awk .....`\n> > \n> > Why in the world do you want to use kill at all? If you want to get\n> > rid of your current backend, just close the connection. I really doubt\n> > that killing the postmaster is necessary or appropriate.\n> > \n> > (Of course the real answer is to find a way to avoid the memory leak that\n> > I suppose you are running into. But you haven't given us enough info\n> > to offer any advice in that direction.)\n> > \n> >> I need something usable inside program other than scanning process table.\n> > \n> > There is a libpq function that will tell you the PID of the currently\n> > connected backend: PQbackendPID. But it's not usually good for much\n> > except debugging purposes.\n> \n> Thanks ! It's exactly what I need. (But it s'seems not documented ?) \n> \n> I need to restart backend because (as written above) \n> every 300 000 \"open cursor\" query completly loose it's mind.\n\nI hope 6.5 due on June 7 will fix your problems.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jun 1999 14:14:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PID of backend"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I need to restart backend because (as written above) \n>> every 300 000 \"open cursor\" query completly loose it's mind.\n\n> I hope 6.5 due on June 7 will fix your problems.\n\nIt might. Looking back at the original gripe, I notice it mentions\ndoing rollbacks:\n\n>>>>> Jun 2 00:12:32 mail popper[17585]: PgSQL:ROLLBACK failed:\n\nIf Dmitry is doing a whole lot of rollbacks, he might be running into\nthat aborted-transactions-leak-memory bug that we fixed a few weeks ago.\n\nMeanwhile, I still say that getting rid of a backend via kill() is a\ndangerous and unnecessary \"recovery\" mechanism. What's wrong with\njust closing and reopening the connection instead?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jun 1999 15:14:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PID of backend "
},
{
"msg_contents": "On Wed, 2 Jun 1999, Dmitry Samersoff wrote:\n\n> Thanks ! It's exactly what I need. (But it s'seems not documented ?) \n> \n> I need to restart backend because (as written above) \n> every 300 000 \"open cursor\" query completly loose it's mind.\n\nI'm lost here...doesn't doing a close and then reopening the connection\nrestart the backend? Why would you have to 'kill' it?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 3 Jun 1999 01:15:50 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PID of backend"
},
{
"msg_contents": "\nOn 03-Jun-99 The Hermit Hacker wrote:\n> On Wed, 2 Jun 1999, Dmitry Samersoff wrote:\n> \n>> Thanks ! It's exactly what I need. (But it s'seems not documented ?) \n>> \n>> I need to restart backend because (as written above) \n>> every 300 000 \"open cursor\" query completly loose it's mind.\n> \n> I'm lost here...doesn't doing a close and then reopening the connection\n> restart the backend? Why would you have to 'kill' it?\n\nAfter 300 000 open cursor query (and certtainly close cursor), \nnext query returns error like example below for all other connections \n(ROLLBACK is a query from other process, it just example of message nothing\nmore)\n\n - ie backend exists in memory but stops answer query and allocates new\nconnections. \n\n>> Jun 2 00:12:32 mail popper[17585]: PgSQL:ROLLBACK failed: PQsendQuery()\n>>> There is no connection to the backend. \n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Thu, 03 Jun 1999 16:23:01 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PID of backend"
},
{
"msg_contents": "\n\nTom Lane wrote:\n> \n> Meanwhile, I still say that getting rid of a backend via kill() is a\n> dangerous and unnecessary \"recovery\" mechanism. What's wrong with\n> just closing and reopening the connection instead?\n\nI don't know about later versions of pgsql, but I've a 6.3.2 system running\non a production system, and every once in a while one of the\nbackends will go crazy and eat CPU. This system is on a web server, and processes\nrequests tru a CGI script. For the administrator (i.e. me), it is impossible\nto close the CGI<-->backend connection (the backend will keep running after I kill\noff the CGI script). Only thing that will get things back in order is to kill that\nbackend (which sometimes also requires me to restart the postmaster, probably\nbecause of some shared mem corruption).\n\nMaarten\n\nps. This system is not a priority for me, I', quote happy with how it's running,\nso please don't tell me to upgrade or give me any other suggestions.\n\n-- \n\nMaarten Boekhold, [email protected]\nTIBCO Finance Technology Inc.\nThe Atrium\nStrawinskylaan 3051\n1077 ZX Amsterdam, The Netherlands\ntel: +31 20 3012158, fax: +31 20 3012358\nhttp://www.tibco.com\n",
"msg_date": "Tue, 08 Jun 1999 15:49:22 +0200",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PID of backend"
}
] |
[
{
"msg_contents": "Hi, we're looking at using Postgresql for our dyamic database web development - does\n it run on Windows 95 or NT - or is it just Unix based? I'm trying to determine how\n I would develop on a Windows PC and then port to our remote ISP server...\n\nBetty Walker\nQubic Development, Inc.\n(941) 549-3727\[email protected]\n\n\n\n\n\n\n\nHi, we're looking at using Postgresql for our \ndyamic database web development - does it run on Windows 95 or NT - or is it \njust Unix based? I'm trying to determine how I would develop on a Windows \nPC and then port to our remote ISP server...\nBetty WalkerQubic Development, \nInc.(941) [email protected]",
"msg_date": "Wed, 02 Jun 1999 14:51:46 -0400 (EDT)",
"msg_from": "BA Walker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Questions"
}
] |
[
{
"msg_contents": "Hi everyone.....i am having a problem with i try to compile TCL support\nin...im getting this error when it checks for XOpenDisplay in -lX11\nit says:\nchecking for XOpenDisplay in -lX11... (cached) no\nconfigure: warning: The X11 library '-lX11' could not be found,\nso TK support will be disabled. To enable TK support,\nplease use the configure options '--x-includes=DIR'\nand '--x-libraries=DIR' to specify the X location.\n\nI have X installed cause i run it with this account...im using freebsd\n3.1...if anyone could give me insight to whats going on it would be much\nappreciated...i have even try to reinstall FreeBSD and X and it did not\nsolve the problem.\n\nthanks\nBrent\n\nP.S. If you could please email me directly at [email protected] it\nwould be much appreciated....\n\n\n",
"msg_date": "Wed, 2 Jun 1999 14:05:15 -0500",
"msg_from": "\"Brent Waldrop\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "XOpenDisplay problems?"
}
] |
[
{
"msg_contents": "I am a universitary student from BRASIL, and I�m using PostGreSQL data base.\nI�m designing my data base and I have one question.\nHow to create a foreing key in PostGreSQL.\nI�m using the following command:\nCREATE TABLE test(\npnr char(10),\nnome char(20),\nid decimal REFERENCE <parent table>);\n\nand\n\nCREATE TABLE test(\npnr char(10),\nnome char(20),\nid decimal,\nFOREING KEY (id) REFERENCES <parent table>);\n\nWhere parent table is the table with the primary key.\n\nRegards\n\nFilipi Damasceno Vianna\n&\nAlessandro Orso\n",
"msg_date": "Wed, 2 Jun 1999 17:29:36 -0300",
"msg_from": "Filipi Damasceno Viana <[email protected]>",
"msg_from_op": true,
"msg_subject": "Foreign Key in PostGreSQL"
}
] |
[
{
"msg_contents": "This happens, when I want do this:\n(4) $ sql\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: david\n\ndavid=> create table t (e oid);\nCREATE\ndavid=> insert into t values (lo_import('/tmp/1'));\nINSERT 19000 1\ndavid=> \\d\nDatabase = david\n +------------------+----------------------------------+----------+\n | Owner | Relation | Type |\n +------------------+----------------------------------+----------+\n | david | t | table |\n | david | xinx18986 | index |\n +------------------+----------------------------------+----------+\n\ndavid=> select textout(byteaout(odata)) from xinv18986;\npqReadData() -- backend closed the channel unexpectedly.\n\tThis probably means the backend terminated abnormally\n\tbefore or while processing the request.\nWe have lost the connection to the backend, so further processing is impossible. Terminating.\n\n\nAnd in log is:\nJun 2 22:56:27 chameleon PGSQL: FATAL 1: Memory exhausted in AllocSetAlloc()\n\n================================\nFile /tmp/1 contains:\ndjdjd\ndjdjdjd\ndjdjdjdjdjdj\nddjdjdjjd\ndjdjdjjjdjdjdjdjd\ndjdjdj\n\nThis example doesn't have good meaning, but I dont like crash of my\ndatabase ...\n\n thanks,\n\n-- \n* David Sauer, student of Czech Technical University\n* electronic mail: [email protected] (mime compatible)\n",
"msg_date": "02 Jun 1999 23:00:40 +0200",
"msg_from": "David Sauer <[email protected]>",
"msg_from_op": true,
"msg_subject": "current CVS snapshot of pgsql crash ..."
},
{
"msg_contents": "> This happens, when I want do this:\n> (4) $ sql\n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> [PostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n> \n> type \\? for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: david\n> \n> david=> create table t (e oid);\n> CREATE\n> david=> insert into t values (lo_import('/tmp/1'));\n> INSERT 19000 1\n> david=> \\d\n> Database = david\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | david | t | table |\n> | david | xinx18986 | index |\n> +------------------+----------------------------------+----------+\n> \n> david=> select textout(byteaout(odata)) from xinv18986;\n> pqReadData() -- backend closed the channel unexpectedly.\n> \tThis probably means the backend terminated abnormally\n> \tbefore or while processing the request.\n> We have lost the connection to the backend, so further processing is impossible. Terminating.\n> \n> \n> And in log is:\n> Jun 2 22:56:27 chameleon PGSQL: FATAL 1: Memory exhausted in AllocSetAlloc()\n> \n> ================================\n> File /tmp/1 contains:\n> djdjd\n> djdjdjd\n> djdjdjdjdjdj\n> ddjdjdjjd\n> djdjdjjjdjdjdjdjd\n> djdjdj\n> \n> This example doesn't have good meaning, but I dont like crash of my\n> database ...\n\nYes, I see your point.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jun 1999 17:21:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current CVS snapshot of pgsql crash ..."
},
{
"msg_contents": "David Sauer <[email protected]> writes:\n> david=> select textout(byteaout(odata)) from xinv18986;\n> pqReadData() -- backend closed the channel unexpectedly.\n\nI think this is not related to large objects per se --- it's a\ntypechecking failure. textout is expecting a text datum, and it's\nnot getting one because that's not what comes out of byteaout.\n(The proximate cause of the crash is that textout tries to interpret\nthe first four bytes of byteaout's output as a varlena length...)\n\nThe parser's typechecking machinery is unable to catch this\nerror because textout is declared to take any parameter type\nwhatever (its proargtype is 0).\n\nWhy don't the type output functions have the correct input types\ndeclared for them in pg_proc???\n\nFor that matter, why do we allow user expressions to call the type\ninput/output functions at all? They're not really usable as SQL\nfunctions AFAICS...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jun 1999 19:51:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current CVS snapshot of pgsql crash ... "
},
{
"msg_contents": "> David Sauer <[email protected]> writes:\n> > david=> select textout(byteaout(odata)) from xinv18986;\n> > pqReadData() -- backend closed the channel unexpectedly.\n> \n> I think this is not related to large objects per se --- it's a\n> typechecking failure. textout is expecting a text datum, and it's\n> not getting one because that's not what comes out of byteaout.\n> (The proximate cause of the crash is that textout tries to interpret\n> the first four bytes of byteaout's output as a varlena length...)\n> \n> The parser's typechecking machinery is unable to catch this\n> error because textout is declared to take any parameter type\n> whatever (its proargtype is 0).\n> \n> Why don't the type output functions have the correct input types\n> declared for them in pg_proc???\n> \n> For that matter, why do we allow user expressions to call the type\n> input/output functions at all? They're not really usable as SQL\n> functions AFAICS...\n\nYes, they take C pointers, don't they. You can't return one of those in\nany SQL function or column name.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jun 1999 20:16:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current CVS snapshot of pgsql crash ..."
},
{
"msg_contents": "> For that matter, why do we allow user expressions to call the type\n> input/output functions at all? They're not really usable as SQL\n> functions AFAICS...\n\nWhat probably happened is that those are in the system catalogs, but are\nassigned a zero for input, rather than a non-valid oid. Does zero mean\nany type? I guess so.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jun 1999 20:17:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current CVS snapshot of pgsql crash ..."
},
{
"msg_contents": "> > For that matter, why do we allow user expressions to call the type\n> > input/output functions at all? They're not really usable as SQL\n> > functions AFAICS...\n>\n> Yes, they take C pointers, don't they. You can't return one of those in\n> any SQL function or column name.\n\n Doing textout(byteaout(... really makes no sense. But being\n able to do a textin(mytypeout(... does make sense for me.\n Without that, there MUST be type casting support for\n MYTYPE->TEXT in the parser.\n\n Sometimes ppl implement user defined types. I assume this\n kind of type casting is used somewhere in a couple of\n applications.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 3 Jun 1999 12:39:05 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current CVS snapshot of pgsql crash ..."
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n>>>> For that matter, why do we allow user expressions to call the type\n>>>> input/output functions at all? They're not really usable as SQL\n>>>> functions AFAICS...\n\n> Doing textout(byteaout(... really makes no sense. But being\n> able to do a textin(mytypeout(... does make sense for me.\n> Without that, there MUST be type casting support for\n> MYTYPE->TEXT in the parser.\n\nThe real problem here is that the type system needs to have a notion\nof \"C string\" as a datatype so that the type input and output functions\ncan be declared *properly* with the true nature of their inputs and\nresults given correctly. Then typeain(typebout(typebvalue)) would work\nand textout(byteaout(...)) would be rejected, as it should be.\n\nThe typechecking escape convention (zero in the proargtypes signature)\nshould only be used for functions that really do accept any kind of\ndatum. I think there are some (count(*) for one) but not many.\n\nThe \"C string\" type is not quite a real type, because we don't want to\nlet people declare columns of that type (I assume). OTOH it must be\nreal enough to let people declare user-defined functions that accept or\nreturn it. Right now, the I/O functions for user-defined types are\nsupposed to be declared to take or return type OPAQUE, but I think\nthat pseudo-type is being used for too many different things.\n\nObviously none of this is going to happen for 6.5, but it should go\non the TODO list.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 10:25:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current CVS snapshot of pgsql crash ... "
},
{
"msg_contents": "\nAdded to TODO:\n\n\t* Fix typein/out functions to not be user-callable\n\n\n> David Sauer <[email protected]> writes:\n> > david=> select textout(byteaout(odata)) from xinv18986;\n> > pqReadData() -- backend closed the channel unexpectedly.\n> \n> I think this is not related to large objects per se --- it's a\n> typechecking failure. textout is expecting a text datum, and it's\n> not getting one because that's not what comes out of byteaout.\n> (The proximate cause of the crash is that textout tries to interpret\n> the first four bytes of byteaout's output as a varlena length...)\n> \n> The parser's typechecking machinery is unable to catch this\n> error because textout is declared to take any parameter type\n> whatever (its proargtype is 0).\n> \n> Why don't the type output functions have the correct input types\n> declared for them in pg_proc???\n> \n> For that matter, why do we allow user expressions to call the type\n> input/output functions at all? They're not really usable as SQL\n> functions AFAICS...\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 19:52:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current CVS snapshot of pgsql crash ..."
},
{
"msg_contents": "\n\nRemoved from TODO list:\n\n\n\t* Fix typein/out functions to not be user-callable\n\n\n> > > For that matter, why do we allow user expressions to call the type\n> > > input/output functions at all? They're not really usable as SQL\n> > > functions AFAICS...\n> >\n> > Yes, they take C pointers, don't they. You can't return one of those in\n> > any SQL function or column name.\n> \n> Doing textout(byteaout(... really makes no sense. But being\n> able to do a textin(mytypeout(... does make sense for me.\n> Without that, there MUST be type casting support for\n> MYTYPE->TEXT in the parser.\n> \n> Sometimes ppl implement user defined types. I assume this\n> kind of type casting is used somewhere in a couple of\n> applications.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #======================================== [email protected] (Jan Wieck) #\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 19:54:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current CVS snapshot of pgsql crash ..."
},
{
"msg_contents": "\nRe-added to TODO list :-)\n\n\t* Fix typein/out functions to not be user-callable\n\n\n> [email protected] (Jan Wieck) writes:\n> >>>> For that matter, why do we allow user expressions to call the type\n> >>>> input/output functions at all? They're not really usable as SQL\n> >>>> functions AFAICS...\n> \n> > Doing textout(byteaout(... really makes no sense. But being\n> > able to do a textin(mytypeout(... does make sense for me.\n> > Without that, there MUST be type casting support for\n> > MYTYPE->TEXT in the parser.\n> \n> The real problem here is that the type system needs to have a notion\n> of \"C string\" as a datatype so that the type input and output functions\n> can be declared *properly* with the true nature of their inputs and\n> results given correctly. Then typeain(typebout(typebvalue)) would work\n> and textout(byteaout(...)) would be rejected, as it should be.\n> \n> The typechecking escape convention (zero in the proargtypes signature)\n> should only be used for functions that really do accept any kind of\n> datum. I think there are some (count(*) for one) but not many.\n> \n> The \"C string\" type is not quite a real type, because we don't want to\n> let people declare columns of that type (I assume). OTOH it must be\n> real enough to let people declare user-defined functions that accept or\n> return it. Right now, the I/O functions for user-defined types are\n> supposed to be declared to take or return type OPAQUE, but I think\n> that pseudo-type is being used for too many different things.\n> \n> Obviously none of this is going to happen for 6.5, but it should go\n> on the TODO list.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 19:55:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current CVS snapshot of pgsql crash ..."
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Re-added to TODO list :-)\n\n> \t* Fix typein/out functions to not be user-callable\n\nPlease rephrase it:\n\n* Declare typein/out functions in pg_proc with a special \"C string\" data type\n\nWe want to type-check their uses correctly, not forbid their use.\n\n\t\t\tregards, tom lane\n\n\n>> The real problem here is that the type system needs to have a notion\n>> of \"C string\" as a datatype so that the type input and output functions\n>> can be declared *properly* with the true nature of their inputs and\n>> results given correctly. Then typeain(typebout(typebvalue)) would work\n>> and textout(byteaout(...)) would be rejected, as it should be.\n>> \n>> The typechecking escape convention (zero in the proargtypes signature)\n>> should only be used for functions that really do accept any kind of\n>> datum. I think there are some (count(*) for one) but not many.\n>> \n>> The \"C string\" type is not quite a real type, because we don't want to\n>> let people declare columns of that type (I assume). OTOH it must be\n>> real enough to let people declare user-defined functions that accept or\n>> return it. Right now, the I/O functions for user-defined types are\n>> supposed to be declared to take or return type OPAQUE, but I think\n>> that pseudo-type is being used for too many different things.\n>> \n>> Obviously none of this is going to happen for 6.5, but it should go\n>> on the TODO list.\n>> \n>> regards, tom lane\n",
"msg_date": "Wed, 07 Jul 1999 20:03:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current CVS snapshot of pgsql crash ... "
}
] |
[
{
"msg_contents": "I've frozen the docs sources for the Programmer's Guide, and have\ngenerated the hardcopy. I'm planning on generating the Tutorial,\nINSTALL, and HISTORY tonight, and the User's Guide tomorrow.\n\nBruce, are install.sgml and release.sgml finished?\n\nI am looking for updates to ref/{lock,set}.sgml for MVCC and related\nchanges, which go into the User's Guide. I'm out of town Saturday Jun\n5 01:00 UTC through Monday June 7 5:00 UTC, so will need the updates\nby tomorrow or I'll need to ask for yet another extension.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 03 Jun 1999 02:07:48 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Freezing docs for v6.5"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> I've frozen the docs sources for the Programmer's Guide, and have\n> generated the hardcopy. I'm planning on generating the Tutorial,\n> INSTALL, and HISTORY tonight, and the User's Guide tomorrow.\n> \n> Bruce, are install.sgml and release.sgml finished?\n> \n> I am looking for updates to ref/{lock,set}.sgml for MVCC and related\n> changes, which go into the User's Guide. I'm out of town Saturday Jun\n> 5 01:00 UTC through Monday June 7 5:00 UTC, so will need the updates\n> by tomorrow or I'll need to ask for yet another extension.\n\nI'll post these changes today directly to you.\nText version...\n\nVadim\n",
"msg_date": "Thu, 03 Jun 1999 10:22:55 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Freezing docs for v6.5"
},
{
"msg_contents": "> I'll post these changes today directly to you.\n> Text version...\n\nThanks :)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 03 Jun 1999 02:41:31 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Freezing docs for v6.5"
},
{
"msg_contents": "> I've frozen the docs sources for the Programmer's Guide, and have\n> generated the hardcopy. I'm planning on generating the Tutorial,\n> INSTALL, and HISTORY tonight, and the User's Guide tomorrow.\n> \n> Bruce, are install.sgml and release.sgml finished?\n> \n> I am looking for updates to ref/{lock,set}.sgml for MVCC and related\n> changes, which go into the User's Guide. I'm out of town Saturday Jun\n> 5 01:00 UTC through Monday June 7 5:00 UTC, so will need the updates\n> by tomorrow or I'll need to ask for yet another extension.\n> \n\nNo, nor is the ref stuff done. Extend it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jun 1999 22:47:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Freezing docs for v6.5"
},
{
"msg_contents": "> > I've frozen the docs sources for the Programmer's Guide, and have\n> > generated the hardcopy. I'm planning on generating the Tutorial,\n> > INSTALL, and HISTORY tonight, and the User's Guide tomorrow.\n> > Bruce, are install.sgml and release.sgml finished?\n> > I am looking for updates to ref/{lock,set}.sgml for MVCC and related\n> > changes, which go into the User's Guide. I'm out of town Saturday Jun\n> > 5 01:00 UTC through Monday June 7 5:00 UTC, so will need the updates\n> > by tomorrow or I'll need to ask for yet another extension.\n> No, nor is the ref stuff done. Extend it.\n\nWhoops! The \"No\" means that install.sgml and release.sgml are not\nfinished? What else needs to be done on them? I've made some very\nminor wording and markup changes in the release notes, and will commit\nthose tonight. the Programmer's Guide and the Tutorial are finished,\nand I was hoping to do INSTALL, HISTORY, and the Admin Guide (all of\nwhich need install.sgml and release.sgml to be done) tonight.\n\nVadim sez he'll send some plain text updates (hopefully for the ref\npages) tomorrow, so I should be able to get that incorporated into the\nUser's Guide then.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 03 Jun 1999 03:48:57 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Freezing docs for v6.5"
},
{
"msg_contents": "> I've frozen the docs sources for the Programmer's Guide, and have\n> generated the hardcopy. I'm planning on generating the Tutorial,\n> INSTALL, and HISTORY tonight, and the User's Guide tomorrow.\n> \n> Bruce, are install.sgml and release.sgml finished?\n> \n> I am looking for updates to ref/{lock,set}.sgml for MVCC and related\n> changes, which go into the User's Guide. I'm out of town Saturday Jun\n> 5 01:00 UTC through Monday June 7 5:00 UTC, so will need the updates\n> by tomorrow or I'll need to ask for yet another extension.\n\nIf someone else can take on the lock/set grammar changes, that would be\ngreat. I did a cvs diff of gram.y from the 6.4.2 date and current, and\nthere is what needs to be changed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 2 Jun 1999 23:57:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Freezing docs for v6.5"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > > I've frozen the docs sources for the Programmer's Guide, and have\n> > > generated the hardcopy. I'm planning on generating the Tutorial,\n> > > INSTALL, and HISTORY tonight, and the User's Guide tomorrow.\n> > > Bruce, are install.sgml and release.sgml finished?\n> > > I am looking for updates to ref/{lock,set}.sgml for MVCC and related\n> > > changes, which go into the User's Guide. I'm out of town Saturday Jun\n> > > 5 01:00 UTC through Monday June 7 5:00 UTC, so will need the updates\n> > > by tomorrow or I'll need to ask for yet another extension.\n> > No, nor is the ref stuff done. Extend it.\n> \n> Whoops! The \"No\" means that install.sgml and release.sgml are not\n> finished? What else needs to be done on them? I've made some very\n> minor wording and markup changes in the release notes, and will commit\n> those tonight. the Programmer's Guide and the Tutorial are finished,\n> and I was hoping to do INSTALL, HISTORY, and the Admin Guide (all of\n> which need install.sgml and release.sgml to be done) tonight.\n> \n> Vadim sez he'll send some plain text updates (hopefully for the ref\n> pages) tomorrow, so I should be able to get that incorporated into the\n\nFor lock.sgml and set.sgml. I assume that they are used for lock and\nset man pages which I would like to use as sources for updation.\n\n> User's Guide then.\n\nBTW, I have some notes for release.sgml, \"Migration to v6.5\" section,\nabout using contrib/refint.* - one will have to use LOCK statements\nto get it working properly, -:(. I'll write it in the next 12 hours...\n\nVadim\n",
"msg_date": "Thu, 03 Jun 1999 11:58:18 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "> Whoops! The \"No\" means that install.sgml and release.sgml are not\n> finished? What else needs to be done on them? I've made some very\n> minor wording and markup changes in the release notes, and will commit\n> those tonight. the Programmer's Guide and the Tutorial are finished,\n> and I was hoping to do INSTALL, HISTORY, and the Admin Guide (all of\n> which need install.sgml and release.sgml to be done) tonight.\n> \n> Vadim sez he'll send some plain text updates (hopefully for the ref\n> pages) tomorrow, so I should be able to get that incorporated into the\n> User's Guide then.\n\nSorry. Install is done. release notes just need to be finalized with\nall listed changes. When should we say no more? Things are still\nhappening in the fix area, I think.\n\nThe man pages and sgml/ref stuff is the hard part. I am not sure what\nthey all do, and was discouraged to see the 'set' manual page doesn't\nhave lots of stuff that should be in there, and are in the sgml version.\nThe psql \\h stuff also needs work. I am pretty busy with other stuff\nfor the next two days. That is the problem.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 00:07:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Freezing docs for v6.5"
},
{
"msg_contents": "> > Vadim sez he'll send some plain text updates (hopefully for the ref\n> > pages) tomorrow\n> For lock.sgml and set.sgml. I assume that they are used for lock and\n> set man pages which I would like to use as sources for updation.\n\nThe sgml produces html and ps formats. The man pages are not (yet)\ngenerated automatically from the sgml, but will be in some future\nrelease (offers of help on the conversion have not yet resulted in\nactual help...).\n\nI would appreciate getting the updates inside of ref/{lock,set}.sgml,\nand then you or someone else can update the man pages while I am\nworking on the hardcopy.\n\n> BTW, I have some notes for release.sgml, \"Migration to v6.5\" section,\n> about using contrib/refint.* - one will have to use LOCK statements\n> to get it working properly, -:(. I'll write it in the next 12 hours...\n\nOK, great. That might delay me a bit, but it will be well worth it. I\nhad hoped to run through another ~200 pages of hardcopy tonight, but\nam on hold for both the release notes and the User's Guide.\n\nIf you can get to the release notes first, then I can go ahead and\nfinish 3 docs (the Admin Guide, INSTALL, and HISTORY).\n\nThanks for all of your work. Hope this crunch isn't too painful :/\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 03 Jun 1999 04:17:52 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > > Vadim sez he'll send some plain text updates (hopefully for the ref\n> > > pages) tomorrow\n> > For lock.sgml and set.sgml. I assume that they are used for lock and\n> > set man pages which I would like to use as sources for updation.\n> \n> The sgml produces html and ps formats. The man pages are not (yet)\n> generated automatically from the sgml, but will be in some future\n> release (offers of help on the conversion have not yet resulted in\n> actual help...).\n> \n> I would appreciate getting the updates inside of ref/{lock,set}.sgml,\n\nOk.\n\n> and then you or someone else can update the man pages while I am\n> working on the hardcopy.\n> \n> > BTW, I have some notes for release.sgml, \"Migration to v6.5\" section,\n> > about using contrib/refint.* - one will have to use LOCK statements\n> > to get it working properly, -:(. I'll write it in the next 12 hours...\n> \n> OK, great. That might delay me a bit, but it will be well worth it. I\n> had hoped to run through another ~200 pages of hardcopy tonight, but\n> am on hold for both the release notes and the User's Guide.\n> \n> If you can get to the release notes first, then I can go ahead and\n> finish 3 docs (the Admin Guide, INSTALL, and HISTORY).\n\nOk. In the next 3 hours... ok?\n\nVadim\n",
"msg_date": "Thu, 03 Jun 1999 12:24:43 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "On Thu, 3 Jun 1999, Thomas Lockhart wrote:\n\n> If you can get to the release notes first, then I can go ahead and\n> finish 3 docs (the Admin Guide, INSTALL, and HISTORY).\n> \n> Thanks for all of your work. Hope this crunch isn't too painful :/\n\nHow are we doing for the 7th as a release? Need a couple of more days to\ntouch up docs? Or are we okay for the 7th?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 3 Jun 1999 01:27:47 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "> > Whoops! The \"No\" means that install.sgml and release.sgml are not\n> > finished? What else needs to be done on them? I've made some very\n> > minor wording and markup changes in the release notes, and will commit\n> > those tonight. the Programmer's Guide and the Tutorial are finished,\n> > and I was hoping to do INSTALL, HISTORY, and the Admin Guide (all of\n> > which need install.sgml and release.sgml to be done) tonight.\n> > \n> > Vadim sez he'll send some plain text updates (hopefully for the ref\n> > pages) tomorrow, so I should be able to get that incorporated into the\n> \n> For lock.sgml and set.sgml. I assume that they are used for lock and\n> set man pages which I would like to use as sources for updation.\n\nThe man pages and sgml pages are separate, and require separate\nmaintenance.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 00:33:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5]"
},
{
"msg_contents": "> How are we doing for the 7th as a release? Need a couple of more days \n> to touch up docs? Or are we okay for the 7th?\n\nNot sure yet. I expect that I will need an extra day or two since I'm\nout of town for the 5th and 6th. But if we plan to slip, then the\nstuff I need might slip too :/\n\nTom Lane is still chasing bugs with great efficiency, so an extra day\nto test a \"candidate release tarball\" (missing only a few of the docs)\ncould certainly do no harm. If nothing else it would let us test the\ntarball to make sure that the yacc/bison file phasing is OK (this has\nbit us a few times recently).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 03 Jun 1999 05:31:45 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "On Thu, 3 Jun 1999, Thomas Lockhart wrote:\n\n> > How are we doing for the 7th as a release? Need a couple of more days \n> > to touch up docs? Or are we okay for the 7th?\n> \n> Not sure yet. I expect that I will need an extra day or two since I'm\n> out of town for the 5th and 6th. But if we plan to slip, then the\n> stuff I need might slip too :/\n> \n> Tom Lane is still chasing bugs with great efficiency, so an extra day\n> to test a \"candidate release tarball\" (missing only a few of the docs)\n> could certainly do no harm. If nothing else it would let us test the\n> tarball to make sure that the yacc/bison file phasing is OK (this has\n> bit us a few times recently).\n\nOkay, let's go with a 'pre-release' tarball for the 7th and a release on\nthe 9th then, if no objections?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 3 Jun 1999 09:16:33 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Tom Lane is still chasing bugs with great efficiency,\n\nChasing 'em, anyway; no claims about efficiency :-). Most of the stuff\non my \"to fix\" list is not showstopper material; it'd be nice to get it\ndone before 6.5 but I won't feel bad if it isn't. There's always\nanother bug...\n\nThe two bugs I am really concerned about right now are the\ninheritance-vs-GROUP-BY issue and the bogus-cache-entries-not-flushed-\nat-xact-abort issue, because I am not sure I know enough to fix either\none right, and there is very little testing time left. These are bad\nbugs, but they exist in older releases too, so maybe we should just\nleave 'em alone for 6.5? Opinions? (There seem to be some nontrivial\nopen issues in locking and segmented relations, too, so maybe there is\nenough stuff here to delay the release while we fix these things?)\n\n> so an extra day\n> to test a \"candidate release tarball\" (missing only a few of the docs)\n> could certainly do no harm. If nothing else it would let us test the\n> tarball to make sure that the yacc/bison file phasing is OK (this has\n> bit us a few times recently).\n\nIn theory, that problem is Permanently Fixed, since the derived yacc\nfiles are no longer in the CVS tree but are created on-the-fly during\ntarball preparation. In practice, we should double-check it.\n\nYou've all seen this one, right?\n\tQ: What is the difference between theory and practice?\n\tA: In theory, there is no difference.\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 10:13:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5 "
},
{
"msg_contents": "> Sorry. Install is done. release notes just need to be finalized with\n> all listed changes. When should we say no more? Things are still\n> happening in the fix area, I think.\n\nOK, I'm freezing install.sgml and release.sgml. Feel free to make\nchanges to release.sgml, and *if* you limit the changes to one or a\nfew one-liners I can update the hardcopy at the last minute. But we\nshould document release info up to the last minute rather than leaving\nsomething out because docs were supposed to be frozen.\n\n> The man pages and sgml/ref stuff is the hard part. I am not sure what\n> they all do, and was discouraged to see the 'set' manual page doesn't\n> have lots of stuff that should be in there, and are in the sgml version.\n> The psql \\h stuff also needs work. I am pretty busy with other stuff\n> for the next two days. That is the problem.\n\nNo problem. afaik the sgml sources are almost done, with a couple of\npatches coming from Vadim sometime soon. imho the original man pages\nare doing pretty well, and we aren't promising that they are as\ncomplete or voluminous as the html docs.\n\nThe \"set\" man page is probably the most out of date since there were\nseveral new commands added recently. So if you have a chance to update\nthat one things are likely to be good enough.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 03 Jun 1999 14:58:01 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Freezing docs for v6.5"
},
{
"msg_contents": "On Thu, 3 Jun 1999, Tom Lane wrote:\n\n> The two bugs I am really concerned about right now are the\n> inheritance-vs-GROUP-BY issue and the bogus-cache-entries-not-flushed-\n> at-xact-abort issue, because I am not sure I know enough to fix either\n> one right, and there is very little testing time left. These are bad\n> bugs, but they exist in older releases too, so maybe we should just\n> leave 'em alone for 6.5? Opinions? (There seem to be some nontrivial\n> open issues in locking and segmented relations, too, so maybe there is\n> enough stuff here to delay the release while we fix these things?)\n\nIf the 'bugs' aren't something new we created since v6.4.2, leave them\nalone...would be nice to fix them, but nobody expected them to work to\ndate, so leavign it for v6.6 (or even a v6.5.1) is acceptable...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 3 Jun 1999 13:25:54 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5 "
},
{
"msg_contents": "> If the 'bugs' aren't something new we created since v6.4.2, leave them\n> alone...would be nice to fix them, but nobody expected them to work to\n> date, so leavign it for v6.6 (or even a v6.5.1) is acceptable...\n> \n\nIsn't minor bugfixing in 6.5.1 a no-no. We only do major fixes in those\nminor releases, right?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 12:48:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Thu, 3 Jun 1999, Tom Lane wrote:\n>> The two bugs I am really concerned about right now are the\n>> inheritance-vs-GROUP-BY issue and the bogus-cache-entries-not-flushed-\n>> at-xact-abort issue, because I am not sure I know enough to fix either\n>> one right, and there is very little testing time left. These are bad\n>> bugs, but they exist in older releases too, so maybe we should just\n>> leave 'em alone for 6.5?\n\n> If the 'bugs' aren't something new we created since v6.4.2, leave them\n> alone...would be nice to fix them, but nobody expected them to work to\n> date, so leavign it for v6.6 (or even a v6.5.1) is acceptable...\n\nI don't mind postponing the inheritance/GROUP-BY issue on that basis,\nbecause it's an identifiable feature that doesn't work (and never has\nworked). I'm more troubled about the cache issue, because that could\ngive rise to hard-to-predict flaky behavior; we might waste a lot of\ntime chasing bug reports that ultimately reduce to that problem but\nare not easily recognizable as such.\n\nBruce seemed to think that we could just flush the sys caches and\nrelation cache completely during xact abort. This would probably be\nless efficient than identifying the specific entries to get rid of, but\nit would plug the hole in the dike well enough for 6.5. Any objections\nto doing that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 13:43:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5 "
},
{
"msg_contents": "> Bruce seemed to think that we could just flush the sys caches and\n> relation cache completely during xact abort. This would probably be\n> less efficient than identifying the specific entries to get rid of, but\n> it would plug the hole in the dike well enough for 6.5. Any objections\n> to doing that?\n\nI don't see how we could do anything more aggressive at this point,\nthough the flush may have some side-affects we don't know about yet,\neven though it works for temp tables.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 14:09:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "On Thu, 3 Jun 1999, Tom Lane wrote:\n\n> Bruce seemed to think that we could just flush the sys caches and\n> relation cache completely during xact abort. This would probably be\n> less efficient than identifying the specific entries to get rid of, but\n> it would plug the hole in the dike well enough for 6.5. Any objections\n> to doing that?\n\nSounds reasonable to me...its a stop gap that, the wya things have gone,\nsince its less efficient, will get re-identified in the future when\nsomeone trying to optimize hings even further...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 3 Jun 1999 15:15:03 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5 "
},
{
"msg_contents": "On Thu, 3 Jun 1999, Bruce Momjian wrote:\n\n> > If the 'bugs' aren't something new we created since v6.4.2, leave them\n> > alone...would be nice to fix them, but nobody expected them to work to\n> > date, so leavign it for v6.6 (or even a v6.5.1) is acceptable...\n> > \n> \n> Isn't minor bugfixing in 6.5.1 a no-no. We only do major fixes in those\n> minor releases, right?\n\n*raised eyebrow* Wouldn't minor-bugfixing be \"safer\" then major ones?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 3 Jun 1999 15:16:34 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> On Thu, 3 Jun 1999, Tom Lane wrote:\n> \n> > Bruce seemed to think that we could just flush the sys caches and\n> > relation cache completely during xact abort. This would probably be\n> > less efficient than identifying the specific entries to get rid of, but\n> > it would plug the hole in the dike well enough for 6.5. Any objections\n> > to doing that?\n> \n> Sounds reasonable to me...its a stop gap that, the wya things have gone,\n> since its less efficient, will get re-identified in the future when\n> someone trying to optimize hings even further...\n\nCould you remember me what's the problem with cache?\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 02:25:54 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "> On Thu, 3 Jun 1999, Bruce Momjian wrote:\n> \n> > > If the 'bugs' aren't something new we created since v6.4.2, leave them\n> > > alone...would be nice to fix them, but nobody expected them to work to\n> > > date, so leavign it for v6.6 (or even a v6.5.1) is acceptable...\n> > > \n> > \n> > Isn't minor bugfixing in 6.5.1 a no-no. We only do major fixes in those\n> > minor releases, right?\n> \n> *raised eyebrow* Wouldn't minor-bugfixing be \"safer\" then major ones?\n> \n\nI thought we only fixed must-fix bugs in minor releases because there\nwas too much of a risk that any fix will break too many things, and\nthere is little testing of minor releases, so we fix as few things as\npossible.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 14:42:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "On Thu, 3 Jun 1999, Bruce Momjian wrote:\n\n> > On Thu, 3 Jun 1999, Bruce Momjian wrote:\n> > \n> > > > If the 'bugs' aren't something new we created since v6.4.2, leave them\n> > > > alone...would be nice to fix them, but nobody expected them to work to\n> > > > date, so leavign it for v6.6 (or even a v6.5.1) is acceptable...\n> > > > \n> > > \n> > > Isn't minor bugfixing in 6.5.1 a no-no. We only do major fixes in those\n> > > minor releases, right?\n> > \n> > *raised eyebrow* Wouldn't minor-bugfixing be \"safer\" then major ones?\n> > \n> \n> I thought we only fixed must-fix bugs in minor releases because there\n> was too much of a risk that any fix will break too many things, and\n> there is little testing of minor releases, so we fix as few things as\n> possible.\n\nsorry, heat was hitting me over here :) minor releases are meant to fix\nthings like 'it doesn't quite install cleaning on X platform', where the\nfix is harmless...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 3 Jun 1999 16:01:10 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "> sorry, heat was hitting me over here :) minor releases are meant to fix\n> things like 'it doesn't quite install cleaning on X platform', where the\n> fix is harmless...\n\nYes, the magnitude of possible damage for a patch is considered much\nmore in minor releases.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 15:11:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> I've frozen the docs sources for the Programmer's Guide, and have\n> generated the hardcopy. I'm planning on generating the Tutorial,\n> INSTALL, and HISTORY tonight, and the User's Guide tomorrow.\n> \n> Bruce, are install.sgml and release.sgml finished?\n> \n> I am looking for updates to ref/{lock,set}.sgml for MVCC and related\n> changes, which go into the User's Guide. I'm out of town Saturday Jun\n> 5 01:00 UTC through Monday June 7 5:00 UTC, so will need the updates\n> by tomorrow or I'll need to ask for yet another extension.\n\nSorry, but I'm not able to update lock.sgml now - too many things\nto say about and I'm tired, -:(.\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 04:21:00 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Freezing docs for v6.5"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n>> On Thu, 3 Jun 1999, Tom Lane wrote:\n>>>> Bruce seemed to think that we could just flush the sys caches and\n>>>> relation cache completely during xact abort.\n\n> Could you remember me what's the problem with cache?\n\nThe reported problem was that if a new relation is created, and then\nthe transaction is aborted, the SysCache entry for the new relation's\npg_class entry doesn't get removed. For example:\n\ntest=> create table bug1 (f1 int28 primary key);\nERROR: Can't find a default operator class for type 22.\n-- That's expected, since we have no index support for int28. But now:\ntest=> create table bug1 (f1 int28);\nERROR: Relation 'bug1' already exists\n\nThe second try fails because it finds an entry for 'bug1' in the\nRELNAME SysCache, which was made before the create-index step of\nCREATE TABLE failed. That entry should not be there anymore.\n\nI suspect that this is an instance of a generic problem with *all*\nthe SysCache tables, and perhaps the relcache as well: there is no\nmechanism to ensure that the caches stay in sync with the underlying\nrelation during an abort. So there could be all kinds of weird\nmisbehavior following an error, if the transaction added or modified\na SysCache entry before failing.\n\nBruce has a related problem for temp tables: he needs to make sure that\ntheir entries in these caches go away at end of transaction. (BTW, what\nmakes that happen if the transaction is aborted rather than committed?)\n\nThere is probably a better way to fix it than the brute force \"flush the\nwhole cache\" method --- for example, how do cache entries get deleted\nnormally, if the underlying relation entry is deleted? Maybe that\nmechanism could be used. But I doubt we have time to do anything fancy\nfor 6.5.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 17:23:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5 "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Vadim Mikheev <[email protected]> writes:\n> >> On Thu, 3 Jun 1999, Tom Lane wrote:\n> >>>> Bruce seemed to think that we could just flush the sys caches and\n> >>>> relation cache completely during xact abort.\n> \n> > Could you remember me what's the problem with cache?\n> \n> The reported problem was that if a new relation is created, and then\n> the transaction is aborted, the SysCache entry for the new relation's\n> pg_class entry doesn't get removed. For example:\n> \n> test=> create table bug1 (f1 int28 primary key);\n> ERROR: Can't find a default operator class for type 22.\n> -- That's expected, since we have no index support for int28. But now:\n> test=> create table bug1 (f1 int28);\n> ERROR: Relation 'bug1' already exists\n> \n> The second try fails because it finds an entry for 'bug1' in the\n> RELNAME SysCache, which was made before the create-index step of\n> CREATE TABLE failed. That entry should not be there anymore.\n\nNote this:\n\nvac=> begin;\nBEGIN\nvac=> create table bug1 (f1 int28);\nCREATE\nvac=> abort;\nABORT\nvac=> create table bug1 (f1 int28);\nCREATE\n\nI would leave this in 6.5 as is.\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 09:53:54 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "> Vadim Mikheev <[email protected]> writes:\n> >> On Thu, 3 Jun 1999, Tom Lane wrote:\n> >>>> Bruce seemed to think that we could just flush the sys caches and\n> >>>> relation cache completely during xact abort.\n> \n> > Could you remember me what's the problem with cache?\n> \n> The reported problem was that if a new relation is created, and then\n> the transaction is aborted, the SysCache entry for the new relation's\n> pg_class entry doesn't get removed. For example:\n> \n> test=> create table bug1 (f1 int28 primary key);\n> ERROR: Can't find a default operator class for type 22.\n> -- That's expected, since we have no index support for int28. But now:\n> test=> create table bug1 (f1 int28);\n> ERROR: Relation 'bug1' already exists\n> \n> The second try fails because it finds an entry for 'bug1' in the\n> RELNAME SysCache, which was made before the create-index step of\n> CREATE TABLE failed. That entry should not be there anymore.\n\nYou know, I wonder if this is somewhat new. The older code did more\nsequential scans of the system tables. We now do almost everything\nthrough the syscache.\n\n> \n> I suspect that this is an instance of a generic problem with *all*\n> the SysCache tables, and perhaps the relcache as well: there is no\n> mechanism to ensure that the caches stay in sync with the underlying\n> relation during an abort. So there could be all kinds of weird\n> misbehavior following an error, if the transaction added or modified\n> a SysCache entry before failing.\n> \n> Bruce has a related problem for temp tables: he needs to make sure that\n> their entries in these caches go away at end of transaction. (BTW, what\n> makes that happen if the transaction is aborted rather than committed?)\n\nNo, that is not the problem. 
If there exists a non-temp table with the\nsame name as the new temp table I am creating, I want the non-temp table\nout of the system cache so my new table is seen on the next cache\nlookup.\n\n> There is probably a better way to fix it than the brute force \"flush the\n> whole cache\" method --- for example, how do cache entries get deleted\n> normally, if the underlying relation entry is deleted? Maybe that\n> mechanism could be used. But I doubt we have time to do anything fancy\n> for 6.5.\n\nEven if we knew how to do that, and I don't(though I tried), we still\nhave to have some way of knowing _which_ cache entries are invalidated\nby the transaction rollback. One idea was to mark the cache entries\nwith a transaction id of lookup, and remove those entries that are part\nof an invalid transaction.\n\nOther backends don't see the rows until they are committed, but we do\nsee them because they are part of our own transaction.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 21:59:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "> Sorry, but I'm not able to update lock.sgml now - too many things\n> to say about and I'm tired, -:(.\n\nOK. Bruce has made some additions already, and I'll do the User's\nGuide (which contains ref/lock.sgml and ref/set.sgml) as the last doc.\nBe sure to get the latest copies before making more changes.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 04 Jun 1999 02:08:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Freezing docs for v6.5"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Note this:\n> vac=> begin;\n> BEGIN\n> vac=> create table bug1 (f1 int28);\n> CREATE\n> vac=> abort;\n> ABORT\n> vac=> create table bug1 (f1 int28);\n> CREATE\n\nThat's not a very interesting case, because (AFAICS) there is nothing\nthat will cause the pg_class row for bug1 to get loaded into SysCache\nduring the transaction. So, no problem.\n\nI tried the obvious extension:\n\nplay=> begin;\nBEGIN\nplay=> create table bug1 (f1 int);\nCREATE\nplay=> create index bug1i on bug1(f1);\nCREATE\nplay=> abort;\nABORT\nplay=> create table bug1 (f1 int);\nCREATE\n\nHmm ... that's interesting, why does that work? My guess is that the\nCommandCounterIncrement() after the CREATE TABLE causes the SI code\nto take responsibility for bug1's pg_class row even though it's not\ntruly committed. However,\n\nplay=> begin;\nBEGIN\nplay=> create table bug1 (f1 int28);\nCREATE\nplay=> create index bug1i on bug1(f1);\nERROR: Can't find a default operator class for type 22.\nplay=> abort;\nABORT\nplay=> create table bug1 (f1 int28);\nERROR: bug1 relation already exists\n\nI really do not understand why this last fails when the prior\nexample works.\n\nHowever, I've already committed a fix, and all of these\nexamples now work fine with 6.5 ;-). So I'm not inclined to\nspend more time on the issue right now ... other bugs beckon.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 23:10:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5 "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hmm ... that's interesting, why does that work? My guess is that the\n> CommandCounterIncrement() after the CREATE TABLE causes the SI code\n ^^^^^^^^^^^^^^^^^^^^^^^^^\nIt's called in the case of \ncreate table bug1 (f1 int28 primary key);\ntoo (after CREATE TABLE and before CREATE INDEX)...\n\n> to take responsibility for bug1's pg_class row even though it's not\n> truly committed. However,\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 11:19:00 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5"
},
{
"msg_contents": "I wrote:\n>> I suspect that this is an instance of a generic problem with *all*\n>> the SysCache tables, and perhaps the relcache as well: there is no\n>> mechanism to ensure that the caches stay in sync with the underlying\n>> relation during an abort.\n\nActually, there is such a mechanism: I find that the \"shared\ninvalidation\" manager has the right sorts of hooks into the SysCache\nstuff. It appears that once a tuple has been committed, the SI code\nwill ensure that it gets flushed from all the backends' caches if it\nis modified. The problem comes up when a backend creates a tuple,\ncauses it to be loaded into SysCache, and then aborts, all within\none transaction. The SI code doesn't handle that case, for reasons\nthat are not clear to me.\n\nBruce Momjian <[email protected]> writes:\n> Other backends don't see the rows until they are committed, but we do\n> see them because they are part of our own transaction.\n\nYes, this seems to be a key part of the problem.\n\nThe fix I just committed seems to cure the known cases, but it is\nnot elegant. I now think that the *real* problem is somewhere in\nthe sinval code. But I'm not inclined to try to solve it for 6.5.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 23:19:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5 "
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Tom Lane wrote:\n>> Hmm ... that's interesting, why does that work? My guess is that the\n>> CommandCounterIncrement() after the CREATE TABLE causes the SI code\n> ^^^^^^^^^^^^^^^^^^^^^^^^^\n> It's called in the case of \n> create table bug1 (f1 int28 primary key);\n> too (after CREATE TABLE and before CREATE INDEX)...\n\nYeah, I know --- that's why I'm confused about why the one case works\nand the other doesn't...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 23:43:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Freezing docs for v6.5 "
}
] |
[
{
"msg_contents": "I've been chasing Chris Bitmead's coredump report from earlier today.\nI find that it can be reproduced very easily. For example:\nregression=> select f1 from int4_tbl group by f1;\n< no problem >\nregression=> select f1 from int4_tbl* group by f1;\n< core dump >\n\n(You may get unstable behavior rather than a reliable core dump\nif you are not configured --enable-cassert.)\n\nThe problem seems to be in optimizer/plan/planner.c, which is\nresponsible for creating the Sort and Group plan nodes needed to\nimplement GROUP BY. It also has to mark the lower plan's targetlist\nitems with resdom->reskey numbers so that the executor will know which\nitems to use for sort keys (cf. FormSortKeys in executor/nodeSort.c).\nThe trouble is that that latter marking is done in planner.c's\nmake_subplanTargetList(), which *is never invoked* for a query that\ninvolves inheritance. union_planner() only calls it if the given plan\ninvolves neither UNION nor inheritance. In the UNION case, recursion\ninto union_planner does the right thing, but not so in the inheritance\ncase.\n\nI rewrote some of this code a couple months ago, but I find that 6.4.2\nhas similar problems, so at least I can say I didn't break it ;-).\n\nIt seems clear that at least some of the processing that union_planner\ndoes in the simple case (the \"else\" part of its first big if-then-else)\nalso needs to be done in the inheritance case (and perhaps also in\nthe UNION case?). But I'm not sure exactly what. There's a lot going\non in this chunk of code, and I don't understand very much of it.\nI could really use some advice...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jun 1999 22:40:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "inherited GROUP BY is busted ... I need some help here"
},
{
"msg_contents": "I wrote:\n> I've been chasing Chris Bitmead's coredump report from earlier today.\n> I find that it can be reproduced very easily. For example:\n> regression=> select f1 from int4_tbl group by f1;\n> < no problem >\n> regression=> select f1 from int4_tbl* group by f1;\n> < core dump >\n\nWe had tentatively agreed not to fix this for 6.5, but I got more\nworried about it when I noticed that this particular simple case\nworked in 6.4.2. I don't like regressing ... so I dug into it and\nhave just committed a fix.\n\nIt turns out that pretty much *anything* involving grouping or\naggregation would fail if the query used inheritance, because the\nnecessary preprocessing wasn't getting done in that case. I think\nthat's a big enough bug to justify fixing at this late date. (Besides,\nthe fixes do not change the non-inheritance case, so I don't think\nI could have broken anything that was working...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 Jun 1999 13:53:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] inherited GROUP BY is busted ... I need some help here "
},
{
"msg_contents": "\nTom, I can dig into this with you, if it is not already fixed.\n\n\n\n> I've been chasing Chris Bitmead's coredump report from earlier today.\n> I find that it can be reproduced very easily. For example:\n> regression=> select f1 from int4_tbl group by f1;\n> < no problem >\n> regression=> select f1 from int4_tbl* group by f1;\n> < core dump >\n> \n> (You may get unstable behavior rather than a reliable core dump\n> if you are not configured --enable-cassert.)\n> \n> The problem seems to be in optimizer/plan/planner.c, which is\n> responsible for creating the Sort and Group plan nodes needed to\n> implement GROUP BY. It also has to mark the lower plan's targetlist\n> items with resdom->reskey numbers so that the executor will know which\n> items to use for sort keys (cf. FormSortKeys in executor/nodeSort.c).\n> The trouble is that that latter marking is done in planner.c's\n> make_subplanTargetList(), which *is never invoked* for a query that\n> involves inheritance. union_planner() only calls it if the given plan\n> involves neither UNION nor inheritance. In the UNION case, recursion\n> into union_planner does the right thing, but not so in the inheritance\n> case.\n> \n> I rewrote some of this code a couple months ago, but I find that 6.4.2\n> has similar problems, so at least I can say I didn't break it ;-).\n> \n> It seems clear that at least some of the processing that union_planner\n> does in the simple case (the \"else\" part of its first big if-then-else)\n> also needs to be done in the inheritance case (and perhaps also in\n> the UNION case?). But I'm not sure exactly what. There's a lot going\n> on in this chunk of code, and I don't understand very much of it.\n> I could really use some advice...\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 19:53:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] inherited GROUP BY is busted ... I need some help here"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, I can dig into this with you, if it is not already fixed.\n\nIt's at least partially fixed: the given test case does not coredump.\nI think there are still problems with more complex combinations of\ninheritance, UNION, and GROUP BY, however.\n\nI have some more changes in that area that I do not want to risk\ncommitting into 6.5.* ... after we split the tree I will commit them\nand then we can see how well things work...\n\n\t\t\tregards, tom lane\n\n>> I've been chasing Chris Bitmead's coredump report from earlier today.\n>> I find that it can be reproduced very easily. For example:\n>> regression=> select f1 from int4_tbl group by f1;\n>> < no problem >\n>> regression=> select f1 from int4_tbl* group by f1;\n>> < core dump >\n>> \n>> (You may get unstable behavior rather than a reliable core dump\n>> if you are not configured --enable-cassert.)\n>> \n>> The problem seems to be in optimizer/plan/planner.c, which is\n>> responsible for creating the Sort and Group plan nodes needed to\n>> implement GROUP BY. It also has to mark the lower plan's targetlist\n>> items with resdom->reskey numbers so that the executor will know which\n>> items to use for sort keys (cf. FormSortKeys in executor/nodeSort.c).\n>> The trouble is that that latter marking is done in planner.c's\n>> make_subplanTargetList(), which *is never invoked* for a query that\n>> involves inheritance. union_planner() only calls it if the given plan\n>> involves neither UNION nor inheritance. 
In the UNION case, recursion\n>> into union_planner does the right thing, but not so in the inheritance\n>> case.\n>> \n>> I rewrote some of this code a couple months ago, but I find that 6.4.2\n>> has similar problems, so at least I can say I didn't break it ;-).\n>> \n>> It seems clear that at least some of the processing that union_planner\n>> does in the simple case (the \"else\" part of its first big if-then-else)\n>> also needs to be done in the inheritance case (and perhaps also in\n>> the UNION case?). But I'm not sure exactly what. There's a lot going\n>> on in this chunk of code, and I don't understand very much of it.\n>> I could really use some advice...\n>> \n>> regards, tom lane\n",
"msg_date": "Wed, 07 Jul 1999 20:00:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] inherited GROUP BY is busted ... I need some help here "
},
{
"msg_contents": "Tom, is this still an open item?\n\n\n> I've been chasing Chris Bitmead's coredump report from earlier today.\n> I find that it can be reproduced very easily. For example:\n> regression=> select f1 from int4_tbl group by f1;\n> < no problem >\n> regression=> select f1 from int4_tbl* group by f1;\n> < core dump >\n> \n> (You may get unstable behavior rather than a reliable core dump\n> if you are not configured --enable-cassert.)\n> \n> The problem seems to be in optimizer/plan/planner.c, which is\n> responsible for creating the Sort and Group plan nodes needed to\n> implement GROUP BY. It also has to mark the lower plan's targetlist\n> items with resdom->reskey numbers so that the executor will know which\n> items to use for sort keys (cf. FormSortKeys in executor/nodeSort.c).\n> The trouble is that that latter marking is done in planner.c's\n> make_subplanTargetList(), which *is never invoked* for a query that\n> involves inheritance. union_planner() only calls it if the given plan\n> involves neither UNION nor inheritance. In the UNION case, recursion\n> into union_planner does the right thing, but not so in the inheritance\n> case.\n> \n> I rewrote some of this code a couple months ago, but I find that 6.4.2\n> has similar problems, so at least I can say I didn't break it ;-).\n> \n> It seems clear that at least some of the processing that union_planner\n> does in the simple case (the \"else\" part of its first big if-then-else)\n> also needs to be done in the inheritance case (and perhaps also in\n> the UNION case?). But I'm not sure exactly what. There's a lot going\n> on in this chunk of code, and I don't understand very much of it.\n> I could really use some advice...\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Sep 1999 15:46:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] inherited GROUP BY is busted ... I need some help here"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, is this still an open item?\n\nThat particular coredump seems to be fixed. There might be some other\nproblems lurking with inherited queries, but this thread can be\nwritten off I think...\n\n>> I've been chasing Chris Bitmead's coredump report from earlier today.\n>> I find that it can be reproduced very easily. For example:\n>> regression=> select f1 from int4_tbl group by f1;\n>> < no problem >\n>> regression=> select f1 from int4_tbl* group by f1;\n>> < core dump >\n>> \n>> (You may get unstable behavior rather than a reliable core dump\n>> if you are not configured --enable-cassert.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Sep 1999 21:19:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] inherited GROUP BY is busted ... I need some help here "
}
] |
[
{
"msg_contents": "Jan Wieck writes (over in pgsql-sql):\n> * WE STILL NEED THE GENERAL TUPLE SPLIT CAPABILITY!!! *\n\nI've been thinking about making this post for a while ... with 6.5\nalmost out the door, I guess now is a good time.\n\nI don't know what people have had in mind for 6.6, but I propose that\nthere ought to be three primary objectives for our next release:\n\n1. Eliminate arbitrary restrictions on tuple size.\n\n2. Eliminate arbitrary restrictions on query size (textual\n length/complexity that is).\n\n3. Cure within-statement memory leaks, so that processing large numbers\n of tuples in one statement is reliable.\n\nAll of these are fairly major projects, and it might be that we get\nlittle or nothing else done if we take these on. But these are the\nproblems we've been hearing about over and over and over. I think\nfixing these would do more to improve Postgres than almost any other\nwork we might do.\n\nComments? Does anyone have a different list of pet peeves? Is there\nany chance of getting everyone to subscribe to a master plan like this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 13:32:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Priorities for 6.6"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I don't know what people have had in mind for 6.6, but I propose that\n> there ought to be three primary objectives for our next release:\n> \n> 1. Eliminate arbitrary restrictions on tuple size.\n\nThis is not primary for me -:) \nThough, it's required by PL/pgSQL and so... I agreed that\nthis problem must be resolved in some way. Related TODO items:\n\n* Allow compression of large fields or a compressed field type\n* Allow large text type to use large objects(Peter)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nI like it very much, though I don't like that LO are stored\nin separate files. This is known as \"multi-representation\" feature\nin Illustra.\n\n> 2. Eliminate arbitrary restrictions on query size (textual\n> length/complexity that is).\n\nYes, this is quite annoyning thing.\n\n> 3. Cure within-statement memory leaks, so that processing large numbers\n> of tuples in one statement is reliable.\n\nQuite significant!\n\n> All of these are fairly major projects, and it might be that we get\n> little or nothing else done if we take these on. But these are the\n> problems we've been hearing about over and over and over. I think\n> fixing these would do more to improve Postgres than almost any other\n> work we might do.\n> \n> Comments? Does anyone have a different list of pet peeves? Is there\n> any chance of getting everyone to subscribe to a master plan like this?\n\nNo chance -:))\n\nThis is what I would like to see in 6.6:\n\n1. Referential integrity.\n2. Dirty reads (will be required by 1. if we'll decide to follow\n the way proposed by Jan - using rules, - though there is another\n way I'll talk about later; dirty reads are useful anyway).\n3. Savepoints (they are my primary wish-to-implement thing).\n4. elog(ERROR) must return error-codes, not just messages!\n This is very important for non-interactive application...\n in conjuction with 3. -:)\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 02:23:04 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "On Fri, 4 Jun 1999, Vadim Mikheev wrote:\n\n> * Allow compression of large fields or a compressed field type\n\nThis one looks cool...\n\n> > All of these are fairly major projects, and it might be that we get\n> > little or nothing else done if we take these on. But these are the\n> > problems we've been hearing about over and over and over. I think\n> > fixing these would do more to improve Postgres than almost any other\n> > work we might do.\n> > \n> > Comments? Does anyone have a different list of pet peeves? Is there\n> > any chance of getting everyone to subscribe to a master plan like this?\n> \n> No chance -:))\n\nhave to agree with Vadim here...the point that has *always* been stressed\nhere is that if something is important to you, fix it. Don't expect\nanyone else to fall into some sort of \"party line\" or scheduale, cause\nthen ppl lose the enjoyment in what they are doing *shrug*\n\nfor instance, out of the three things you listed, the only one that I'd\nconsider an issue is the third, as I've never hit the first two\nlimitations ...*shrug*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 3 Jun 1999 15:59:28 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "> This is what I would like to see in 6.6:\n> \n> 1. Referential integrity.\n\nBingo. Item #1. Period. End of story. Everything else pales in\ncomparison. We just get too many requests for this, though I think it\nan insignificant feature myself. Jan, I believe you have some ideas on\nthis. (Like an elephant, I never forget.)\n\n\n> 4. elog(ERROR) must return error-codes, not just messages!\n> This is very important for non-interactive application...\n> in conjuction with 3. -:)\n\nAdded to TODO.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 15:10:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "> Jan Wieck writes (over in pgsql-sql):\n> > * WE STILL NEED THE GENERAL TUPLE SPLIT CAPABILITY!!! *\n> \n> I've been thinking about making this post for a while ... with 6.5\n> almost out the door, I guess now is a good time.\n> \n> I don't know what people have had in mind for 6.6, but I propose that\n> there ought to be three primary objectives for our next release:\n> \n> 1. Eliminate arbitrary restrictions on tuple size.\n> \n> 2. Eliminate arbitrary restrictions on query size (textual\n> length/complexity that is).\n> \n> 3. Cure within-statement memory leaks, so that processing large numbers\n> of tuples in one statement is reliable.\n\nI think the other hot item for 6.6 is outer joins.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 15:52:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> I think the other hot item for 6.6 is outer joins.\n\nI would like to have 48 hours in day -:)\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 03:59:14 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > I think the other hot item for 6.6 is outer joins.\n> \n> I would like to have 48 hours in day -:)\n> \n> Vadim\n> \n\nYou and I are off the hook. Jan volunteered for foreign keys, and\nThomas for outer joins. We can relax. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 16:21:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > > I think the other hot item for 6.6 is outer joins.\n> >\n> > I would like to have 48 hours in day -:)\n> >\n> > Vadim\n> >\n> \n> You and I are off the hook. Jan volunteered for foreign keys, and\n> Thomas for outer joins. We can relax. :-)\n\nI volunteered for savepoints -:))\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 04:30:19 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > Bruce Momjian wrote:\n> > > >\n> > > > I think the other hot item for 6.6 is outer joins.\n> > >\n> > > I would like to have 48 hours in day -:)\n> > >\n> > > Vadim\n> > >\n> > \n> > You and I are off the hook. Jan volunteered for foreign keys, and\n> > Thomas for outer joins. We can relax. :-)\n> \n> I volunteered for savepoints -:))\n\nOh.\n\nHey, I thought you were going to sleep?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 16:40:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > > >\n> > > > > I think the other hot item for 6.6 is outer joins.\n> > > >\n> > > > I would like to have 48 hours in day -:)\n> > > >\n> > > > Vadim\n> > > >\n> > >\n> > > You and I are off the hook. Jan volunteered for foreign keys, and\n> > > Thomas for outer joins. We can relax. :-)\n> >\n> > I volunteered for savepoints -:))\n> \n> Oh.\n> \n> Hey, I thought you were going to sleep?\n\nI just try to have at least 25 hours in day :)\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 04:44:11 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "> > > I volunteered for savepoints -:))\n> > \n> > Oh.\n> > \n> > Hey, I thought you were going to sleep?\n> \n> I just try to have at least 25 hours in day :)\n> \n\nJust have some pelmeni and go to sleep.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 16:45:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > > I volunteered for savepoints -:))\n> > >\n> > > Oh.\n> > >\n> > > Hey, I thought you were going to sleep?\n> >\n> > I just try to have at least 25 hours in day :)\n> >\n> \n> Just have some pelmeni and go to sleep.\n\n-:)))\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 04:47:25 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Tom Lane wrote:\n>> 1. Eliminate arbitrary restrictions on tuple size.\n\n> This is not primary for me -:) \n\nFair enough; it's not something I need either. But I see complaints\nabout it constantly on the mailing lists; a lot of people do need it.\n\n> * Allow large text type to use large objects(Peter)\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> I like it very much, though I don't like that LO are stored\n> in separate files.\n\nBut, but ... if we fixed the tuple-size problem then people could stop\nusing large objects at all, and instead just put their data into tuples.\nI hate to see work going into improving LO support when we really ought\nto be phasing out the whole feature --- it's got *so* many conceptual\nand practical problems ...\n\n>> any chance of getting everyone to subscribe to a master plan like this?\n\n> No chance -:))\n\nYeah, I know ;-). But I was hoping to line up enough people so that\nthese things have some chance of getting done. I doubt that any of\nthese projects can be implemented by just one or two people; they all\naffect too much of the code. (For instance, eliminating query-size\nrestrictions will require looking at all of the interface libraries,\npsql, pg_dump, and probably other apps, even though the fixes in\nthe backend should be somewhat localized.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 17:39:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Priorities for 6.6 "
},
{
"msg_contents": "At 05:39 PM 6/3/99 -0400, Tom Lane wrote:\n\n>But, but ... if we fixed the tuple-size problem then people could stop\n>using large objects at all, and instead just put their data into tuples.\n>I hate to see work going into improving LO support when we really ought\n>to be phasing out the whole feature --- it's got *so* many conceptual\n>and practical problems ...\n\nMaking them go away would be a real blessing. Oracle folk\nbitch about CLOBS and BLOBS and the like, too. They're a \npain.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Thu, 03 Jun 1999 15:15:16 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6 "
},
{
"msg_contents": "Don Baccus wrote:\n> \n> At 05:39 PM 6/3/99 -0400, Tom Lane wrote:\n> \n> > * Allow large text type to use large objects(Peter)\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > I like it very much, though I don't like that LO are stored\n> > in separate files. This is known as \"multi-representation\" feature\n> > in Illustra.\n> >\n> >But, but ... if we fixed the tuple-size problem then people could stop\n> >using large objects at all, and instead just put their data into tuples.\n> >I hate to see work going into improving LO support when we really ought\n> >to be phasing out the whole feature --- it's got *so* many conceptual\n> >and practical problems ...\n> \n> Making them go away would be a real blessing. Oracle folk\n> bitch about CLOBS and BLOBS and the like, too. They're a\n> pain.\n\nNote: I told about \"multi-representation\" feature, not just about\nLO/CLOBS/BLOBS support. \"Multi-representation\" means that server\nstores tuple fields sometime inside the main relation file,\nsometime outside of it, but this is hidden from user and so\npeople \"just put their data into tuples\". I think that putting\nbig fields outside of main relation file is very good thing.\nBTW, this approach also allows what you are proposing - why not\nput not too big field (~ 8K or so) to another block of main file?\nBTW, I don't like using LOs as external storage.\n\nImplementation seems easy:\n\nstruct varlena\n{\n int32 vl_len;\n char vl_dat[1];\n};\n\n1. make vl_len uint32;\n2. use vl_len & 0x80000000 as flag that underlying data is\n in another place;\n3. put oid of external \"relation\" (where data is stored),\n blocknumber and item position (something else?) to vl_dat.\n...\n...\n...\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 10:56:04 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "> Implementation seems easy:\n> \n> struct varlena\n> {\n> int32 vl_len;\n> char vl_dat[1];\n> };\n> \n> 1. make vl_len uint32;\n> 2. use vl_len & 0x80000000 as flag that underlying data is\n> in another place;\n> 3. put oid of external \"relation\" (where data is stored),\n> blocknumber and item position (something else?) to vl_dat.\n> ...\n\nYes, it would be very nice to have this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 23:27:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "At 10:56 AM 6/4/99 +0800, Vadim Mikheev wrote:\n\n>Note: I told about \"multi-representation\" feature, not just about\n>LO/CLOBS/BLOBS support. \"Multi-representation\" means that server\n>stores tuple fields sometime inside the main relation file,\n>sometime outside of it, but this is hidden from user and so\n>people \"just put their data into tuples\". I think that putting\n>big fields outside of main relation file is very good thing.\n\nYes, it is, though \"big\" is relative (as computers grow). The\nkey is to hide the details of where things are stored from the\nuser, so the user doesn't really have to know what is \"big\"\n(today) vs. \"small\" (tomorrow or today, for that matter). I\ndon't think it's so much the efficiency hit of having big\nitems stored outside the main relation file, as the need for\nthe user to know what's \"big\" and what's \"small\", that's the\nproblem.\n\nI mean, my background is as a compiler writer for high-level\nlanguages...call me a 1970's idealist if you will, but I\nreally think such things should be hidden from the user.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Thu, 03 Jun 1999 20:58:22 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "At 10:56 AM 6/4/99 +0800, Vadim Mikheev wrote:\n\n>Note: I told about \"multi-representation\" feature, not just about\n>LO/CLOBS/BLOBS support. \"Multi-representation\" means that server\n>stores tuple fields sometime inside the main relation file,\n>sometime outside of it, but this is hidden from user and so\n>people \"just put their data into tuples\".\n\nOK, in my first response I didn't pick up on your generalization,\nbut I did respond with a generalization that implementation \ndetails should be hidden from the user.\n\nWhich is what you're saying.\n\nAs a compiler writer, this is more or less what I devoted my\nlife to 20 years ago...of course, reasonable efficiency is\na pre-condition if you're going to hide details from the\nuser...\n\nI'll back off a bit, though, and say that a lot of DB users\nreally don't need an enterprise engine like Oracle (i.e.\nsomething that requires a suite of $100K/yr DBAs :)\n\nThere's a niche for a solid reliable, rich feature set,\nreasonably well-performing db out there, and this niche\nis ever-growing with the web.\n\nWith $500 web servers sitting on $29.95/mo DSL lines,\nas does mine (http://donb.photo.net/tweeterdom), who\nwants to pay $6K to Oracle? \n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Thu, 03 Jun 1999 21:05:19 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I don't know what people have had in mind for 6.6, but I propose that\n> there ought to be three primary objectives for our next release:\n> \n> 1. Eliminate arbitrary restrictions on tuple size.\n> \n> 2. Eliminate arbitrary restrictions on query size (textual\n> length/complexity that is).\n> \n> 3. Cure within-statement memory leaks, so that processing large numbers\n> of tuples in one statement is reliable.\n\nI would add a few that I think would be important:\n\nA. Add outer joins\n\nB. Add the possibility to prepare statements and then execute them \n with a set of arguments. This already exists in SPI but for many\n C/S apps it would be desirable to have this in the fe/be protocol\n as well\n\nC. Look over the protocol and unify the _binary_ representations of\n datatypes on wire. in fact each type already has two sets of\n in/out conversion functions in its definition tuple, one for disk and\n another for net, it's only that until now they are the same for\n all types and thus probably used wromg in some parts of code.\n\nD. After B. and C., add a possibility to insert binary data\n in \"(small)binary\" field without relying on LOs or expensive\n (4x the size) quoting. Allow any characters in said binary field\n\nE. to make 2. and B., C, D. possible, some more fundamental changes in\n fe/be-protocol may be needed. There seems to be some effort for a new\n fe/be communications mechanism using CORBA. \n But my proposal would be to adopt the X11 protocol which is quite\nlight\n but still very clean, well understood and which can transfer\narbitrary\n data in an efficient way.\n There are even \"low bandwidth\" variants of it for using over\n really slow links. Also some kinds of \"out of band\" provisions exist,\n that are used by window managers.\n It should also be trivial to adapt crypto wrappers/proxies (such as\nthe\n one in ssh)\n The protocol is described in a document available from\nhttp://www.x.org\n\nF. 
As a lousy alternative to 1. fix the LO storage. Currently _all_ of\n the LO files are kept in the same directory as the tables and\nindexes.\n this can bog down the whole database quite fast if one lots of LOs\nand\n a file system that does linear scans on open (like ext2).\n A sheme where LOs are kept in subdirectories based on the hex\n representation of their oids would avoid that (so LO with OID\n0x12345678\n would be stored in $PG_DATA/DBNAME/LO/12/34/56/78.lo or maybe\nreversed\n $PG_DATA/DBNAME/LO/78/56/34/12.lo to distribute them more evenly in\n \"buckets\"\n\n> All of these are fairly major projects, and it might be that we get\n> little or nothing else done if we take these on.\n\nBut then, the other things to do _are_ little compared to these ;)\n\n> But these are the problems we've been hearing about over and over and\n> over.\n\nThe LO thing (and lack of decent full-text indexing) is what has kept me \nusing hybrid solutions where I keep the LO data and home-grown full-text\nindexes in file system outside of the database.\n\n> I think fixing these would do more to improve Postgres than \n> almost any other work we might do.\n\nAmen!\n\n----------------\nHannu\n",
"msg_date": "Fri, 04 Jun 1999 12:10:51 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "On Thu, Jun 03, 1999 at 11:27:14PM -0400, Bruce Momjian wrote:\n> > Implementation seems easy:\n> > \n> > struct varlena\n> > {\n> > int32 vl_len;\n> > char vl_dat[1];\n> > };\n> > \n> > 1. make vl_len uint32;\n> > 2. use vl_len & 0x80000000 as flag that underlying data is\n> > in another place;\n> > 3. put oid of external \"relation\" (where data is stored),\n> > blocknumber and item position (something else?) to vl_dat.\n> > ...\n> \n> Yes, it would be very nice to have this.\n\nI hate to be fussy - normally I am just watching, but could we\n*please* keep any flag like above in another field. That way, when\nthe size of an object reaches 2^31 we will not have legacy problems..\n\nstruct varlena\n{\n size_t vl_len;\n int vl_flags;\n caddr_t vl_dat[1];\n};\n\n(Please:)\n\nRegards,\n-- \nPeter Galbavy\nKnowledge Matters Ltd\nhttp://www.knowledge.com/\n",
"msg_date": "Fri, 4 Jun 1999 12:56:46 +0100",
"msg_from": "Peter Galbavy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Note: I told about \"multi-representation\" feature, not just about\n> LO/CLOBS/BLOBS support. \"Multi-representation\" means that server\n> stores tuple fields sometime inside the main relation file,\n> sometime outside of it, but this is hidden from user and so\n> people \"just put their data into tuples\". I think that putting\n> big fields outside of main relation file is very good thing.\n\nAh, I see what you mean. If you think that is easier than splitting\ntuples, we could go that way. We'd have a limit of about 500 fields in\na tuple (maybe less if the tuple contains \"small\" fields that are not\npushed to another place). That's annoying if the goal is to eliminate\nlimits, but I think it would be unlikely to be a big problem in\npractice.\n\nPerhaps a better way is to imagine these \"pointers to another place\"\nto be just part of the tuple structure on disk, without tying them to\nindividual fields. In other words, the tuple's data is still a string\nof fields, but now you can have that data either right there with the\ntuple header, or pointed to by a list of \"indirect links\" that are\nstored with the tuple header. (Kinda like direct vs indirect blocks in\nUnix filesystem.) You can chop the tuple data into blocks without\nregard for field boundaries if you do it that way. I think that might\nbe better than altering the definition of varlena --- it'd be visible\nonly to the tuple read and write mechanisms, not to everything in the\nexecutor that deals with varlena fields...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jun 1999 10:03:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Priorities for 6.6 "
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> E. to make 2. and B., C, D. possible, some more fundamental changes in\n> fe/be-protocol may be needed. There seems to be some effort for a new\n> fe/be communications mechanism using CORBA. \n> But my proposal would be to adopt the X11 protocol which is quite\n> light but still very clean, well understood and which can transfer\n> arbitrary data in an efficient way.\n\n... but no one uses it for database work. If we're going to go to the\ntrouble of overhauling the fe/be protocol, I think we should adopt\nsomething fairly standard, and that seems to mean CORBA.\n\n> F. As a lousy alternative to 1. fix the LO storage. Currently _all_ of\n> the LO files are kept in the same directory as the tables and\n> indexes. this can bog down the whole database quite fast\n\nYes. I was thinking last night that there's no good reason not to\njust stick all the LOs into a single relation --- or actually two\nrelations, one having a row per LO (which would really just act to tell\nyou what LOs exist, and perhaps store access-privileges info) and one\nthat has a row per LO chunk, with columns LONumber, Offset, Data rather\nthan just Offset and Data as is done now. The existing index on Offset\nwould be replaced by a multi-index on LONumber and Offset. In this\nscheme the LONumbers need not be tied hard-and-fast to OIDs, but could\nactually be anything you wanted, which would be much nicer for\ndump/reload purposes.\n\nHowever, I am loathe to put *any* work into improving LOs, since I think\nthe right answer is to get rid of the need for the durn things by\neliminating the size restrictions on regular tuples.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jun 1999 10:47:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Priorities for 6.6 "
},
{
"msg_contents": "\nOn 04-Jun-99 Tom Lane wrote:\n> However, I am loathe to put *any* work into improving LOs, since I think\n> the right answer is to get rid of the need for the durn things by\n> eliminating the size restrictions on regular tuples.\n\nIs this doable? I just looked at the list of datatypes and didn't see\nbinary as one of them. Imagining a Real Estate database with pictures\nof homes (inside and out), etc. or an employee database with mugshots of\nthe employees, what datatype would you use to store the pictures (short \nof just storing a filename of the pic)?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Fri, 04 Jun 1999 11:49:06 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "Vince Vielhaber <[email protected]> writes:\n> On 04-Jun-99 Tom Lane wrote:\n>> However, I am loathe to put *any* work into improving LOs, since I think\n>> the right answer is to get rid of the need for the durn things by\n>> eliminating the size restrictions on regular tuples.\n\n> Is this doable? I just looked at the list of datatypes and didn't see\n> binary as one of them.\n\nbytea ... even if we didn't have one, inventing it would be trivial.\n(Although I wonder whether pg_dump copes with arbitrary data in fields\nproperly ... I think there are still some issues about COPY protocol\nnot being fully 8-bit-clean...)\n\nAs someone else pointed out, you'd still want an equivalent of\nlo_read/lo_write, but now it would mean fetch or put N bytes at an\noffset of M bytes within the value of field X of tuple Y in some\nrelation. Otherwise field X is pretty much like any other item in the\ndatabase. I suppose it'd only make sense to allow random data to be\nfetched/stored in a bytea field --- other datatypes would want to\nconstrain the data to valid values...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jun 1999 13:14:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Priorities for 6.6 "
},
{
"msg_contents": "> > eliminating the size restrictions on regular tuples.\n> Is this doable?\n\nPresumably we would have to work out a \"chunking\" client/server\nprotocol to allow sending very large tuples. Also, it would need to\nreport the size of the tuple before it shows up, to allow very large\nrows to be caught correctly.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 05 Jun 1999 01:32:13 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>>>> eliminating the size restrictions on regular tuples.\n>> Is this doable?\n\n> Presumably we would have to work out a \"chunking\" client/server\n> protocol to allow sending very large tuples.\n\nI don't really see a need to change the protocol. It's true that\na single tuple containing a couple dozen megabytes (per someone's\nrecent example) would stress the system unpleasantly, but that would\nbe true in a *lot* of ways. Perhaps we should plan on keeping the\nLO feature to allow for really huge objects.\n\nAs far as I've seen, 99% of users are not interested in storing objects\nthat are so large that handling them as single tuples would pose serious\nperformance problems. It's just that a hard limit at 8K (or any other\nparticular small number) is annoying.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 05 Jun 1999 11:38:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Priorities for 6.6 "
},
{
"msg_contents": "> C. Look over the protocol and unify the _binary_ representations of\n> datatypes on wire. in fact each type already has two sets of\n> in/out conversion functions in its definition tuple, one for disk and\n> another for net, it's only that until now they are the same for\n> all types and thus probably used wromg in some parts of code.\n\nAdded to TODO:\n\n\t* remove duplicate type in/out functions for disk and net\n\n> \n> D. After B. and C., add a possibility to insert binary data\n> in \"(small)binary\" field without relying on LOs or expensive\n> (4x the size) quoting. Allow any characters in said binary field\n\nI will add this to the TODO list if you can tell me how does the user\npass this into the backend via a query?\n\n\t* Add non-large-object binary field\n\n\n> F. As a lousy alternative to 1. fix the LO storage. Currently _all_ of\n> the LO files are kept in the same directory as the tables and\n> indexes.\n> this can bog down the whole database quite fast if one lots of LOs\n> and\n> a file system that does linear scans on open (like ext2).\n> A sheme where LOs are kept in subdirectories based on the hex\n> representation of their oids would avoid that (so LO with OID\n> 0x12345678\n> would be stored in $PG_DATA/DBNAME/LO/12/34/56/78.lo or maybe\n> reversed\n> $PG_DATA/DBNAME/LO/78/56/34/12.lo to distribute them more evenly in\n> \"buckets\"\n\nI have already added a TODO item to use hash directories for large\nobjects. Probably single or double-level 256 directory buckets are\nenough:\n\n\t04/4A/file\n\t09/B3/file\n\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 20:07:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
},
{
"msg_contents": "\nOK, question answered, TODO item added:\n\n\t* Add non-large-object binary field\n\n> > Is this doable? I just looked at the list of datatypes and didn't see\n> > binary as one of them.\n> \n> bytea ... even if we didn't have one, inventing it would be trivial.\n> (Although I wonder whether pg_dump copes with arbitrary data in fields\n> properly ... I think there are still some issues about COPY protocol\n> not being fully 8-bit-clean...)\n> \n> As someone else pointed out, you'd still want an equivalent of\n> lo_read/lo_write, but now it would mean fetch or put N bytes at an\n> offset of M bytes within the value of field X of tuple Y in some\n> relation. Otherwise field X is pretty much like any other item in the\n> database. I suppose it'd only make sense to allow random data to be\n> fetched/stored in a bytea field --- other datatypes would want to\n> constrain the data to valid values...\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 20:08:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Priorities for 6.6"
}
]
[
{
"msg_contents": "\nSELECT * FROM test WHERE test IN (SELECT * FROM test) fails with strange error\nFix function pointer calls to take Datum args for char and int2 args(ecgs)\nRegression test for new Numeric type\nLarge Object memory problems\nrefint problems\ninvalidate cache on aborted transaction\nspinlock stuck problem\nbenchmark performance problem\nadd more detail inref/lock.sgml, ref/set.sgml to reflect MVCC & locking changes.\n\nMarkup sql.sgml, Stefan's intro to SQL\nGenerate Admin, User, Programmer hardcopy postscript\nGenerate INSTALL and HISTORY from sgml sources.\n\n\n \n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 17:01:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Open 6.5 items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> SELECT * FROM test WHERE test IN (SELECT * FROM test) fails with strange error\n\nWhat is this one all about? I don't see a problem offhand:\n\nregression=> create table test (test int);\nCREATE\nregression=> SELECT * FROM test WHERE test IN (SELECT * FROM test);\ntest\n----\n(0 rows)\n\nregression=> insert into test values (33);\nINSERT 189449 1\nregression=> SELECT * FROM test WHERE test IN (SELECT * FROM test);\ntest\n----\n 33\n(1 row)\n\n> Fix function pointer calls to take Datum args for char and int2 args(ecgs)\n\nI think the consensus is to leave this alone until we can get more info.\n\n> Regression test for new Numeric type\n\nI think we need this in order to start flushing out any portability\nproblems that may exist in NUMERIC. (The first time I tried to use it\nI found it didn't work on my box, so I'm harboring lingering doubts...)\nJan?\n\n> Large Object memory problems\n\nAs far as I can tell, lo_read/lo_write etc do not leak memory anymore\n(well, maybe they do within a transaction, but it's all cleaned up at\nxact end).\n\nThere is a small leak every time a new LO is created, but I believe this\nis not specific to LOs --- I think it is the same leak in the relcache\nthat occurs on the first reference to a relation of *any* kind. (See\nmy message \"Memory leaks in relcache\" dated 5/15/99.)\n\nIn short, I think this one can be closed out, or at least removed from\nthe 6.5-release-stoppers list.\n\n> refint problems\n\nWhat is the issue here?\n\n> spinlock stuck problem\n\nI think this might be fixed... at least Vadim fixed one cause of it...\n\n> benchmark performance problem\n\nThe only thing I have been able to find out here is that btree is fairly\ninefficient in the presence of *many* equal keys. I do not think this\nis a showstopper, although if I get time I might try to fix the easiest-\nto-fix aspect of it (linear search in bt_firsteq).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 19:05:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "> add more detail inref/lock.sgml, ref/set.sgml to reflect \n> MVCC & locking changes.\n\nDid you just do this, or is more coming from Vadim or yourself?\n\n> Markup sql.sgml, Stefan's intro to SQL\n\nDone (well, sort of. My browser doesn't recognize some of the math\nmarkup, but the hardcopy seems pretty good).\n\n> Generate Admin, User, Programmer hardcopy postscript\n\nProgrammer's Guide done.\n\n> Generate INSTALL and HISTORY from sgml sources.\n\nINSTALL done.\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 04 Jun 1999 02:01:03 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > SELECT * FROM test WHERE test IN (SELECT * FROM test) fails with strange error\n> \n> What is this one all about? I don't see a problem offhand:\n> \n> regression=> create table test (test int);\n> CREATE\n> regression=> SELECT * FROM test WHERE test IN (SELECT * FROM test);\n> test\n> ----\n> (0 rows)\n\nThe issue is the issue:\n\t\n\ttest=> drop table test;\n\tDROP\n\ttest=> create table test(x int);\n\tCREATE\n\ttest=> insert into test values (3);\n\tINSERT 72169 1\n\ttest=> select * from test where test in (select x from test);\n\tNOTICE: unknown node tag 704 in fireRIRonSubselect()\n\tNOTICE: Node is: { IDENT \"test\" }\n\tERROR: ExecEvalExpr: unknown expression type 704\n\ttest=> \n\t\n> > Fix function pointer calls to take Datum args for char and int2 args(ecgs)\n> \n> I think the consensus is to leave this alone until we can get more info.\n\nYes. I will remove it.\n\n> \n> > Large Object memory problems\n> \n> As far as I can tell, lo_read/lo_write etc do not leak memory anymore\n> (well, maybe they do within a transaction, but it's all cleaned up at\n> xact end).\n> \n> There is a small leak every time a new LO is created, but I believe this\n> is not specific to LOs --- I think it is the same leak in the relcache\n> that occurs on the first reference to a relation of *any* kind. (See\n> my message \"Memory leaks in relcache\" dated 5/15/99.)\n> \n> In short, I think this one can be closed out, or at least removed from\n> the 6.5-release-stoppers list.\n\nRemoved.\n\n> > refint problems\n> \n> What is the issue here?\n\nI thought regression tests were showing a problem?\n\n> > spinlock stuck problem\n> \n> I think this might be fixed... at least Vadim fixed one cause of it...\n\nAnyone?\n\n> \n> > benchmark performance problem\n> \n> The only thing I have been able to find out here is that btree is fairly\n> inefficient in the presence of *many* equal keys. 
I do not think this\n> is a showstopper, although if I get time I might try to fix the easiest-\n> to-fix aspect of it (linear search in bt_firsteq).\n\nI will move it to TODO if it is not done for final.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 23:03:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> > add more detail inref/lock.sgml, ref/set.sgml to reflect \n> > MVCC & locking changes.\n> \n> Did you just do this, or is more coming from Vadim or yourself?\n\nI have done this. I want to add something about SERIALIZED/READ\nCOMMITTED. Vadim may want to add something to what I have done. It is\nup to him.\n\n> \n> > Markup sql.sgml, Stefan's intro to SQL\n> \n> Done (well, sort of. My browser doesn't recognize some of the math\n> markup, but the hardcopy seems pretty good).\n\nRemoved.\n\n> \n> > Generate Admin, User, Programmer hardcopy postscript\n> \n> Programmer's Guide done.\n\nProgrammer's removed.\n\n> \n> > Generate INSTALL and HISTORY from sgml sources.\n> \n> INSTALL done.\n> \n\nINSTALL removed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 23:06:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > add more detail inref/lock.sgml, ref/set.sgml to reflect\n> > > MVCC & locking changes.\n> >\n> > Did you just do this, or is more coming from Vadim or yourself?\n> \n> I have done this. I want to add something about SERIALIZED/READ\n> COMMITTED. Vadim may want to add something to what I have done. It is\n> up to him.\n\nI have to rewrite lock.sgml...\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 11:23:31 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "[Charset koi8-r unsupported, filtering to ASCII...]\n> Bruce Momjian wrote:\n> > \n> > > > add more detail inref/lock.sgml, ref/set.sgml to reflect\n> > > > MVCC & locking changes.\n> > >\n> > > Did you just do this, or is more coming from Vadim or yourself?\n> > \n> > I have done this. I want to add something about SERIALIZED/READ\n> > COMMITTED. Vadim may want to add something to what I have done. It is\n> > up to him.\n> \n> I have to rewrite lock.sgml...\n\nI am adding a little about SET TRANSACTION. Feel free to change it.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 23:28:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> [Charset koi8-r unsupported, filtering to ASCII...]\n> > Bruce Momjian wrote:\n> > >\n> > > > > add more detail inref/lock.sgml, ref/set.sgml to reflect\n> > > > > MVCC & locking changes.\n> > > >\n> > > > Did you just do this, or is more coming from Vadim or yourself?\n> > >\n> > > I have done this. I want to add something about SERIALIZED/READ\n> > > COMMITTED. Vadim may want to add something to what I have done. It is\n> > > up to him.\n> >\n> > I have to rewrite lock.sgml...\n> \n> I am adding a little about SET TRANSACTION. Feel free to change it.\n\nBTW, should we describe difference between two isolation levels\nin set.sgml or it's enough to have such desc in User Guide?\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 11:31:34 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "[Charset koi8-r unsupported, filtering to ASCII...]\n> Bruce Momjian wrote:\n> > \n> > > > add more detail inref/lock.sgml, ref/set.sgml to reflect\n> > > > MVCC & locking changes.\n> > >\n> > > Did you just do this, or is more coming from Vadim or yourself?\n> > \n> > I have done this. I want to add something about SERIALIZED/READ\n> > COMMITTED. Vadim may want to add something to what I have done. It is\n> > up to him.\n> \n> I have to rewrite lock.sgml...\n> \n\nI have updated set.sgml and set.l to describe isolation levels. I don't\nunderstand the new lock options very well.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 23:45:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> BTW, should we describe difference between two isolation levels\n> in set.sgml or it's enough to have such desc in User Guide?\n> \n> Vadim\n> \n\n\nI just did this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 23:46:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > BTW, should we describe difference between two isolation levels\n> > in set.sgml or it's enough to have such desc in User Guide?\n> >\n> > Vadim\n> >\n> \n> I just did this.\n\nThanks.\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 11:48:11 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > BTW, should we describe difference between two isolation levels\n> > > in set.sgml or it's enough to have such desc in User Guide?\n> > >\n> > > Vadim\n> > >\n> > \n> > I just did this.\n> \n> Thanks.\n> \n> Vadim\n> \n\n lock [table] classname [[IN] [ROW|ACCESS] [SHARE|EXCLUSIVE] MODE]\n\nI don't understand these options. I would be glad to write something if\nyou can explain it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 23:51:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> \n> lock [table] classname [[IN] [ROW|ACCESS] [SHARE|EXCLUSIVE] MODE]\n ^ ^\nRemove them.\nAlso, there is yet another lock mode:\n\nlock [table] classname IN SHARE ROW EXCLUSIVE MODE\n\n> \n> I don't understand these options. I would be glad to write something if\n> you can explain it.\n\nActually, all lock modes are described in mvcc.sgml\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 12:11:20 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> The issue is the issue:\n\t\n> \ttest=> drop table test;\n> \tDROP\n> \ttest=> create table test(x int);\n> \tCREATE\n> \ttest=> insert into test values (3);\n> \tINSERT 72169 1\n> \ttest=> select * from test where test in (select x from test);\n> \tNOTICE: unknown node tag 704 in fireRIRonSubselect()\n> \tNOTICE: Node is: { IDENT \"test\" }\n> \tERROR: ExecEvalExpr: unknown expression type 704\n\nHmm. Doesn't happen if one does\n\n\tselect * from test where x in (select x from test);\n\nIs there any rational interpretation to the first query? test is not an\navailable column name, so I don't know what it is supposed to mean.\n\nI'm having a hard time seeing this as a showstopper bug... at best it's\nan error that needs to be caught somewhere where a better error message\ncan be given...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jun 1999 00:15:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items "
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > \n> > lock [table] classname [[IN] [ROW|ACCESS] [SHARE|EXCLUSIVE] MODE]\n> ^ ^\n> Remove them.\n> Also, there is yet another lock mode:\n> \n> lock [table] classname IN SHARE ROW EXCLUSIVE MODE\n\nDone.\n\n> \n> > \n> > I don't understand these options. I would be glad to write something if\n> > you can explain it.\n> \n> Actually, all lock modes are described in mvcc.sgml\n\nI will take a look there.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 4 Jun 1999 00:16:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> Is there any rational interpretation to the first query? test is not an\n> available column name, so I don't know what it is supposed to mean.\n> \n\nIt clearly is a meaningless query. Tablenames can not be used in that\ncontext.\n\n> I'm having a hard time seeing this as a showstopper bug... at best it's\n> an error that needs to be caught somewhere where a better error message\n> can be given...\n\nIt isn't. It is just new from 6.4.2. I just move it to TODO if it is\nnot done, unless you want it moved now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 4 Jun 1999 00:18:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > \n> > lock [table] classname [[IN] [ROW|ACCESS] [SHARE|EXCLUSIVE] MODE]\n> ^ ^\n> Remove them.\n> Also, there is yet another lock mode:\n> \n> lock [table] classname IN SHARE ROW EXCLUSIVE MODE\n> \n> > \n> > I don't understand these options. I would be glad to write something if\n> > you can explain it.\n> \n> Actually, all lock modes are described in mvcc.sgml\n\nI read it, and I don't understand the last one:\n\n\tIN SHARE ROW EXCLUSIVE MODE\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 4 Jun 1999 00:33:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Actually, all lock modes are described in mvcc.sgml\n> \n> I read it, and I don't understand the last one:\n> \n> IN SHARE ROW EXCLUSIVE MODE\n\nIt allows update a table only you. If two xacts acquire\nSHARE lock and than both try to update the table then one\nof them will be rolled back due to deadlock condition.\nSHARE ROW EXCLUSIVE mode prevents such deadlock conditions.\nBut in difference from EXCLUSIVE mode it allows concurrent\nSELECT FOR UPDATE, which could be used by other to ensure that\nsome rows will not be updated during his xaction.\n\nAs I already mentioned, our lock modes (except of Access\nShare/Exclusive ones) are the same as in Oracle - I found that\ntheir lock modes are very suitable for MVCC.\n\nLOCK TABLE is not standard statement - so being compatible\nwith this big boy is good, isn't it?\n\nVadim\n",
"msg_date": "Fri, 04 Jun 1999 14:33:46 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "Vadim Mikheev wrote:\n> Actually, all lock modes are described in mvcc.sgml\n\nI have all the files in doc/src/sgml but I don't understand how to\nview these except in raw form. I looked at the Makefile and see\nit requires style sheets ($HDSL) which are not included with\nthe cvs distribution. I assume that these are used to gen html\nfiles, but can find no reference to this.\n\nAm I missing something?\n\n--------\nRegards\nTheo\n",
"msg_date": "Sun, 06 Jun 1999 13:33:51 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> > Actually, all lock modes are described in mvcc.sgml\n> I have all the files in doc/src/sgml but I don't understand how to\n> view these except in raw form. I looked at the Makefile and see\n> it requires style sheets ($HDSL) which are not included with\n> the cvs distribution. I assume that these are used to gen html\n> files, but can find no reference to this.\n> Am I missing something?\n\nThere is an appendix in the v6.4 and v6.5beta which discusses the\ndocumentation and how to generate it. It includes information on\npackages, where to get them, and how to install them.\n\nThere are daily snapshots built of the html, which run to completion\nunless something gets broken in the sgml markup, in which case there\nmay be a failed snapshot build for a day or two until I fix the\nmarkup.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 07 Jun 1999 06:53:54 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
}
]
[
{
"msg_contents": "Hello, \n\n I'am compiling a lot of sofware packages, which requires postgres header\nfiles under /usr/local/pgsql.\n It is possible to provide little script, which will tell, where\ncomponents of postgresql are installed ? \n\n Something like:\n$ pgsql-config --cflags pgsql\n-I/opt/pgsql/include\n\n$ pgsql-config --ldflags pgsql\n-L/opt/pgsql/lib -lpq\n\nSame script for java or perl will help\n('pgsql-config --classpath pgsql-java' will print: --classpath=/opt/pgsql/lib/postgresql.jar).\n\n thanks,\n\np.s.: this isn't my idea, gnome or gtk does this ...\n-- \n* David Sauer, student of Czech Technical University\n* electronic mail: [email protected] (mime compatible)\n",
"msg_date": "03 Jun 1999 23:40:51 +0200",
"msg_from": "David Sauer <[email protected]>",
"msg_from_op": true,
"msg_subject": "idea for compiling"
},
{
"msg_contents": "\nOn 03-Jun-99 David Sauer wrote:\n> Hello, \n> \n> I'am compiling a lot of sofware packages, which requires postgres header\n> files under /usr/local/pgsql.\n> It is possible to provide little script, which will tell, where\n> components of postgresql are installed ? \n\nIMHO, apropriate configure for your programs is the best way\nfrom the other side you can use ENV\n PGLIB/PGDATA \n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Fri, 04 Jun 1999 13:13:41 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] idea for compiling"
}
]
[
{
"msg_contents": "Is there a reason why vacuum won't vacuum large objects? AFAIK they\nare not really different from ordinary relations, and could be vacuumed\nthe same way. If you do a lot of lo_writes to a large object, its file\nsize grows without bound because of invalidated tuples, so it'd sure\nbe nice for LOs to be vacuumable...\n\nTrying to force the issue doesn't work either:\n\nlotest=> vacuum xinv150337;\nNOTICE: Vacuum: can not process index and certain system tables\nVACUUM\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jun 1999 18:04:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuum ignores large objects"
},
{
"msg_contents": "> Is there a reason why vacuum won't vacuum large objects? AFAIK they\n> are not really different from ordinary relations, and could be vacuumed\n> the same way. If you do a lot of lo_writes to a large object, its file\n> size grows without bound because of invalidated tuples, so it'd sure\n> be nice for LOs to be vacuumable...\n> \n> Trying to force the issue doesn't work either:\n> \n> lotest=> vacuum xinv150337;\n> NOTICE: Vacuum: can not process index and certain system tables\n> VACUUM\n\nReally. I thought they were just bit buckets. I didn't realize they\nactually contain transaction id's and versions.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 3 Jun 1999 22:01:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum ignores large objects"
},
{
"msg_contents": "\nAdded to TODO:\n\n o Allow large object vacuuming\n\n\n> Is there a reason why vacuum won't vacuum large objects? AFAIK they\n> are not really different from ordinary relations, and could be vacuumed\n> the same way. If you do a lot of lo_writes to a large object, its file\n> size grows without bound because of invalidated tuples, so it'd sure\n> be nice for LOs to be vacuumable...\n> \n> Trying to force the issue doesn't work either:\n> \n> lotest=> vacuum xinv150337;\n> NOTICE: Vacuum: can not process index and certain system tables\n> VACUUM\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 19:57:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum ignores large objects"
}
]
[
{
"msg_contents": "Bruce Momjian <[email protected]>\n> \n\nWhat's this item? Do we have anything more specific?\n\n> SELECT * FROM test WHERE test IN (SELECT * FROM test) fails with strange error\n\npostgres=> drop table test;\nDROP\npostgres=> create table test ( test int );\nCREATE\npostgres=> insert into test values ( 3);\nINSERT 148950 1\npostgres=> insert into test values ( 2);\nINSERT 148951 1\npostgres=> insert into test values ( 1);\nINSERT 148952 1\npostgres=> SELECT * FROM test WHERE test IN (SELECT * FROM test);\ntest\n----\n 3\n 2\n 1\n(3 rows)\n\npostgres=>\n\nLooks OK to me.\n\nKeith.\n\n",
"msg_date": "Thu, 3 Jun 1999 23:30:42 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> Bruce Momjian <[email protected]>\n> > \n> \n> What's this item? Do we have anything more specific?\n> \n> > SELECT * FROM test WHERE test IN (SELECT * FROM test) fails with strange error\n> \n> postgres=> drop table test;\n> DROP\n> postgres=> create table test ( test int );\n\nTry this:\n\n\tcreate table test(x int);\n\n> CREATE\n> postgres=> insert into test values ( 3);\n> INSERT 148950 1\n> postgres=> insert into test values ( 2);\n> INSERT 148951 1\n> postgres=> insert into test values ( 1);\n> INSERT 148952 1\n> postgres=> SELECT * FROM test WHERE test IN (SELECT * FROM test);\n> test\n> ----\n> 3\n> 2\n> 1\n> (3 rows)\n> \n> postgres=>\n> \n> Looks OK to me.\n> \n> Keith.\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 4 Jun 1999 10:15:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open 6.5 items"
}
]
[
{
"msg_contents": "> Please update set.sgml - I failed to understand all these\n> SET TIME ZONE { '<REPLACEABLE CLASS=\"PARAMETER\">\n> now.\n> Vadim\n\nIt's a little late in the game to be playing dumb, Vadim. Most of the\nrest of us can get away with a \"I failed to understand\", but not you\n:))\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 04 Jun 1999 01:51:13 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [COMMITTERS] 'pgsql/doc/src/sgml/ref set.sgml'"
}
]
[
{
"msg_contents": "Dear All,\n\nIt seems to me that there are a bunch of related issues that probably need to be tied together (and forgotten about?):\n\n1. A 'nice' user interface for blobs \n2. Text fields stored as blobs\n3. Naming issues for 'system' tables etc.\n4. pg_dump support for blobs and other 'internal' structures.\n5. Blob storage in files Vs. a 'nicer' storage medium.\n6. The tuple-size problem(?)\n\nPoints (1) & (2) are really the same thing; if you provide a nice interface to blobs: \"select len(blob_field) from ....\" and \"select blob_field from ...\", then any discussion of the messiness associated with blobs will go away. Personally, I would hate to lose the ability to store a blob's data using a series of 'lo_write' calls: one system I work on (not in PG) has blob data as large as 24MB which makes blob_write functionality essential.\n\nPoints (3) & (4) recognize that there are a number issues floating around that relate to the basic inappropriateness of using SQL to reload the data structures of an existing database. I have only used a few commercial DBs, but the ones I have used uniformly have a 'dump' that produces data files in it's own format. There is no question that having pg_dump produce a schema and/or INSERT statements is nice, but a new option needs to be added to allow raw exports, and a new pg_load utility needs to be written. Cross-version compatibility between export formats must also be maintained (obviously).\n\nPoint (5) recognizes that storing 'large' data in the same area that a row is stored in will remove any benefits of clustering, so a method of handling blob data needs to be found, irrespective of whether PG still supports blobs as such. I don't know how PG handles large text fields - some commercial systems allow the user to 'map' specific fields to separate data files.\nThe current system (storing blobs in files) is fine except in so far as it *looks* messy, produces *huge* directories, and is slow for many small blobs (file open/read/close per row).\n\nI don't know anything about the 'tuple-size' problem (point 6), but it may also relate to a solution for storing blob-data (or specific columns) in alternate locations.\n\n\nI hope this is not all static...\n\nPhilip Warner.\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 04 Jun 1999 14:13:43 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Priorities for 6.6 "
}
]
[
{
"msg_contents": "\tHiroshi wrote:\n> Ole Gjerde who provided the patch for current implementation of \n> mdtruncate() sayz.\n> \"First, please reverse my patch to mdtruncate() in md.c as soon as\n> possible. It does not work properly in some cases.\"\n> \n> I also recommend to reverse his patch to mdtruncate().\n> \n> Though we could not shrink segmented relations by old implementation \n> the result by vacuum would never be inconsistent(?).\n> \n> I think we don't have enough time to fix this.\n> \nIf there is no fix for vacuum, I suggest to change the filesize before\nsplitting\nback to just below 2 Gb (2Gb - 8k). Else vacuum will only work for tables\nup to 1 Gb, and it did work up to 2 Gb before.\n\nI am the one who suggested 1 Gb, so I had my eye on this issue.\nI still think 1 Gb is good for various reasons, but only if vacuum works.\n\nAndreas\n",
"msg_date": "Fri, 4 Jun 1999 10:49:24 +0200 ",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "important Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "ZEUGSWETTER Andreas IZ5 wrote:\n> \n> Hiroshi wrote:\n> > Ole Gjerde who provided the patch for current implementation of\n> > mdtruncate() sayz.\n> > \"First, please reverse my patch to mdtruncate() in md.c as soon as\n> > possible. It does not work properly in some cases.\"\n> >\n> > I also recommend to reverse his patch to mdtruncate().\n> >\n> > Though we could not shrink segmented relations by old implementation\n> > the result by vacuum would never be inconsistent(?).\n> >\n> > I think we don't have enough time to fix this.\n> >\n> If there is no fix for vacuum, I suggest to change the filesize before\n> splitting\n> back to just below 2 Gb (2Gb - 8k). Else vacuum will only work for tables\n> up to 1 Gb, and it did work up to 2 Gb before.\n> \n> I am the one who suggested 1 Gb, so I had my eye on this issue.\n> I still think 1 Gb is good for various reasons, but only if vacuum works.\n\nIs this issue addressed by last mdtruncate() changes?\n\nVadim\n",
"msg_date": "Tue, 15 Jun 1999 12:10:17 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: important Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> > I am the one who suggested 1 Gb, so I had my eye on this issue.\n> > I still think 1 Gb is good for various reasons, but only if vacuum works.\n> \n> Is this issue addressed by last mdtruncate() changes?\n> \n\nI think it is fixed. \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 15 Jun 1999 00:58:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: important Re: [HACKERS] Open 6.5 items"
},
{
"msg_contents": "> \tHiroshi wrote:\n> > Ole Gjerde who provided the patch for current implementation of \n> > mdtruncate() sayz.\n> > \"First, please reverse my patch to mdtruncate() in md.c as soon as\n> > possible. It does not work properly in some cases.\"\n> > \n> > I also recommend to reverse his patch to mdtruncate().\n> > \n> > Though we could not shrink segmented relations by old implementation \n> > the result by vacuum would never be inconsistent(?).\n> > \n> > I think we don't have enough time to fix this.\n> > \n> If there is no fix for vacuum, I suggest to change the filesize before\n> splitting\n> back to just below 2 Gb (2Gb - 8k). Else vacuum will only work for tables\n> up to 1 Gb, and it did work up to 2 Gb before.\n> \n> I am the one who suggested 1 Gb, so I had my eye on this issue.\n> I still think 1 Gb is good for various reasons, but only if vacuum works.\n\nThis is where we dropped the ball. We should have made this recommended\nchange before 6.5.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 12:12:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: important Re: [HACKERS] Open 6.5 items"
}
]
[
{
"msg_contents": "Hello.\n\nI'm busy writing an IP accounting DB using nacctd and postgres.\nThe data gets inserted correctly into the database and I can run the select\nqueries that I desire. \n\ne.g. (select sum(size) from iptraff where srcip = \"209.100.30.2\") gives me\nthe total bytes that that particular host sent. Now it would be *REALLY*\ngroovy if I could do the following: (select sum(size) from iptraff where\nscrip = \"209.100.30.0/24\")\nThat would tell me the total traffic for that subnet.\n\n>From what I understand the relevant code resides in network.c ,\nunfortunately I am not a C person :-(\n\nPlease reply to my e-mail addy.\n\nThanks!\nChristopher Griesel\[email protected]\n ---------------\n---FREE THE SOURCE---\n ---------------\n\nDial-up Network Management\nUSKO Enterprise Networks \nhttp://www.usko.co.za\n\n+27 11 800-9300 (TEL)\n+27 11 803-6110 (FAX)\n+27 83 616-5438 (GSM Mobile)\n\n\" This message contains information, which may be privileged and\nconfidential and subject to legal privilege. If you are not the intended\nrecipient, you may not peruse, use, disseminate, distribute or copy this\nmessage. If you have received this message in error, please notify the\nsender immediately by e-mail, facsimile or telephone and return or destroy\nthe original message. Thank you.\"\n\n",
"msg_date": "Fri, 4 Jun 1999 14:17:42 +0200 ",
"msg_from": "Chris Griesel <[email protected]>",
"msg_from_op": true,
"msg_subject": "inet data types & select"
}
]
[
{
"msg_contents": "\nOk, BLOBS may go. But if they stay, can they not be stored\nall in the same directory? With a fixed rule for making \nsubdirectories (i.e. xinv/00 xinv/01 .. where number is last\n8 bits of oid, or some sort of hash) then users can spread\nthe size of the data out over lots of partitions using\nsoftlinks. \n\n-- cary\[email protected]\n\n\n\t\n",
"msg_date": "Fri, 4 Jun 1999 09:38:23 -0400 (EDT)",
"msg_from": "\"Cary O'Brien\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Priorities for 6.6 (Large Objects)"
}
]