[
{
"msg_contents": "\n\n Hi,\n\nI have a question... How PG parser select (build-in) function?\nCheck parser function's arguments datetypes only or check returns value \ndatetype too (for function searching in system cache ..etc)?\n\nI have function to_number(text, text) this function returns numeric \ndatetype. Is possible (effective) write this function for more \ndatatypes than for numeric? The function is always \"to_number(text, text)\",\npossible difference is in a returned datetype. \n\n(The function extract number from formated string and this number \nreturn, but how datetype we want return?)\n\n\t\t\t\t\t\t\tKarel\n\nPS. Sorry if my problem description is a litle mazy..\n\n----------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n-----------------------------------------------------------------------\n\n",
"msg_date": "Thu, 13 Jan 2000 16:24:42 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "How PG parser search (build-in) function?"
},
{
"msg_contents": "Karel Zak - Zakkr <[email protected]> writes:\n> I have a question... How PG parser select (build-in) function?\n> Check parser function's arguments datetypes only or check returns value \n> datetype too (for function searching in system cache ..etc)?\n\nThe argument types have to be different. In general, the parser\ndoesn't *know* what the return type of the function is; it has to\ndiscover that by looking up the function. So you can't have two\nfunctions of the same name unless they differ in number and/or type\nof arguments.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jan 2000 10:50:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How PG parser search (build-in) function? "
},
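A minimal sketch of the resolution rule Tom describes, in modern CREATE FUNCTION syntax (newer than this thread; the function name is made up for illustration):

```sql
-- Two functions may share a name when their argument types differ;
-- the parser resolves a call from the argument types alone.
CREATE FUNCTION add_one(integer) RETURNS integer
    AS 'SELECT $1 + 1' LANGUAGE SQL;
CREATE FUNCTION add_one(text) RETURNS text
    AS 'SELECT $1 || ''1''' LANGUAGE SQL;

SELECT add_one(41);    -- resolves to add_one(integer)
SELECT add_one('x');   -- resolves to add_one(text)

-- A variant differing only in return type is a duplicate, since the
-- parser could never tell the two apart during lookup:
CREATE FUNCTION add_one(integer) RETURNS bigint
    AS 'SELECT ($1 + 1)::bigint' LANGUAGE SQL;
-- ERROR:  function "add_one" already exists with same argument types
```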
{
"msg_contents": "\nOn Thu, 13 Jan 2000, Tom Lane wrote:\n\n> Karel Zak - Zakkr <[email protected]> writes:\n> > I have a question... How PG parser select (build-in) function?\n> > Check parser function's arguments datetypes only or check returns value \n> > datetype too (for function searching in system cache ..etc)?\n> \n> The argument types have to be different. In general, the parser\n> doesn't *know* what the return type of the function is; it has to\n> discover that by looking up the function. So you can't have two\n> functions of the same name unless they differ in number and/or type\n> of arguments.\n\n Well, we must use the numeric version of to_number() only and parser \nmust cast from numeric itself (but IMHO it is not effective convert in \nto_number() string to numeric and in parser convert numeric to other\ndatetype. But if it not possible.. :-(\n\n Thank!\n\n\t\t\t\t\t\tKarel\n\n",
"msg_date": "Thu, 13 Jan 2000 17:04:28 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] How PG parser search (build-in) function? "
}
]
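For the record, Karel's to_number(text, text) did land in PostgreSQL returning numeric, and the cast he resigns himself to is exactly how callers obtain other datatypes; a small sketch:

```sql
-- to_number() always returns numeric; the caller casts the result to
-- whatever datatype is actually wanted.
SELECT to_number('12,454.8-', '99G999D9S');           -- numeric: -12454.8
SELECT to_number('12,454.8-', '99G999D9S')::integer;  -- cast:    -12455
```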
[
{
"msg_contents": "It seems SPI_fnumber can't recognize oid as an attribute for a tuple\ndescription. I performed my tests using contrib/spi/refint.c which uses\nfunction to test the validity of the trigger for check_foreign_key() and\nfor check_primary_key(). I changed refint.c slightly to verify that\nSPI_fnumber was really returning SPI_ERROR_NOATTRIBUTE and indeed it was.\nHere's the exact line I got from the check_foreign_key trigger:\n\nERROR: check_foreign_key: there is no attribute oid in relation product\n\nLooking at src/backend/executor/spi.c, I noticed SPI_fnumber started\ncounting at 0. Is this normal or should it be -1, assuming oid might be\njust before 0?\n\nLet me know what you think,\nMarc\n\n",
"msg_date": "Thu, 13 Jan 2000 23:16:04 +0000 (GMT)",
"msg_from": "admin <[email protected]>",
"msg_from_op": true,
"msg_subject": "SPI_fnumber can't see oid"
},
{
"msg_contents": "admin <[email protected]> writes:\n> ERROR: check_foreign_key: there is no attribute oid in relation product\n\n> Looking at src/backend/executor/spi.c, I noticed SPI_fnumber started\n> counting at 0. Is this normal or should it be -1, assuming oid might be\n> just before 0?\n\nI think the SPI code is deliberately set up to prevent access to\nthe system attributes. Jan Wieck is probably the only one who knows\nexactly why, but I'll bet that something will break if you bypass\nthat restriction :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jan 2000 01:26:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SPI_fnumber can't see oid "
}
]
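The system attributes Tom mentions live at negative attribute numbers (in sources of that era, ctid was -1 and oid was -2, so Marc's guess was close). A sketch of inspecting them from SQL on a modern server, assuming a table named product exists:

```sql
-- System attributes have attnum < 0; ordinary columns start at 1.
SELECT attname, attnum
FROM pg_attribute
WHERE attrelid = 'product'::regclass
  AND attnum < 0
ORDER BY attnum DESC;
-- Expect rows like ctid (-1), xmin, cmin, xmax, cmax, tableoid;
-- the hidden oid column itself exists only in old releases.
```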
[
{
"msg_contents": "Hi, everyone, I like your work very much and hope PostgreSQL can\ngrow into something competitive with Oracle just like Linux vs.\nWindows.\n\nI have background in relational database management system\nresearch and I want to try to be a developer for PostgreSQL.\nRight now I only try to be familiar with your code base. I\nplan to start with a specific function module in the backend.\nI'm thinking of /docs/pgsql/src/backend/executor because\nI want to experiment with some new fast join algorithms.\nMy long term objective is to introduce materialized view\nsubsystem into PostgreSQL. Could anyone tell me if\nthe directory /docs/pgsql/src/backend/executor is the \nright place to start or just give me some general suggestions\nwhich are not in the FAQs? Oh one more thing I want to\nmention is that those join algorithms I want to experiment\nwith may have some special data access paths similar to an index.\n\nFurther if it doesn't bother you much, could someone\nanswer the following question(s) for me? (Sorry if\nsome are already in the docs)\n1. Does postgresql do raw storage device management or it relies\n on file system? My impression is no raw device. If no,\n is it difficult to add it and possibly how?\n2. Do you have standard benchmark results for postgresql?\n I guess not since it only implements a subset of SQL'92.\n What about subset of a benchmark or something repeatable?\n3. Suppose I have added a new two rel. join algorithm, how\n would I proceed to compare the performance of it with \n the exisiting two relation join algorithms under\n different senarios? Are there any existing facilities\n in the current code base for this purpose? Am I right\n that the available join algos implemented are nested loop\n join (including index-based), hash join (which one? hybrid),\n sort-merge join?\n4. Usually a single sequential pass of a large joining relation\n is preferred to random access in large join operation.\n It's mostly because of the current disk access characteristics.\n Is it possible for me to do some benchmarking about this\n using postgresql? What I'm actually asking are the issues about \n how to control the flow of data form disk to buffers,\n how to stop file system interference and how to arrange\n actual data placement on the disk.\n\nSorry again if I'm not clear with my questions. I'd like\nto further explain them if necessary.\n\nthanks for any help\nxun\n\n \n\n",
"msg_date": "Thu, 13 Jan 2000 16:46:52 -0800 (PST)",
"msg_from": "[email protected] (Xun Cheng)",
"msg_from_op": true,
"msg_subject": "[hackers]development suggestion needed"
},
{
"msg_contents": "> I have background in relational database management system\n> research and I want to try to be a developer for PostgreSQL.\n> Right now I only try to be familiar with your code base. I\n> plan to start with a specific function module in the backend.\n> I'm thinking of /docs/pgsql/src/backend/executor because\n> I want to experiment with some new fast join algorithms.\n> My long term objective is to introduce materialized view\n> subsystem into PostgreSQL. Could anyone tell me if\n> the directory /docs/pgsql/src/backend/executor is the \n> right place to start or just give me some general suggestions\n> which are not in the FAQs? Oh one more thing I want to\n> mention is that those join algorithms I want to experiment\n> with may have some special data access paths similar to an index.\n\nGood.\n\n> \n> Further if it doesn't bother you much, could someone\n> answer the following question(s) for me? (Sorry if\n> some are already in the docs)\n> 1. Does postgresql do raw storage device management or it relies\n> on file system? My impression is no raw device. If no,\n> is it difficult to add it and possibly how?\n\nNo, only file system. We don't see much advantage to raw i/o.\n\n> 2. Do you have standard benchmark results for postgresql?\n> I guess not since it only implements a subset of SQL'92.\n> What about subset of a benchmark or something repeatable?\n\nWe do the Wisconsin. I think it is in the source tree.\n\n> 3. Suppose I have added a new two rel. join algorithm, how\n> would I proceed to compare the performance of it with \n> the exisiting two relation join algorithms under\n> different senarios? Are there any existing facilities\n> in the current code base for this purpose? Am I right\n> that the available join algos implemented are nested loop\n> join (including index-based), hash join (which one? hybrid),\n> sort-merge join?\n\nYou can control the join types used with flags to postgres. Very easy.\n\n> 4. Usually a single sequential pass of a large joining relation\n> is preferred to random access in large join operation.\n> It's mostly because of the current disk access characteristics.\n> Is it possible for me to do some benchmarking about this\n> using postgresql? What I'm actually asking are the issues about \n> how to control the flow of data form disk to buffers,\n> how to stop file system interference and how to arrange\n> actual data placement on the disk.\n\nGood idea. We deal with this regularly in deciding to use an index in\nthe optimizer or a sequential scan. Our optimizer is quite good.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 Jan 2000 20:09:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
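A sketch of the control Bruce mentions; in the 2000-era backend these were command-line flags, while a modern server exposes the same thing as session settings (the orders/customers tables here are hypothetical):

```sql
-- Turn off two join methods to steer the planner to the third,
-- then compare plans and timings:
SET enable_nestloop  = off;
SET enable_mergejoin = off;

EXPLAIN ANALYZE
SELECT *
FROM orders o
JOIN customers c ON c.id = o.customer_id;  -- now strongly favors a hash join

RESET enable_nestloop;
RESET enable_mergejoin;
```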
{
"msg_contents": "[email protected] (Xun Cheng) writes:\n> I want to experiment with some new fast join algorithms.\n\nCool. Welcome aboard!\n\n> Could anyone tell me if\n> the directory /docs/pgsql/src/backend/executor is the \n> right place to start\n\nThe executor is only half the problem: you must also teach the\nplanner/optimizer how and when to use the new join type.\n\nHiroshi Inoue has recently been down this path (adding support\nfor TID-based scans), and might be able to give you more specific\nadvice.\n\n> 1. Does postgresql do raw storage device management or it relies\n> on file system? My impression is no raw device. If no,\n> is it difficult to add it and possibly how?\n\nPostgres uses Unix files. We have avoided raw-device access mostly on\ngrounds of portability. To persuade people that such a change should go\ninto the distribution, you'd need to prove that *significantly* better\nperformance is obtained with raw access. I for one don't think it's a\nforegone conclusion; Postgres gets considerable benefit from sitting\natop Unix kernel device schedulers and disk buffer caches.\n\nAs far as the actual implementation goes, the low level access methods\ngo through a \"storage manager\" switch that was intended to allow for\nthe addition of a new storage manager, such as a raw-device manager.\nSo you could get a good deal of stuff working by implementing code that\nparallels md.c/fd.c. The main problem at this point is that there is a\nfair amount of utility code that goes out and does its own manipulation\nof the database file structure. You'd need to clean that up by pushing\nit all down below the storage manager switch (inventing new storage\nmanager calls as needed).\n\n> that the available join algos implemented are nested loop\n> join (including index-based), hash join (which one? hybrid),\n> sort-merge join?\n\nRight. The hash join uses batching if it estimates that the relation\nis too large to fit in memory; is that what you call \"hybrid\"?\n\n> 4. Usually a single sequential pass of a large joining relation\n> is preferred to random access in large join operation.\n> It's mostly because of the current disk access characteristics.\n> Is it possible for me to do some benchmarking about this\n> using postgresql? What I'm actually asking are the issues about \n> how to control the flow of data form disk to buffers,\n> how to stop file system interference and how to arrange\n> actual data placement on the disk.\n\nYou don't get to do either of the latter two unless you write a\nraw-device storage manager --- which'd be a fair amount of work\nfor what might be little return. Are you convinced that that ought\nto be the first thing you work on? I'd be inclined to think about\njoin algorithms in the abstract, without trying to control physical\ndisk placement of the data...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jan 2000 20:23:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed "
},
{
"msg_contents": "At 08:23 PM 1/13/00 -0500, Tom Lane wrote:\n\n>As far as the actual implementation goes, the low level access methods\n>go through a \"storage manager\" switch that was intended to allow for\n>the addition of a new storage manager, such as a raw-device manager.\n>So you could get a good deal of stuff working by implementing code that\n>parallels md.c/fd.c. The main problem at this point is that there is a\n>fair amount of utility code that goes out and does its own manipulation\n>of the database file structure. You'd need to clean that up by pushing\n>it all down below the storage manager switch (inventing new storage\n>manager calls as needed).\n\nThis would need to be done to implement some sort of tablespace-style\nfacility, too, right? I'm off Xun's thread in asking but I've been\nwondering. DBs like Oracle allow you to place tables and indices\nwhereever you like in the filesystem. This is normally done to\ndistribute things across different spindles, and in large, busy\ndatabases makes a significant difference. I've done some experimenting\nmoving index files to a different spindle (using \"ln\" to fool \npostgres, of course) and insertions go measurably faster. Spindles\nare so cheap nowadays :)\n\nI know there's been discussion of letting folks specify where the\nWAL will be placed when it's implemented, for safety's sake - it \nwill also improve performance.\n\n>You don't get to do either of the latter two unless you write a\n>raw-device storage manager\n\nNot within a single filesystem, but scattering things across spindles\ncould be done without a raw-device storage manager :)\n\n(not what he's talking about, but heck, thought I'd raise it)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 13 Jan 2000 17:43:39 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed "
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> This would need to be done to implement some sort of tablespace-style\n> facility, too, right? I'm off Xun's thread in asking but I've been\n> wondering. DBs like Oracle allow you to place tables and indices\n> whereever you like in the filesystem. This is normally done to\n> distribute things across different spindles, and in large, busy\n> databases makes a significant difference. I've done some experimenting\n> moving index files to a different spindle (using \"ln\" to fool \n> postgres, of course) and insertions go measurably faster. Spindles\n> are so cheap nowadays :)\n\nAs you say, you can fake it manually with symbolic links, but that's\na kluge.\n\nThe \"database location\" stuff that Peter and Thomas have been arguing\nabout is intended to allow a single postmaster to control databases that\nare in multiple physical locations --- but there seems to be some debate\nas to whether it works ;-). (I never tried it.) In any case, we don't\ncurrently have any official provision for controlling location at finer\nthan database level. It'd be nice to be able to push individual tables\naround, I suppose.\n\nThis wouldn't require a new storage manager, since presumably you'd\nstill be using the Unix-filesystem storage manager. The trick would be\nto allow a path rather than just a base file name to be specified\nper-relation. I'm not sure if it'd be hard or not. Probably, all the\nsystem tables would have to stay in the database's default directory,\nbut maybe user tables could be given path names without too much\ntrouble...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jan 2000 21:01:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed "
},
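What eventually landed is close to Tom's sketch, except that pg_class records a per-relation tablespace OID rather than a raw path, and system tables do stay in the database's default location. A sketch of looking it up on a modern server (the table name is hypothetical):

```sql
-- Which tablespace does t1 live in?  reltablespace = 0 means the
-- database's default tablespace (the join then returns NULL).
SELECT c.relname, c.reltablespace, t.spcname
FROM pg_class c
LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
WHERE c.relname = 't1';
```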
{
"msg_contents": "At 09:01 PM 1/13/00 -0500, Tom Lane wrote:\n\n>As you say, you can fake it manually with symbolic links, but that's\n>a kluge.\n\nYes, it is. Or worse :)\n>\n>The \"database location\" stuff that Peter and Thomas have been arguing\n>about is intended to allow a single postmaster to control databases that\n>are in multiple physical locations --- but there seems to be some debate\n>as to whether it works ;-). (I never tried it.) In any case, we don't\n>currently have any official provision for controlling location at finer\n>than database level. It'd be nice to be able to push individual tables\n>around, I suppose.\n\nPutting indices on different spindles than the tables is known to\nsignificantly speed up the Ars Digita Community system under load\nwith Oracle. Systems like this, used to back busy web sites, stuff\nthings into tables many times a second. As I mentioned, I've played\naround a bit with postgres using \"ln\" and it does indeed help boost\nthe number of inserts my (paltry, two-spindle) system could sustain.\n\nThe selects that such sites spew forth are handled wonderfully\nby Postgres now, with MVCC and the change that stops the update\nof pg_log after read-only selects.\n\nMy site's still in the experimental stage, being used by a couple\ndozen folks to record bird distribution data in the Pacific NW, so\nI don't personally have real-world data to get a feeling for how\nimportant this might become. Still, Oracle DBA docs talk a lot\nabout it so in some real-world scenarios being able to distribute\ntables and indices on different spindles must pay off.\n\n>\n>This wouldn't require a new storage manager, since presumably you'd\n>still be using the Unix-filesystem storage manager. The trick would be\n>to allow a path rather than just a base file name to be specified\n>per-relation. I'm not sure if it'd be hard or not. Probably, all the\n>system tables would have to stay in the database's default directory,\n>but maybe user tables could be given path names without too much\n>trouble...\n\nI've looked into it, actually, and have reached the same conclusion.\nIncluding the bit about keeping system tables in the database's default\ndirectory.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 13 Jan 2000 18:19:33 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed "
},
{
"msg_contents": "> This would need to be done to implement some sort of tablespace-style\n> facility, too, right? I'm off Xun's thread in asking but I've been\n> wondering. DBs like Oracle allow you to place tables and indices\n> whereever you like in the filesystem. This is normally done to\n> distribute things across different spindles, and in large, busy\n> databases makes a significant difference. I've done some experimenting\n> moving index files to a different spindle (using \"ln\" to fool \n> postgres, of course) and insertions go measurably faster. Spindles\n> are so cheap nowadays :)\n\nWAL will add the oid to the base file name. That may make tablespaces\neasier.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 Jan 2000 21:20:56 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
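Bruce's prediction roughly came true: relations are now stored under a numeric filenode rather than the table name, which is part of what makes relocating them manageable. A sketch (names and OIDs illustrative):

```sql
-- The on-disk file is named by a number, not by the table name:
SELECT relname, relfilenode FROM pg_class WHERE relname = 'url';
-- The file itself lives at base/<database-oid>/<relfilenode>:
SELECT pg_relation_filepath('url');  -- e.g. base/16384/16402
```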
{
"msg_contents": "> As you say, you can fake it manually with symbolic links, but that's\n> a kluge.\n> \n> The \"database location\" stuff that Peter and Thomas have been arguing\n> about is intended to allow a single postmaster to control databases that\n> are in multiple physical locations --- but there seems to be some debate\n> as to whether it works ;-). (I never tried it.) In any case, we don't\n> currently have any official provision for controlling location at finer\n> than database level. It'd be nice to be able to push individual tables\n> around, I suppose.\n> \n> This wouldn't require a new storage manager, since presumably you'd\n> still be using the Unix-filesystem storage manager. The trick would be\n> to allow a path rather than just a base file name to be specified\n> per-relation. I'm not sure if it'd be hard or not. Probably, all the\n> system tables would have to stay in the database's default directory,\n> but maybe user tables could be given path names without too much\n> trouble...\n\nOr we could continue to use symlinks, and just create them ourselves in\nthe backend.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 Jan 2000 21:32:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "> The \"database location\" stuff that Peter and Thomas have been arguing\n> about is intended to allow a single postmaster to control databases that\n> are in multiple physical locations --- but there seems to be some debate\n> as to whether it works ;-). (I never tried it.) In any case, we don't\n> currently have any official provision for controlling location at finer\n> than database level. It'd be nice to be able to push individual tables\n> around, I suppose.\n> \n> This wouldn't require a new storage manager, since presumably you'd\n> still be using the Unix-filesystem storage manager. The trick would be\n> to allow a path rather than just a base file name to be specified\n> per-relation. I'm not sure if it'd be hard or not. Probably, all the\n> system tables would have to stay in the database's default directory,\n> but maybe user tables could be given path names without too much\n> trouble...\n\nThis is possible since PostgreSQL was born unless I misunderstand what\nyou are saying...\n\ntest=> create table \"/tmp/t1\" (i int);\nCREATE\nbash$ ls -l /tmp/t1\n-rw------- 1 postgres postgres 0 Jan 14 11:19 /tmp/t1\n\nEven,\n\ntest=> create table \"../test2/pg_proc\" (i int);\nERROR: cannot create ../test2/pg_proc\n\nThis is not good. Maybe we should prevent to make this kind of table\nnames.\n\nBTW, it would be nice to add a \"table space\" concept to the create\ntable statement.\n\n-- reserve a table space named 'foo' which is physically located under\n-- /pg/myspace. Only PostgreSQL super user can execute this command\n-- to avoid security risks.\ncreate table space foo as '/pg/myspace';\n\n-- create table t1 under /pg/myspace\ncreate table t1 (i int) with table space 'foo';\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 14 Jan 2000 11:42:03 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed "
},
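Tatsuo's sketch is essentially the syntax that shipped years later in PostgreSQL 8.0, with LOCATION in place of AS; a minimal sketch of the commands as they actually landed:

```sql
-- Superuser only; the directory must already exist and be owned by
-- the server's operating-system user.
CREATE TABLESPACE foo LOCATION '/pg/myspace';

-- Create a table there, or move an existing one:
CREATE TABLE t1 (i int) TABLESPACE foo;
ALTER TABLE t2 SET TABLESPACE foo;
```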
{
"msg_contents": "At 11:42 AM 1/14/00 +0900, Tatsuo Ishii wrote:\n\n>This is possible since PostgreSQL was born unless I misunderstand what\n>you are saying...\n>\n>test=> create table \"/tmp/t1\" (i int);\n\nClever...\n\nOf course, when I'm porting over thousands of lines of an Oracle\ndata model and tens of thousands of lines of scripting code that\nrefers to these tables via queries this is a very inconvenient\nway to locate a particular table in a particular place. It involves\nchanging a lot of code...\n\nBesides being somewhat ... baroque? :)\n\n>BTW, it would be nice to add a \"table space\" concept to the create\n>table statement.\n\nI figured you felt that way!\n\n>\n>-- reserve a table space named 'foo' which is physically located under\n>-- /pg/myspace. Only PostgreSQL super user can execute this command\n>-- to avoid security risks.\n>create table space foo as '/pg/myspace';\n>\n>-- create table t1 under /pg/myspace\n>create table t1 (i int) with table space 'foo';\n\nYes, that's the Oracle-ish style of it. Of course, Oracle allows\nall sorts of anal retentive features like allowing a DBA to\nrestrict the size of the tablespace, etc that I personally\ndon't care about...\n\nThough I understand why they're important to some.\n\nOracle tables and indices within a single tablespace all live in\none file (if you're using filesystem rather than raw I/O), so \nthey also provide features which allow you to specify how big\na chunk to allocate per extent (Oracle pre-allocates to avoid\nrunning out of disk space while you're running except in ways\nthat you control, and in hopes of getting contiguous chunks of\ndisk storage because they hope you're using previously empty\ndisks used only for Oracle).\n\nFeatures like this don't fit well with each table/index residing\nin its own file. Personally I don't have any need for them, either,\nbut as Postgres gets more popular (as it will as it continues to\nimprove) it may attract the attention of folks with traditional\nDBA requirements like this.\n\nOf course, that would require a new storage manager, one similar\nin concept to what would be needed to implement raw I/O.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 13 Jan 2000 18:56:39 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed "
},
{
"msg_contents": "> Oracle tables and indices within a single tablespace all live in\n> one file (if you're using filesystem rather than raw I/O), so \n> they also provide features which allow you to specify how big\n> a chunk to allocate per extent (Oracle pre-allocates to avoid\n> running out of disk space while you're running except in ways\n> that you control, and in hopes of getting contiguous chunks of\n> disk storage because they hope you're using previously empty\n> disks used only for Oracle).\n\nAnd with data and index in the same file, you can't split them across\nspindles.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 Jan 2000 22:18:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Oracle tables and indices within a single tablespace all live in\n> > one file (if you're using filesystem rather than raw I/O), so\n> > they also provide features which allow you to specify how big\n> > a chunk to allocate per extent (Oracle pre-allocates to avoid\n> > running out of disk space while you're running except in ways\n> > that you control, and in hopes of getting contiguous chunks of\n> > disk storage because they hope you're using previously empty\n> > disks used only for Oracle).\n> \n> And with data and index in the same file, you can't split them across\n> spindles.\n> \n\nBut you can certainly do that in ORACLE, if you wish. In fact,\nORACLE recommends it:\n\nPlace Data Files for Maximum Performance \n\nTablespace location is determined by the physical location of the\ndata files that constitute that tablespace. Use the hardware\nresources of your computer appropriately.\n\nFor example, if several disk drives are available to store the\ndatabase, it might be helpful to store table data in a tablespace\non one disk drive, and index data in a tablespace on another disk\ndrive. This way, when users query table information, both disk\ndrives can work simultaneously, retrieving table and index data\nat the same time.\n\nMike Mascari\n",
"msg_date": "Thu, 13 Jan 2000 22:43:41 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "On Thu, 13 Jan 2000, Tom Lane wrote:\n\n> Don Baccus <[email protected]> writes:\n> > This would need to be done to implement some sort of tablespace-style\n> > facility, too, right? I'm off Xun's thread in asking but I've been\n> > wondering. DBs like Oracle allow you to place tables and indices\n> > whereever you like in the filesystem. This is normally done to\n> > distribute things across different spindles, and in large, busy\n> > databases makes a significant difference. I've done some experimenting\n> > moving index files to a different spindle (using \"ln\" to fool \n> > postgres, of course) and insertions go measurably faster. Spindles\n> > are so cheap nowadays :)\n> \n> As you say, you can fake it manually with symbolic links, but that's\n> a kluge.\n> \n> The \"database location\" stuff that Peter and Thomas have been arguing\n> about is intended to allow a single postmaster to control databases that\n> are in multiple physical locations --- but there seems to be some debate\n> as to whether it works ;-). (I never tried it.) In any case, we don't\n> currently have any official provision for controlling location at finer\n> than database level. It'd be nice to be able to push individual tables\n> around, I suppose.\n> \n> This wouldn't require a new storage manager, since presumably you'd\n> still be using the Unix-filesystem storage manager. The trick would be\n> to allow a path rather than just a base file name to be specified\n> per-relation. I'm not sure if it'd be hard or not. Probably, all the\n> system tables would have to stay in the database's default directory,\n> but maybe user tables could be given path names without too much\n> trouble...\n\nOkay, I've been thinking about this recently with the whole Udmsearch of\nPostgreSQL. We just put a 9gig drive online to handle this, as well as\nother database related projects, since I wanted alot of room to grow\n(PostgreSQL itself indexed out to something like 1gig, and the lists are\ngrowing) ...\n\nAll the major OSs out there have \"disk management tools\" that allow you to\nbuild \"large file systems\" out of smaller ones... Solaris has DiskSuite,\nFreeBSD has vinum, Linux has ??... why are we looking/investigating adding\na level of complexity to PostgreSQL to handle something that, as far as I\nknow, each of the OSs out there already has a way of dealing with?\n\nSome aren't necessarily mature yet...Solaris's is the only one that I'm\naware of that has a *beautiful* growfs program that allows you to add a\nnew drive to an existing \"pack\" and grow the file system into that new\ndrive while the system is live...but the utilities are there...\n\nI think the major problem that I'm worried about isn't spreading tables\nacross drives, but its when that *one* table grows to the point that its\nabout to overflow my drive...I'd rather add a 9gig drive on, make it an\n18gig file system, and let it continue to grow...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 14 Jan 2000 00:21:23 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed "
},
{
"msg_contents": "On Thu, 13 Jan 2000, Don Baccus wrote:\n\n> My site's still in the experimental stage, being used by a couple\n> dozen folks to record bird distribution data in the Pacific NW, so\n> I don't personally have real-world data to get a feeling for how\n> important this might become. Still, Oracle DBA docs talk a lot\n> about it so in some real-world scenarios being able to distribute\n> tables and indices on different spindles must pay off.\n\nWhat would it take to break the data/base/<database> directory down? To\nsomething like, maybe:\n\ndata/base/<database>/pg_*\n /tables/*\n /indices/*\n\nThen, one could easily mount a drive as /tables and another one as\n/indices ...\n\nWe know the difference between a table and an index, so I wouldn't think\nit would be *too* hard add /tables/ internally to the existing\npath...would it?\n\nYou'd basically have somethign like:\n\nsprintf(\"%s/data/base/%s/tables/%s\", data_dir, database, tablename);\n\nInstead of:\n\nsprintf(\"%s/data/base/%s/%s\", data_dir, database, tablename);\n\nI know, I'm being simplistic here, but...\n\nOr, a different way:\n\nif(table) sprintf(\"%s/data/base/table/%s/%s\", data_dir,database,tablename);\nelse if(index) sprintf(\"%s/data/base/index/%s/%s\", data_dir,database,tablename);\nelse sprintf(\"%s/data/base/sys/%s/%s\", data_dir,database,sysfile);\n\nThis would give you the ability to put all table from all databass onto\none file system, and all indexes onto another, and all system files onto a\nthird...\n\nI don't know, I'm oversimplying and spewing thoughts out\nagain...but...*shrug*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 14 Jan 2000 00:31:30 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Multiple Spindles ( Was: Re: [HACKERS] [hackers]development\n suggestion\n\tneeded )"
},
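Marc's tables-vs-indices split maps directly onto per-object tablespaces as they later shipped; a sketch with one tablespace per spindle (names and paths hypothetical):

```sql
CREATE TABLESPACE table_disk LOCATION '/disk1/pgdata';
CREATE TABLESPACE index_disk LOCATION '/disk2/pgdata';

-- Heap on one spindle, index on the other:
CREATE TABLE url (url_id int, word text) TABLESPACE table_disk;
CREATE INDEX url_word_idx ON url (word) TABLESPACE index_disk;
```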
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tatsuo Ishii\n> \n> \n> BTW, it would be nice to add a \"table space\" concept to the create\n> table statement.\n> \n> -- reserve a table space named 'foo' which is physically located under\n> -- /pg/myspace. Only PostgreSQL super user can execute this command\n> -- to avoid security risks.\n> create table space foo as '/pg/myspace';\n> \n> -- create table t1 under /pg/myspace\n> create table t1 (i int) with table space 'foo';\n> --\n\nI agree with Tatsuo though I prefer\n\tcreate table t1 (i int) tablespace foo;\n.\nIsn't it preferable to encapsulate the table location and storage type ?\n\nAt first,a tablespace would only correspond to a directory and it won't\nbe so difficult to implment. But we would gain a lot with the feature.\n\nIn the future,the tablespace may be changed to mean real(??)\ntablespace. \n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 14 Jan 2000 13:34:45 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] [hackers]development suggestion needed "
},
{
"msg_contents": "> On Thu, 13 Jan 2000, Don Baccus wrote:\n> \n> > My site's still in the experimental stage, being used by a couple\n> > dozen folks to record bird distribution data in the Pacific NW, so\n> > I don't personally have real-world data to get a feeling for how\n> > important this might become. Still, Oracle DBA docs talk a lot\n> > about it so in some real-world scenarios being able to distribute\n> > tables and indices on different spindles must pay off.\n> \n> What would it take to break the data/base/<database> directory down? To\n> something like, maybe:\n> \n> data/base/<database>/pg_*\n> /tables/*\n> /indices/*\n\nAnd put sort and large objects somewhere separate too.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 Jan 2000 23:41:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Spindles ( Was: Re: [HACKERS] [hackers]development\n\tsuggestion needed )"
},
{
"msg_contents": ">I agree with Tatsuo though I prefer\n>\tcreate table t1 (i int) tablespace foo;\n>.\n>Isn't it preferable to encapsulate the table location and storage type ?\n\nAgreed.\n\n>At first,a tablespace would only correspond to a directory and it won't\n>be so difficult to implment. But we would gain a lot with the feature.\n>\n>In the future,the tablespace may be changed to mean real(??)\n>tablespace. \n\nGood point.\n\n> I think the major problem that I'm worried about isn't spreading tables\n> across drives, but its when that *one* table grows to the point that its\n> about to overflow my drive...I'd rather add a 9gig drive on, make it an\n> 18gig file system, and let it continue to grow...\n\nWe could extend the create tablespace command something like:\n\ncreate tablespace foo as '/pg/myspace1 /pg/myspace2 ';\n\nto spread a table (space) among different disk drives. Moreover we\ncould define the \"policy\" to use the tablespace:\n\ncreate tablespace foo as '/pg/myspace1 /pg/myspace2 ' policy roundrobin;\n\nin above case, if the table hits the 1GB limit in /pg/myspace1 then\nnew segment will be created in /pg/myspace2.\n\nJust an idea...\n--\nTatsuo Ishii\n\n",
"msg_date": "Fri, 14 Jan 2000 13:43:49 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed "
},
{
"msg_contents": "At 10:18 PM 1/13/00 -0500, Bruce Momjian wrote:\n>> Oracle tables and indices within a single tablespace all live in\n>> one file (if you're using filesystem rather than raw I/O), so \n>> they also provide features which allow you to specify how big\n>> a chunk to allocate per extent (Oracle pre-allocates to avoid\n>> running out of disk space while you're running except in ways\n>> that you control, and in hopes of getting contiguous chunks of\n>> disk storage because they hope you're using previously empty\n>> disks used only for Oracle).\n>\n>And with data and index in the same file, you can't split them across\n>spindles.\n\nWhich is why folks define more than one tablespace, eh? Something\nyou can't do in PG...\n\nPerhaps I didn't make it clear that you can define as many\ntablespaces as you want?\n\nAnd freely assign any table or index to any tablespace you\nwant?\n\nGiven my statement above, it is clear you can:\n\n1. Coalesce indices and tables into a single tablespace (the\n default) if you only define one (again, more or less how\n Oracle defaults, though I sorta forget because I'm not an\n Oracle stud and no one in their right mind allows Oracle to\n set defaults, because they're always wrong)\n\n-or-\n\n2. At the other extreme, you can define as many tablespaces as \n you have tables and indices, and each can live in their own\n separate tablespace (i.e. spindle, if that is what you want\n to do).\n\n-or-\n\n3. Set yourself up at any point between either extreme, according\n to your own needs.\n\nI don't think it's that difficult to understand, is it?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 13 Jan 2000 20:50:03 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "On Thu, 13 Jan 2000, Bruce Momjian wrote:\n\n> > On Thu, 13 Jan 2000, Don Baccus wrote:\n> > \n> > > My site's still in the experimental stage, being used by a couple\n> > > dozen folks to record bird distribution data in the Pacific NW, so\n> > > I don't personally have real-world data to get a feeling for how\n> > > important this might become. Still, Oracle DBA docs talk a lot\n> > > about it so in some real-world scenarios being able to distribute\n> > > tables and indices on different spindles must pay off.\n> > \n> > What would it take to break the data/base/<database> directory down? To\n> > something like, maybe:\n> > \n> > data/base/<database>/pg_*\n> > /tables/*\n> > /indices/*\n> \n> And put sort and large objects somewhere separate too.\n\nwhy not? by default, one drive, it would make no difference except for\nfile layout, but it would *really* give room to expand...\n\nRight now, the udmsearch database contains (approx):\n\ntables:\n 10528 dict10\n 5088 dict11\n 2608 dict12\n 3232 dict16\n 64336 dict2\n 47960 dict3\n 3096 dict32\n 65952 dict4\n 42944 dict5\n 36384 dict6\n 34792 dict7\n 21008 dict8\n 14120 dict9\n 31912 url\n\nindexs:\n 5216 url_id10\n 2704 url_id11\n 1408 url_id12\n 1648 url_id16\n 36440 url_id2\n 27128 url_id3\n 1032 url_id32\n 37416 url_id4\n 22600 url_id5\n 19096 url_id6\n 18248 url_id7\n 10880 url_id8\n 6920 url_id9\n 6464 word10\n 3256 word11\n 1672 word12\n 2280 word16\n 26344 word2\n 21200 word3\n 2704 word32\n 28720 word4\n 21880 word5\n 19240 word6\n 18464 word7\n 11952 word8\n 8864 word9\n\nif tables/indexs were in different subdirectories, it would be too easy\nfor me, at some point in the future, to take just the tables directory and\nput them on their own dedicated drive, halving the space used on either\ndrive...\n\nI don't know...IMHO, it sounds like the simplist solution that provides\nthe multi-spindle benefits ppl are suggesting...\n\n",
"msg_date": "Fri, 14 Jan 2000 00:59:08 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Spindles ( Was: Re: [HACKERS] [hackers]development\n\tsuggestion needed )"
},
{
"msg_contents": "At 10:43 PM 1/13/00 -0500, Mike Mascari wrote:\n\n>For example, if several disk drives are available to store the\n>database, it might be helpful to store table data in a tablespace\n>on one disk drive, and index data in a tablespace on another disk\n>drive.\n\nMy gosh, imagine - I didn't just make this up! The amazement :)\n\n> This way, when users query table information, both disk\n>drives can work simultaneously, retrieving table and index data\n>at the same time.\n\nOverlapped Disk I/O - I remember that term from way back in my\n(gulp) PDP-8 days, working as a system hacker...\n\nLook - in case there's any doubt, I'm not trying to toast Postgres,\nI'm a fan, interested in getting more involved in the development\nscenario. I raised this issue because Xun raised some \"really big\ndatabase issues which I as a database theorist have an interest in\".\nMy biggest sin if any is to try to paint the horizon, at this point.\nPhilip Greenspun still says that those of us (including employee\n#3 or so of his company, Ben) who are interested in Postgres are \"losers\"\nby definition (Ben no longer works there). Meanwhile, folks like Ben\nand I keep telling folks that Postgres is growing into an ideal \nRDBMS for database-backed websites (you know, that place where all\nmoney is flowing and will continue to do so tomorrow, though don't\nask me about next week? :) And Philip says you're a loser if you\nwon't pay Oracle's license fee. He speaks as a dude badly bitten\nby Illustra, based long ago on a long-dead version of Postgres but\nthe pain not yet forgotten...\n\nThings like the Oracle documentation cited above fall into the class\nof advice to folks running really big - and REALLY BUSY - database\nservers.\n\nSure, hardware (cycles, RAM) fallsin price and as time goes on we\ncan perhaps forget some of the more horrific optimization stuff that\nwas invented to deal with small computer systems of one decade ago.\n\nAs a compiler writer, trust me - I'm familiar with the concept. And\nwith changing pardigms as designs flow from CISC to RISC (oh gosh,\nnot a theoretical advantage but you mean just a cost/performance point\non the transistor-per-chip curve? Damn, I should've patented my \ncynicism 10 years ago!) and back to post-RISC CISC, I'm not about\nto claim theoretical long-term advantages for any point-of-view.\n\nI won't suggest that all of the big-time hacks employed to make old\ncommercial DBs like Oracle are necessary in today's hardware/OS climate\n(raw I/O in particular falls into that category, IMH-tentative-O)\n\nBut, still...as long as we've had movable head disk drives (and my\nfirst disk drive was head-per-track, believe it or not) minimizing\nhead movement has been an optimization goal. I've written complex\nscheduling disk drivers in the past, and they can be good. Still,\nnothing beats coalescing one spindle's I/O into a narrow number of\ntracks, minimizing head movement. That's a constant that hasn't\nchanged for 30 years, and won't change next week.\n\nHeck, it even lowers your downtime due to blown drives.\n\nI might also add that the kind of systems Oracle doc writers were\nthinking of 10 years ago just aren't in the Postgres universe of\npossible DB clients...\n\nBut, it is changing. 
One impact - like it or not - of the good work\nyou folks have done over the past couple of (3 or 4 or I'm not personally\nsure how much) years and the fact that you continue to push the db into\nbeing more and more capable, more and more stable, more and more \nfeature-filled with SQL-92 stuff is that folks being asked to pay\n$25,000 for a fully-paid up license on a PIII500 X86 system (<$2000\nin hardware without even shopping, a greater than 10-1 software to\nhardware ratio) are going to be looking for a cheaper alternative. \n\nOf which you folks are one.\n\nSo, what's the deal, here...is the goal the Big Time or not?\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 13 Jan 2000 21:08:31 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "At 12:21 AM 1/14/00 -0400, The Hermit Hacker wrote:\n\n>All the major OSs out there have \"disk management tools\" that allow you to\n>build \"large file systems\" out of smaller ones... Solaris has DiskSuite,\n>FreeBSD has vinum, Linux has ??... why are we looking/investigating adding\n>a level of complexity to PostgreSQL to handle something that, as far as I\n>know, each of the OSs out there already has a way of dealing with?\n\nIf the resident OS can handle the issue, sure, it is worth investigating.\nLinux today does not (at least, the one I'm running).\n\nOne godliness-over-utility issue is the fact that doing such things in\nthe operating system (\"ln\" also works...) kinda violates the RDBMS ideal\nof having everything about a database, including metadata, stored in\nthe database. \n\nIn the case of \"CREATE TABLESPACE\" having Postgres handle placement\nplaces the burden of operating system specifics where it belongs - on the \nimplementation. This is why we say things like \"integer\" or \"numeric\",\ncome to think of it...\n\nThe word \"portability\" comes to mind, though of course things like\nspindle numbers and the like are extremely variable. \n\n>Some aren't necessarily mature yet...Solaris's is the only one that I'm\n>aware of that has a *beautiful* growfs program that allows you to add a\n>new drive to an existing \"pack\" and grow the file system into that new\n>drive while the system is live...but the utilities are there...\n>\n>I think the major problem that I'm worried about isn't spreading tables\n>across drives, but its when that *one* table grows to the point that its\n>about to overflow my drive...I'd rather add a 9gig drive on, make it an\n>18gig file system, and let it continue to grow...\n\nThese aren't mutally exclusive problems, which is one reason why Oracle\nallows you to control things so minutely. I think they let you spill\na table onto multiple drives, though I'd have to look at one of my\nmanuals hidden in my piles of more interesting things (I'm no Oracle\nexpert, I just read manuals :)\n\nThere is definitely a sort of tension between the operating systems,\nwhich continue to grow in capability such as you're pointing out,\nand commercial systems like Oracle that have to work TODAY regardless\nof where operating systems sit on the capability yardstick.\n\nThus Oracle includes built-in mirroring, while today under Linux\nyou might as well do software RAID 1, or you can buy a hardware\ndevice that does RAID 1 behind your back and looking like a single\ndrive no matter how many platter you stuff it with, etc etc.\n\nSo...you'll never hear me argue that Postgres should include\na mirroring storage manager. There is no longer the need for an \napplication to do it on its own in the supported OS space (hmmm...I'm\nassuming FreeBSD is there, too, right?)\n\nSo maybe the notion of application placement of files on particular\nspindles is becoming obsolete, too. It isn't today on Linux...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 13 Jan 2000 21:21:11 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed "
},
{
"msg_contents": "At 01:34 PM 1/14/00 +0900, Hiroshi Inoue wrote:\n\n>I agree with Tatsuo though I prefer\n>\tcreate table t1 (i int) tablespace foo;\n>.\n>Isn't it preferable to encapsulate the table location and storage type ?\n\nEncapsulation is almost always preferable, but I don't think Tatsuo was\nsaying otherwise, merely pointing out a clever trick that I hadn't\nthough of (I didn't realize that \"/foo\" would be rooted rather than\njust put under PGDATA, in fact I stay away from quoted non-SQL non-\"normal\nlanguage\" identifiers altogether, being somewhat interested in portability\nof my application code).\n\nAnd it is a clever trick...obvious to an insider, but not to me.\n\n>At first,a tablespace would only correspond to a directory and it won't\n>be so difficult to implment. But we would gain a lot with the feature.\n\n>In the future,the tablespace may be changed to mean real(??)\n>tablespace. \n\nI think the decision to keep the mechanism for providing separate\nstorage mangers was a good one (if I understand correctly that there\nwas once consideration of removing it). Even if it is never used in\nmainstream Postgres, some specialty application may use it, one of\nthe potentials that comes from open source software.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 13 Jan 2000 21:25:32 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] [hackers]development suggestion needed "
},
{
"msg_contents": "At 12:31 AM 1/14/00 -0400, The Hermit Hacker wrote:\n\n>This would give you the ability to put all table from all databass onto\n>one file system, and all indexes onto another, and all system files onto a\n>third...\n>\n>I don't know, I'm oversimplying and spewing thoughts out\n>again...but...*shrug*\n\nThis kind of hack would certainly be doable, but I guess the question\narises once again - is PostgreSQL shooting to be the Big Time or not?\n\nI mean, the mere use of words like \"Bronze Support\" and \"Silver Support\"\nremind one of Oracle :)\n\nThis particular issue of the placement of tables and indices pales\nin importance compared to outer joins, for instance. But once all\nthe big things are implemented, folks doing BIG JOBS will look at\nPostgres as being a viable alternative. And they'll then be disappointed\nif they don't have the kind of control over files that they do with\nOracle or other big-time commercial DBs...the folks talking about\nreplication are coming from the same space, though much more ambitiously\n(since a simple tablespace feature could be simple, while I can't\nthink of any simple replication hacks).\n\nI'd like to see Postgres succeed in a big way. I don't see it toppling\nOracle, but heck I can't see why Interbase can't be ground into dust.\nOpen Source, great functionality, maybe B+ on scalability etc (thus not\ntoppling Oracle but equal to most others) ... that's not too ambitious\na goal, is it?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 13 Jan 2000 21:31:41 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Spindles ( Was: Re: [HACKERS]\n\t[hackers]development suggestion needed )"
},
{
"msg_contents": "At 11:41 PM 1/13/00 -0500, Bruce Momjian wrote:\n\n>And put sort and large objects somewhere separate too.\n\nYeah, very very good point! Though toasted or roasted or toasty or\nwhatever large objects (I unsubscribed for a few crucial days when\nI was back east travelling around with my girlfriend) might make\nthe large-object issue less important?\n\nAlso, for sorting, many sites will just load down with RAM, and this\nwill increase rapidly over the next few years (despite current\nSDRAM high prices and the whole RDRAM fiasco). But really really\nbig sites might appreciate such a feature...\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 13 Jan 2000 21:35:23 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Spindles ( Was: Re: [HACKERS]\n\t[hackers]development suggestion needed )"
},
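Bruce's wish to "put sort somewhere separate" exists on modern servers as temp_tablespaces, which spreads temporary files (sort spill files, temporary tables) across the listed tablespaces; a sketch with hypothetical names and paths:

```sql
CREATE TABLESPACE scratch1 LOCATION '/disk3/pgtemp';
CREATE TABLESPACE scratch2 LOCATION '/disk4/pgtemp';

-- Temporary files are distributed across these tablespaces:
SET temp_tablespaces = 'scratch1, scratch2';
```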
{
"msg_contents": "At 12:59 AM 1/14/00 -0400, The Hermit Hacker wrote:\n\n>if tables/indexs were in different subdirectories, it would be too easy\n>for me, at some point in the future, to take just the tables directory and\n>put them on their own dedicated drive, halving the space used on either\n>drive...\n>\n>I don't know...IMHO, it sounds like the simplist solution that provides\n>the multi-spindle benefits ppl are suggesting...\n\nSplitting tables/indexes seems to be the first-order optimization, from\nmy talking to folks who are far more experienced with databases than\nI (I did mention I wrote my first query less than a year ago, didn't\nI?)\n\nStill...encapsulation within the RDBMS itself seems to be in the\nspirit of what RDBMS's are all about...such encapsulation could\nbe expressed in very simple external form and still be useful, but\nI think it should be encapsulated...\n\nAmong other things, if CREATE TABLESPACE were dumped by pg_dump,\nI could move from V7.0 to V8.0 and beyond without having to \nrebuild my distribution structure by hand :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 13 Jan 2000 21:40:23 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Spindles ( Was: Re: [HACKERS]\n\t[hackers]development suggestion needed )"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> [email protected] (Xun Cheng) writes:\n> > I want to experiment with some new fast join algorithms.\n> \n> Cool. Welcome aboard!\n> \n> > Could anyone tell me if\n> > the directory /docs/pgsql/src/backend/executor is the \n> > right place to start\n> \n> The executor is only half the problem: you must also teach the\n> planner/optimizer how and when to use the new join type.\n> \n> Hiroshi Inoue has recently been down this path (adding support\n> for TID-based scans), and might be able to give you more specific\n> advice.\n>\n\nHmm,un(??)fortunately,I didn't have to understand details about join\nto implement Tidscan. It's a basic scan and is used to scan one relation.\n\nI don't know about document well either,sorry. AFAIK,Tom is\ndecidedly superior to me in understanding planner/optimizer. \n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Fri, 14 Jan 2000 14:51:35 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] [hackers]development suggestion needed "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> ...\n> Or we could continue to use symlinks, and just create them ourselves in\n> the backend.\n\nBut you'd still need some built-in understanding about where the table\nis Really Supposed To Be, because you'd need to be able to create and\ndelete the symlinks on the fly when the table grows past a 1-Gb segment\nboundary (or is shrunk back again by vacuum!).\n\nAFAICS, a reasonable solution still requires storing a location path\nfor each table --- so you might as well just use that path directly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jan 2000 00:59:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed "
},
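Editor's note: Tom's point about segment boundaries is easier to see with the file-naming scheme the thread is alluding to, where a table past 1 Gb is split into `name`, `name.1`, `name.2`, and so on. The sketch below is a minimal illustration, not code from the tree; the helper name is made up.

```c
/* Minimal sketch (hypothetical helper) of the segmented-file naming that
 * makes per-table location tracking necessary: segment 0 is the bare
 * relation name, and each further 1-Gb segment gets a ".<n>" suffix. */
#include <stdio.h>

static void
relpath_for_segment(char *buf, size_t buflen,
                    const char *dbdir, const char *relname, int segno)
{
    if (segno == 0)
        snprintf(buf, buflen, "%s/%s", dbdir, relname);
    else
        snprintf(buf, buflen, "%s/%s.%d", dbdir, relname, segno);
}
```

A symlink scheme would therefore have to add or remove one link every time growth or vacuum crosses a segment boundary, which is exactly the bookkeeping Tom is pointing at.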
{
"msg_contents": "Don Baccus wrote:\n\n> I'd like to see Postgres succeed in a big way. I don't see it toppling\n> Oracle, but heck I can't see why Interbase can't be ground into dust.\n> Open Source, great functionality, maybe B+ on scalability etc (thus not\n> toppling Oracle but equal to most others) ... that's not too ambitious\n> a goal, is it?\n> \n\nShoot for the sky you hit the eagle, shoot for the eagle you hit\nthe ground....\n\nMike Mascari\n",
"msg_date": "Fri, 14 Jan 2000 01:00:57 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Spindles ( Was: Re: [HACKERS][hackers]development \n\tsuggestion needed )"
},
{
"msg_contents": "> I'd like to see Postgres succeed in a big way. I don't see it toppling\n> Oracle, but heck I can't see why Interbase can't be ground into dust.\n> Open Source, great functionality, maybe B+ on scalability etc (thus not\n> toppling Oracle but equal to most others) ... that's not too ambitious\n> a goal, is it?\n> \n\nWe don't block people for working on ietms. However, we do try and set\npriorities based on the open items we have. I hope we don't appear\nheavy-handed in this regard.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 14 Jan 2000 01:16:17 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Spindles ( Was: Re: [HACKERS] [hackers]development\n\tsuggestion needed )"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > ...\n> > Or we could continue to use symlinks, and just create them ourselves in\n> > the backend.\n> \n> But you'd still need some built-in understanding about where the table\n> is Really Supposed To Be, because you'd need to be able to create and\n> delete the symlinks on the fly when the table grows past a 1-Gb segment\n> boundary (or is shrunk back again by vacuum!).\n> \n> AFAICS, a reasonable solution still requires storing a location path\n> for each table --- so you might as well just use that path directly.\n\nMakes sense. The only advantage to symlinks is that you could use that\ninformation in places you need it, and for normal access use the\nsymlinks. You wouldn't have to carry around that info as much.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 14 Jan 2000 01:20:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "> Look - in case there's any doubt, I'm not trying to toast Postgres,\n> I'm a fan, interested in getting more involved in the development\n> scenario. I raised this issue because Xun raised some \"really big\n> database issues which I as a database theorist have an interest in\".\n> My biggest sin if any is to try to paint the horizon, at this point.\n> Philip Greenspun still says that those of us (including employee\n> #3 or so of his company, Ben) who are interested in Postgres are \"losers\"\n> by definition (Ben no longer works there). Meanwhile, folks like Ben\n\nI thought Phil was a big fan of ours.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 14 Jan 2000 01:22:29 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": ">You don't get to do either of the latter two unless you write a\n\n> >raw-device storage manager\n>\n> Not within a single filesystem, but scattering things across spindles\n> could be done without a raw-device storage manager :)\n\nYes, but seen how cheap RAID arrays have become? I know disks are getting\nbigger as well, and many people will opt for a single disk, but there may\nbe more urgent things to fix than something for which a hardware solution\nalready exists. And lets face it: a database ought to be on RAID\nanyway,unless somebody wants to write Tandem-style mirrored disks.... ;-)\n\nAdriaan\n\n\n\n\n\n",
"msg_date": "Fri, 14 Jan 2000 07:58:41 +0000",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "On Thu, 13 Jan 2000, Don Baccus wrote:\n\n> Date: Thu, 13 Jan 2000 18:19:33 -0800\n> From: Don Baccus <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Xun Cheng <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] [hackers]development suggestion needed \n> \n...skipped...\n\n> The selects that such sites spew forth are handled wonderfully\n> by Postgres now, with MVCC and the change that stops the update\n> of pg_log after read-only selects.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ \nJust curious,\nDoes plain 6.5.3 handle read-only selects in this way ?\n\n Regards,\n\t\tOleg\n\n> \n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 14 Jan 2000 12:04:32 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed "
},
{
"msg_contents": "Don Baccus writes:\n> At 12:21 AM 1/14/00 -0400, The Hermit Hacker wrote:\n> \n> >All the major OSs out there have \"disk management tools\" that allow you to\n> >build \"large file systems\" out of smaller ones... Solaris has DiskSuite,\n> >FreeBSD has vinum, Linux has ??... why are we looking/investigating adding\n> >a level of complexity to PostgreSQL to handle something that, as far as I\n> >know, each of the OSs out there already has a way of dealing with?\n> \n> If the resident OS can handle the issue, sure, it is worth investigating.\n> Linux today does not (at least, the one I'm running).\n\nLinux has software raid (often called \"md\") with most of the usual\nbells and whistles (RAID0, RAID1, RAID5, RAID0+1, hot spares,\nbackground reconstruction). You can patch in LVM (logical volume\nmanagement) although the distributions of which I am aware don't ship\nit ready-patched. That's the equivalent of Digi^H^H^H^HTru64 UNIX LSM\nand AIX and so on have similar things. Basically, join together\nphysical disk units into logical block devices with additions being\npossible on the fly. If you put an ext2 filesystem on one of those,\nthen you can dynamically resize it with e2resize, although that is not\ncompletely production quality yet and last I heard you could currently\nonly increase the filesystem size on the fly and not decrease it. ISTR\nthe competition tend only to allow increase and not shrink but the\next2 one is designed to allow shrink too. The complexity isn't so much\nin the basics (\"simply\" throw in more block groups and be carefuly\nabout atomicity if the system is live); it's in stuff like making sure\nthat the system is robust against fragmentation, goal allocation needs\ntweaking (I think) and how you gather together admin information about\nwhere all the bits are. If you break apart the separate disks of a\nlive filesystem, it's nice to know where all the bits go.\n\nEven with all that underlying stuff, it's *still* important for higher\nlevel configuration at the database level to be possible. Even if from\nthe theoretical point of view it's all one big page space, it matters\na great deal in practice to be able to spread different bits out over\ndifferent table spaces/volumes/files/block devices/whatever.\n\nI think that means I'm in violent agreement with you on the db side\nbut this reply does give me a chance to point out that Linux isn't\nmissing the functionality you haven't noticed in it :-).\n\n--Malcolm\n\n-- \nMalcolm Beattie <[email protected]>\nUnix Systems Programmer\nOxford University Computing Services\n",
"msg_date": "Fri, 14 Jan 2000 10:35:26 +0000 (GMT)",
"msg_from": "Malcolm Beattie <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "On Thu, 13 Jan 2000, Don Baccus wrote:\n\n> So, what's the deal, here...is the goal the Big Time or not?\n\nIf it means adopting one vendors concept of what the world should look\nlike...not.\n\nI *hate* the way Oracle sets up tablespaces ... where i have to pre-guess\nthe size of my data and allocate space accordingly...what if my table\nnever does reach that critical mass? I've just wasted X meg of space ...\n\nI hate the way that Oracle starts up something like 4 processes for every\ndatabase, when the server is started...\n\nI think the route we've taken since day one, and hope that we continue\nalong it...we've created \"hacks\" along the way to appease users of the\ndifferent vendors for SQL relateed issues, but our underlying structure\nhas stayed, I think, pretty unique...\n\nWe haven't been designing a FreeOracle...we've been working on and\ndesigning a Free, full featured RDBMS that follows the specifications laid\ndown ... with our own PostgreSQLism's thrown in here and there ...\n\nIf that happens to follow what one vendor happens to have done as far as\ntheir implementation, great...but there has been no conscious effort to do\nso that I'm aware of...\n\nJust look at the whole OUTER JOIN issue ... *shrug*\n\nI *like* the fact that we come up with original ideas/solutions to\nproblems, that make use of *existing* technologies ...\n\nI liked the thread about moving indexes and tables to seperate file\nsystems, and hope we can implement something that will make it something\nthat does't require 'ln's, but I definitely don't like Oracle's way of\ndoing things ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 14 Jan 2000 11:05:53 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n>> The selects that such sites spew forth are handled wonderfully\n>> by Postgres now, with MVCC and the change that stops the update\n>> of pg_log after read-only selects.\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ \n> Does plain 6.5.3 handle read-only selects in this way ?\n\nAFAIR that logic is in 6.5.*. (Vadim would remember better, since he\nput it in.) But keep in mind that a SELECT is read-only just to the\nextent that it is hitting previously committed tuples. The first visit\nto a newly committed-good or newly committed-dead tuple will cause an\nupdate and write-back of the tuple's status flags --- whether that visit\nhappens in SELECT or anything else.\n\nIt occurs to me that the no-log-update logic could probably be improved\non. The test to see whether a log update is needed looks at whether any\nbuffers have been written. A SELECT that marks someone else's tuples as\nknown-committed will look like it needs to be committed in pg_log\n... but it doesn't really need it. Perhaps Vadim is planning to fix\nthis in the WAL rewrite.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jan 2000 10:11:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed "
},
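Editor's note: a minimal sketch of the test Tom describes, assuming a dirtied-buffers flag along the lines of what 6.5's xact.c uses; the names here are assumptions from memory, not quoted source.

```c
/* Sketch only: skip the pg_log write when the transaction dirtied no
 * shared buffers at all. Note the subtlety from the thread: a SELECT
 * that sets commit hint bits on someone else's tuples dirties a buffer,
 * so it still takes the TransactionIdCommit() path even though it made
 * no logical change of its own. */
static void
RecordTransactionCommit(void)
{
    TransactionId xid = GetCurrentTransactionId();

    if (SharedBufferChanged)        /* were any buffers written? */
        TransactionIdCommit(xid);   /* mark this xact committed in pg_log */
    /* else: pure read-only transaction, no pg_log traffic */
}
```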
{
"msg_contents": "> I thought Phil was a big fan of ours.\n\nHe wants to be. But ihho we are not yet worthy :))\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 14 Jan 2000 16:00:32 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "At 01:22 AM 1/14/00 -0500, Bruce Momjian wrote:\n>> Look - in case there's any doubt, I'm not trying to toast Postgres,\n>> I'm a fan, interested in getting more involved in the development\n>> scenario. I raised this issue because Xun raised some \"really big\n>> database issues which I as a database theorist have an interest in\".\n>> My biggest sin if any is to try to paint the horizon, at this point.\n>> Philip Greenspun still says that those of us (including employee\n>> #3 or so of his company, Ben) who are interested in Postgres are \"losers\"\n>> by definition (Ben no longer works there). Meanwhile, folks like Ben\n>\n>I thought Phil was a big fan of ours.\n\nHe's moderated his opinion considerably over the past several\nmonths. To some extent you might say he's had his opinion\nmoderated for him. Feel free to extrapolate :)\n\n(not just me, or even primarily me, but folks like Ben Adida who've\n worked with Philip at MIT and then Ars Digita are deeply interested\n in seeing a successful version of the Ars Digita toolkit based on\n Postgres, Ben also coordinates AOLserver releases for Ars Digita/MIT)\n\nStill, as recently as six months ago Philip flamed some English gent\nin public for suggesting an ACS port to Postgres or another free\nor cheap RDBMS, and went on the e-mail the guy nasty notes in private.\nOr so the gent sez.\n\nI know Philip was surprised and impressed by the great leap forward\nembodied by 6.5.\n\nAs was I - I'd given up on 6.4.\n\nBut Philip is mostly concerned with the clients that feed his very\nrapidly growing company, and while he'll release his toolkit sources\nstill tells folks you really need Oracle. His most recent criticism\nof Postgres shrunk to two items (not referential integrity, no outer\njoins), one of which has disappeared in current sources. \n\nApparently he doesn't know how weak large object support is, I won't\ntell him, either...\n\nAnyway, this isn't about Philip's opinions so much as the fact that\nPostgres has had a very spotty reputation in the past, but is improving\nso quickly and predictably that its reputation is also steadily\nimproving. He serves as an example of someone who's convinced that\nPostgres has greatly improved but remains skeptical that it's improved\nenough to do serious work with.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 14 Jan 2000 08:06:24 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "At 01:16 AM 1/14/00 -0500, Bruce Momjian wrote:\n\n>We don't block people for working on ietms. However, we do try and set\n>priorities based on the open items we have. I hope we don't appear\n>heavy-handed in this regard.\n\nNot at all. And though I've triggered this dicussion, I'd be the\nfirst to agree it is minor in importance compared to things like\nouter joins, the WAL implementation, and Jan's large object work.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 14 Jan 2000 08:08:11 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Spindles ( Was: Re: [HACKERS]\n\t[hackers]development suggestion needed )"
},
{
"msg_contents": "At 07:58 AM 1/14/00 +0000, Adriaan Joubert wrote:\n>>You don't get to do either of the latter two unless you write a\n>\n>> >raw-device storage manager\n>>\n>> Not within a single filesystem, but scattering things across spindles\n>> could be done without a raw-device storage manager :)\n>\n>Yes, but seen how cheap RAID arrays have become? I know disks are getting\n>bigger as well, and many people will opt for a single disk, but there may\n>be more urgent things to fix than something for which a hardware solution\n>already exists. And lets face it: a database ought to be on RAID\n>anyway,unless somebody wants to write Tandem-style mirrored disks.... ;-)\n\nDon't need to write Tandem-style mirrored disks, the Linux kernal \nimplements mirrored file systems for me. I can mirror UW2 disks in\nsoftware for $189/spindle (current cost of an IBM Deskstar UW2 7200 RPM\n4.5 GB spindle here in Oregon), the fancier RAID arrays still aren't that\ncheap. \n\nThe cheapest RAID interfaces just hide the mirroring from you. There's\na tier up that take a cluster of mirrored (or RAID 5'd) platters and\npresent them to you as a single large disk - these remove a lot of\none's control over spindle placement, sure. My guess is that some \nfolks don't view this as a plus...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 14 Jan 2000 08:14:44 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "At 12:04 PM 1/14/00 +0300, Oleg Bartunov wrote:\n\n>> The selects that such sites spew forth are handled wonderfully\n>> by Postgres now, with MVCC and the change that stops the update\n>> of pg_log after read-only selects.\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ \n>Just curious,\n>Does plain 6.5.3 handle read-only selects in this way ?\n\nYes.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 14 Jan 2000 08:15:33 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed "
},
{
"msg_contents": "At 10:35 AM 1/14/00 +0000, Malcolm Beattie wrote:\n\n>Linux has software raid (often called \"md\") with most of the usual\n>bells and whistles (RAID0, RAID1, RAID5, RAID0+1, hot spares,\n>background reconstruction).\n\nYes, I know.\n\n> You can patch in LVM (logical volume\n>management) although the distributions of which I am aware don't ship\n>it ready-patched.\n\nRight.\n\n>I think that means I'm in violent agreement with you on the db side\n>but this reply does give me a chance to point out that Linux isn't\n>missing the functionality you haven't noticed in it :-).\n\nLinux also has a journaling filesystem available, if you're brave and\nwanting to be at the bleeding edge. That's sort of like using next\nweek's Postgres development sources in yesterday's production environment,\nthough - a bit risky :)\n\nI should've made it clear that when I was talking about the current,\nwidespread releases that I'm familiar with. RedHat, in particular.\n\nThe RAID stuff is out-of-the-box supported today, fancier stuff\nlike the journaling file-system will be out-of-the-box supported before\ntoo much longer. It's exciting to see Linux improve steadily just as it's\nexciting to see Postgres improve steadily.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 14 Jan 2000 08:22:01 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "At 11:05 AM 1/14/00 -0400, The Hermit Hacker wrote:\n>On Thu, 13 Jan 2000, Don Baccus wrote:\n>\n>> So, what's the deal, here...is the goal the Big Time or not?\n>\n>If it means adopting one vendors concept of what the world should look\n>like...not.\n\n>I *hate* the way Oracle sets up tablespaces ... where i have to pre-guess\n>the size of my data and allocate space accordingly...what if my table\n>never does reach that critical mass? I've just wasted X meg of space ...\n\nAnd I'm not suggesting that anything like this level of (as I described\nit in my previous note) anal retentive control be implemented. Anal\nretentive IT managers won't be happy unless they're paying Oracle\n$25/power unit anyway.\n\nBut being able to spread tables and indices around several spindles\nwould improve scalability. I think the very simple approach that's\nbeen kicked around would work for anyone we care about (me!:)\n\nI mentioned the Oracle details in part because it's not clear to me\nhow much folks here know about Oracle. I don't know all that much,\nonly enough to know that any database that initializes its defaults\nto useless values is a pain in the ass in ways that customers shouldn't\nput up with. I don't understand Oracle's approach, there, seducing you\ninto letting it build a default installation which is then virtually\nuseless.\n\n>We haven't been designing a FreeOracle...\n\nI'm certainly not arguing for this...remember, I did argue against\n\"(+)\" in favor of SQL 92 outer joins :)\n\n>I liked the thread about moving indexes and tables to seperate file\n>systems, and hope we can implement something that will make it something\n>that does't require 'ln's, but I definitely don't like Oracle's way of\n>doing things ...\n\nI agree...I was simply providing a datapoint, not suggesting it was one\nPostgres should emulate.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 14 Jan 2000 08:30:30 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "At 10:11 AM 1/14/00 -0500, Tom Lane wrote:\n>Oleg Bartunov <[email protected]> writes:\n>>> The selects that such sites spew forth are handled wonderfully\n>>> by Postgres now, with MVCC and the change that stops the update\n>>> of pg_log after read-only selects.\n>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ \n>> Does plain 6.5.3 handle read-only selects in this way ?\n>\n>AFAIR that logic is in 6.5.*. (Vadim would remember better, since he\n>put it in.)\n\nIt is. I'd notice right away if it wasn't, the decibel level on my\nlittle database server would go 'way up because it went 'way down when\nI applied the patch to my 6.5 beta. It sits six inches from me so\nI'd know for sure!\n\n>It occurs to me that the no-log-update logic could probably be improved\n>on. The test to see whether a log update is needed looks at whether any\n>buffers have been written. A SELECT that marks someone else's tuples as\n>known-committed will look like it needs to be committed in pg_log\n>... but it doesn't really need it. Perhaps Vadim is planning to fix\n>this in the WAL rewrite.\n\nNo idea if he is or isn't, but the patch is very simple and is based\non whether or not buffers got dirty, not whether or not the select \nitself changed anything, IIRC.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 14 Jan 2000 08:37:22 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> [email protected] (Xun Cheng) writes:\n> > I want to experiment with some new fast join algorithms.\n> \n> Cool. Welcome aboard!\n> \n> > Could anyone tell me if\n> > the directory /docs/pgsql/src/backend/executor is the\n> > right place to start\n> \n> The executor is only half the problem: you must also teach the\n> planner/optimizer how and when to use the new join type.\n> \n> Hiroshi Inoue has recently been down this path (adding support\n> for TID-based scans), and might be able to give you more specific\n> advice.\n> \n> > 1. Does postgresql do raw storage device management or it relies\n> > on file system? My impression is no raw device. If no,\n> > is it difficult to add it and possibly how?\n> \n> Postgres uses Unix files. We have avoided raw-device access mostly on\n> grounds of portability. To persuade people that such a change should go\n> into the distribution, you'd need to prove that *significantly* better\n> performance is obtained with raw access. I for one don't think it's a\n> foregone conclusion; Postgres gets considerable benefit from sitting\n> atop Unix kernel device schedulers and disk buffer caches.\n> \n> As far as the actual implementation goes, the low level access methods\n> go through a \"storage manager\" switch that was intended to allow for\n> the addition of a new storage manager, such as a raw-device manager.\n> So you could get a good deal of stuff working by implementing code that\n> parallels md.c/fd.c. The main problem at this point is that there is a\n> fair amount of utility code that goes out and does its own manipulation\n> of the database file structure. You'd need to clean that up by pushing\n> it all down below the storage manager switch (inventing new storage\n> manager calls as needed).\n> \n> > that the available join algos implemented are nested loop\n> > join (including index-based), hash join (which one? hybrid),\n> > sort-merge join?\n> \n> Right. The hash join uses batching if it estimates that the relation\n> is too large to fit in memory; is that what you call \"hybrid\"?\n\nI've heard the word \"hybrid\" being used of a scheme where you hash each \nkey of a multi-key index separately and then concatenate the hashes for \nthe index. That way you can use the index for accessing also subsets of \nkeys by examining only the buxkets with matching hash sections.\n\nDoes postgres do it even when generating the keys ?\n\nI'd guess it does, as each hashable type has a hashing function.\n\nOTOH pg probably does not use it for finding by the 3rd field of index ?\n\n--------\nHannu\n",
"msg_date": "Sun, 16 Jan 2000 02:09:32 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
},
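Editor's note: the "storage manager switch" Tom describes in the quoted text above is a table of per-manager entry points. Here is a simplified sketch, assuming a reduced set of operations (the real table in smgr.c has more slots), with the raw-device row purely hypothetical:

```c
/* Simplified sketch of the storage manager switch: each manager fills in
 * the same slots, and a raw-device manager would just add a row beside
 * the magnetic-disk manager (md.c). The raw* names are invented. */
typedef struct f_smgr
{
    int (*smgr_create) (Relation reln);
    int (*smgr_unlink) (Relation reln);
    int (*smgr_extend) (Relation reln, char *buffer);
    int (*smgr_read)   (Relation reln, BlockNumber blocknum, char *buffer);
    int (*smgr_write)  (Relation reln, BlockNumber blocknum, char *buffer);
} f_smgr;

static f_smgr smgrsw[] = {
    {mdcreate, mdunlink, mdextend, mdread, mdwrite},        /* plain Unix files */
    {rawcreate, rawunlink, rawextend, rawread, rawwrite}    /* hypothetical raw-device manager */
};
```

The cleanup Tom mentions is then making sure every piece of utility code dispatches through `smgrsw[]` instead of touching the files directly.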
{
"msg_contents": "First I want to thank you for all help I got to my\nearlier posting.\n\n\n>> Right. The hash join uses batching if it estimates that the relation\n>> is too large to fit in memory; is that what you call \"hybrid\"?\n>\n>I've heard the word \"hybrid\" being used of a scheme where you hash each \n>key of a multi-key index separately and then concatenate the hashes for \n>the index. That way you can use the index for accessing also subsets of \n>keys by examining only the buxkets with matching hash sections.\n\nIn research, there are traditionally three kinds of hash joins:\nsimple hash, grace hash and hybrid hash. Hybrid is generally considered\nto be having a better performance since it is designed to combine\nthe best behavior of simple hash and grace hash.\nIt has two phases. In the first the relations are read, hashed into\nbuckets, and written out, as in grace hash. However, during this phase\na portion of the memory is reserved for an in-memory hash bucket for R (\nR is joining with S and R is smaller). This bucket of R will never\nbe written to disk.\n\nxun\n\n",
"msg_date": "Sun, 16 Jan 2000 15:44:22 -0800",
"msg_from": "Xun Cheng <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] hybrid hash, cont. of development suggestion needed "
},
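Editor's note: a pseudocode-level C sketch of the two-phase scheme Xun describes; every helper name here is made up, and this is not the nodeHash.c implementation. Batch 0 of R stays resident, so its probes never touch disk:

```c
/* Phase 1: partition the smaller input R by hash of the join key, but
 * keep batch 0 in an in-memory hash table instead of spilling it. */
for (r = first_tuple(R); r != NULL; r = next_tuple(R))
{
    int batch = hash(r->joinkey) % nbatches;

    if (batch == 0)
        ht_insert(memtable, r);            /* never written to disk */
    else
        spill(r, r_batch[batch]);
}

/* Phase 2: stream S once; batch-0 tuples probe immediately, the rest
 * are spilled and joined batch by batch, grace-hash style. */
for (s = first_tuple(S); s != NULL; s = next_tuple(S))
{
    int batch = hash(s->joinkey) % nbatches;

    if (batch == 0)
        emit_matches(ht_probe(memtable, s));
    else
        spill(s, s_batch[batch]);
}

for (batch = 1; batch < nbatches; batch++)
{
    ht_build(memtable, r_batch[batch]);    /* reload one R batch */
    while ((s = next_spilled(s_batch[batch])) != NULL)
        emit_matches(ht_probe(memtable, s));
}
```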
{
"msg_contents": "Xun Cheng <[email protected]> writes:\n> In research, there are traditionally three kinds of hash joins:\n> simple hash, grace hash and hybrid hash. Hybrid is generally considered\n> to be having a better performance since it is designed to combine\n> the best behavior of simple hash and grace hash.\n> It has two phases. In the first the relations are read, hashed into\n> buckets, and written out, as in grace hash. However, during this phase\n> a portion of the memory is reserved for an in-memory hash bucket for R (\n> R is joining with S and R is smaller). This bucket of R will never\n> be written to disk.\n\nYes, that's how nodeHash.c does it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 Jan 2000 21:00:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] hybrid hash, cont. of development suggestion needed "
}
] |
[
{
"msg_contents": "Peter E writes (in elog.h):\n\n+ #ifndef __GNUC__\n extern void elog(int lev, const char *fmt, ...);\n+ #else\n+ /* This extension allows gcc to check the format string for consistency with\n+ the supplied arguments. */\n+ extern void elog(int lev, const char *fmt, ...) __attribute__ ((format (printf\n, 2, 3)));\n+ #endif\n\nCool. Now who's going to fix the cartload of warnings this has\nproduced? I'm counting about 125 of them. They should be fixed,\non grounds of portability, but the main problem right now is that\nit's difficult to see the *real* warnings because of all these\nguys...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jan 2000 20:32:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Peter opens a can of worms"
},
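Editor's note: for readers following along, these are the two classes of mistake the new attribute lets gcc catch. Both calls are invented illustrations, not lines from the tree:

```c
/* With elog() declared __attribute__((format(printf, 2, 3))), gcc now
 * warns about mismatches like these (hypothetical examples): */
elog(ERROR, "cannot find relation %s", relid);      /* relid is an Oid/int: %s wants char * */
elog(NOTICE, "scanned %d of %d pages", nscanned);   /* too few arguments for the format */
```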
{
"msg_contents": "I wrote:\n> Cool. Now who's going to fix the cartload of warnings this has\n> produced? I'm counting about 125 of them.\n\nOn further investigation, it seems that some of these warnings are\nreal portability issues (pointers being printed as ints, etc).\nBut a very considerable fraction are bogus. gcc doesn't know about\nelog's \"%m\" extension to the standard %-format set, and it seems to\nbe assuming that there should be a parameter to go with the %m.\n\nI have a feeling we will have to revert this change...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jan 2000 21:10:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Peter opens a can of worms "
},
{
"msg_contents": "Boy, those worms were just waiting to come out ...\n\nOn 2000-01-13, Tom Lane mentioned:\n\n> I wrote:\n> > Cool. Now who's going to fix the cartload of warnings this has\n> > produced? I'm counting about 125 of them.\n\nAll fixed. (good count by the way ;)\n\n> \n> On further investigation, it seems that some of these warnings are\n> real portability issues (pointers being printed as ints, etc).\n\nActually about a quarter of these were definitely problems, with a handful\nof rather serious bugs (depends on how serious this can become, of\ncourse).\n\n> But a very considerable fraction are bogus. gcc doesn't know about\n\nActually going through them there were certainly a few harmless ones (such\nas too many arguments), but exactly zero were completely bogus. Especially\ntoo many arguments might point out a typo.\n\n> elog's \"%m\" extension to the standard %-format set, and it seems to\n> be assuming that there should be a parameter to go with the %m.\n\n%m is a GNU extension (so they claim). (And even if it weren't, there's a\nway to \"register\" non-standard format conversions.)\n\n> I have a feeling we will have to revert this change...\n\nI'd ask you to reconsider. Especially with the multitude of typdefs we\nhave you never know for sure what format to use. As I said, a number of\nthese warnings actually had a good cause.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 15 Jan 2000 04:04:56 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Peter opens a can of worms "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 2000-01-13, Tom Lane mentioned:\n>> I have a feeling we will have to revert this change...\n\n> I'd ask you to reconsider.\n\nIf you found a solution to the %m issue, I'm a happy camper. I thought\nall the too-many-arguments gripes were because of %m, but evidently\nI was mistaken. Good work!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 Jan 2000 00:39:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Peter opens a can of worms "
}
] |
[
{
"msg_contents": "It looks like the latest psql has a buffer-flush-timing problem.\nError messages are appearing out-of-sync with script echoing,\nfor example:\n\n*** expected/boolean.out\tSat Jan 8 19:31:26 2000\n--- results/boolean.out\tThu Jan 13 20:41:24 2000\n***************\n*** 111,118 ****\n -- This is now an invalid expression\n -- For pre-v6.3 this evaluated to false - thomas 1997-10-23\n INSERT INTO BOOLTBL2 (f1) \n- VALUES (bool 'XXX'); \n ERROR: Bad boolean external representation 'XXX'\n -- BOOLTBL2 should be full of false's at this point \n SELECT '' AS f_4, BOOLTBL2.*;\n f_4 | f1 \n--- 111,118 ----\n -- This is now an invalid expression\n -- For pre-v6.3 this evaluated to false - thomas 1997-10-23\n INSERT INTO BOOLTBL2 (f1) \n ERROR: Bad boolean external representation 'XXX'\n+ VALUES (bool 'XXX'); \n -- BOOLTBL2 should be full of false's at this point \n SELECT '' AS f_4, BOOLTBL2.*;\n f_4 | f1 \n\n\nThis is making it difficult to look for actual backend bugs,\nso I respectfully request a fix ASAP.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jan 2000 20:47:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Many regress tests failing due to latest psql changes"
},
{
"msg_contents": "This seems to have been like that for a while. I put in a fflush() at the\nappropriate spot, hope that helps. Man, I thought we had all of them\ncaught already... (It's time I start running the regress tests\nreligiously, ey?)\n\nOn 2000-01-13, Tom Lane mentioned:\n\n> It looks like the latest psql has a buffer-flush-timing problem.\n> Error messages are appearing out-of-sync with script echoing,\n> for example:\n> \n> *** expected/boolean.out\tSat Jan 8 19:31:26 2000\n> --- results/boolean.out\tThu Jan 13 20:41:24 2000\n> ***************\n> *** 111,118 ****\n> -- This is now an invalid expression\n> -- For pre-v6.3 this evaluated to false - thomas 1997-10-23\n> INSERT INTO BOOLTBL2 (f1) \n> - VALUES (bool 'XXX'); \n> ERROR: Bad boolean external representation 'XXX'\n> -- BOOLTBL2 should be full of false's at this point \n> SELECT '' AS f_4, BOOLTBL2.*;\n> f_4 | f1 \n> --- 111,118 ----\n> -- This is now an invalid expression\n> -- For pre-v6.3 this evaluated to false - thomas 1997-10-23\n> INSERT INTO BOOLTBL2 (f1) \n> ERROR: Bad boolean external representation 'XXX'\n> + VALUES (bool 'XXX'); \n> -- BOOLTBL2 should be full of false's at this point \n> SELECT '' AS f_4, BOOLTBL2.*;\n> f_4 | f1 \n> \n> \n> This is making it difficult to look for actual backend bugs,\n> so I respectfully request a fix ASAP.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 14 Jan 2000 23:28:23 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Many regress tests failing due to latest psql changes"
}
] |
[
{
"msg_contents": "Can anyone who uses SSL check whether libpq current sources still\nwork for SSL connections?\n\nIt looks to me like it's completely broken --- surely trying to\nnegotiate SSL before doing connect() isn't going to work too well?\n\nBut I don't know anything about SSL, nor have it installed,\nso I'm not eager to touch it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jan 2000 00:38:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Does SSL code still work after async-connection changes?"
}
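Editor's note: the ordering Tom is questioning, sketched with standard OpenSSL calls (this is not the libpq code): the TCP connection has to exist before any SSL negotiation can run on the socket.

```c
/* connect first ... */
if (connect(sock, (struct sockaddr *) &addr, sizeof(addr)) < 0)
    return -1;                  /* no transport yet, nothing to negotiate */

/* ... and only then handshake over the established socket */
ssl = SSL_new(ctx);
SSL_set_fd(ssl, sock);
if (SSL_connect(ssl) <= 0)      /* run the handshake now */
    return -1;
```

The async-connection rework moves the connect() later, so any SSL negotiation attempted before it completes is talking to a socket with no peer.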
] |
[
{
"msg_contents": "\nJust trying to start up a 'mini-process' of -B 8 -N 4, and it tells me\nthat -N must be >= 16 ... why? \n\ndocumentation doesn't seem to imply/indicate this, should it?\n\n -N n_backends\n n_backends is the maximum number of backend server\n processes that this postmaster is allowed to start.\n In the stock configuration, this value defaults to\n 32, and can be set as high as 1024 if your system\n will support that many processes. Both the default\n and upper limit values can be altered when building\n Postgres (see src/include/config.h).\n\nFor security reasons, I have a server running on its own port, and I want\nto restrict the amount of shared memory that it uses, and I know the app\nwill never open more then 4 backends (it would be lucky to hit 2), so\nfigured 8/4 would be nice and safe...\n\nThanks...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 14 Jan 2000 01:54:53 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why must -N be >= 16?"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Just trying to start up a 'mini-process' of -B 8 -N 4, and it tells me\n> that -N must be >= 16 ... why? \n\nYou misread it --- -N can be as small as you like, but we don't allow\na really tiny -B. To quote the code:\n\n if (NBuffers < 2 * MaxBackends || NBuffers < 16)\n {\n /* Do not accept -B so small that backends are likely to starve for\n * lack of buffers. The specific choices here are somewhat arbitrary.\n */\n fprintf(stderr, \"%s: -B must be at least twice -N and at least 16.\\n\",\n progname);\n exit(1);\n }\n\nI'm not even real sure that -B 16 is going to work well if you throw\ncomplex queries at it --- we've not stressed the system with small\nnumbers of buffers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jan 2000 01:22:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Why must -N be >= 16? "
}
] |
[
{
"msg_contents": "But it's not quite this simple. In our production system, we have reference\ndata in one tablespace, reference indices in another, working data in a\nthird space, and work indices in a fourth. This is because the amount of\nworking data throughput is extremely high, while the reference data,\nalthough changing reasonably frequently, changes significantly less than the\nworking data. This is then normally spread across five spindles, with\nOracle being in the fifth. On a 32-processor HP.\n\nI think a good solution is to be able to specify where on the disk the table\nis created (absolute paths only), and then postgres symlinks that file in\nthe main data directory, so from that point on it referenced without the\npath name. That's probably a significant start.\n\nAlternatively, we could create \"directoryspaces\", which treats a directory\nas a tablespace. Then you do this:\n\nCREATE TABLE foo (id_foo int, name varchar(30)) TABLESPACE\n\"/data/pgdata/sys1ref\";\n\nto create the new file /data/pgdata/sys1ref/foo, and a symlink is created in\nthe main db directory, so that you can just SELECT * FROM foo;\n\nThis is not difficult at all, or am I missing something? Only real issue\n(possibly) is security regarding the tablespace. It might be an idea to\nallow only the superuser to add allowed directories (i.e.: \"create\"\ntablespaces), and assign user access to those tablespaces.\n\n\nMikeA\n\n\n\n\n>> -----Original Message-----\n>> From: The Hermit Hacker [mailto:[email protected]]\n>> Sent: Friday, January 14, 2000 6:32 AM\n>> To: Don Baccus\n>> Cc: Tom Lane; Xun Cheng; [email protected]\n>> Subject: Multiple Spindles ( Was: Re: [HACKERS] [hackers]development\n>> suggestion needed )\n>> \n>> \n>> On Thu, 13 Jan 2000, Don Baccus wrote:\n>> \n>> > My site's still in the experimental stage, being used by a couple\n>> > dozen folks to record bird distribution data in the Pacific NW, so\n>> > I don't personally have real-world data to get a feeling for how\n>> > important this might become. Still, Oracle DBA docs talk a lot\n>> > about it so in some real-world scenarios being able to distribute\n>> > tables and indices on different spindles must pay off.\n>> \n>> What would it take to break the data/base/<database> \n>> directory down? To\n>> something like, maybe:\n>> \n>> data/base/<database>/pg_*\n>> /tables/*\n>> /indices/*\n>> \n>> Then, one could easily mount a drive as /tables and another one as\n>> /indices ...\n>> \n>> We know the difference between a table and an index, so I \n>> wouldn't think\n>> it would be *too* hard add /tables/ internally to the existing\n>> path...would it?\n>> \n>> You'd basically have somethign like:\n>> \n>> sprintf(\"%s/data/base/%s/tables/%s\", data_dir, database, tablename);\n>> \n>> Instead of:\n>> \n>> sprintf(\"%s/data/base/%s/%s\", data_dir, database, tablename);\n>> \n>> I know, I'm being simplistic here, but...\n>> \n>> Or, a different way:\n>> \n>> if(table) sprintf(\"%s/data/base/table/%s/%s\", \n>> data_dir,database,tablename);\n>> else if(index) sprintf(\"%s/data/base/index/%s/%s\", \n>> data_dir,database,tablename);\n>> else sprintf(\"%s/data/base/sys/%s/%s\", data_dir,database,sysfile);\n>> \n>> This would give you the ability to put all table from all \n>> databass onto\n>> one file system, and all indexes onto another, and all \n>> system files onto a\n>> third...\n>> \n>> I don't know, I'm oversimplying and spewing thoughts out\n>> again...but...*shrug*\n>> \n>> Marc G. 
Fournier ICQ#7615664 \n>> IRC Nick: Scrappy\n>> Systems Administrator @ hub.org \n>> primary: [email protected] secondary: \n>> scrappy@{freebsd|postgresql}.org \n>> \n>> \n>> ************\n>> \n",
"msg_date": "Fri, 14 Jan 2000 08:34:12 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Multiple Spindles ( Was: Re: [HACKERS] [hackers]development \n\tsuggestion needed )"
}
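Editor's note: a minimal sketch of the symlink scheme Michael proposes, assuming the tablespace path has already been checked against a superuser-maintained list; all names are illustrative, not backend code:

```c
char    filepath[MAXPGPATH];    /* real file, inside the tablespace */
char    linkpath[MAXPGPATH];    /* its name in the main db directory */
int     fd;

snprintf(filepath, sizeof(filepath), "%s/%s", tablespace_dir, relname);
snprintf(linkpath, sizeof(linkpath), "%s/%s", database_dir, relname);

fd = open(filepath, O_RDWR | O_CREAT | O_EXCL, 0600);   /* create in tablespace */
if (fd < 0 || symlink(filepath, linkpath) < 0)           /* name it in the db dir */
    elog(ERROR, "cannot create %s in tablespace %s: %m",
         relname, tablespace_dir);
/* thereafter every access just opens linkpath, with no extra path lookup */
```

Note that this runs into the 1-Gb segment problem Tom raised earlier in the thread: each segment file would need its own symlink, created and removed as the table grows and shrinks.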
] |
[
{
"msg_contents": "Yes, but what if it's just your data that's a problem, and not so much the\nindex space. Then you are more likely to want to split the table data than\nsplit tables from index data.\n\nMikeA\n\n\n>> -----Original Message-----\n>> From: The Hermit Hacker [mailto:[email protected]]\n>> Sent: Friday, January 14, 2000 6:59 AM\n>> To: Bruce Momjian\n>> Cc: Don Baccus; Tom Lane; Xun Cheng; [email protected]\n>> Subject: Re: Multiple Spindles ( Was: Re: [HACKERS] \n>> [hackers]development\n>> suggestion needed )\n>> \n>> \n>> On Thu, 13 Jan 2000, Bruce Momjian wrote:\n>> \n>> > > On Thu, 13 Jan 2000, Don Baccus wrote:\n>> > > \n>> > > > My site's still in the experimental stage, being used \n>> by a couple\n>> > > > dozen folks to record bird distribution data in the \n>> Pacific NW, so\n>> > > > I don't personally have real-world data to get a \n>> feeling for how\n>> > > > important this might become. Still, Oracle DBA docs talk a lot\n>> > > > about it so in some real-world scenarios being able to \n>> distribute\n>> > > > tables and indices on different spindles must pay off.\n>> > > \n>> > > What would it take to break the data/base/<database> \n>> directory down? To\n>> > > something like, maybe:\n>> > > \n>> > > data/base/<database>/pg_*\n>> > > /tables/*\n>> > > /indices/*\n>> > \n>> > And put sort and large objects somewhere separate too.\n>> \n>> why not? by default, one drive, it would make no difference \n>> except for\n>> file layout, but it would *really* give room to expand...\n>> \n>> Right now, the udmsearch database contains (approx):\n>> \n>> tables:\n>> 10528 dict10\n>> 5088 dict11\n>> 2608 dict12\n>> 3232 dict16\n>> 64336 dict2\n>> 47960 dict3\n>> 3096 dict32\n>> 65952 dict4\n>> 42944 dict5\n>> 36384 dict6\n>> 34792 dict7\n>> 21008 dict8\n>> 14120 dict9\n>> 31912 url\n>> \n>> indexs:\n>> 5216 url_id10\n>> 2704 url_id11\n>> 1408 url_id12\n>> 1648 url_id16\n>> 36440 url_id2\n>> 27128 url_id3\n>> 1032 url_id32\n>> 37416 url_id4\n>> 22600 url_id5\n>> 19096 url_id6\n>> 18248 url_id7\n>> 10880 url_id8\n>> 6920 url_id9\n>> 6464 word10\n>> 3256 word11\n>> 1672 word12\n>> 2280 word16\n>> 26344 word2\n>> 21200 word3\n>> 2704 word32\n>> 28720 word4\n>> 21880 word5\n>> 19240 word6\n>> 18464 word7\n>> 11952 word8\n>> 8864 word9\n>> \n>> if tables/indexs were in different subdirectories, it would \n>> be too easy\n>> for me, at some point in the future, to take just the tables \n>> directory and\n>> put them on their own dedicated drive, halving the space \n>> used on either\n>> drive...\n>> \n>> I don't know...IMHO, it sounds like the simplist solution \n>> that provides\n>> the multi-spindle benefits ppl are suggesting...\n>> \n>> \n>> ************\n>> \n",
"msg_date": "Fri, 14 Jan 2000 08:43:42 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Multiple Spindles ( Was: Re: [HACKERS] [hackers]development \n\tsuggestion needed )"
}
] |
[
{
"msg_contents": "\n\nHi,\n\n I look at current PG's regress tests and I'm not sure if before test \nstart anything set LANG/Locale. How is it? \n\nNow it is not interesting (probably), but in future...\n\n(Example to_char() has locale depend output.)\n\n\t\t\t\t\t\tKarel \n\n----------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n-----------------------------------------------------------------------\n\n",
"msg_date": "Fri, 14 Jan 2000 15:08:52 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "locale vs. regress"
}
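Editor's note: one way to pin this down, sketched as a suggestion rather than as what the harness currently does: since locale settings are inherited through the environment, the test driver can force the C locale before the postmaster is spawned.

```c
#include <stdlib.h>

/* Suggestion only: call this in the regression driver before forking the
 * postmaster, so locale-dependent output (to_char() etc.) is the same on
 * every machine the tests run on. */
static void
pin_test_locale(void)
{
    putenv("LC_ALL=C");
    putenv("LANG=C");
}
```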
] |
[
{
"msg_contents": "I was under the impression that if you used NATURAL JOIN, then the join\nwould be made on the declared keys.\n\nOr doesn't SQL92 support declared keys?\n\nMikeA\n\n-----Original Message-----\nFrom: Thomas Lockhart\nTo: Don Baccus\nCc: Bruce Momjian; PostgreSQL-development\nSent: 00/01/14 05:14\nSubject: Re: [HACKERS] Re: Informix and OUTER join syntax\n\n> And if I understand SQL92 correctly, if tab1, tab2, and tab3 only\n> share col1 in common, then you can further simplify:\n> SELECT *\n> FROM tab1 NATURAL RIGHT JOIN (tab2 NATURAL RIGHT JOIN tab3)\n> Is that right? ...and some\n> might argue this is less clear than explicitly listing the column(s)\n> to join on.\n\nBut this is \"natural\", right? ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n\n************\n",
"msg_date": "Fri, 14 Jan 2000 21:27:02 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Re: Informix and OUTER join syntax"
},
{
"msg_contents": "> I was under the impression that if you used NATURAL JOIN, then the join\n> would be made on the declared keys.\n\nNope. On column names in common.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 15 Jan 2000 03:20:33 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Informix and OUTER join syntax"
},
{
"msg_contents": "At 03:20 AM 1/15/00 +0000, Thomas Lockhart wrote:\n>> I was under the impression that if you used NATURAL JOIN, then the join\n>> would be made on the declared keys.\n>\n>Nope. On column names in common.\n\n(phew!) This is how I remembered it. Though it's in Boston and I'm\nin Portland (OR, that is), due to my space-headedness, I can strongly \nrecommend that interested folks spend some of those $25,000 or so\ndollars saved by not using Oracle on a copy of Date's SQL primer :)\n\n(I forget the exact title, but it's pretty good. I'll probably pick\nup the standard, too, but Date's book is as much critique as explanation\nand the SQL 92 standard seems in need of critical comments)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 14 Jan 2000 20:52:51 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Informix and OUTER join syntax"
},
{
"msg_contents": "\n\nDon Baccus wrote:\n\n> At 03:20 AM 1/15/00 +0000, Thomas Lockhart wrote:\n> >> I was under the impression that if you used NATURAL JOIN, then the join\n> >> would be made on the declared keys.\n> >\n> >Nope. On column names in common.\n>\n> (phew!) This is how I remembered it. Though it's in Boston and I'm\n> in Portland (OR, that is), due to my space-headedness, I can strongly\n> recommend that interested folks spend some of those $25,000 or so\n> dollars saved by not using Oracle on a copy of Date's SQL primer :)\n>\n> (I forget the exact title, but it's pretty good. I'll probably pick\n> up the standard, too, but Date's book is as much critique as explanation\n> and the SQL 92 standard seems in need of critical comments)\n>\n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n>\n> ************\n\nIMHO \"A Guide to THE SQL STANDARD\" by Date/Darwen is an interesting\ndocumentation but I'm reading\nanother very, very interesting book about SQL, here the title in english:\n\"SQL: The Standard Handbook\" (Based on the New SQL Standard ISO 9075:1992(E)\nby Stephen Cannan and Gerard Otten.\nThis a clear explanation of SQL standard made by two persons that colaborated\ndirect or indirect to establish\nsuch Standard.\n\nJos�\n\n\n\n\n\n",
"msg_date": "Mon, 17 Jan 2000 15:17:56 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Informix and OUTER join syntax"
}
] |
[
{
"msg_contents": ">> > So, what's the deal, here...is the goal the Big Time or not?\n>> \n>> If it means adopting one vendors concept of what the world should look\n>> like...not.\n>> \n>> I *hate* the way Oracle sets up tablespaces ... where i have to\n>> pre-guess\n>> the size of my data and allocate space accordingly...what if my table\n>> never does reach that critical mass? I've just wasted X meg of space\n>> ...\nI go with you on the size thing, but I still think it's not a bad idea to be\nable to determine WHERE your data goes, at least down to a table/index/etc.\nlevel.\n\n>> I hate the way that Oracle starts up something like 4 processes for\n>> every\n>> database, when the server is started...\nWell, Oracle doesn't see a database quite the way we do. At least, not the\nway we use it. We tend to have multiple schemas in a single 'instance', or\ndatabase. These schemas are actually defined per user. This is possible on\nPostgres, but people just don't do it. So although four processes are\nstarted for each instance, we only have two instances running on our main\ndev server, even though there are about twenty-five schemas on it.\n\n<snip>\n\n>> If that happens to follow what one vendor happens to have done as far >>\nas\n>> their implementation, great...but there has been no conscious effort to\n>> do\n>> so that I'm aware of...\nCool. PostgreSQL is a vendor.\n\n>> Just look at the whole OUTER JOIN issue ... *shrug*\n>> \n>> I *like* the fact that we come up with original ideas/solutions to\n>> problems, that make use of *existing* technologies ...\nAnd move to new technologies a lot quicker than any other product.\n\n>> I liked the thread about moving indexes and tables to seperate file\n>> systems, and hope we can implement something that will make it \n>> something\n>> that does't require 'ln's, but I definitely don't like Oracle's way of\n>> doing things ...\nYes, that's about the sum of it. Why not the links? I think that it's an\nelegant way of designing the whole thing. Only the system table that stores\nthe 'tablespace' directories will even have a hard path in it. For the\nrest, everything works in the main database directory (which could be\nconsidered the SYSTEM tablespace).\n\nMikeA\n",
"msg_date": "Fri, 14 Jan 2000 21:38:01 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] [hackers]development suggestion needed"
},
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> >> I liked the thread about moving indexes and tables to seperate file\n> >> systems, and hope we can implement something that will make it \n> >> something\n> >> that does't require 'ln's, but I definitely don't like Oracle's way of\n> >> doing things ...\n> Yes, that's about the sum of it. Why not the links? I think that it's an\n> elegant way of designing the whole thing. Only the system table that stores\n> the 'tablespace' directories will even have a hard path in it. For the\n> rest, everything works in the main database directory (which could be\n> considered the SYSTEM tablespace).\n\nIt seems to me (in spite of the fact that I contribute no code) that\nwe *might* want to consider if we want a system table that contains\nstorage manager specific information. Our current storage manager\ncould probably be extended (with some small amount of difficulty) to\nget more path information out of such a table. A raw partition\nstorage manager might want pathing and sizeing information.\n\nBack to the symlink stuff: symlinking in general should be deprecated\nsince it doesn't work everywhere, PLUS it causes ugliness in the table\ntruncation code. I must admit, I'm and Ingres fan, and I kinda like\nthe way it does things (very similar to what PostgreSQL does, but\nexplicit pathing in a system table). A database is created in a\ndefault location, it can then be extended across multiple locations.\nWithout explicit instruction, however, it will create all new files in\nthe default location. Ingres also has the initial database lookup\nfunction return the default database area.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "14 Jan 2000 15:10:32 -0500",
"msg_from": "Brian E Gallew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed"
}
] |
[
{
"msg_contents": "I tried to send this to [email protected] and it was returned \nre: user unknown. Can you forward to support?\nThanks,\nKeith Harmon\n-----Original Message-----\nFrom:\tKeith Harmon [SMTP:[email protected]]\nSent:\tFriday, January 14, 2000 4:17 PM\nTo:\t'[email protected]'\nSubject:\tlicensing and support\n\nHello there,\n\nI found your product via an internet search via an article describing how \nto set up Linux, Apache and PostgreSQL.\nI have a Java database app using JDBC that currently runs on SQL server 7 \nMSDE on the Wintel platform, but I would like to be able to make it \navailable on the Linux and Solaris platforms. This is a new retail product \nin which the database will not be visible to the user, essentially \nimbedding the database behind the product.\nThe decision to use MSDE for Wintel was predicated on the fact that it has \na royalty free runtime distribution.\nIs it true that PostgreSQL is free - source code and all? I can include it \nbehind my software for no license royalty?\nI assume your support services have a fee involved - what is that cost \nstructure like?\nLooking forward to hearing from you,\nKeith Harmon\n\n\n",
"msg_date": "Fri, 14 Jan 2000 16:37:11 -0500",
"msg_from": "Keith Harmon <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: licensing and support"
}
] |
[
{
"msg_contents": "I know we left this issue open, but I came to the conclusion that it would\nbe wiser to keep it like it used to be in that operator comments should be\nindexed on the underlying functions. The reason is simply that there is a\na one-to-one relationship between operators and their function, so we'd\nend up writing everything double with little purpose. That would mean\nyou'd have to tweak your code a little.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Fri, 14 Jan 2000 23:11:02 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "descriptions on operators"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> I know we left this issue open, but I came to the conclusion that it would\n> be wiser to keep it like it used to be in that operator comments should be\n> indexed on the underlying functions. The reason is simply that there is a\n> a one-to-one relationship between operators and their function, so we'd\n> end up writing everything double with little purpose. That would mean\n> you'd have to tweak your code a little.\n> \n\nIf that's the way you want it, so it shall be. I have to write up\na diff for pg_dump this weekend anyways to generate the\nappropriate COMMENT ON statements for version 7.0.\n\nMike Mascari\n",
"msg_date": "Fri, 14 Jan 2000 18:31:17 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] descriptions on operators"
}
] |
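To make the decision above concrete: with a one-to-one mapping, a comment intended for an operator is attached to its underlying function, and pg_dump emits it in that form. A hedged example, assuming the COMMENT ON FUNCTION syntax discussed in this thread; int4pl is the function behind integer '+', and the comment text is made up:

    COMMENT ON FUNCTION int4pl(int4, int4) IS
        'add two int4 values; also documents the int4 + int4 operator';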
[
{
"msg_contents": "The Windows makefile for psql is still in prehistoric state. Is there\nsomeone who uses Visual C++ or whatever it was that could update it?\nOtherwise psql will no longer be supported on non-cygwin systems!\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Fri, 14 Jan 2000 23:11:12 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help from a Windows programmer needed!"
}
] |
[
{
"msg_contents": "I resolved the issue psql variables vs array syntax in the manner\nsuggested by various people. If the variable is undefined the string will\nbe untouched. Now something else I'd like to get your comment on is that I\nhandled the cast operator '::' in the same way, namely so that\n\n=> select 'now'::datetime\nwill resolve to\n=> select 'now':<value of variable \"datetime\" if defined>\n\nThe reason is that otherwise a construct like this\n=> \\set foo 3\n=> select arr.a[2::foo];\nor even\n=> \\set foo 'int4'\n=> select x:::foo from y;\nwon't be possible without introducing an extra syntax trick. And it makes\nit consistent throughout.\n\n(Btw., was somebody mentioning that this cast syntax is non-standard and\nthat he wanted to move toward a standard one? Just wondering.)\n\nHowever, psql defines some variables by itself, for example the one\ncontaining the last oid. I set up the rule that those variables are always\nall upper-case. If something still fails you can always call \\unset VAR to\nunset it before a query. The list of these variables is in the docs.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Fri, 14 Jan 2000 23:25:36 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql variables fixed (?)"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> I resolved the issue psql variables vs array syntax in the manner\n> suggested by various people. If the variable is undefined the string will\n> be untouched. Now something else I'd like to get your comment on is that I\n> handled the cast operator '::' in the same way, namely so that\n> (Btw., was somebody mentioning that this cast syntax is non-standard and\n> that he wanted to move toward a standard one? Just wondering.)\n\nYes, I probably mentioned that. But there is a problem in that the\nSQL92 standard does not actually define the \"type 'string'\" syntax for\nanything other than date/time types, since those are the only types\nother than true strings defined in the standard. So I extended the\nstandard (in a natural way imho) to include any string-y input.\n\nI'd be a little reluctant to give up the alternate \"::\" syntax, only\nbecause I'm not sure I trust the standards folks to not stomp on the\nalternative at some point in the future. (Sorry about the\ndouble-negative, but it says what I mean :)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 15 Jan 2000 03:08:59 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql variables fixed (?)"
},
{
"msg_contents": "At 03:08 AM 1/15/00 +0000, Thomas Lockhart wrote:\n\n>I'd be a little reluctant to give up the alternate \"::\" syntax, only\n>because I'm not sure I trust the standards folks to not stomp on the\n>alternative at some point in the future. (Sorry about the\n>double-negative, but it says what I mean :)\n\nWhich begs the question ... does Postgres.org have representation\non the standards commitees?\n\nWhether or not there's money available for development, I should\nthink minimal funding for participation in the standards committee\n(i.e. travel and per-diem, maybe not a salary) wouldn't be hard to\nscare up...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 14 Jan 2000 20:49:12 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql variables fixed (?)"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I'd be a little reluctant to give up the alternate \"::\" syntax, only\n> because I'm not sure I trust the standards folks to not stomp on the\n> alternative at some point in the future.\n\nEven more to the point, we now have a substantial pool of user\napplications that rely on \"::\" (I know a lot of my company's\ncode does, for example). We can't just blow off those users.\n\nPeter's current proposal seems OK to me, with the caveat that we\nwill have to be *very* circumspect about introducing additional\nvariables-automatically-defined-by-psql in the future. Every\ntime we do, we risk breaking existing user scripts, not unlike\nwhat happens when we introduce a new reserved word in the backend\ngrammar.\n\nLane's Other Law: the price of success is backwards compatibility.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 Jan 2000 00:54:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql variables fixed (?) "
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> Which begs the question ... does Postgres.org have representation\n> on the standards commitees?\n\nNot that I've heard about --- for that matter, is there any active\nSQL standards committee?\n\n> Whether or not there's money available for development, I should\n> think minimal funding for participation in the standards committee\n> (i.e. travel and per-diem, maybe not a salary) wouldn't be hard to\n> scare up...\n\nWhile I don't know anything about the SQL standards situation, I do know\nhow this game is played from past work with JPEG. The only effective\nparticipants on the international standards committees are people who\nwork for corporate research labs of corporations with very deep pockets.\n(Or, occasionally, professors with tenure at universities having very\ndeep pockets.)\n\nJPEG, for example, used to have a regular schedule of three meetings\na year: one in the Americas, one in Europe, one in the Far East. Your\ntravel budget didn't stretch that far? Tough luck; you were a bystander\nnot a player. Not to mention that the real players would organize/host\na local meeting every so often --- better be able to budget a couple of\nfull-time secretaries to handle the details if you want to do that.\n\nI don't think PostgreSQL, Inc is playing in that league, or has any\nprospect of doing so soon. Nor would I want to be our representative\nif we could do it. The Net has changed the rules about running\nlarge-scale collaborations, but AFAICT the ISO hasn't heard yet.\n\n\t\t\tregards, tom \"professional bystander\" lane\n\t\t\torganizer, Independent JPEG Group\n",
"msg_date": "Sat, 15 Jan 2000 01:12:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Standards (was Re: psql variables fixed (?))"
},
{
"msg_contents": "On Sat, 15 Jan 2000, Tom Lane wrote:\n\n> I don't think PostgreSQL, Inc is playing in that league, or has any\n> prospect of doing so soon. \n\nThings are picking up, and I'd *love* to be able to do stuff like that,\nbut \"not any time soon\"...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 15 Jan 2000 02:27:05 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Standards (was Re: psql variables fixed (?))"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Lane's Other Law: the price of success is backwards compatibility.\n\n\"And the onlookers breathed a sigh of relief...\"\n\n\n\n",
"msg_date": "Sat, 15 Jan 2000 10:39:45 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql variables fixed (?)"
},
{
"msg_contents": "Silly me. The correct behaviour is of course\n\n=> \\set foo 3\n=> select arr.a[2: :foo];\n=> \\set bar timestamp\n=> select 'now':: :bar;\n\nThat way typecasts should bear no compatibility problems.\n\nOn 2000-01-14, I mentioned:\n\n> I resolved the issue psql variables vs array syntax in the manner\n> suggested by various people. If the variable is undefined the string will\n> be untouched. Now something else I'd like to get your comment on is that I\n> handled the cast operator '::' in the same way, namely so that\n> \n> => select 'now'::datetime\n> will resolve to\n> => select 'now':<value of variable \"datetime\" if defined>\n> \n> The reason is that otherwise a construct like this\n> => \\set foo 3\n> => select arr.a[2::foo];\n> or even\n> => \\set foo 'int4'\n> => select x:::foo from y;\n> won't be possible without introducing an extra syntax trick. And it makes\n> it consistent throughout.\n> \n> (Btw., was somebody mentioning that this cast syntax is non-standard and\n> that he wanted to move toward a standard one? Just wondering.)\n> \n> However, psql defines some variables by itself, for example the one\n> containing the last oid. I set up the rule that those variables are always\n> all upper-case. If something still fails you can always call \\unset VAR to\n> unset it before a query. The list of these variables is in the docs.\n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sun, 16 Jan 2000 18:14:12 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: psql variables fixed (?)"
}
] |
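Pulling the final rules of this thread into one place, a psql session under the new scheme might look like this (variable names are arbitrary):

    => \set foo 3
    => select arr.a[2: :foo];     -- array range: the space separates ':' from ':foo'
    => \set bar timestamp
    => select 'now':: :bar;       -- cast through a variable
    => select 'now'::datetime;    -- left untouched as long as no variable 'datetime' is set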
[
{
"msg_contents": "I committed a few changes to the user creation code which should, in\ntheory, make them more robust (doesn't use pg_exec_query_dest anymore).\nDue to an inherent design conflict the update of the password file cannot\nuse the COPY logic anymore (the copy command wouldn't see the last changed\ntuple), so it does it itself now, which looks better to me anyway. The\nformat of the file is still the same, I just filled up the unused spots\nwith x's and 0's. The remaining problem is initdb, which writes the\ninitial pg_pwd with COPY, so we can't go to a different, more compact\nformat yet. A solution for this would be to scrap this call and instead\ncall\n\necho \"ALTER USER $POSTGRES WITH PASSWORD '$WHATEVER'\" | postgres ...\n\nin initdb and forget about all the scary sed -f or similar things that\nhave been proposed recently to get the password in there securely. That\nway we can be assured to have the password file always in the format it\nwill be later on.\n\nBruce, are you still working on that part?\n\nA few other side effects of the changes were:\n\n* Even unprivileged users can change their own password (but nothing else)\n\n* The password is now an Sconst in the parser, which better reflects its\ntext datatype and also forces users to quote them.\n\n* If your password is NULL you won't be written to the password file,\nmeaning you can't connect until you have a password set up (if you use\npassword authentication).\n\n* When you drop a user that owns a database you get an error. The database\nis not gone.\n\nThese are minor \"wholesale\" changes, but I thought they would be in the\npublic's interest.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n\n\n",
"msg_date": "Fri, 14 Jan 2000 23:26:23 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_pwd"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> I committed a few changes to the user creation code which should, in\n> theory, make them more robust (doesn't use pg_exec_query_dest anymore).\n\nGreat. I have been hoping for this.\n\n> Due to an inherent design conflict the update of the password file cannot\n> use the COPY logic anymore (the copy command wouldn't see the last changed\n> tuple), so it does it itself now, which looks better to me anyway. The\n> format of the file is still the same, I just filled up the unused spots\n> with x's and 0's. The remaining problem is initdb, which writes the\n> initial pg_pwd with COPY, so we can't go to a different, more compact\n> format yet. A solution for this would be to scrap this call and instead\n> call\n> \n> echo \"ALTER USER $POSTGRES WITH PASSWORD '$WHATEVER'\" | postgres ...\n\nThat is fine with me.\n\n> in initdb and forget about all the scary sed -f or similar things that\n> have been proposed recently to get the password in there securely. That\n> way we can be assured to have the password file always in the format it\n> will be later on.\n> \n> Bruce, are you still working on that part?\n\nSure, the echo is fine. If not, you do whatever you want, and I will go\nin and make it secure.\n\n> \n> A few other side effects of the changes were:\n> \n> * Even unprivileged users can change their own password (but nothing else)\n\nGreat.\n\n> \n> * The password is now an Sconst in the parser, which better reflects its\n> text datatype and also forces users to quote them.\n> \n> * If your password is NULL you won't be written to the password file,\n> meaning you can't connect until you have a password set up (if you use\n> password authentication).\n> \n> * When you drop a user that owns a database you get an error. The database\n> is not gone.\n> \n> These are minor \"wholesale\" changes, but I thought they would be in the\n> public's interest.\n\nSounds like a nice set of patches. We need to get Peter on the\nDevelopers page. Vince?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 14 Jan 2000 18:05:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_pwd"
}
] |
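Two of the behaviors Peter lists can be shown with a single statement (the user name and password here are made up): the password must now be quoted, since it is an Sconst, and an unprivileged user may issue this only for his own account:

    => ALTER USER wwwuser WITH PASSWORD 'new_secret';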
[
{
"msg_contents": "\nI can add days to now(), but not subtract? \n\n=====================================\n\ntemplate1=> select now() + '30 days';\n ?column? \n------------------------------\n Sun Feb 13 22:00:33 2000 AST\n(1 row)\n\ntemplate1=> select now() - '30 days';\nERROR: Unable to identify an operator '-' for types 'timestamp' and 'unknown'\n You will have to retype this query using an explicit cast\ntemplate1=> select version();\n version \n---------------------------------------------------------------------\n PostgreSQL 7.0.0 on i386-unknown-freebsd4.0, compiled by gcc 2.95.2\n(1 row)\n\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 14 Jan 2000 22:02:39 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "date/time problem in v6.5.3 and 7.0.0 ..."
},
{
"msg_contents": "At 10:02 PM 1/14/00 -0400, The Hermit Hacker wrote:\n>\n>I can add days to now(), but not subtract? \n>\n>=====================================\n>\n>template1=> select now() + '30 days';\n> ?column? \n>------------------------------\n> Sun Feb 13 22:00:33 2000 AST\n>(1 row)\n>\n>template1=> select now() - '30 days';\n>ERROR: Unable to identify an operator '-' for types 'timestamp' and\n'unknown'\n> You will have to retype this query using an explicit cast\n>template1=> select version();\n\ndonb=> select now()-'30 days'::reltime;\n?column? \n----------------------\n1999-12-15 18:13:18-08\n(1 row)\n\ndonb=> select now()-'30 days';\nERROR: Unable to identify an operator '-' for types 'timestamp' and 'unknown'\n You will have to retype this query using an explicit cast\ndonb=> \n\nAs a relative newcomer, I too have found dates a bit confusing and\nmy solution has been to cast like crazy rather than guess what will\nhappen if I don't :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 14 Jan 2000 18:12:56 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] date/time problem in v6.5.3 and 7.0.0 ..."
},
{
"msg_contents": "\n\nThe Hermit Hacker wrote:\n\n> I can add days to now(), but not subtract?\n>\n> =====================================\n>\n> template1=> select now() + '30 days';\n> ?column?\n> ------------------------------\n> Sun Feb 13 22:00:33 2000 AST\n> (1 row)\n>\n> template1=> select now() - '30 days';\n> ERROR: Unable to identify an operator '-' for types 'timestamp' and 'unknown'\n> You will have to retype this query using an explicit cast\n> template1=> select version();\n> version\n> ---------------------------------------------------------------------\n> PostgreSQL 7.0.0 on i386-unknown-freebsd4.0, compiled by gcc 2.95.2\n> (1 row)\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n>\n> ************\n\nTry using standard names (TIMESTAMP instead of now() and INTERVA) as in:\n\nhygea=> select current_timestamp + interval '30 day';\n?column?\n--------------------------\n17/02/2000 16:00:28.00 CET\n(1 row)\n\nhygea=> select current_timestamp - interval '30 day';\n?column?\n--------------------------\n19/12/1999 16:00:44.00 CET\n(1 row)\n\nhygea=> select now() - '30 day';\nERROR: Unable to identify an operator '-' for types 'timestamp' and 'unknown'\n You will have to retype this query using an explicit cast\n\nJos�\n\n\n",
"msg_date": "Tue, 18 Jan 2000 15:10:58 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] date/time problem in v6.5.3 and 7.0.0 ..."
},
{
"msg_contents": "> > I can add days to now(), but not subtract?\n\nThe problem is that it is meaningful to subtract two absolute times,\ngiving a delta time as a result, *and* it is meaningful to subtract a\ndelta time from an absolute time, giving another absolute time as a\nresult. \n\nSo your unspecified field could be either one, and Postgres can't\ndecide what it should be for you ;)\n\nThe error message is intentionally vague, since by the time the\nmessage is printed the parser has lost track of whether there were\nzero candidates or too many candidates.\n\n - Thomas\n\n> > =====================================\n> >\n> > template1=> select now() + '30 days';\n> > ?column?\n> > ------------------------------\n> > Sun Feb 13 22:00:33 2000 AST\n> > (1 row)\n> >\n> > template1=> select now() - '30 days';\n> > ERROR: Unable to identify an operator '-' for types 'timestamp' and 'unknown'\n> > You will have to retype this query using an explicit cast\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 18 Jan 2000 14:40:27 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] date/time problem in v6.5.3 and 7.0.0 ..."
},
{
"msg_contents": "> > > I can add days to now(), but not subtract?\n> \n> The problem is that it is meaningful to subtract two absolute times,\n> giving a delta time as a result, *and* it is meaningful to subtract a\n> delta time from an absolute time, giving another absolute time as a\n> result. \n\nIt would be nice if we could decide '30 days' is a delta time because\nit is not suitable for an absolute time representation. Would it be\nhard?\n--\nTatsuo Ishii\n\n",
"msg_date": "Wed, 19 Jan 2000 10:35:18 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] date/time problem in v6.5.3 and 7.0.0 ..."
},
{
"msg_contents": "> > > > I can add days to now(), but not subtract?\n> > The problem is that it is meaningful to subtract two absolute times,\n> > giving a delta time as a result, *and* it is meaningful to subtract a\n> > delta time from an absolute time, giving another absolute time as a\n> > result.\n> It would be nice if we could decide '30 days' is a delta time because\n> it is not suitable for an absolute time representation. Would it be\n> hard?\n\nHmm. I'm not sure how hard it would be. The places where types need to\nbe matched to operators are fairly well isolated in the parser.\nHowever, we would need a new kind of input function which does not\nthrow an elog(ERROR), but rather just returns failure if the input\ndoes not get decoded. Then, we could accumulate a list of successfully\ndecoded types (based on our list of candidate operators), and if that\nlist is of length one then we have a match.\n\nOne way to implement this would be to define an additional input\nroutine (which the existing input routine would use) which returns an\nerror rather than throwing an elog() error. Then, our parser could use\nthis additional routine to discriminate between the candidate types.\nThe input routine could be found in a similar way to our existing\n\"implicit coersion\" code, which assumes a specific function name for\neach type.\n\nThe downside to this is that we have built up one additional\nassumption about the form and contents of our system tables. But,\nsince it adds functionality that probably isn't a bad thing.\n\nI do know I'm not likely to find time to work on it for the next\nrelease...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 19 Jan 2000 02:22:55 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] date/time problem in v6.5.3 and 7.0.0 ..."
},
{
"msg_contents": "This table shows every legal operation between datetimes and intervals and\nvice versa\nallowed by SQL92:\n\n-----------------------------------------\n1st operand|operator|2nd operand|result\n-----------+--------+-----------+--------\ndatetime | - |datetime |interval\ndatetime | + |interval |datetime\ndatetime | - |interval |datetime\ninterval | + |datetime |datetime\ninterval | + |interval |interval\ninterval | - |interval |interval\ninterval | * |number |interval\ninterval | / |number |interval\nnumber | * |interval |interval\n-----------+--------+-----------+--------\n\nI wrote some pgPL/SQL functions to create operators between datetimes and\nintervals.\nProbably there's a better way to achieve that goal, but this works anyway.\n\nJos�\n\n\n\nThomas Lockhart wrote:\n\n> > > > > I can add days to now(), but not subtract?\n> > > The problem is that it is meaningful to subtract two absolute times,\n> > > giving a delta time as a result, *and* it is meaningful to subtract a\n> > > delta time from an absolute time, giving another absolute time as a\n> > > result.\n> > It would be nice if we could decide '30 days' is a delta time because\n> > it is not suitable for an absolute time representation. Would it be\n> > hard?\n>\n> Hmm. I'm not sure how hard it would be. The places where types need to\n> be matched to operators are fairly well isolated in the parser.\n> However, we would need a new kind of input function which does not\n> throw an elog(ERROR), but rather just returns failure if the input\n> does not get decoded. Then, we could accumulate a list of successfully\n> decoded types (based on our list of candidate operators), and if that\n> list is of length one then we have a match.\n>\n> One way to implement this would be to define an additional input\n> routine (which the existing input routine would use) which returns an\n> error rather than throwing an elog() error. Then, our parser could use\n> this additional routine to discriminate between the candidate types.\n> The input routine could be found in a similar way to our existing\n> \"implicit coersion\" code, which assumes a specific function name for\n> each type.\n>\n> The downside to this is that we have built up one additional\n> assumption about the form and contents of our system tables. But,\n> since it adds functionality that probably isn't a bad thing.\n>\n> I do know I'm not likely to find time to work on it for the next\n> release...\n>\n> - Thomas\n>\n> --\n> Thomas Lockhart [email protected]\n> South Pasadena, California\n>\n> ************",
"msg_date": "Wed, 19 Jan 2000 14:39:59 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] date/time problem in v6.5.3 and 7.0.0 ..."
}
] |
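Jose's operator-filling approach might be sketched in plain SQL rather than PL/pgSQL, roughly as below. This is illustration only; the function name is invented, and it assumes that text-to-interval conversion and CREATE OPERATOR behave as in later releases:

    CREATE FUNCTION timestamp_mi_text(timestamp, text) RETURNS timestamp AS '
        SELECT $1 - CAST($2 AS interval)
    ' LANGUAGE 'sql';

    CREATE OPERATOR - (
        leftarg   = timestamp,
        rightarg  = text,
        procedure = timestamp_mi_text
    );

With such an operator in place, the original select now() - '30 days' can resolve, assuming the parser is willing to coerce the unknown literal to text.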
[
{
"msg_contents": "As I promised before, I have added a small tool called \"pgbench\",\nperforming TPC-B like benchmarking, to contrib. See\ncontrib/pgbench/README for more details.\n--\nTatsuo Ishii\n\n",
"msg_date": "Sat, 15 Jan 2000 21:43:19 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgbench added to contrib"
}
] |
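For anyone wanting to try it, a TPC-B-style driver like pgbench is typically run along these lines. The flags shown follow the pgbench interface as later documented and may differ in this first contrib version, so check contrib/pgbench/README:

    $ pgbench -i mydb              # create and populate the benchmark tables
    $ pgbench -c 10 -t 100 mydb    # 10 concurrent clients, 100 transactions each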
[
{
"msg_contents": "Maybe this has been discussed before my time, but why exactly is it that\nwe don't distribute lex'ed files, as with yacc'ed files?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 15 Jan 2000 19:31:48 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "flex"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> Maybe this has been discussed before my time, but why exactly is it that\n> we don't distribute lex'ed files, as with yacc'ed files?\n\nNot sure. Are they more platform-dependent or lexer-dependent? Doesn't\nthe lexer call a lexer-specific library? Not sure.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 Jan 2000 14:32:51 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] flex"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Maybe this has been discussed before my time, but why exactly is it that\n> we don't distribute lex'ed files, as with yacc'ed files?\n\nNo particularly good reason I suppose... if we did that, we could get\nrid of that whole 'lextest' business, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 Jan 2000 20:25:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] flex "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Maybe this has been discussed before my time, but why exactly is it that\n>> we don't distribute lex'ed files, as with yacc'ed files?\n\n> Not sure. Are they more platform-dependent or lexer-dependent? Doesn't\n> the lexer call a lexer-specific library? Not sure.\n\nflex has a lexer-specific library (libfl.a), but as far as I can tell\nour scanners don't call it. In fact our build process has no provision\nfor adding -lfl to the link, which I used to think was an oversight, but\nnow it's starting to seem like a good idea. We could ship scan.c et al\nin the same way we handle the yacc/bison output files, and it should\nwork everywhere.\n\nIf we were going to do this, I'd vote for making sure that *all* the\nyacc files are pregenerated (currently, we only take care of the larger\nones), and then most people wouldn't need either flex or bison to build.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 Jan 2000 20:38:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] flex "
},
{
"msg_contents": "\nAdded to TODO list.\n\n> Bruce Momjian <[email protected]> writes:\n> >> Maybe this has been discussed before my time, but why exactly is it that\n> >> we don't distribute lex'ed files, as with yacc'ed files?\n> \n> > Not sure. Are they more platform-dependent or lexer-dependent? Doesn't\n> > the lexer call a lexer-specific library? Not sure.\n> \n> flex has a lexer-specific library (libfl.a), but as far as I can tell\n> our scanners don't call it. In fact our build process has no provision\n> for adding -lfl to the link, which I used to think was an oversight, but\n> now it's starting to seem like a good idea. We could ship scan.c et al\n> in the same way we handle the yacc/bison output files, and it should\n> work everywhere.\n> \n> If we were going to do this, I'd vote for making sure that *all* the\n> yacc files are pregenerated (currently, we only take care of the larger\n> ones), and then most people wouldn't need either flex or bison to build.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 Jan 2000 21:35:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] flex"
},
{
"msg_contents": "I've made the necessary changes to release_prep, makefiles, and\ndocumentation (not sure how the INSTALL file proper is made from the sgml\ndocs, though). lextest is removed. configure now gives a friendly warning\nif it finds flex 2.5.3.\n\n(In fact it seems like some lex files were already generated for\ndistribution, but now it's all of them.)\n\nOn 2000-01-15, Bruce Momjian mentioned:\n\n> \n> Added to TODO list.\n> \n> > Bruce Momjian <[email protected]> writes:\n> > >> Maybe this has been discussed before my time, but why exactly is it that\n> > >> we don't distribute lex'ed files, as with yacc'ed files?\n> > \n> > > Not sure. Are they more platform-dependent or lexer-dependent? Doesn't\n> > > the lexer call a lexer-specific library? Not sure.\n> > \n> > flex has a lexer-specific library (libfl.a), but as far as I can tell\n> > our scanners don't call it. In fact our build process has no provision\n> > for adding -lfl to the link, which I used to think was an oversight, but\n> > now it's starting to seem like a good idea. We could ship scan.c et al\n> > in the same way we handle the yacc/bison output files, and it should\n> > work everywhere.\n\nThis puzzles me a bit still, but it seems to work. GNU suggests putting\nyacc and lex files in distributions, so I can't imagine why they would do\nthat if you need to have lib[f]l.a anyway.\n\n$ nm /usr/lib/libfl.a\n \nlibmain.o:\n00000000 t gcc2_compiled.\n00000000 T main\n U yylex\n \nlibyywrap.o:\n00000000 t gcc2_compiled.\n00000000 T yywrap\n\n\n> > \n> > If we were going to do this, I'd vote for making sure that *all* the\n> > yacc files are pregenerated (currently, we only take care of the larger\n> > ones), and then most people wouldn't need either flex or bison to build.\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ************\n> > \n> \n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sun, 16 Jan 2000 21:13:03 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] flex"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>>>> flex has a lexer-specific library (libfl.a), but as far as I can tell\n>>>> our scanners don't call it. In fact our build process has no provision\n>>>> for adding -lfl to the link, which I used to think was an oversight, but\n>>>> now it's starting to seem like a good idea. We could ship scan.c et al\n>>>> in the same way we handle the yacc/bison output files, and it should\n>>>> work everywhere.\n\n> This puzzles me a bit still, but it seems to work.\n\nI suppose that libfl.a is only needed to support some flex features that\nwe don't use --- but I haven't bothered to dig in and find out what.\n\n\nThanks for taking care of that task; it'd been hanging around on the\n\"good ideas to get to someday\" list for quite a while.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jan 2000 01:41:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] flex "
},
{
"msg_contents": "On Sun, Jan 16, 2000 at 09:13:03PM +0100, Peter Eisentraut wrote:\n> \n> This puzzles me a bit still, but it seems to work. GNU suggests putting\n> yacc and lex files in distributions, so I can't imagine why they would do\n> that if you need to have lib[f]l.a anyway.\n> \n> $ nm /usr/lib/libfl.a\n> \n> libmain.o:\n> 00000000 t gcc2_compiled.\n> 00000000 T main\n> U yylex\n> \n> libyywrap.o:\n> 00000000 t gcc2_compiled.\n> 00000000 T yywrap\n\nI think those are defaults for the case where you just have a lex file, but\ndidn't bother with defining a main() after the last %% eg:\n\n%%\nA putchar('b');\n%%\n\nWhen linked with -lfl, you get an executable. In the postgresql case, life\nis more complicated and the parser calls yylex rather than a fake main(), so\n-lfl isn't needed.\n\nCheers,\n\nPatrick\n",
"msg_date": "Mon, 17 Jan 2000 10:49:59 +0000",
"msg_from": "Patrick Welche <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] flex"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Peter Eisentraut <[email protected]> writes:\n> >>>> flex has a lexer-specific library (libfl.a), but as far as I can tell\n> >>>> our scanners don't call it. In fact our build process has no provision\n> >>>> for adding -lfl to the link, which I used to think was an oversight, but\n> >>>> now it's starting to seem like a good idea. We could ship scan.c et al\n> >>>> in the same way we handle the yacc/bison output files, and it should\n> >>>> work everywhere.\n>\n> > This puzzles me a bit still, but it seems to work.\n>\n> I suppose that libfl.a is only needed to support some flex features that\n> we don't use --- but I haven't bothered to dig in and find out what.\n\n AFAIK, flex's libfl.a only contains a main() and a noop variant of\n yywrap(). The main() in there only calls yylex() repeatedly so you can\n write a scan.l that does text replacement etc. and simply compile the\n generated C source into a standalone executable. Our backend already\n contains a yywrap() (and a main() of course), so there are no symbols\n that libfl.a could potentially resolve. Thus, it's not needed.\n\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n\n",
"msg_date": "Mon, 17 Jan 2000 18:59:36 +0100",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] flex"
}
] |
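Jan's explanation can be made concrete: the two objects in Peter's nm listing amount to little more than the following fallbacks, so any program that, like the backend, defines its own main() and yywrap() never references anything in libfl.a. A simplified sketch of what the library provides:

    /* default yywrap(): report "no more input" so the scanner stops */
    int yywrap(void)
    {
        return 1;
    }

    /* default main(): just run the scanner until it returns 0 */
    int main(void)
    {
        extern int yylex(void);

        while (yylex() != 0)
            ;
        return 0;
    }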
[
{
"msg_contents": "* User who can create databases can modify pg_database table\n\nNot anymore.\n\n* Interlock to prevent DROP DATABASE on a database with running backends\n\nI think Tom wanted this listed as an achievement, because it's already\ndone.\n\n* Better interface for adding to pg_group\n\nDone.\n\n* Allow array on int8[]\n\nDone. (Credit to Thomas, thought, he just forgot to apply the patch.)\n\n* Make Absolutetime/Relativetime int4 because time_t can be int8 on some\nports\n\nDoes this mean the abstime/reltime types or all of them? I thought the\nformer were deprecated anyway.\n\n* Permissions on indexes, prevent them?\n\nDone (prevented)\n\n* Make postgres user have a password by default\n\nDone. (--pwprompt option, enter it blind twice, echo ALTER USER |\npostgres; probably as secure as it gets)\n\n* Update table SET table.value = 3 fails(SQL standard says this is OK)\n\nNot the standard I'm looking at. Someone please enlighten me.\n\n <update statement: searched> ::=\n UPDATE <table name>\n SET <set clause list>\n [ WHERE <search condition> ]\n\n <set clause list> ::=\n <set clause> [ { <comma> <set clause> }... ]\n\n <set clause> ::=\n <object column> <equals operator> <update source>\n\n <object column> ::= <column name>\n\n <column name> ::= <identifier>\n\n <identifier> ::=\n [ <introducer><character set specification> ] <actual identifier>\n \n <introducer> ::= <underscore>\n\n <character set specification> ::=\n\t\t{ nothing of interest }\n\n <actual identifier> ::=\n <regular identifier>\n | <delimited identifier>\n\n\t{ meaning a non-quoted identifier or a quoted one }\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 15 Jan 2000 19:37:24 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "TODO list"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> * User who can create databases can modify pg_database table\n> \n> Not anymore.\n\nDone. Good.\n\n> \n> * Interlock to prevent DROP DATABASE on a database with running backends\n> \n> I think Tom wanted this listed as an achievement, because it's already\n> done.\n\nDone.\n\n> \n> * Better interface for adding to pg_group\n> \n> Done.\n\nYes. I can remove the pg_group FAQ item after 7.0 is out to most\npeople.\n\n> \n> * Allow array on int8[]\n> \n> Done. (Credit to Thomas, thought, he just forgot to apply the patch.)\n\nGood.\n\n> \n> * Make Absolutetime/Relativetime int4 because time_t can be int8 on some\n> ports\n> \n> Does this mean the abstime/reltime types or all of them? I thought the\n> former were deprecated anyway.\n\nI think the idea is that it can roll over the mac int4 value. Not sure\nabout which types are active.\n\n\n> \n> * Permissions on indexes, prevent them?\n> \n> Done (prevented)\n\nGood.\n\n\n> \n> * Make postgres user have a password by default\n> \n> Done. (--pwprompt option, enter it blind twice, echo ALTER USER |\n> postgres; probably as secure as it gets)\n\nPerfect. We don't want to do it by default.\n\n> \n> * Update table SET table.value = 3 fails(SQL standard says this is OK)\n> \n> Not the standard I'm looking at. Someone please enlighten me.\n> \n> <update statement: searched> ::=\n> UPDATE <table name>\n> SET <set clause list>\n> [ WHERE <search condition> ]\n> \n> <set clause list> ::=\n> <set clause> [ { <comma> <set clause> }... ]\n> \n> <set clause> ::=\n> <object column> <equals operator> <update source>\n> \n> <object column> ::= <column name>\n> \n> <column name> ::= <identifier>\n> \n> <identifier> ::=\n> [ <introducer><character set specification> ] <actual identifier>\n> \n> <introducer> ::= <underscore>\n> \n> <character set specification> ::=\n> \t\t{ nothing of interest }\n> \n> <actual identifier> ::=\n> <regular identifier>\n> | <delimited identifier>\n> \n> \t{ meaning a non-quoted identifier or a quoted one }\n> \n\nI don't see anything in the spec that says you can use table.column on\nthe left-hand side of the equals. No?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 Jan 2000 14:31:17 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO list"
},
{
"msg_contents": "At 02:31 PM 1/15/00 -0500, Bruce Momjian wrote:\n>> * Update table SET table.value = 3 fails(SQL standard says this is OK)\n>> \n>> Not the standard I'm looking at. Someone please enlighten me.\n\nIf this is indeed the standard, it looks to me as though Bruce is \nreading it right. Makes sense, too, only one table can be updated\nat a time, so there's no opportunity for ambiguity in column names\non the left side. What's the point?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sat, 15 Jan 2000 11:55:00 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO list"
},
{
"msg_contents": "Can we ALTER a table to drop a column yet, or is that still a TO DO item?\n\nSteve\n\n\nPeter Eisentraut wrote:\n\n> * User who can create databases can modify pg_database table\n>\n> Not anymore.\n>\n> * Interlock to prevent DROP DATABASE on a database with running backends\n>\n\n",
"msg_date": "Sat, 15 Jan 2000 15:29:50 -0800",
"msg_from": "Stephen Birch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO list"
},
{
"msg_contents": "> Can we ALTER a table to drop a column yet, or is that still a TO DO item?\n> \n\nStill a TODO item.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 Jan 2000 21:29:18 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO list"
},
{
"msg_contents": "I'm working on that, but pssst, don't tell anyone! ;)\n\nOn 2000-01-15, Stephen Birch mentioned:\n\n> Can we ALTER a table to drop a column yet, or is that still a TO DO item?\n> \n> Steve\n> \n> \n> Peter Eisentraut wrote:\n> \n> > * User who can create databases can modify pg_database table\n> >\n> > Not anymore.\n> >\n> > * Interlock to prevent DROP DATABASE on a database with running backends\n> >\n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sun, 16 Jan 2000 18:18:31 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TODO list"
},
{
"msg_contents": "> * Allow array on int8[]\n> Done. (Credit to Thomas, thought, he just forgot to apply the patch.)\n\nThanks. btw, didn't forget, but wanted confirmation that it worked\n(which I got a day or two later).\n\n> * Make Absolutetime/Relativetime int4 because time_t can be int8 on some\n> ports\n> Does this mean the abstime/reltime types or all of them? I thought the\n> former were deprecated anyway.\n\nabstime should probably be considered deprecated as a user type, but\nit is still used extensively internally and within the tuple\nstructure. I'd be reluctant to wholesale replace it with\ntimestamp/datetime, since that will take 8 bytes per value rather than\n4.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 17 Jan 2000 08:02:02 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO list"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Does this mean the abstime/reltime types or all of them? I thought the\n>> former were deprecated anyway.\n\n> abstime should probably be considered deprecated as a user type, but\n> it is still used extensively internally and within the tuple\n> structure. I'd be reluctant to wholesale replace it with\n> timestamp/datetime, since that will take 8 bytes per value rather than\n> 4.\n\nI was meaning to ask you which of the date/time types are going to be\nleft standing when the dust settles. (I know you've said, but the\narchives are so messed up right now that I can't find it.)\n\nTimestamp is the only remaining standard type without an array type,\nand if it's not going to be deprecated then it ought to have one...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jan 2000 03:08:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO list "
},
{
"msg_contents": "> I was meaning to ask you which of the date/time types are going to be\n> left standing when the dust settles. (I know you've said, but the\n> archives are so messed up right now that I can't find it.)\n> Timestamp is the only remaining standard type without an array type,\n> and if it's not going to be deprecated then it ought to have one...\n\n\"timestamp\" will continue, but *all* of the code will come from a\nrenamed \"datetime\". So don't bother adding anything for timestamp,\nsince it will magically appear when datetime gets renamed.\n\nbtw, I will make \"datetime\" a synonym for \"timestamp\", so existing\napps should work without change.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 17 Jan 2000 08:31:52 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO list"
},
{
"msg_contents": "On Mon, 17 Jan 2000, Thomas Lockhart wrote:\n\n> > * Make Absolutetime/Relativetime int4 because time_t can be int8 on some\n> > ports\n> > Does this mean the abstime/reltime types or all of them? I thought the\n> > former were deprecated anyway.\n> \n> abstime should probably be considered deprecated as a user type, but\n> it is still used extensively internally and within the tuple\n> structure. I'd be reluctant to wholesale replace it with\n> timestamp/datetime, since that will take 8 bytes per value rather than\n> 4.\n\nJust so I understand this: The official SQL data types are \"timestamp\" and\n\"interval\", right? Everything else will eventually be an alias or phased\nout or whatever?\n\nI've been itching to change the pg_shadow.valuntil column to timestamp\nanyway, I suppose that would be a step in the right direction, or not?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 17 Jan 2000 12:13:35 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO list"
},
{
"msg_contents": "> > I was meaning to ask you which of the date/time types are going to be\n> > left standing when the dust settles. (I know you've said, but the\n> > archives are so messed up right now that I can't find it.)\n> > Timestamp is the only remaining standard type without an array type,\n> > and if it's not going to be deprecated then it ought to have one...\n> \n> \"timestamp\" will continue, but *all* of the code will come from a\n> renamed \"datetime\". So don't bother adding anything for timestamp,\n> since it will magically appear when datetime gets renamed.\n> \n> btw, I will make \"datetime\" a synonym for \"timestamp\", so existing\n> apps should work without change.\n\nGot it. Never mind.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jan 2000 11:17:58 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO list"
},
{
"msg_contents": "> The official SQL data types are \"timestamp\" and\n> \"interval\", right? Everything else will eventually be an alias or \n> phased out or whatever?\n\nNo (at least I haven't proposed that). abstime stays as a 4-byte\ninternal system time type. timestamp and interval become full-featured\ndate/time types, stealing all of the datetime and timespan code, and\nthe latter two become synonyms for timestamp and interval.\n\n> I've been itching to change the pg_shadow.valuntil column to timestamp\n> anyway, I suppose that would be a step in the right direction, or not?\n\nAt the moment, there are *no* 8-byte date/time types in the system\ntables. This would be the first instance of that, and I'm not sure we\nshould introduce it in just one place.\n\nHas abstime been a problem here?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 17 Jan 2000 16:33:59 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO list"
},
{
"msg_contents": "On 2000-01-17, Thomas Lockhart mentioned:\n\n> > The official SQL data types are \"timestamp\" and\n> > \"interval\", right? Everything else will eventually be an alias or \n> > phased out or whatever?\n> \n> No (at least I haven't proposed that). abstime stays as a 4-byte\n> internal system time type. timestamp and interval become full-featured\n> date/time types, stealing all of the datetime and timespan code, and\n> the latter two become synonyms for timestamp and interval.\n\nOkay, so we have \"timestamp\" and \"interval\" as offical types, a few\n\"datetime\" sort of things as aliases for backwards compatibility, and\n\"abstime\" as a more or less internal type with less precision and storage\nrequirements. Sounds clear to me. This also puts the original TODO item\ninto a much clearer light.\n\n> > I've been itching to change the pg_shadow.valuntil column to timestamp\n> > anyway, I suppose that would be a step in the right direction, or not?\n> \n> At the moment, there are *no* 8-byte date/time types in the system\n> tables. This would be the first instance of that, and I'm not sure we\n> should introduce it in just one place.\n> \n> Has abstime been a problem here?\n\nNo. I just thought this could be done, but in view of your explanation I\nam now wiser ...\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n",
"msg_date": "Wed, 19 Jan 2000 00:28:10 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TODO list"
}
] |
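The UPDATE question from this thread, spelled out against a hypothetical table. Per the grammar Peter quotes, <object column> is a bare <column name>, which supports Bruce's reading:

    UPDATE table1 SET value = 3 WHERE id = 7;          -- SQL92 form
    UPDATE table1 SET table1.value = 3 WHERE id = 7;   -- qualified form; not in the quoted grammar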
[
{
"msg_contents": "Hello,\n\nI grabbed last nights snapshot to come up with a diff for\npg_dump to generate COMMENT ON statements for user-defined\ndescriptions. However, I believe pg_dump is currently in a\nbroken state with respect to the dumping of indexes,\npossibly related to the current work done with\nINDEX_MAX_KEYS. Can anyone confirm this?\n\nThanks for any info,\n\nMike Mascari\n\n\n",
"msg_date": "Sat, 15 Jan 2000 17:17:21 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "INDEX_MAX_KEYS and pg_dump"
},
{
"msg_contents": "> Hello,\n> \n> I grabbed last nights snapshot to come up with a diff for\n> pg_dump to generate COMMENT ON statements for user-defined\n> descriptions. However, I believe pg_dump is currently in a\n> broken state with respect to the dumping of indexes,\n> possibly related to the current work done with\n> INDEX_MAX_KEYS. Can anyone confirm this?\n\nI just looked and I don't see any problems.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 Jan 2000 23:05:30 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INDEX_MAX_KEYS and pg_dump"
},
{
"msg_contents": ">> descriptions. However, I believe pg_dump is currently in a\n>> broken state with respect to the dumping of indexes,\n>> possibly related to the current work done with\n>> INDEX_MAX_KEYS. Can anyone confirm this?\n\n> I just looked and I don't see any problems.\n\nIt was broken --- I committed fixes for it at about 11PM EST.\n\n(The same problem as for func args --- it wasn't coping with\nzero suppression in oidvectorout/int2vectorout.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 Jan 2000 23:51:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INDEX_MAX_KEYS and pg_dump "
},
{
"msg_contents": "> >> descriptions. However, I believe pg_dump is currently in a\n> >> broken state with respect to the dumping of indexes,\n> >> possibly related to the current work done with\n> >> INDEX_MAX_KEYS. Can anyone confirm this?\n> \n> > I just looked and I don't see any problems.\n> \n> It was broken --- I committed fixes for it at about 11PM EST.\n> \n> (The same problem as for func args --- it wasn't coping with\n> zero suppression in oidvectorout/int2vectorout.)\n\nThanks. I missed that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 Jan 2000 00:40:21 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INDEX_MAX_KEYS and pg_dump"
}
] |
[
{
"msg_contents": "\n",
"msg_date": "Sun, 16 Jan 2000 00:25:07 +0000 (GMT)",
"msg_from": "Marc Tardif <[email protected]>",
"msg_from_op": true,
"msg_subject": "rules with oid"
}
] |
[
{
"msg_contents": "I noticed today that the system drops any \"typmod\" modifier associated\nwith a type name being casted to. For example,\n\nregression=# select '1.23456'::numeric(7,2);\n ?column?\n----------\n 1.23456\t\t\t--- should be 1.23\n(1 row)\n\nregression=# select CAST ('1234567.89' AS numeric(4,1));\n ?column?\n------------\n 1234567.89\t\t\t--- should raise a numeric-overflow error\n(1 row)\n\nThese particular cases can be fixed with a one-line patch, I think,\nbecause there is storage in an A_Const node to hold a reference to\na Typename, which includes typmod. parse_expr.c is just forgetting\nto pass the typmod to parser_typecast().\n\nBUT: there isn't any equally simple patch when the value being casted\nis not a constant. For instance\n\n\tselect field1 :: numeric(7,2) from table1;\n\ncannot work properly now, because gram.y transforms it into\n\n\tselect numeric(field1) from table;\n\nwhich (a) drops the typmod and (b) bypasses all of the intelligence\nthat should be used to determine how to coerce the type.\n\nWhat I think we need is to add a new parsetree node type that explicitly\nrepresents a CAST operator, and then modify parse_expr.c to transform\nthat node type into an appropriate function call (or, perhaps, nothing\nat all if the source value is already the right type).\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 Jan 2000 21:03:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "I think we need an explicit parsetree node for CAST"
},
{
"msg_contents": "> These particular cases can be fixed with a one-line patch, I think,\n> because there is storage in an A_Const node to hold a reference to\n> a Typename, which includes typmod. parse_expr.c is just forgetting\n> to pass the typmod to parser_typecast().\n> \n> BUT: there isn't any equally simple patch when the value being casted\n> is not a constant. For instance\n> \n> \tselect field1 :: numeric(7,2) from table1;\n> \n> cannot work properly now, because gram.y transforms it into\n> \n> \tselect numeric(field1) from table;\n> \n> which (a) drops the typmod and (b) bypasses all of the intelligence\n> that should be used to determine how to coerce the type.\n> \n> What I think we need is to add a new parsetree node type that explicitly\n> represents a CAST operator, and then modify parse_expr.c to transform\n> that node type into an appropriate function call (or, perhaps, nothing\n> at all if the source value is already the right type).\n\nI have on the TODO list, and once considered adding more passing around\nof atttypmod in the parser to keep such information. Maybe I shoud do\nthat first to see what happens and what gets fixed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 Jan 2000 21:37:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I think we need an explicit parsetree node for CAST"
},
{
"msg_contents": "> These particular cases can be fixed with a one-line patch, I think,\n> because there is storage in an A_Const node to hold a reference to\n> a Typename, which includes typmod. parse_expr.c is just forgetting\n> to pass the typmod to parser_typecast().\n> \n> BUT: there isn't any equally simple patch when the value being casted\n> is not a constant. For instance\n> \n> \tselect field1 :: numeric(7,2) from table1;\n> \n> cannot work properly now, because gram.y transforms it into\n> \n> \tselect numeric(field1) from table;\n> \n> which (a) drops the typmod and (b) bypasses all of the intelligence\n> that should be used to determine how to coerce the type.\n> \n> What I think we need is to add a new parsetree node type that explicitly\n> represents a CAST operator, and then modify parse_expr.c to transform\n> that node type into an appropriate function call (or, perhaps, nothing\n> at all if the source value is already the right type).\n\nActually, I think I never made the additional atttypmod changes because\nno one had ever reported a problem, and I was confused by that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 Jan 2000 21:41:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I think we need an explicit parsetree node for CAST"
},
{
"msg_contents": "At 09:03 PM 1/15/00 -0500, Tom Lane wrote:\n\n>What I think we need is to add a new parsetree node type that explicitly\n>represents a CAST operator, and then modify parse_expr.c to transform\n>that node type into an appropriate function call (or, perhaps, nothing\n>at all if the source value is already the right type).\n>\n>Comments?\n\nWell, I hate to keep popping up wearing my compiler-writer hat, but\nyes, this seems obvious. If the casting notation includes type\nmodifications like precision information then the simple expression\ntype_name(expr) can't ever match it. The casting notation must be \nrejected or properly executed. Silent errors of this type are simply\nunforgivable. Either an error or a proper transformation (presumably\nto a function that takes precision and scale parameters) is fine, but\nsilent and incorrect execution is a major sin.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sat, 15 Jan 2000 20:07:30 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I think we need an explicit parsetree node for\n CAST"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Actually, I think I never made the additional atttypmod changes because\n> no one had ever reported a problem, and I was confused by that.\n\nI think that after further discussion, we concluded that it wasn't\nreally possible to determine an atttypmod value to attach to the\nresult of most expressions. However, CAST is a special case because\nthere *is* a typmod value associated with the Typename node. The\nthing I want to do is make sure we hold onto that value long enough\nto use it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 Jan 2000 23:22:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] I think we need an explicit parsetree node for CAST "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Actually, I think I never made the additional atttypmod changes because\n> > no one had ever reported a problem, and I was confused by that.\n> \n> I think that after further discussion, we concluded that it wasn't\n> really possible to determine an atttypmod value to attach to the\n> result of most expressions. However, CAST is a special case because\n> there *is* a typmod value associated with the Typename node. The\n> thing I want to do is make sure we hold onto that value long enough\n> to use it...\n\nI found the area you mentioned and fixed it. Seems the other areas I\nremember being a problem were fixed by me or someone else, maybe you.\nI remember makeResdom and makeVar having bogus entries sometimes, but I\nsee now they are all fixed.\n\nI see your issue, and I don't know the code well enough to comment on\nit.\n\nI was able to do:\n\n\ttest=> select 'x' as fred into test ;\n\tNOTICE: Attribute 'fred' has an unknown type\n\t Relation created; continue\n\tSELECT\n\ttest=> \\d test\n\t Table \"test\"\n\t Attribute | Type | Extra \n\t-----------+---------+-------\n\t fred | unknown | \n\t\n---------------------------------------------------------------------------\n\n\ttest=> select 'x'::varchar as fred into test ;\n\tSELECT\n\ttest=> \\d test\n\t Table \"test\"\n\t Attribute | Type | Extra \n\t-----------+------------+-------\n\t fred | varchar(0) | \n\n\nSeems we should disallow this. This last one is the one you want to\nfix:\n\n---------------------------------------------------------------------------\n\n\ttest=> select 'x'::varchar(20) as fred into test ;\n\tSELECT\n\ttest=> \\d test\n\t Table \"test\"\n\t Attribute | Type | Extra \n\t-----------+------------+-------\n\t fred | varchar(0) | \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 Jan 2000 00:39:53 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I think we need an explicit parsetree node for CAST"
},
{
"msg_contents": "I have applied a patch for this one.\n\n> I noticed today that the system drops any \"typmod\" modifier associated\n> with a type name being casted to. For example,\n> \n> regression=# select '1.23456'::numeric(7,2);\n> ?column?\n> ----------\n> 1.23456\t\t\t--- should be 1.23\n> (1 row)\n> \n> regression=# select CAST ('1234567.89' AS numeric(4,1));\n> ?column?\n> ------------\n> 1234567.89\t\t\t--- should raise a numeric-overflow error\n> (1 row)\n> \n> These particular cases can be fixed with a one-line patch, I think,\n> because there is storage in an A_Const node to hold a reference to\n> a Typename, which includes typmod. parse_expr.c is just forgetting\n> to pass the typmod to parser_typecast().\n> \n> BUT: there isn't any equally simple patch when the value being casted\n> is not a constant. For instance\n> \n> \tselect field1 :: numeric(7,2) from table1;\n> \n> cannot work properly now, because gram.y transforms it into\n> \n> \tselect numeric(field1) from table;\n> \n> which (a) drops the typmod and (b) bypasses all of the intelligence\n> that should be used to determine how to coerce the type.\n> \n> What I think we need is to add a new parsetree node type that explicitly\n> represents a CAST operator, and then modify parse_expr.c to transform\n> that node type into an appropriate function call (or, perhaps, nothing\n> at all if the source value is already the right type).\n> \n> Comments?\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 Jan 2000 00:40:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I think we need an explicit parsetree node for CAST"
},
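For reference, this is what the constant-cast examples from the start of the thread should produce once the typmod reaches parser_typecast(); the expected results below are inferred from Tom's "should be" notes in the opening message, not captured from an actual run:

    select '1.23456'::numeric(7,2);
    -- expected with the patch: 1.23

    select CAST ('1234567.89' AS numeric(4,1));
    -- expected with the patch: an ERROR reporting numeric overflow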
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have applied a patch for this one.\n\nRight, you saw the parser_typecast mistake. But the problem of doing\nit properly for non-constant input to the CAST is still open.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 Jan 2000 01:25:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] I think we need an explicit parsetree node for CAST "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I have applied a patch for this one.\n> \n> Right, you saw the parser_typecast mistake. But the problem of doing\n> it properly for non-constant input to the CAST is still open.\n> \n\nYes, and constants with cases in SELECT INTO are broken too.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 Jan 2000 01:44:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I think we need an explicit parsetree node for CAST"
},
{
"msg_contents": ">> Right, you saw the parser_typecast mistake. But the problem of doing\n>> it properly for non-constant input to the CAST is still open.\n\nBTW, the strings regress test is currently failing in a couple of\nplaces, because it thinks that casting to \"char\" won't truncate the\nstring. With this patch in place, casting a constant to \"char\" means\ncasting to char(1) which indeed truncates to one character. I think\nthis is correct behavior, though it may surprise someone somewhere.\n\nThere are other places in the strings test that cast non-constant\nexpressions to \"char\", and those are going to change behavior as soon\nas I finish inventing a parsenode for CAST. So I am not going to bother\nchecking in an update for the strings test until the dust settles.\n\n> Yes, and constants with cases in SELECT INTO are broken too.\n\nHuh? I'm not sure if I follow this or not --- would you give an\nexample?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 Jan 2000 16:32:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] I think we need an explicit parsetree node for CAST "
},
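A small illustration of the truncation behavior described above; the result is what the discussion implies for a constant cast to "char" (treated as char(1) with the patch in place), sketched rather than captured from psql:

    select 'abc'::char;
    -- equivalent to 'abc'::char(1) with the patch,
    -- so the result is just 'a'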
{
"msg_contents": "> > Yes, and constants with cases in SELECT INTO are broken too.\n> \n> Huh? I'm not sure if I follow this or not --- would you give an\n> example?\n\nHere is the mail I sent out last night. It shows a failure:\n\n---------------------------------------------------------------------------\n\nI see your issue, and I don't know the code well enough to comment on\nit.\n\nI was able to do:\n\n test=> select 'x' as fred into test ;\n NOTICE: Attribute 'fred' has an unknown type\n Relation created; continue\n SELECT\n test=> \\d test\n Table \"test\"\n Attribute | Type | Extra\n -----------+---------+-------\n fred | unknown |\n\n---------------------------------------------------------------------------\n\n\n test=> select 'x'::varchar as fred into test ;\n SELECT\n test=> \\d test\n Table \"test\"\n Attribute | Type | Extra\n -----------+------------+-------\n fred | varchar(0) |\n\n\nSeems we should disallow this. This last one is the one you want to\nfix:\n\n---------------------------------------------------------------------------\n\n\n test=> select 'x'::varchar(20) as fred into test ;\n SELECT\n test=> \\d test\n Table \"test\"\n Attribute | Type | Extra\n -----------+------------+-------\n fred | varchar(0) |\n\n-\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 Jan 2000 17:04:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I think we need an explicit parsetree node for CAST"
}
] |
[
{
"msg_contents": "I have repaired the most recently introduced coredump in pg_dump,\nbut it still crashes on the regression test database.\n\nIssue 1:\n\nThe \"most recently introduced coredump\" came from the change to\noidvector/int2vector to suppress trailing zeroes in the output\nroutine. pg_dump was assuming that it would see exactly the\nright number of zeroes, and wasn't bothering to initialize any\nleftover array locations --- but it would happily try to dereference\nthose locations later on. Ugh.\n\nAlthough cleaning up pg_dump's code is clearly good practice, maybe\nthis should raise a flag about whether suppressing the zeroes is\na good idea. Are there any client applications that will break\nbecause of this change? I'm not sure...\n\nIssue 2:\n\nThe reason it's still broken is that the pqexpbuffer.c code I added to\nlibpq doesn't support adding more than 1K characters to an \"expansible\nstring\" in any one appendPQExpBuffer() call. pg_dump tries to use that\nroutine to format function definitions, which can easily be over 1K.\n(Very likely there are other places in pg_dump that have similar\nproblems, but this is the one you hit first when trying to pg_dump the\nregression DB.) That 1K limitation was OK when the module was just used\ninternally in libpq, but if we're going to allow pg_dump to use it, we\nprobably ought to relax the limitation.\n\nThe equivalent backend code already has solved this problem, but it\nsolved it by using vsnprintf() which isn't available everywhere.\nWe have a vsnprintf() emulation in backend/port, so in theory we\ncould link that routine into libpq if we are on a platform that\nhasn't got vsnprintf.\n\nThe thing that bothers me about that is that if libpq exports a\nvsnprintf routine that's different from the system version, we\ncould find ourselves changing the behavior of applications that\nthought they were calling the local system's vsnprintf. (The\nbackend/port module would get linked if either snprintf() or\nvsnprintf() is missing --- there are machines that have only one\n--- and we'd effectively replace the system definition of the\none that the local system did have.) That's not good.\n\nHowever, the alternative of hacking pg_dump so it doesn't try to\nformat more than 1K at a time is mighty unattractive as well.\n\nI am inclined to go ahead and insert vsnprintf into libpq.\nThe risk of problems seems pretty small (and it's zero on any\nmachine with a reasonably recent libc, since then vsnprintf\nwill be in libc and we won't link our version). The risk of\nmissing a buffer-overrun condition in pg_dump, and shipping\na pg_dump that will fail on someone's database, seems worse.\n\nComments? Better ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 Jan 2000 23:16:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump not in very good shape"
},
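One way to watch the zero suppression in question is to look at pg_proc.proargtypes directly; a hypothetical probe (the rows returned will vary by installation, and the point is only that entries beyond pronargs no longer show up as trailing zeroes):

    -- proargtypes is an oidvector; with the change, the unused
    -- trailing positions are suppressed in the output
    SELECT proname, pronargs, proargtypes
    FROM pg_proc
    WHERE pronargs < 3
    LIMIT 5;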
{
"msg_contents": "> I have repaired the most recently introduced coredump in pg_dump,\n> but it still crashes on the regression test database.\n> \n> Issue 1:\n> \n> The \"most recently introduced coredump\" came from the change to\n> oidvector/int2vector to suppress trailing zeroes in the output\n> routine. pg_dump was assuming that it would see exactly the\n> right number of zeroes, and wasn't bothering to initialize any\n> leftover array locations --- but it would happily try to dereference\n> those locations later on. Ugh.\n> \n> Although cleaning up pg_dump's code is clearly good practice, maybe\n> this should raise a flag about whether suppressing the zeroes is\n> a good idea. Are there any client applications that will break\n> because of this change? I'm not sure...\n\nI think we are OK. There are very few places the vectors are used. \nThey really weren't used even as part of initdb except to define the\ntypes. Makes sense pg_dump uses it, I guess, but I can't imagine other\napps using it. With a definable length, I think we have to supress the\nzero padding.\n\n> I am inclined to go ahead and insert vsnprintf into libpq.\n> The risk of problems seems pretty small (and it's zero on any\n> machine with a reasonably recent libc, since then vsnprintf\n> will be in libc and we won't link our version). The risk of\n> missing a buffer-overrun condition in pg_dump, and shipping\n> a pg_dump that will fail on someone's database, seems worse.\n\nYou bring up an interesting point. I say just link it in and see what\nhappens. No real good way to know how much memory sprintf needs.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 Jan 2000 00:36:21 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape"
},
{
"msg_contents": "On 2000-01-15, Tom Lane mentioned:\n\n> I am inclined to go ahead and insert vsnprintf into libpq.\n> The risk of problems seems pretty small (and it's zero on any\n> machine with a reasonably recent libc, since then vsnprintf\n> will be in libc and we won't link our version). The risk of\n> missing a buffer-overrun condition in pg_dump, and shipping\n> a pg_dump that will fail on someone's database, seems worse.\n> \n> Comments? Better ideas?\n\nI think including this in libpq is the wrong way to go. It's not meant for\nexternal clients. If you open this can of worms then anything psql or\npg_dump feel like using that day becomes part of the library interface.\nWe'd be stuck with supporting this forever.\n\nA better idea would be to do what psql does with snprintf: Just include\nthe [v]snprintf.o file in the compilation (linking) conditionally. (Of\ncourse a better plan might even be consolidating all the backend/port and\nutils stuff into one unified port directory that everyone in the source\ntree can use, but that's probably too much bother right now.)\n\nOne thing that I hope I can tackle for 7.1 is cleaning up the build\nprocess (with automake?) and that would take care of missing functions\nautomatically by substituting a replacement contained in the distribution,\nas I suggested above.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n",
"msg_date": "Sun, 16 Jan 2000 21:12:01 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape"
},
{
"msg_contents": "On 2000-01-15, Tom Lane mentioned:\n\n> I have repaired the most recently introduced coredump in pg_dump,\n> but it still crashes on the regression test database.\n\nWhich brings up the idea why the regression tests don't test pg_dump. It's\njust as important to people as the backend. psql already gets tested more\nor less. Would it not be a decent idea to do a\n\npg_dump regress > out\ndiff out expected.out\n\nat the end of the tests? That way we could catch these problems\nearlier. (Especially since I'm not sure how many people use pg_dump at all\nduring development.)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sun, 16 Jan 2000 21:12:20 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On 2000-01-15, Tom Lane mentioned:\n> \n> > I have repaired the most recently introduced coredump in pg_dump,\n> > but it still crashes on the regression test database.\n> \n> Which brings up the idea why the regression tests don't test pg_dump. It's\n> just as important to people as the backend. psql already gets tested more\n> or less. Would it not be a decent idea to do a\n> \n> pg_dump regress > out\n> diff out expected.out\n> \n> at the end of the tests? That way we could catch these problems\n> earlier. (Especially since I'm not sure how many people use pg_dump at all\n> during development.)\n\nActually the megatest is:\n\n\tpg_dump regress > out\n\tdropdb regression\n\tcreatedb regression\n\tpsql regression < out\n\tpg_dump regress > out2\n\tdiff out out2\n\nThat is the pg_dump test, and someone usually does it as part of\nregression testing before each release.\n\nIt would be nice to add this to test/regress/Makefile maybe.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jan 2000 01:19:19 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 2000-01-15, Tom Lane mentioned:\n>> I am inclined to go ahead and insert vsnprintf into libpq.\n\n> I think including this in libpq is the wrong way to go. [snip]\n> A better idea would be to do what psql does with snprintf: Just include\n> the [v]snprintf.o file in the compilation (linking) conditionally.\n\nSorry if I was unclear, but that was exactly what I meant.\n\nBTW, since this is now done in libpq, you could probably remove\nsnprintf.o from psql ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jan 2000 01:29:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Which brings up the idea why the regression tests don't test pg_dump.\n\nThat'd be nice ...\n\n> Would it not be a decent idea to do a\n\n> pg_dump regress > out\n> diff out expected.out\n\n> at the end of the tests?\n\nThere's a couple of small practical problems with that. Number one:\n\npg_dump regression | wc\n118211 565800 3516170\n\nAdding a 3.5meg comparison file to the distribution isn't too\nappetizing; nor is the prospect of trying to keep it up to date\nvia cvs. (*How* much storage did you just add to hub, Marc? ;-))\n\nNumber two is that we'd never get consistent dump results across\ndifferent platforms. There are the known cross-platform variations\n(float roundoff, DST handling, etc) already accounted for by\nplatform-specific substitute comparison files. Worse, a dump will\nsee the platform-dependent variations in tuple update order that we\ncurrently mask in many tests by asking for ordered select results.\nI don't think anyone will hold still for a bunch of 3.5meg\nplatform-specific dump comparison files.\n\nIn short, it'd be a great idea, but figuring out a practical testing\nmethod will take some work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jan 2000 01:38:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Actually the megatest is:\n\n> \tpg_dump regress > out\n> \tdropdb regression\n> \tcreatedb regression\n> \tpsql regression < out\n> \tpg_dump regress > out2\n> \tdiff out out2\n\n> That is the pg_dump test, and someone usually does it as part of\n> regression testing before each release.\n\n> It would be nice to add this to test/regress/Makefile maybe.\n\nThat's a good thought --- it eliminates both the platform-specific\nissues and the problem of adding a bulky reference file to the\ndistribution.\n\nI'd suggest, though, that the test *not* clobber the regression DB.\nInstead\n\n\tpg_dump regression >out\n\tcreatedb regression2\n\tpsql regression2 <out\n\tpg_dump regression >out2\n\tdropdb regression2\t\t-- maybe\n\tdiff out out2\n\nThis leaves you a better chance of investigating the diff if you\nget one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jan 2000 01:56:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape "
},
{
"msg_contents": "On Mon, 17 Jan 2000, Tom Lane wrote:\n\n> Peter Eisentraut <[email protected]> writes:\n> > On 2000-01-15, Tom Lane mentioned:\n> >> I am inclined to go ahead and insert vsnprintf into libpq.\n> \n> > I think including this in libpq is the wrong way to go. [snip]\n> > A better idea would be to do what psql does with snprintf: Just include\n> > the [v]snprintf.o file in the compilation (linking) conditionally.\n> \n> Sorry if I was unclear, but that was exactly what I meant.\n> \n> BTW, since this is now done in libpq, you could probably remove\n> snprintf.o from psql ...\n\nHmm, maybe this is not what I meant. I meant adding the linking line to\npg_dump, not libpq. But I guess as long as we don't tell anyone about it\n(vsnprintf being in libpq) we can safely take it out later if someone has\na better plan.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 17 Jan 2000 12:07:43 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > Actually the megatest is:\n>\n> > pg_dump regress > out\n> > dropdb regression\n> > createdb regression\n> > psql regression < out\n> > pg_dump regress > out2\n> > diff out out2\n>\n> > That is the pg_dump test, and someone usually does it as part of\n> > regression testing before each release.\n>\n> > It would be nice to add this to test/regress/Makefile maybe.\n>\n> That's a good thought --- it eliminates both the platform-specific\n> issues and the problem of adding a bulky reference file to the\n> distribution.\n\n Still an incomplete test at all.\n\n It doesn't guarantee, that the resulting dump is what you\n really need to restore the database. For example, I'm not\n sure that FOREIGN KEY constraints are actually dumped\n correct (as constraint trigger statements after the data\n load). So it might work, and result in the same, wrong dump\n again.\n\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n\n",
"msg_date": "Mon, 17 Jan 2000 16:06:04 +0100",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape"
},
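For context, the FOREIGN KEY representation Jan is worried about would have to appear in the dump as a constraint trigger recreated after the data load. A rough, hypothetical sketch of the shape such a statement takes (table, column, and constraint names are invented, and the exact argument list of RI_FKey_check_ins may differ):

    -- hypothetical: "orders"(cust_id) references "customers"(id)
    CREATE CONSTRAINT TRIGGER "<unnamed>" AFTER INSERT OR UPDATE ON "orders"
        FROM "customers" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW
        EXECUTE PROCEDURE "RI_FKey_check_ins" ('<unnamed>', 'orders',
            'customers', 'UNSPECIFIED', 'cust_id', 'id');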
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>>>> A better idea would be to do what psql does with snprintf: Just include\n>>>> the [v]snprintf.o file in the compilation (linking) conditionally.\n>> \n>> Sorry if I was unclear, but that was exactly what I meant.\n>> \n>> BTW, since this is now done in libpq, you could probably remove\n>> snprintf.o from psql ...\n\n> Hmm, maybe this is not what I meant. I meant adding the linking line to\n> pg_dump, not libpq.\n\nNot a usable answer: that would mean that *every* application using\nlibpq would have to start including backend/port/snprintf.o, on the\nplatforms where vsnprintf doesn't exist.\n\n> But I guess as long as we don't tell anyone about it\n> (vsnprintf being in libpq)\n\nIt's certainly not going to become a published part of the interface,\nif that's what you mean.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jan 2000 11:22:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape "
},
{
"msg_contents": "On 2000-01-17, Tom Lane mentioned:\n\n> > Hmm, maybe this is not what I meant. I meant adding the linking line to\n> > pg_dump, not libpq.\n> \n> Not a usable answer: that would mean that *every* application using\n> libpq would have to start including backend/port/snprintf.o, on the\n> platforms where vsnprintf doesn't exist.\n\nNevermind. I got it all backwards.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 19 Jan 2000 00:28:32 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape "
}
] |
[
{
"msg_contents": "Hi all. Running 6.5.3 on Intel Linux. I think problems related to this\nhave been reported in the past but the mailing list archives are running\nincredibly slowly lately so I'm not *absolutely* sure:\n\n[PostgreSQL 6.5.3 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\nmail=> begin transaction ;\nBEGIN\nmail=> create temp table tbl1 ( x int4 ) ;\nCREATE\nmail=> drop table tbl1 ;\nDROP\nmail=> commit transaction ;\nNOTICE: trying to delete a reldesc that does not exist.\nNOTICE: trying to delete a reldesc that does not exist.\nEND\nmail=>\n\nA two-parter:\n\n(1) For myself: Is this at all dangerous/problematic for my database?\n\n(2) For Postgres: Is this addressed in the latest 7.0 source or on the\n\tTODO list?\n\nThanks...\n\n- K\n\nKristofer Munn * KMI * 973-509-9414 * AIM KrMunn * http://www.munn.com/\n\n",
"msg_date": "Sun, 16 Jan 2000 01:57:41 -0500 (EST)",
"msg_from": "Kristofer Munn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Temp Tables: Trying to delete a reldesc..."
},
{
"msg_contents": "> Hi all. Running 6.5.3 on Intel Linux. I think problems related to this\n> have been reported in the past but the mailing list archives are running\n> incredibly slowly lately so I'm not *absolutely* sure:\n> \n> [PostgreSQL 6.5.3 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n> \n> mail=> begin transaction ;\n> BEGIN\n> mail=> create temp table tbl1 ( x int4 ) ;\n> CREATE\n> mail=> drop table tbl1 ;\n> DROP\n> mail=> commit transaction ;\n> NOTICE: trying to delete a reldesc that does not exist.\n> NOTICE: trying to delete a reldesc that does not exist.\n> END\n> mail=>\n> \n> A two-parter:\n> \n> (1) For myself: Is this at all dangerous/problematic for my database?\n\nNot a problem. Was some issue with flushing the cache, I bet.\n\n> \n> (2) For Postgres: Is this addressed in the latest 7.0 source or on the\n> \tTODO list?\n\nFixed in current tree.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 Jan 2000 02:09:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Temp Tables: Trying to delete a reldesc..."
},
{
"msg_contents": "Kristofer Munn <[email protected]> writes:\n> [ 6.5.3 acts funny about deleting a temp table created in the same\n> transaction ]\n\n> (1) For myself: Is this at all dangerous/problematic for my database?\n\nAFAIR the notice is harmless; we'd have tried harder to back-patch a\nfix if the consequences of the bug were critical.\n\n> (2) For Postgres: Is this addressed in the latest 7.0 source or on the\n> \tTODO list?\n\nIt's fixed in current sources.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 Jan 2000 02:13:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Temp Tables: Trying to delete a reldesc... "
}
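For anyone who wants to re-check this against current sources, the whole report boils down to the sequence below; per the replies above, fixed sources should complete the COMMIT cleanly (the expected behavior is inferred from the thread, not from a fresh run):

    BEGIN;
    CREATE TEMP TABLE tbl1 (x int4);
    DROP TABLE tbl1;
    COMMIT;
    -- 6.5.3 emits "NOTICE: trying to delete a reldesc that does not
    -- exist." twice here; current sources should simply report END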
] |
[
{
"msg_contents": "Hi all - ran into this little parser idiosyncrasy today... Workaround was\nsimple but this should probably go on somebody's list somewhere.\n\n[PostgreSQL 6.5.3 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\nmail=> create table tbl1 ( id1 int4 );\nCREATE\n\nmail=> create table tbl2 ( id2 int4, id1 int4 ) ;\nCREATE\n\nmail=> select 1 from tbl2 t2, tbl1 t1 where t1.id1 = t2.id1 and\n\tt2.id1 = 7 for update ;\n?column?\n--------\n(0 rows)\n\nmail=> select 1 from tbl2 t2, tbl1 t1 where t1.id1 = t2.id1 and\n\tt2.id1 = 7 for update of t2;\n?column?\n--------\n(0 rows)\n\nmail=> select 1 from tbl2 t2, tbl1 t1 where t1.id1 = t2.id1 and \n\tt2.id1 = 7 for update of tbl2;\n\nERROR: FOR UPDATE: relation tbl2 not found in FROM clause\n\n- K\n\nKristofer Munn * KMI * 973-509-9414 * AIM KrMunn * http://www.munn.com/\n\n",
"msg_date": "Sun, 16 Jan 2000 02:41:02 -0500 (EST)",
"msg_from": "Kristofer Munn <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT...FOR UPDATE OF class_name"
},
{
"msg_contents": "Kristofer Munn <[email protected]> writes:\n> select 1 from tbl2 t2, tbl1 t1 where t1.id1 = t2.id1 and \n> \tt2.id1 = 7 for update of tbl2;\n\n> ERROR: FOR UPDATE: relation tbl2 not found in FROM clause\n\nI believe the error message is correct; you should have written\n\nselect 1 from tbl2 t2, tbl1 t1 where t1.id1 = t2.id1 and \n\tt2.id1 = 7 for update of t2;\n\nA lot of people do not realize that writing an alias for a table\nin FROM means that as far as all the rest of that query is concerned,\nthat alias *is* the name of the table. The original table name is\ncompletely masked by the alias. This must be so, because one of the\nmain reasons for the alias facility is to resolve ambiguity when you\nare doing self-joins. Consider\n\n\tselect * from person p1, person p2 where p1.spouse = p2.id;\n\nIf you wrote instead\n\n\tselect * from person p1, person p2 where p1.spouse = person.id;\n\nwhich instance of the person table is being referenced? SQL resolves\nthis by treating it as an error: there is no table named person\navailable from that FROM clause.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 Jan 2000 03:09:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT...FOR UPDATE OF class_name "
},
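Tom's spouse example, fleshed out into a runnable form (the table definition is invented for illustration):

    CREATE TABLE person (id int4, spouse int4, name text);

    -- fine: each instance of person gets its own alias
    SELECT p1.name, p2.name
    FROM person p1, person p2
    WHERE p1.spouse = p2.id;

    -- ambiguous per SQL, because "person" is masked by the aliases;
    -- PostgreSQL of this era instead silently adds a third instance
    -- of person (see the implicit-FROM discussion below)
    SELECT * FROM person p1, person p2 WHERE p1.spouse = person.id;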
{
"msg_contents": "Tom Lane wrote:\n>\n> > ERROR: FOR UPDATE: relation tbl2 not found in FROM clause\n> \n> I believe the error message is correct; you should have written\n> \n> select 1 from tbl2 t2, tbl1 t1 where t1.id1 = t2.id1 and \n> \tt2.id1 = 7 for update of t2;\n> \n> A lot of people do not realize that writing an alias for a table\n> in FROM means that as far as all the rest of that query is concerned,\n> that alias *is* the name of the table. \n>\n> [ additional comments and self-join example clipped ]\n\nOk, that sounds like a fine rule except for non-self-joins:\n\nmail=> select 1 from tbl2 t2, tbl1 t1 where tbl1.id1 = t2.id1 and t2.id1 = 7 ;\n ^^^^^^^ ^^^^\nDoes not give any error. I had expected that behavior to be consistent\nwhich is why I ran into the error. However, I have no problem with that\nexplanation.\n\n- K\n\nKristofer Munn * KMI * 973-509-9414 * AIM KrMunn * http://www.munn.com/\n\n",
"msg_date": "Sun, 16 Jan 2000 10:30:27 -0500 (EST)",
"msg_from": "Kristofer Munn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT...FOR UPDATE OF class_name "
},
{
"msg_contents": "Kristofer Munn <[email protected]> writes:\n> select 1 from tbl2 t2, tbl1 t1 where tbl1.id1 = t2.id1 and t2.id1 = 7 ;\n> ^^^^^^^ ^^^^\n> Does not give any error.\n\nWhat that's doing is giving you a *three way* join --- Postgres silently\nadds an implicit FROM clause for the unaliased tbl1, as if you'd written\n\tFROM tbl2 t2, tbl1 t1, tbl1\n\nThis behavior has confused a lot of people; moreover it's not SQL\nstandard (I think it's a leftover from Berkeley's old POSTQUEL\nlanguage). There's been a good deal of talk about removing it,\nor at least giving a NOTICE when an implicit FROM clause is added.\n\nFOR UPDATE seems to be set up to not allow implicit FROM clause\naddition, which is probably a good thing --- it wouldn't make much\nsense to say FOR UPDATE on a table not appearing anywhere else in\nthe query...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 Jan 2000 12:01:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT...FOR UPDATE OF class_name "
}
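In other words, the query from Kristofer's follow-up is silently treated as a three-way join; a sketch of the equivalence Tom describes, not output from a real session:

    -- what was written:
    SELECT 1 FROM tbl2 t2, tbl1 t1
    WHERE tbl1.id1 = t2.id1 AND t2.id1 = 7;

    -- what the backend effectively runs, after adding the implicit
    -- FROM entry for the unaliased tbl1:
    SELECT 1 FROM tbl2 t2, tbl1 t1, tbl1
    WHERE tbl1.id1 = t2.id1 AND t2.id1 = 7;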
] |
[
{
"msg_contents": "Tom, I went through all the places in pg_dump where a format string was used\nto add a string to the buffer (I believe it's only a problem when using\nsnprintf, which, I think, is only used if you pass a format string), and\neither removed the format string by passing in a single variable at a time,\nor making sure that only things like db object names (which have a size\nlimit significantly less than 1kB) were passed in using a format string. Of\ncourse, maybe I missed some places, but it shouldn't be a real problem.\nThat's why there are those particularly ugly pieces of code where the\nappendText (or whetever it is) function gets called repeatedly. Not pretty,\nbut it should always work.\n\nAm I wrong in assuming the the snprintf function only gets used when using a\nformat string, or is it always used?\n\nMikeA\n\n\n\n\n>> -----Original Message-----\n>> From: Tom Lane [mailto:[email protected]]\n>> Sent: Sunday, January 16, 2000 6:16 AM\n>> To: [email protected]\n>> Subject: [HACKERS] pg_dump not in very good shape\n>> \n>> \n>> I have repaired the most recently introduced coredump in pg_dump,\n>> but it still crashes on the regression test database.\n>> \n>> Issue 1:\n>> \n>> The \"most recently introduced coredump\" came from the change to\n>> oidvector/int2vector to suppress trailing zeroes in the output\n>> routine. pg_dump was assuming that it would see exactly the\n>> right number of zeroes, and wasn't bothering to initialize any\n>> leftover array locations --- but it would happily try to dereference\n>> those locations later on. Ugh.\n>> \n>> Although cleaning up pg_dump's code is clearly good practice, maybe\n>> this should raise a flag about whether suppressing the zeroes is\n>> a good idea. Are there any client applications that will break\n>> because of this change? I'm not sure...\n>> \n>> Issue 2:\n>> \n>> The reason it's still broken is that the pqexpbuffer.c code \n>> I added to\n>> libpq doesn't support adding more than 1K characters to an \n>> \"expansible\n>> string\" in any one appendPQExpBuffer() call. pg_dump tries \n>> to use that\n>> routine to format function definitions, which can easily be over 1K.\n>> (Very likely there are other places in pg_dump that have similar\n>> problems, but this is the one you hit first when trying to \n>> pg_dump the\n>> regression DB.) That 1K limitation was OK when the module \n>> was just used\n>> internally in libpq, but if we're going to allow pg_dump to \n>> use it, we\n>> probably ought to relax the limitation.\n>> \n>> The equivalent backend code already has solved this problem, but it\n>> solved it by using vsnprintf() which isn't available everywhere.\n>> We have a vsnprintf() emulation in backend/port, so in theory we\n>> could link that routine into libpq if we are on a platform that\n>> hasn't got vsnprintf.\n>> \n>> The thing that bothers me about that is that if libpq exports a\n>> vsnprintf routine that's different from the system version, we\n>> could find ourselves changing the behavior of applications that\n>> thought they were calling the local system's vsnprintf. (The\n>> backend/port module would get linked if either snprintf() or\n>> vsnprintf() is missing --- there are machines that have only one\n>> --- and we'd effectively replace the system definition of the\n>> one that the local system did have.) 
That's not good.\n>> \n>> However, the alternative of hacking pg_dump so it doesn't try to\n>> format more than 1K at a time is mighty unattractive as well.\n>> \n>> I am inclined to go ahead and insert vsnprintf into libpq.\n>> The risk of problems seems pretty small (and it's zero on any\n>> machine with a reasonably recent libc, since then vsnprintf\n>> will be in libc and we won't link our version). The risk of\n>> missing a buffer-overrun condition in pg_dump, and shipping\n>> a pg_dump that will fail on someone's database, seems worse.\n>> \n>> Comments? Better ideas?\n>> \n>> \t\t\tregards, tom lane\n>> \n>> ************\n>> \n",
"msg_date": "Sun, 16 Jan 2000 13:52:37 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] pg_dump not in very good shape"
},
{
"msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> Tom, I went through all the places in pg_dump where a format string was used\n> to add a string to the buffer (I believe it's only a problem when using\n> snprintf, which, I think, is only used if you pass a format string), and\n> either removed the format string by passing in a single variable at a time,\n> or making sure that only things like db object names (which have a size\n> limit significantly less than 1kB) were passed in using a format\n> string.\n\nYes, this is pretty much what the alternative is if we don't want to\nrely on vsnprintf(). However, if you submitted changes along that line,\nthey haven't been applied yet.\n\n> Of course, maybe I missed some places, but it shouldn't be a real\n> problem.\n\nWell, that's what the risk is. Not only might you have missed an\nobscure case or two (which Murphy's Law says we won't notice till\nafter release); but everyone who touches pg_dump from here on out\nrisks getting burnt by this non-obvious limit.\n\nAnd a pg_dump that cores trying to dump someone's database is *not*\na \"minor\" problem.\n\nSo I'm leaning towards leaving the pg_dump code as-is and fixing the\nlimitation in pqexpbuffer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 Jan 2000 11:44:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape "
},
{
"msg_contents": "I wrote:\n> So I'm leaning towards leaving the pg_dump code as-is and fixing the\n> limitation in pqexpbuffer.\n\nThis is done. pg_dump dumps the regression database again.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 Jan 2000 22:02:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape "
}
] |
[
{
"msg_contents": "I have some quistions! Who can tell me what can i do with a Credit-Card?\nWhat is savety to do?\n\nWhere can i get some Informations about writeing Cards with some Programms\nand where can i get the Programms?\n\[email protected]\n\n\n",
"msg_date": "Sun, 16 Jan 2000 16:36:52 +0100",
"msg_from": "\"Adrian Pach\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Credit Card ???????????"
}
] |
[
{
"msg_contents": ">> And a pg_dump that cores trying to dump someone's database is *not*\n>> a \"minor\" problem.\nNo, I didn't mean minor when it happens, I meant minor to fix. Sorry. Of\ncourse it's serious for the user.\n\n>> So I'm leaning towards leaving the pg_dump code as-is and fixing the\n>> limitation in pqexpbuffer.\nYes, this is the correct solution. What's the best way? To check the\nincoming string lengths for anything aproaching or greater than 1kB and\nslice it up from there?\n\nMikeA\n",
"msg_date": "Sun, 16 Jan 2000 22:07:37 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] pg_dump not in very good shape "
},
{
"msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n>> So I'm leaning towards leaving the pg_dump code as-is and fixing the\n>> limitation in pqexpbuffer.\n\n> Yes, this is the correct solution. What's the best way? To check the\n> incoming string lengths for anything aproaching or greater than 1kB and\n> slice it up from there?\n\nI don't think we can do that short of writing a complete snprintf\nemulation --- so we might as well just use snprintf.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 Jan 2000 15:16:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Ansley, Michael\" <[email protected]> writes:\n> >> So I'm leaning towards leaving the pg_dump code as-is and fixing the\n> >> limitation in pqexpbuffer.\n> \n> > Yes, this is the correct solution. What's the best way? To check the\n> > incoming string lengths for anything aproaching or greater than 1kB and\n> > slice it up from there?\n> \n> I don't think we can do that short of writing a complete snprintf\n> emulation --- so we might as well just use snprintf.\n> \n> regards, tom lane\n\nCan I go ahead and use today's snapshot to write up the diffs for\npg_dump for dumping COMMENT ON statements?\n\nMike Mascari\n",
"msg_date": "Sun, 16 Jan 2000 15:30:21 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape"
},
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n> Can I go ahead and use today's snapshot to write up the diffs for\n> pg_dump for dumping COMMENT ON statements?\n\nSure --- just don't test it with any long comments ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 Jan 2000 15:47:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump not in very good shape "
}
] |
[
{
"msg_contents": "\nTom, does your inherit pass constraints fix apply to anything on the\nTODO list, like:\n\n* Unique index on base column not honored on inserts from inherited table\n INSERT INTO inherit_table (unique_index_col) VALUES (dup) should fail\n [inherit]\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 Jan 2000 21:45:01 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inhterit fix"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, does your inherit pass constraints fix apply to anything on the\n> TODO list, like:\n\n> * Unique index on base column not honored on inserts from inherited table\n> INSERT INTO inherit_table (unique_index_col) VALUES (dup) should fail\n> [inherit]\n\nNo; that was a fix for something that did work in 6.5, but I had\nunknowingly broken it :-(.\n\nI guess what the above TODO item is complaining about is that indexes\non a parent table are not duplicated for the child table. That should\nhappen, I suppose, but it'll take an all-new chunk of code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 Jan 2000 22:23:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inhterit fix "
}
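The TODO item Bruce quotes can be reproduced with a sketch along these lines (table and index names invented; the second INSERT succeeding is the bug, since the parent's unique index is not duplicated for the child):

    CREATE TABLE base_t (id int4);
    CREATE UNIQUE INDEX base_t_id_key ON base_t (id);
    CREATE TABLE child_t () INHERITS (base_t);

    INSERT INTO base_t (id) VALUES (1);
    INSERT INTO child_t (id) VALUES (1);
    -- per the TODO this should fail as a duplicate, but it succeeds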
] |
[
{
"msg_contents": "Dear hackers,\n\nI have to port a lot of programs to PostgreSQL and don't want to wait\nfor completing them all before I put the first into production.\n\nSo after some thought I came to an enthrilling vision\n\nIf PostgreSQL had an interface for transparently accessing external\ndata, I could leave the needed tables on the old RDBMS and write a\nwrapper which would provide access to the data on the old RDBMS. Both\nold and new (pgsql) programs would see the same data. And I could start\nusing postgres' fine tools (e.g. psql) _now_ for all of my work. \n\nPutting these short sighted things aside it would provide an opportunity\nfor\n- distributed databases\n- load balancing via putting a table on multiple servers, creating a\nunioning-view\n- you could use SQL to access other data sources (e.g. LDAP)\n- one step further to world domination (no matter where your data\nresides, you could use postgres for your queries)\n- accessing multiple databases in _one_ SQL statement (e.g. join)\n\nSo I ask for your opinion on this strange idea. \n\nI suggest marking a table as external uses an interface which should\nprovide the following methods\n- query the structure of the table (\\d table-name)\n- sequentially scan the table (returning selected attributes of each\ntuple) [with some conditions]\n- update/delete either by cursor or by where-condition\n- query for statistics (see below)\n\nSince this functionality would require modifications all over the place\nin postgres I would like to start discussion about it. I might overlook\nsomething but the thought of having such a thing around opens up a lot\nof opportunities.\n\nPerhaps an ODBC wrapper could be the correct point to start (after\nimplementing and testing the [additional] virtual table access layer)\n\n Christof\n\nPS: I'd never dare to depend on this functionality, some kind of\nmirroring program might cover the problem for me as well, but it looked\nso cool.\nPPS: Of course I would start to investigate it further _after_ TOAST is\nfinished.\n\n------------------- example ------------------------------------\n\n[a is an external table, b an internal table]\n\nselect tuple_a1,tuple_b1 from a,b where a.tuple_a2=b.tuple_b2 and\ntuple_a3=42\n\nwould cause\n -> sequential scan (tuple_a1,tuple_a2) on a where tuple_a3=42\n -> for each entry index scan on b for a.tuple_a2=b.tuple_b2\n -> report result \nor\n -> sequential scan (tuple_b2, tuple_b1) on b\n -> index_scan (tuple_a1) by (tuple_a3=42,tuple_a2=b.tuple_b2)\n -> report result\n\nclearly we need to collect statistics on external tables as well\n\n\n",
"msg_date": "Mon, 17 Jan 2000 03:54:07 +0100",
"msg_from": "Christof Petig <[email protected]>",
"msg_from_op": true,
"msg_subject": "external data sources (e.g. a second RDBMS)"
}
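Christof's load-balancing bullet can be pictured as a unioning view over external tables; a purely hypothetical sketch, assuming the proposed external-table interface existed (t_node1 and t_node2 stand in for copies of the same table living on different servers):

    -- t_node1 and t_node2 would each be marked as external tables
    CREATE VIEW t_all AS
        SELECT * FROM t_node1
        UNION ALL
        SELECT * FROM t_node2;

    -- queries then go through the view, and the planner could use the
    -- proposed per-external-table statistics to pick an access path
    SELECT * FROM t_all WHERE some_col = 42;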
] |
[
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> Tom, any chance we can change the name of setheapoverried to something\n>>>> that makes sense?\n>> \n>> Actually, I thought the plan was to eliminate it entirely in favor of\n>> using CommandCounterIncrement when we need to make tuples visible.\n>> There was a thread about that back in September, but I guess no one's\n>> gotten around to actually doing it.\n\n> I remember in the old days being totally confused about its purpose. \n> That was my motivation to change it. Can I do something to help fix\n> this?\n\nActually, according to my notes I had put off doing anything with this\nbecause Hiroshi pointed out that CommandCounterIncrement had a shared-\ncache-invalidation problem (it sent SI messages for changes that we\ncouldn't yet be sure would get committed).\n\nHiroshi's message last Monday stated that he'd fixed that problem,\nso maybe now it's safe to start using CommandCounterIncrement more\nheavily. Hiroshi, what do you think --- do you trust\nCommandCounterIncrement now?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 Jan 2000 22:46:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Getting rid of setheapoverride (was Re: [COMMITTERS] heap.c) "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> Bruce Momjian <[email protected]> writes:\n> >>>> Tom, any chance we can change the name of setheapoverried to \n> something\n> >>>> that makes sense?\n> >> \n> >> Actually, I thought the plan was to eliminate it entirely in favor of\n> >> using CommandCounterIncrement when we need to make tuples visible.\n> >> There was a thread about that back in September, but I guess no one's\n> >> gotten around to actually doing it.\n> \n> > I remember in the old days being totally confused about its purpose. \n> > That was my motivation to change it. Can I do something to help fix\n> > this?\n> \n> Actually, according to my notes I had put off doing anything with this\n> because Hiroshi pointed out that CommandCounterIncrement had a shared-\n> cache-invalidation problem (it sent SI messages for changes that we\n> couldn't yet be sure would get committed).\n> \n> Hiroshi's message last Monday stated that he'd fixed that problem,\n> so maybe now it's safe to start using CommandCounterIncrement more\n> heavily. Hiroshi, what do you think --- do you trust\n> CommandCounterIncrement now?\n>\n\nOh,I was just looking at heapoverride stuff quite accidentally. \nYes, this call is ugly and should be replaced by CommandCounterIncrement().\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 17 Jan 2000 13:20:37 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Getting rid of setheapoverride (was Re: [COMMITTERS] heap.c) "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Oh,I was just looking at heapoverride stuff quite accidentally. \n> Yes, this call is ugly and should be replaced by CommandCounterIncrement().\n\nOK, I'm running a build now with setheapoverride calls removed.\nWill see what happens.\n\nAbout half of the setheapoverride calls surrounded heap_update()\n(formerly called heap_replace()) calls. AFAICS there is no need\nfor these calls unless heap_update itself needs them --- but there\nare many calls to heap_update that do not have setheapoverride.\nPerhaps heap_replace once needed setheapoverride but no longer does?\n\nI am going to try just removing these calls without adding a\nCommandCounterIncrement to replace them. If anyone knows that\nthis is a bad idea, let me know!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jan 2000 00:35:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] RE: Getting rid of setheapoverride (was Re:\n\t[COMMITTERS] heap.c)"
},
{
"msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Oh,I was just looking at heapoverride stuff quite accidentally. \n> > Yes, this call is ugly and should be replaced by CommandCounterIncrement().\n> \n> OK, I'm running a build now with setheapoverride calls removed.\n> Will see what happens.\n> \n> About half of the setheapoverride calls surrounded heap_update()\n> (formerly called heap_replace()) calls. AFAICS there is no need\n> for these calls unless heap_update itself needs them --- but there\n> are many calls to heap_update that do not have setheapoverride.\n> Perhaps heap_replace once needed setheapoverride but no longer does?\n> \n> I am going to try just removing these calls without adding a\n> CommandCounterIncrement to replace them. If anyone knows that\n> this is a bad idea, let me know!\n\nGo for it. The setheapoverride name was so confusing, people just\nprobably left it in, not knowing what it did.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jan 2000 01:13:39 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: Getting rid of setheapoverride (was Re:\n [COMMITTERS]\n\theap.c)"
},
{
"msg_contents": ">>>> Oh,I was just looking at heapoverride stuff quite accidentally. \n>>>> Yes, this call is ugly and should be replaced by CommandCounterIncrement()\n>> \n>> OK, I'm running a build now with setheapoverride calls removed.\n>> Will see what happens.\n\nWell, it seems to work, but...\n\n>> About half of the setheapoverride calls surrounded heap_update()\n>> (formerly called heap_replace()) calls. AFAICS there is no need\n>> for these calls unless heap_update itself needs them --- but there\n>> are many calls to heap_update that do not have setheapoverride.\n\nI figured out that the cases where setheapoverride (or, now,\nCommandCounterIncrement) were needed were the cases where the\nheap_update might be updating a tuple created earlier in the\nsame command. pg_operator.c has some cases like that, but many of\nthe other uses of setheapoverride seem to be unnecessary.\n\nHowever, I'm a little uncomfortable with committing this change,\nbecause my first try at it worked OK at creating things but fell\nright over on DROP TABLE. I had replaced the setheapoverride(true)\nin heap_drop_with_catalog() (backend/catalog/heap.c) with a\nCommandCounterIncrement, and it failed with this backtrace:\n\n(gdb) bt\n#0 elog (lev=-1, fmt=0x6b160 \"cannot find attribute %d of relation %s\")\n at elog.c:112\n#1 0x1767d8 in build_tupdesc_ind (buildinfo={infotype = 1, i = {\n info_id = 19040,\n info_name = 0x4a60}},\n relation=0x4006ef00, natts=1074198864) at relcache.c:527\n#2 0x176554 in RelationBuildTupleDesc (buildinfo={infotype = 1, i = {\n info_id = 19040,\n info_name = 0x4a60 }},\n relation=0x1, natts=1074198864) at relcache.c:437\n#3 0x177230 in RelationBuildDesc (buildinfo={infotype = 1, i = {\n info_id = 19040,\n info_name = 0x4a60 }},\n oldrelation=0x4006ef00) at relcache.c:808\n#4 0x177b28 in RelationClearRelation (relation=0x4006ef00, rebuildIt=0 '\\000')\n at relcache.c:1279\n#5 0x177bbc in RelationFlushRelation (relationPtr=0xffffffff,\n onlyFlushReferenceCountZero=96 '`') at relcache.c:1320\n#6 0x177e10 in RelationIdInvalidateRelationCacheByRelationId (\n relationId=19040) at relcache.c:1415\n#7 0x175968 in CacheIdInvalidate (cacheId=4294967295, hashIndex=438624,\n pointer=0x1) at inval.c:544\n#8 0x175ae8 in InvalidationMessageCacheInvalidate (message=0x4007cce4)\n at inval.c:657\n#9 0x175490 in LocalInvalidInvalidate (invalid=0x4007cce4 \"r\",\n function=0x4000c3ca <DINFINITY+9226>, freemember=1 '\\001') at inval.c:173\n#10 0x175ca4 in ImmediateLocalInvalidation (send=-1 '�') at inval.c:806\n#11 0x9d0b0 in AtCommit_LocalCache () at xact.c:687\n#12 0x9cf70 in CommandCounterIncrement () at xact.c:520\n#13 0xa7a08 in heap_drop_with_catalog (relname=0x4006ef00 \"����\")\n at heap.c:1528\n#14 0xb16ac in RemoveRelation (name=0x40083328 \"ff1\") at creatinh.c:217\n#15 0x13ce84 in ProcessUtility (parsetree=0x40083370, dest=Remote)\n at utility.c:201\n#16 0x13add0 in pg_exec_query_dest (query_string=0x40024270 \"drop table ff1\",\n dest=Remote, aclOverride=1 '\\001') at postgres.c:721\n\nApparently, if I call CommandCounterIncrement while partway through\ndropping a relation, it will try to rebuild the relcache entry for\nthe relation --- and of course fail. I'm too tired to figure out\nwhether this is just a small coding error in the new cache invalidation\ncode or whether it's a serious limitation in the whole design. 
Hiroshi,\nwhat do you think?\n\nI was able to get around this by simply removing CommandCounterIncrement\nfrom heap_drop_with_catalog entirely --- dropping tables seems to work\nfine that way ... but I don't trust it ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jan 2000 02:12:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] RE: Getting rid of setheapoverride (was Re:\n\t[COMMITTERS] heap.c)"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n>\n> >>>> Oh,I was just looking at heapoverride stuff quite accidentally.\n> >>>> Yes, this call is ugly and should be replaced by\n> CommandCounterIncrement()\n> >>\n> >> OK, I'm running a build now with setheapoverride calls removed.\n> >> Will see what happens.\n>\n> Well, it seems to work, but...\n>\n> >> About half of the setheapoverride calls surrounded heap_update()\n> >> (formerly called heap_replace()) calls. AFAICS there is no need\n> >> for these calls unless heap_update itself needs them --- but there\n> >> are many calls to heap_update that do not have setheapoverride.\n>\n> I figured out that the cases where setheapoverride (or, now,\n> CommandCounterIncrement) were needed were the cases where the\n> heap_update might be updating a tuple created earlier in the\n> same command. pg_operator.c has some cases like that, but many of\n> the other uses of setheapoverride seem to be unnecessary.\n>\n> However, I'm a little uncomfortable with committing this change,\n> because my first try at it worked OK at creating things but fell\n> right over on DROP TABLE. I had replaced the setheapoverride(true)\n> in heap_drop_with_catalog() (backend/catalog/heap.c) with a\n> CommandCounterIncrement, and it failed with this backtrace:\n>\n> (gdb) bt\n> #0 elog (lev=-1, fmt=0x6b160 \"cannot find attribute %d of relation %s\")\n> at elog.c:112\n> #1 0x1767d8 in build_tupdesc_ind (buildinfo={infotype = 1, i = {\n> info_id = 19040,\n> info_name = 0x4a60}},\n> relation=0x4006ef00, natts=1074198864) at relcache.c:527\n> #2 0x176554 in RelationBuildTupleDesc (buildinfo={infotype = 1, i = {\n> info_id = 19040,\n> info_name = 0x4a60 }},\n> relation=0x1, natts=1074198864) at relcache.c:437\n> #3 0x177230 in RelationBuildDesc (buildinfo={infotype = 1, i = {\n> info_id = 19040,\n> info_name = 0x4a60 }},\n> oldrelation=0x4006ef00) at relcache.c:808\n> #4 0x177b28 in RelationClearRelation (relation=0x4006ef00,\n> rebuildIt=0 '\\000')\n> at relcache.c:1279\n> #5 0x177bbc in RelationFlushRelation (relationPtr=0xffffffff,\n> onlyFlushReferenceCountZero=96 '`') at relcache.c:1320\n> #6 0x177e10 in RelationIdInvalidateRelationCacheByRelationId (\n> relationId=19040) at relcache.c:1415\n> #7 0x175968 in CacheIdInvalidate (cacheId=4294967295, hashIndex=438624,\n> pointer=0x1) at inval.c:544\n> #8 0x175ae8 in InvalidationMessageCacheInvalidate (message=0x4007cce4)\n> at inval.c:657\n> #9 0x175490 in LocalInvalidInvalidate (invalid=0x4007cce4 \"r\",\n> function=0x4000c3ca <DINFINITY+9226>, freemember=1 '\\001') at\n> inval.c:173\n> #10 0x175ca4 in ImmediateLocalInvalidation (send=-1 '���') at inval.c:806\n> #11 0x9d0b0 in AtCommit_LocalCache () at xact.c:687\n> #12 0x9cf70 in CommandCounterIncrement () at xact.c:520\n> #13 0xa7a08 in heap_drop_with_catalog (relname=0x4006ef00 \"������������\")\n> at heap.c:1528\n> #14 0xb16ac in RemoveRelation (name=0x40083328 \"ff1\") at creatinh.c:217\n> #15 0x13ce84 in ProcessUtility (parsetree=0x40083370, dest=Remote)\n> at utility.c:201\n> #16 0x13add0 in pg_exec_query_dest (query_string=0x40024270 \"drop\n> table ff1\",\n> dest=Remote, aclOverride=1 '\\001') at postgres.c:721\n>\n> Apparently, if I call CommandCounterIncrement while partway through\n> dropping a relation, it will try to rebuild the relcache entry for\n> the relation --- and of course fail. 
I'm too tired to figure out\n> whether this is just a small coding error in the new cache invalidation\n> code or whether it's a serious limitation in the whole design. Hiroshi,\n> what do you think?\n>\n\nHmmm,CommandCounterIncrement() was about to rebuild relation decriptor\nfor the half dropped table and failed. It seems impossible to call Command-\nCounterIncrement() in heap_drop_with_catalog.\n\nI don't understand why DeleteTypeTuple() require setheapoverride().\nDoes anyone know ?\nIf no one knows,we had better remove setheapoverride().\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Mon, 17 Jan 2000 18:25:06 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] RE: Getting rid of setheapoverride (was Re:\n\t[COMMITTERS] heap.c)"
},
{
"msg_contents": "> >> About half of the setheapoverride calls surrounded heap_update()\n> >> (formerly called heap_replace()) calls. AFAICS there is no need\n> >> for these calls unless heap_update itself needs them --- but there\n> >> are many calls to heap_update that do not have setheapoverride.\n> \n> I figured out that the cases where setheapoverride (or, now,\n> CommandCounterIncrement) were needed were the cases where the\n> heap_update might be updating a tuple created earlier in the\n> same command. pg_operator.c has some cases like that, but many of\n> the other uses of setheapoverride seem to be unnecessary.\n\nI thought about that this morning and suspected this may be the case,\nthough I thought tuples would be visible to the same transaction\nautomatically. Hard to imagine why we would not want such visibility in\nall cases.\n\n\n> \n> However, I'm a little uncomfortable with committing this change,\n> because my first try at it worked OK at creating things but fell\n> right over on DROP TABLE. I had replaced the setheapoverride(true)\n> in heap_drop_with_catalog() (backend/catalog/heap.c) with a\n> CommandCounterIncrement, and it failed with this backtrace:\n> \n> (gdb) bt\n> #0 elog (lev=-1, fmt=0x6b160 \"cannot find attribute %d of relation %s\")\n> at elog.c:112\n> #1 0x1767d8 in build_tupdesc_ind (buildinfo={infotype = 1, i = {\n> info_id = 19040,\n> info_name = 0x4a60}},\n> relation=0x4006ef00, natts=1074198864) at relcache.c:527\n> #2 0x176554 in RelationBuildTupleDesc (buildinfo={infotype = 1, i = {\n> info_id = 19040,\n> info_name = 0x4a60 }},\n> relation=0x1, natts=1074198864) at relcache.c:437\n> #3 0x177230 in RelationBuildDesc (buildinfo={infotype = 1, i = {\n> info_id = 19040,\n> info_name = 0x4a60 }},\n> oldrelation=0x4006ef00) at relcache.c:808\n> #4 0x177b28 in RelationClearRelation (relation=0x4006ef00, rebuildIt=0 '\\000')\n> at relcache.c:1279\n> #5 0x177bbc in RelationFlushRelation (relationPtr=0xffffffff,\n> onlyFlushReferenceCountZero=96 '`') at relcache.c:1320\n> #6 0x177e10 in RelationIdInvalidateRelationCacheByRelationId (\n> relationId=19040) at relcache.c:1415\n> #7 0x175968 in CacheIdInvalidate (cacheId=4294967295, hashIndex=438624,\n> pointer=0x1) at inval.c:544\n> #8 0x175ae8 in InvalidationMessageCacheInvalidate (message=0x4007cce4)\n> at inval.c:657\n> #9 0x175490 in LocalInvalidInvalidate (invalid=0x4007cce4 \"r\",\n> function=0x4000c3ca <DINFINITY+9226>, freemember=1 '\\001') at inval.c:173\n> #10 0x175ca4 in ImmediateLocalInvalidation (send=-1 '�') at inval.c:806\n> #11 0x9d0b0 in AtCommit_LocalCache () at xact.c:687\n> #12 0x9cf70 in CommandCounterIncrement () at xact.c:520\n> #13 0xa7a08 in heap_drop_with_catalog (relname=0x4006ef00 \"����\")\n> at heap.c:1528\n> #14 0xb16ac in RemoveRelation (name=0x40083328 \"ff1\") at creatinh.c:217\n> #15 0x13ce84 in ProcessUtility (parsetree=0x40083370, dest=Remote)\n> at utility.c:201\n> #16 0x13add0 in pg_exec_query_dest (query_string=0x40024270 \"drop table ff1\",\n> dest=Remote, aclOverride=1 '\\001') at postgres.c:721\n> \n> Apparently, if I call CommandCounterIncrement while partway through\n> dropping a relation, it will try to rebuild the relcache entry for\n> the relation --- and of course fail. I'm too tired to figure out\n> whether this is just a small coding error in the new cache invalidation\n> code or whether it's a serious limitation in the whole design. 
Hiroshi,\n> what do you think?\n> \n> I was able to get around this by simply removing CommandCounterIncrement\n> from heap_drop_with_catalog entirely --- dropping tables seems to work\n> fine that way ... but I don't trust it ...\n\nWith buggy code like that, it seems we could just try removing them all,\nand adding them back in as part of beta testing as people find problems.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jan 2000 11:15:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: Getting rid of setheapoverride (was Re:\n [COMMITTERS]\n\theap.c)"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> Apparently, if I call CommandCounterIncrement while partway through\n>> dropping a relation, it will try to rebuild the relcache entry for\n>> the relation --- and of course fail. I'm too tired to figure out\n>> whether this is just a small coding error in the new cache invalidation\n>> code or whether it's a serious limitation in the whole design. Hiroshi,\n>> what do you think?\n\n> Hmmm,CommandCounterIncrement() was about to rebuild relation decriptor\n> for the half dropped table and failed. It seems impossible to call Command-\n> CounterIncrement() in heap_drop_with_catalog.\n\n> I don't understand why DeleteTypeTuple() require setheapoverride().\n\nAs far as I can tell, it doesn't --- drop table seems to work just fine\nwithout it.\n\nI have been thinking some more about this, and have come to the\nconclusion that it is only safe to call CommandCounterIncrement\nat points where you have a self-consistent catalog tuple state.\nIn particular it must be possible to build valid relcache entries\nfrom whatever tuples you are making visible with the increment.\n\nFor example, it's OK for heap_create to call C.C.I. after creating the\nrelation's pg_class, pg_type, and pg_attribute entries; the relation\nis not finished, since it lacks default and constraint info, but\nrelcache.c won't have a problem with that. (Note that heap_create\nexplicitly forces a rebuild of the relcache entry after it's added\nthat extra stuff!)\n\nIt is *not* OK for heap_drop to call C.C.I. where it was doing it,\nbecause it had already deleted the pg_attribute tuples, but was still\nholding a refcount lock on the relcache entry for the target relation.\n(If the refcount were zero, then relcache.c would have just dropped\nthe entry instead of trying to rebuild it...)\n\nThe heap_drop code was risky even in its original form of\nsetheapoverride, since had a relcache rebuild been caused during\nDeleteTypeTuple(), it would have failed. (This could happen,\nin the current state of the code, if an SI Reset message arrives\nand gets processed while DeleteTypeTuple is trying to open pg_type.)\nSwitching to CommandCounterIncrement just exposed the latent bug\nby forcing the rebuild attempt to occur.\n\nIn short, I have convinced myself that this is all fine. I will\nfinish ripping out setheapoverride and commit the changes tonight.\nShould be able to simplify tqual.c a little bit now that we don't\nneed the override code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jan 2000 11:19:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] RE: Getting rid of setheapoverride (was Re:\n\t[COMMITTERS] heap.c)"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I figured out that the cases where setheapoverride (or, now,\n>> CommandCounterIncrement) were needed were the cases where the\n>> heap_update might be updating a tuple created earlier in the\n>> same command. pg_operator.c has some cases like that, but many of\n>> the other uses of setheapoverride seem to be unnecessary.\n\n> I thought about that this morning and suspected this may be the case,\n> though I thought tuples would be visible to the same transaction\n> automatically. Hard to imagine why we would not want such visibility in\n> all cases.\n\nNormally you *don't* want tuples created/updated in the current command\nto be visible. Consider an UPDATE proceeding by sequential scan. As it\nfinds tuples it needs to update, the updated versions of those tuples\nwill get added to the end of the relation. Eventually the UPDATE will\nreach those tuples and be scanning its own output! Thanks to the\nvisibility rule, it will ignore those new tuples as not-yet-visible.\nWithout that, something as simple as \"UPDATE t SET f = f + 1\" would be\nan infinite loop.\n\nCommandCounterIncrement() is like a statement boundary inside a\ntransaction: after you call it, you can see the effects of your\nprior operation (but no one else can; it's not a commit).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jan 2000 11:29:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] RE: Getting rid of setheapoverride (was Re:\n\t[COMMITTERS] heap.c)"
},
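A minimal, runnable SQL illustration of the visibility rule Tom describes above (the table and values are hypothetical):

    CREATE TABLE t (f int4);
    INSERT INTO t VALUES (1);
    INSERT INTO t VALUES (2);
    -- The UPDATE appends its output tuples to the end of the relation.
    -- Tuples created by the current command are not yet visible to that
    -- command's own scan, so the scan ignores them; without this rule
    -- the statement could rescan its own output and loop forever.
    UPDATE t SET f = f + 1;
    SELECT f FROM t;    -- each row is bumped exactly once: 2 and 3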
{
"msg_contents": "> As far as I can tell, it doesn't --- drop table seems to work just fine\n> without it.\n> \n> I have been thinking some more about this, and have come to the\n> conclusion that it is only safe to call CommandCounterIncrement\n> at points where you have a self-consistent catalog tuple state.\n> In particular it must be possible to build valid relcache entries\n> from whatever tuples you are making visible with the increment.\n\nThis is a good analysis.\n\n> \n> For example, it's OK for heap_create to call C.C.I. after creating the\n> relation's pg_class, pg_type, and pg_attribute entries; the relation\n> is not finished, since it lacks default and constraint info, but\n> relcache.c won't have a problem with that. (Note that heap_create\n> explicitly forces a rebuild of the relcache entry after it's added\n> that extra stuff!)\n> \n> It is *not* OK for heap_drop to call C.C.I. where it was doing it,\n> because it had already deleted the pg_attribute tuples, but was still\n> holding a refcount lock on the relcache entry for the target relation.\n> (If the refcount were zero, then relcache.c would have just dropped\n> the entry instead of trying to rebuild it...)\n> \n> The heap_drop code was risky even in its original form of\n> setheapoverride, since had a relcache rebuild been caused during\n> DeleteTypeTuple(), it would have failed. (This could happen,\n> in the current state of the code, if an SI Reset message arrives\n> and gets processed while DeleteTypeTuple is trying to open pg_type.)\n> Switching to CommandCounterIncrement just exposed the latent bug\n> by forcing the rebuild attempt to occur.\n\nThis is an excellent point. We know we have some instability in\ncreating/droping tables in separate sessions at the same time. This may\nnot fix that, but it is clearly an issue that an SI message could arrive\nat that time.\n\n> \n> In short, I have convinced myself that this is all fine. I will\n> finish ripping out setheapoverride and commit the changes tonight.\n> Should be able to simplify tqual.c a little bit now that we don't\n> need the override code.\n\nI know I am responsible for at least one of those function calls. I\nremember asking about it in the past. I have added my first two emails\nabout this below. I may have added it whenever I did heap_update\nbecause I never knew what it did, and the name was confusing to me.\n\n---------------------------------------------------------------------------\n\n\n\t\n\tFrom maillist Fri Aug 14 22:22:06 1998\n\tDate: Fri, 14 Aug 1998 22:22:06 -0400 (EDT)\n\tX-Mailer: ELM [version 2.4ME+ PL43 (25)]\n\tMIME-Version: 1.0\n\tContent-Type: text/plain; charset=US-ASCII\n\tContent-Transfer-Encoding: 7bit\n\tXFMstatus: 0000\n\tTo: (PostgreSQL-development) <[email protected]>\n\tSubject: setheapoverride\n\tContent-Length: 346\n\tStatus: RO\n\t\n\tCan someone tell me what setheapoverride() does? I see it around\n\theap_replace a lot.\n\n---------------------------------------------------------------------------\n\n\tDate: Sat, 18 Sep 1999 17:25:43 -0400 (EDT)\n\tX-Mailer: ELM [version 2.4ME+ PL56 (25)]\n\tMIME-Version: 1.0\n\tContent-Type: text/plain; charset=US-ASCII\n\tContent-Transfer-Encoding: 7bit\n\tXFMstatus: 0000\n\tTo: Tom Lane <[email protected]>\n\tSubject: Re: [HACKERS] setheapoverride() considered harmful\n\tCc: [email protected]\n\tContent-Length: 628\n\tStatus: RO\n\t\n\t> I think we need to get rid of setheapoverride().\n\t\n\tI have always wondered what it did. It is in my personal TODO with a\n\tquestionmark. 
Never figured out its purpose.\n\t\n\t> since this way the tuples still look valid if we look at them again\n\t> later in the same command.\n\t> \n\t> Comments? Anyone know a reason not to get rid of setheapoverride?\n\t\n\tYes, please remove it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jan 2000 11:33:07 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: Getting rid of setheapoverride (was Re:\n [COMMITTERS]\n\theap.c)"
}
]
[
{
"msg_contents": "Hi again...\n\nI've discovered (or perhaps re-discovered) what seems to be a memory leak\ninvolving temp tables. It's so bad that I make a script (which runs as a\ndaemon) close the backend connection and reconnect each time it runs the\noffending command sequences. Without the reset I've seen the backend grow\nto over 100 megs in size in a matter of a couple of minutes.\n\nI've created a sample case that reproduces the error that I will attach\nwith this message. Basically, I create a 50 column temp table (with no\nrows in it) and then run updates on each column in succession. The\nbackend gets large pretty quick - I'm seeing about 12Megs after running\nthe enclosed script which does an update on all 50 columns 3 times (150\nupdates).\n\nThanks...\n\n- K\n\nKristofer Munn * KMI * 973-509-9414 * AIM KrMunn * http://www.munn.com/",
"msg_date": "Sun, 16 Jan 2000 23:40:44 -0500 (EST)",
"msg_from": "Kristofer Munn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Temp Table Memory Leak"
},
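A sketch of the kind of script Kristofer describes (his actual attachment is not reproduced here); the original used 50 columns and ran the whole batch of updates three times, but a shortened version shows the shape:

    CREATE TEMP TABLE leaktest (c1 int4, c2 int4, c3 int4, c4 int4, c5 int4);
    -- No rows are ever inserted; under 6.5.* each UPDATE on the empty
    -- temp table still leaks backend memory, so cycling through all the
    -- columns repeatedly makes the backend process grow quickly.
    UPDATE leaktest SET c1 = 0;
    UPDATE leaktest SET c2 = 0;
    UPDATE leaktest SET c3 = 0;
    UPDATE leaktest SET c4 = 0;
    UPDATE leaktest SET c5 = 0;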
{
"msg_contents": "Kristofer Munn <[email protected]> writes:\n> I've created a sample case that reproduces the error that I will attach\n> with this message. Basically, I create a 50 column temp table (with no\n> rows in it) and then run updates on each column in succession. The\n> backend gets large pretty quick - I'm seeing about 12Megs after running\n> the enclosed script which does an update on all 50 columns 3 times (150\n> updates).\n\nI confirm the leak in 6.5.* --- but I see no leak in current sources.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jan 2000 00:01:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Temp Table Memory Leak "
},
{
"msg_contents": "> Hi again...\n> \n> I've discovered (or perhaps re-discovered) what seems to be a memory leak\n> involving temp tables. It's so bad that I make a script (which runs as a\n> daemon) close the backend connection and reconnect each time it runs the\n> offending command sequences. Without the reset I've seen the backend grow\n> to over 100 megs in size in a matter of a couple of minutes.\n> \n> I've created a sample case that reproduces the error that I will attach\n> with this message. Basically, I create a 50 column temp table (with no\n> rows in it) and then run updates on each column in succession. The\n> backend gets large pretty quick - I'm seeing about 12Megs after running\n> the enclosed script which does an update on all 50 columns 3 times (150\n> updates).\n\nIs this with 6.5.* or 7.0. If it is 6.5.*, can you try it with our\ncurrent tree for testing purposes. I think you will find it is fixed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jan 2000 00:04:39 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Temp Table Memory Leak"
},
{
"msg_contents": "On Mon, 17 Jan 2000, Tom Lane wrote:\n> \n> I confirm the leak in 6.5.* --- but I see no leak in current sources.\n\nGood news for 7.0 but... if anyone has a patch I could (safely) apply to\n6.5.3 for this problem, it would be much appreciated.\n\n- K\n\nKristofer Munn * KMI * 973-509-9414 * AIM KrMunn * http://www.munn.com/\n\n",
"msg_date": "Mon, 17 Jan 2000 00:06:42 -0500 (EST)",
"msg_from": "Kristofer Munn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Temp Table Memory Leak "
},
{
"msg_contents": "> Kristofer Munn <[email protected]> writes:\n> > I've created a sample case that reproduces the error that I will attach\n> > with this message. Basically, I create a 50 column temp table (with no\n> > rows in it) and then run updates on each column in succession. The\n> > backend gets large pretty quick - I'm seeing about 12Megs after running\n> > the enclosed script which does an update on all 50 columns 3 times (150\n> > updates).\n> \n> I confirm the leak in 6.5.* --- but I see no leak in current sources.\n\nGreat. Now the big question is should we backpatch, and if so do we\nwant a 6.5.4. I know you(Tom) have put a number of patches into the\n6.5.* branch, and we are at least 2 months away from our next release.\n\nComments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jan 2000 00:32:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Temp Table Memory Leak"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I confirm the leak in 6.5.* --- but I see no leak in current sources.\n\n> Great. Now the big question is should we backpatch, and if so do we\n> want a 6.5.4.\n\nDo you have a low-risk patch for this? I recall that we did some\nfairly extensive changes involving not only temp tables but the regular\nrelation cache. Extracting a patch that could be trusted seems like\nit might be tough.\n\n> I know you(Tom) have put a number of patches into the 6.5.* branch,\n> and we are at least 2 months away from our next release.\n\nI have been throwing low-risk/high-reward fixes into REL6_5 when I\ncould, with the thought that we might want to do another 6.5.* release.\nBut I'm undecided on whether we should or not. It seems like we are\nclose enough to 7.0 beta cycle that we should focus our effort there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jan 2000 00:44:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Temp Table Memory Leak "
},
{
"msg_contents": "On Mon, 17 Jan 2000, Bruce Momjian wrote:\n\n> > Kristofer Munn <[email protected]> writes:\n> > > I've created a sample case that reproduces the error that I will attach\n> > > with this message. Basically, I create a 50 column temp table (with no\n> > > rows in it) and then run updates on each column in succession. The\n> > > backend gets large pretty quick - I'm seeing about 12Megs after running\n> > > the enclosed script which does an update on all 50 columns 3 times (150\n> > > updates).\n> > \n> > I confirm the leak in 6.5.* --- but I see no leak in current sources.\n> \n> Great. Now the big question is should we backpatch, and if so do we\n> want a 6.5.4. I know you(Tom) have put a number of patches into the\n> 6.5.* branch, and we are at least 2 months away from our next release.\n> \n> Comments?\n\nI'm all for it...I think that snce ppl have been consciously making an\neffort to backpatch as appropriate (aren't CVS branches great? *grin*), we\nshould try and provide periodic releases, as appropriate ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 17 Jan 2000 02:07:40 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Temp Table Memory Leak"
},
{
"msg_contents": "On Mon, 17 Jan 2000, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> >> I confirm the leak in 6.5.* --- but I see no leak in current sources.\n> \n> > Great. Now the big question is should we backpatch, and if so do we\n> > want a 6.5.4.\n> \n> Do you have a low-risk patch for this? I recall that we did some\n> fairly extensive changes involving not only temp tables but the regular\n> relation cache. Extracting a patch that could be trusted seems like\n> it might be tough.\n> \n> > I know you(Tom) have put a number of patches into the 6.5.* branch,\n> > and we are at least 2 months away from our next release.\n> \n> I have been throwing low-risk/high-reward fixes into REL6_5 when I\n> could, with the thought that we might want to do another 6.5.* release.\n> But I'm undecided on whether we should or not. It seems like we are\n> close enough to 7.0 beta cycle that we should focus our effort there.\n\npast experience tends to be that even when we beta 7.0 on the 1st of Feb,\nwhich I haven't heard anyone suggest changing yet, we're still talking\nanother month, maybe two, before release...\n\nI think it would be nice to put out a 6.5.4 about the same time as we go\nbeta ... \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 17 Jan 2000 02:09:46 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Temp Table Memory Leak "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> I confirm the leak in 6.5.* --- but I see no leak in current sources.\n> \n> > Great. Now the big question is should we backpatch, and if so do we\n> > want a 6.5.4.\n> \n> Do you have a low-risk patch for this? I recall that we did some\n> fairly extensive changes involving not only temp tables but the regular\n> relation cache. Extracting a patch that could be trusted seems like\n> it might be tough.\n\nI remember now. That entire code is changed to do the replacement\nbefore getting to actual cache.\n\n> \n> > I know you(Tom) have put a number of patches into the 6.5.* branch,\n> > and we are at least 2 months away from our next release.\n> \n> I have been throwing low-risk/high-reward fixes into REL6_5 when I\n> could, with the thought that we might want to do another 6.5.* release.\n> But I'm undecided on whether we should or not. It seems like we are\n> close enough to 7.0 beta cycle that we should focus our effort there.\n> \n\nSeems we can not fix this in 6.5.* without the risk of more bugs. I\nagree on focusing on 7.0.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jan 2000 01:15:25 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Temp Table Memory Leak"
}
]
[
{
"msg_contents": "> Hmm, numeric array type was missing too. Added.\n> Of the standard types, only 'timestamp' seems not to have an array \n> type; should it be added, or are we going to remove that type for 7.0 \n> anyway?\n\nWill be removed/replaced.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 17 Jan 2000 08:26:28 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [COMMITTERS] pgsql/src/include/catalog (pg_type.h)"
},
{
"msg_contents": "> > Hmm, numeric array type was missing too. Added.\n> > Of the standard types, only 'timestamp' seems not to have an array \n> > type; should it be added, or are we going to remove that type for 7.0 \n> > anyway?\n> \n> Will be removed/replaced.\n\nWe are going to internally move everything to the more standard ANSI\nnames, right, or do we give preference to the older types?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jan 2000 11:17:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [COMMITTERS] pgsql/src/include/catalog (pg_type.h)"
},
{
"msg_contents": "> We are going to internally move everything to the more standard ANSI\n> names, right, or do we give preference to the older types?\n\nWell, that could be up for discussion. The \"internal\" abstime/reltime\ntypes are direct copies of Unix system time, which most systems\nsupport at a fundamental level. Moving to timestamp/interval will\ndouble the storage size of those fields, with no increase in\nfunctionality afaik.\n\nPeter brought up changing one field to timestamp; that would have the\nbenefit of being able to specify times past y2038.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 18 Jan 2000 02:29:54 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [COMMITTERS] pgsql/src/include/catalog (pg_type.h)"
},
{
"msg_contents": "On 2000-01-18, Thomas Lockhart mentioned:\n\n> > We are going to internally move everything to the more standard ANSI\n> > names, right, or do we give preference to the older types?\n> \n> Well, that could be up for discussion. The \"internal\" abstime/reltime\n\nWe might as well make that change now rather than dragging the old baggage\n(8 different types after all!) around for another major release. I don't\nmean dropping them but putting forth a clear preference.\n\nPreferred set: timestamp, interval, date, time\n\ntimespan: alias to interval, for compatibility\ndatetime: alias to timestamp, for compatibility\n\nabstime, reltime: deprecated, used only for internal catalogs\n\nI mean that would make sense to me as a user. I have long been confused\nabout that.\n\n> types are direct copies of Unix system time, which most systems\n> support at a fundamental level. Moving to timestamp/interval will\n\nThe problem also seems to be that on some systems they seem to be 8 byte\ntypes (see original TODO item). So either you move it to proper int32\ntypes, thus losing the exact correspondence, or you make them aliases to\ntimespan and interval as well and lose them sometime.\n\n> double the storage size of those fields, with no increase in\n> functionality afaik.\n\nIsn't storage size in multiples of 8192 anyway? So this probably makes\nzero difference in practice.\n\n> Peter brought up changing one field to timestamp; that would have the\n> benefit of being able to specify times past y2038.\n\nThe Y2038 problem is next. We could be the first ones to comply. :)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 19 Jan 2000 03:58:29 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Date/time types (Re: [HACKERS] Re: [COMMITTERS]\n\tpgsql/src/include/catalog (pg_type.h))"
},
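In schema terms, Peter's preferred set amounts to spelling new table definitions with the SQL92 names only (a hypothetical example):

    CREATE TABLE event_log (
        happened_at  timestamp,   -- rather than datetime
        duration     interval,    -- rather than timespan
        on_date      date,
        at_time      time
    );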
{
"msg_contents": "> We might as well make that change now rather than dragging the old baggage\n> (8 different types after all!) around for another major release. I don't\n> mean dropping them but putting forth a clear preference.\n> Preferred set: timestamp, interval, date, time\n> timespan: alias to interval, for compatibility\n> datetime: alias to timestamp, for compatibility\n> abstime, reltime: deprecated, used only for internal catalogs\n> I mean that would make sense to me as a user. I have long been confused\n> about that.\n\nHmm. I *think* I state a clear preference in the User's Guide. Is\nthere another place to mention this? Should we be more explicit?? If\nwe're going to fix it up, we need some suggestions ;)\n\n> The problem also seems to be that on some systems they seem to be 8 byte\n> types (see original TODO item). So either you move it to proper int32\n> types, thus losing the exact correspondence, or you make them aliases to\n> timespan and interval as well and lose them sometime.\n\nThat's a detail on 64 bit systems like Alpha/Unix, but afaik one can\nforce the field into 4 bytes and you get the Right Thing, at least\nuntil 2038. I'd prefer moving to an 8 byte integer, but we don't have\nthose on enough of our supported platforms, so the 8 byte float is the\nnext best thing to get past 2038.\n\n> > double the storage size of those fields, with no increase in\n> > functionality afaik.\n> Isn't storage size in multiples of 8192 anyway? So this probably makes\n> zero difference in practice.\n\nIt actually makes a big difference on the simplest tests, which have a\nsingle small column. Then, the tuple overhead is most obvious, and\n(I'm not sure of the actual numbers) going from 40 bytes to 60 bytes\nis significant.\n\n> > Peter brought up changing one field to timestamp; that would have the\n> > benefit of being able to specify times past y2038.\n> The Y2038 problem is next. We could be the first ones to comply. :)\n\nSince we are currently mapping to Unix system time, I'd rather go slow\nand wait for a good OS solution. Or we could go to 8 byte integers\nwith 100ns ticks a la Corba Time (hmm, maybe we can get an\nimplementation from somewhere which would work on all of our\nplatforms??). The double we currently have for user time isn't likely\nto be what OSes end up using, though with our license they could ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 19 Jan 2000 03:30:43 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Date/time types (Re: [HACKERS] Re: [COMMITTERS]\n\tpgsql/src/include/catalog(pg_type.h))"
},
{
"msg_contents": "On 2000-01-19, Thomas Lockhart mentioned:\n\n> Hmm. I *think* I state a clear preference in the User's Guide. Is\n\nYeah, they say datetime is the \"best general date and time\" type.\n\n> there another place to mention this? Should we be more explicit?? If\n> we're going to fix it up, we need some suggestions ;)\n\nThe users still look at 8 different types and which gets mapped to what in\n\"some future release\". I was thinking along the lines of\n\nDATE/TIME TYPES\n\nWe have these (SQL compat.) types: timestamp, date, time, interval\n[ these four types have clearly distinct functionality, so there is no\nneed for \"preferences\" ]\n\n<body of description here>\n\nAppendix/Note:\nTo ensure compatibility to earlier versions of PostgreSQL we also continue\nto provide datetime (equivalent to timestamp), timespan (equivalent to\ninterval). The types abstime and reltime are lower precision types which\nare used internally. You are discouraged from using any of these types in\nnew applications and move your old ones over where appropriate. Any or all\nof these type might disappear in a future release. [ 7.1 or 7.2 I guess ]\n\n\nIf you want me to help writing something like this up, tell me.\n\nI'd also envision a similar change to the documentation of the numerical\ntypes. The way it currently looks is \"Okay, this is what those standard\nguys say and this is what _we_ say. You can use the standard stuff but our\nstuff gets is implemented natively, so it's your pick.\"\n\nThis is by no means to bash the documentation writers, I just like the\nidea of supporting standard SQL over Postgres'isms where both are\nequivalent. See also CAST vs ::, etc.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Thu, 20 Jan 2000 18:55:18 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Date/time type"
},
{
"msg_contents": "> ... I was thinking along the lines of\n> If you want me to help writing something like this up, tell me.\n\nWell, looks like you just did. If you want to plop it into sgml and\ncommit it, that would be great. Otherwise, I'll steal it and do it\nsometime soon ;)\n\n> This is by no means to bash the documentation writers, I just like the\n> idea of supporting standard SQL over Postgres'isms where both are\n> equivalent. See also CAST vs ::, etc.\n\nRight, I'm happy going through the docs and emphasizing SQL92 vs older\n\"Pig-isms\" for equivalent features. For 7.0, I'd also like to go\nthrough and reorganize the User's Guide, but I'm not sure if I'll get\ntime...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 21 Jan 2000 03:11:52 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Date/time type"
},
{
"msg_contents": "On Thu, Jan 20, 2000 at 06:55:18PM +0100, Peter Eisentraut wrote:\n> \n> Appendix/Note:\n> To ensure compatibility to earlier versions of PostgreSQL we also continue\n> to provide datetime (equivalent to timestamp), timespan (equivalent to\n> interval).\n\nBTW, it seems Insight's PostgreSQL ODBC 6.40.0006 driver converts\n\nDate/Time -> datetime (rather than timestamp)\n\nCheers,\n\nPatrick\n",
"msg_date": "Fri, 21 Jan 2000 20:12:33 +0000",
"msg_from": "Patrick Welche <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Date/time type"
}
]
[
{
"msg_contents": "I cannot do this pair of table creations directly, because they are mutually\ndependent.\n\n\ncreate table purchased_job\n(\n\tsupplier\tchar(10)\tnot null\n\t\t\t\t references supplier (id) match full,\n\tspecification\ttext,\n\tdel_point\tchar(2)\t\tnot null\n\t\t\t\t references location (id) match full,\n\timport_licence\tbool\t\tdefault 'f',\n\timport_duty\tnumeric(12,2),\n\tterms\t\tchar(3),\n\tdeliv_clear\tnumeric(12,2),\n\n\tforeign key (product, supplier) references product_supplier (product, \nsupplier) match full\n)\n\tinherits (job)\n;\n\n\n\ncreate table product_supplier\n(\n\tproduct\t\tchar(10)\t\tnot null\n\t\t\t\t references purchased_job (product) match full,\n\tsupplier\tchar(10)\tnot null\n\t\t\t\t references supplier (id) match full,\n\n\tprimary key (product, supplier)\n)\n;\n\nso I omitted the foreign key specification from the creation of purchased_job\nand tried to add it afterwards, but (after fixing a bug in gram.y) I found\nthat ALTER TABLE ... ADD CONSTRAINT is not yet implemented. Is there, then, any\nway to create this mutual dependency?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"And, behold, I come quickly; and my reward is with me,\n to give every man according as his work shall be.\" \n Revelation 22:12 \n\n\n",
"msg_date": "Mon, 17 Jan 2000 12:43:44 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Foreign keys: unexpected result from ALTER TABLE... ADD CONSTRAINT..."
},
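Presumably the statement being attempted was something along these lines (a sketch; the constraint name is invented, and as noted ALTER TABLE ... ADD CONSTRAINT was not implemented at the time):

    ALTER TABLE purchased_job
        ADD CONSTRAINT purchased_job_ps_fkey
        FOREIGN KEY (product, supplier)
        REFERENCES product_supplier (product, supplier) MATCH FULL;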
{
"msg_contents": "On Mon, 17 Jan 2000, Oliver Elphick wrote:\n\n> I cannot do this pair of table creations directly, because they are mutually\n> dependent.\n> \n\nI don't think this will ever work. I can't really decode your intentions\nhere but I recall that translating proper relational schemas (you know,\nthe ones with the bubbles and lines) into tables never creates this sort\nof situation. Then again I could be wrong.\n\n> \n> create table purchased_job\n> (\n> \tsupplier\tchar(10)\tnot null\n> \t\t\t\t references supplier (id) match full,\n> \tspecification\ttext,\n> \tdel_point\tchar(2)\t\tnot null\n> \t\t\t\t references location (id) match full,\n> \timport_licence\tbool\t\tdefault 'f',\n> \timport_duty\tnumeric(12,2),\n> \tterms\t\tchar(3),\n> \tdeliv_clear\tnumeric(12,2),\n> \n> \tforeign key (product, supplier) references product_supplier (product, \n> supplier) match full\n> )\n> \tinherits (job)\n> ;\n> \n> \n> \n> create table product_supplier\n> (\n> \tproduct\t\tchar(10)\t\tnot null\n> \t\t\t\t references purchased_job (product) match full,\n> \tsupplier\tchar(10)\tnot null\n> \t\t\t\t references supplier (id) match full,\n> \n> \tprimary key (product, supplier)\n> )\n> ;\n> \n> so I omitted the foreign key specification from the creation of purchased_job\n> and tried to add it afterwards, but (after fixing a bug in gram.y) I found\n> that ALTER TABLE ... ADD CONSTRAINT is not yet implemented. Is there, then, any\n> way to create this mutual dependency?\n\nThanks for that fix, that was me changing the grammar for an ALTER TABLE /\nALTER COLUMN implementation, which now works btw.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 17 Jan 2000 14:05:51 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Foreign keys: unexpected result from ALTER TABLE...\n\tADD CONSTRAINT..."
},
{
"msg_contents": "Peter Eisentraut wrote:\n >On Mon, 17 Jan 2000, Oliver Elphick wrote:\n >\n >> I cannot do this pair of table creations directly, because they are mutual\n >ly\n >> dependent.\n >> \n >\n >I don't think this will ever work. I can't really decode your intentions\n >here but I recall that translating proper relational schemas (you know,\n >the ones with the bubbles and lines) into tables never creates this sort\n >of situation. Then again I could be wrong.\n \nThe idea is that suppliers of products can only supply products that are\npurchased, rather than manufactured; and purchased products must have\nsuppliers. However, it is possible for there to be more than one potential\nsupplier of a product; the one listed in purchased_jobs is the currently\nfavoured supplier.\n\nI guess I will have to remove the restriction that products listed in\nproduct_suppliers must be purchased; it may indeed become possible for the\nto change status from time to time, so that is not too unsatisfactory.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"And, behold, I come quickly; and my reward is with me,\n to give every man according as his work shall be.\" \n Revelation 22:12 \n\n\n",
"msg_date": "Mon, 17 Jan 2000 14:06:01 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Foreign keys: unexpected result from ALTER TABLE... ADD\n\tCONSTRAINT..."
},
{
"msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> I guess I will have to remove the restriction that products listed in\n> product_suppliers must be purchased; it may indeed become possible for the\n> to change status from time to time, so that is not too unsatisfactory.\n\nYou could possibly enforce dependencies like that by using a trigger\nfunction, instead of foreign-key stuff.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jan 2000 10:58:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Foreign keys: unexpected result from ALTER TABLE... ADD\n\tCONSTRAINT..."
},
{
"msg_contents": "Tom Lane wrote:\n\n> \"Oliver Elphick\" <[email protected]> writes:\n> > I guess I will have to remove the restriction that products listed in\n> > product_suppliers must be purchased; it may indeed become possible for the\n> > to change status from time to time, so that is not too unsatisfactory.\n>\n> You could possibly enforce dependencies like that by using a trigger\n> function, instead of foreign-key stuff.\n\n In fact, ALTER TABLE ADD CONSTRAINT should do it!\n\n It's absolutely legal and makes sense in some case. The constraints\n must be deferrable then, and you must INSERT and/or UPDATE both rows\n referring to each other in the same transaction while the constraints\n are in deferred state.\n\n A normal trigger is never deferrable, so it will be fired at the end\n of the statement, not at COMMIT. Thus, a regular trigger will never\n work for that!\n\n In the mean time, you can setup the same RI triggers by hand using\n CREATE CONSTRAINT TRIGGER with the appropriate builtin RI_FKey\n functions. These commands are exactly what ALTER TABLE has to issue.\n The functions are named RI_FKey_<action>_<event>, where <action> is\n one of \"check\", \"noaction\", \"restrict\", \"cascade\", \"setnull\" or\n \"setdefault\" and <event> is \"ins\", \"upd\" or \"del\". \"check\" has to be\n used on the referencing table at INSERT and UPDATE. The others are\n for the PK table to issue the requested action. Don't forget to add\n \"noaction\" for the cases, where you don't want an action, otherwise\n the deferred trigger queue manager will not notice if it has to raise\n the \"triggered data change violation\" exception.\n\n All RI_FKey functions take the following arguments:\n\n\n * The constraint name\n * The match type (FULL for now)\n * The primary key tables name\n * The referencing tables name\n * Followed by pairs of PK-attrib, FK-attrib names.\n\n With CREATE CONSTRAINT TRIGGER (which I added first so someone could\n already work on pg_dump - what noone does up to now :-( ), you can\n specify deferrability and initial deferred state for the trigger. And\n it correctly sets up the PK<->FK tables relationships in pg_trigger,\n so that DROPping one of them removes all the triggers using it from\n the other one. Needless to say that dropping and recreating a PK\n table looses all the references! But dropping and recreating the\n referencing tables therefore doesn't put the PK table into an\n unusable state.\n\n So Peter, if you're working on ALTER TABLE ADD CONSTRAINT, let it\n setup the appropriate RI triggers. Look at analyze.c how to do so.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n\n",
"msg_date": "Mon, 17 Jan 2000 19:27:18 +0100",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Foreign keys: unexpected result from ALTER TABLE... ADD\n\tCONSTRAINT..."
},
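A sketch of one such hand-made trigger for Oliver's tables, following Jan's description; the trigger name is invented, and the exact clauses and argument order should be verified against what analyze.c emits for a FOREIGN KEY clause:

    CREATE CONSTRAINT TRIGGER pj_ps_check
        AFTER INSERT ON purchased_job
        FROM product_supplier
        DEFERRABLE INITIALLY DEFERRED
        FOR EACH ROW
        EXECUTE PROCEDURE RI_FKey_check_ins(
            'pj_ps_check', 'FULL', 'product_supplier', 'purchased_job',
            'product', 'product', 'supplier', 'supplier');

A second trigger using RI_FKey_check_upd would cover UPDATEs on the referencing table. With the checks deferred, both halves of the circular reference can be inserted in a single transaction and are verified only at COMMIT.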
{
"msg_contents": "Jan Wieck wrote:\n> \n> With CREATE CONSTRAINT TRIGGER (which I added first so someone could\n> already work on pg_dump - what noone does up to now :-( ), you can\n> specify deferrability and initial deferred state for the trigger. And\n> it correctly sets up the PK<->FK tables relationships in pg_trigger,\n> so that DROPping one of them removes all the triggers using it from\n> the other one. Needless to say that dropping and recreating a PK\n> table looses all the references! But dropping and recreating the\n> referencing tables therefore doesn't put the PK table into an\n> unusable state.\n> \n\nOracle solves these kind of problems by having a CREATE OR REPLACE command, \nthat keeps as much of related objects as possible if there is already an \nobject by that name.\n\nDoes anyone know if it is ANSI SQL ?\n\n--------------------------\nHannu\n",
"msg_date": "Tue, 18 Jan 2000 02:19:17 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Foreign keys: unexpected result from ALTER TABLE... ADD\n\tCONSTRAINT..."
},
{
"msg_contents": "On 2000-01-17, Jan Wieck mentioned:\n\n> So Peter, if you're working on ALTER TABLE ADD CONSTRAINT, let it\n> setup the appropriate RI triggers. Look at analyze.c how to do so.\n\nMy priority is actually ALTER TABLE / DROP COLUMN, at least in a crude\n'use at your own risk, all your defaults and constraints are gone' way if\nI can't figure it out better by then. The ALTER TABLE / ALTER COLUMN /\nSET|DROP DEFAULT was just a by-product.\n\nI have been looking into all this ALTER TABLE code (or at least similar \ncode which it would have to peruse, since there is not a lot of ALTER\nTABLE code) and I think I have a good understanding of what would need to\nhappen, but I don't think I want to risk that now. (After all, I just\n_think_ I understand it.) We'll see.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 19 Jan 2000 00:28:27 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Foreign keys: unexpected result from ALTER TABLE...\n\tADD CONSTRAINT..."
},
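For reference, the by-product forms Peter mentions look like this (hypothetical table and default):

    ALTER TABLE product_supplier
        ALTER COLUMN supplier SET DEFAULT 'unknown';
    ALTER TABLE product_supplier
        ALTER COLUMN supplier DROP DEFAULT;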
{
"msg_contents": "Here is the list I have gotten of open 7.1 items:\n\t\n\tbit type\n\tinheritance\n\tdrop column\n\tvacuum index speed\n\tcached query plans\n\tmemory context cleanup\n\tTOAST\n\tWAL\n\tfmgr redesign\n\tencrypt pg_shadow passwords\n\tredesign pg_hba.conf password file option\n\tnew location for config files\n\nI have some of my own that are not on the list, as do others who are\nworking on their own items. Just thought a list of major items that\nneed work would be helpful.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Jun 2000 05:05:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Big 7.1 open items"
},
{
"msg_contents": "On Tue, 13 Jun 2000, Bruce Momjian wrote:\n\n> Here is the list I have gotten of open 7.1 items:\n> \t\n> \tbit type\n> \tinheritance\n> \tdrop column\n> \tvacuum index speed\n> \tcached query plans\n\t^^^^^^^^^^^^^^^^^\n\nI have already down it and I send patch for _testing_ next week (or\nlater), but I think that not will for 7.1, but 7.2.\n\n> \tmemory context cleanup\n> \tTOAST\n> \tWAL\n> \tfmgr redesign\n> \tencrypt pg_shadow passwords\n> \tredesign pg_hba.conf password file option\n> \tnew location for config files\n\n\t+ new ACL? (please :-)\n\n\nBTW. --- really cool list.\n\n\t\t\t\t\t\tKarel\n\n",
"msg_date": "Tue, 13 Jun 2000 11:28:37 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "On Tue, 13 Jun 2000, Bruce Momjian wrote:\n\n> Here is the list I have gotten of open 7.1 items:\n> \t\n> \tencrypt pg_shadow passwords\n\nThis will be for 7.1? For some reason I thought it was being pushed \noff to 7.2.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 13 Jun 2000 06:19:12 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> el d�a Tue, 13 Jun 2000 05:05:53 \n-0400 (EDT), escribi�:\n\n[...]\n>\tnew location for config files\n\ncan I suggest /etc/postgresql ?\n\n\nsergio\n\n",
"msg_date": "Tue, 13 Jun 2000 10:00:47 -0300",
"msg_from": "\"Sergio A. Kessler\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "On Tue, 13 Jun 2000, Bruce Momjian wrote:\n\n> Here is the list I have gotten of open 7.1 items:\n> \t\n> \tbit type\n> \tinheritance\n> \tdrop column\n> \tvacuum index speed\n> \tcached query plans\n> \tmemory context cleanup\n> \tTOAST\n> \tWAL\n> \tfmgr redesign\n> \tencrypt pg_shadow passwords\n> \tredesign pg_hba.conf password file option\n\nAny details?\n\n> \tnew location for config files\n\nAre you referring to pushing internal files to `$PGDATA/global'?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 13 Jun 2000 15:06:11 +0200 (MET DST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "On Tue, 13 Jun 2000, Sergio A. Kessler wrote:\n\n> Bruce Momjian <[email protected]> el d�a Tue, 13 Jun 2000 05:05:53 \n> -0400 (EDT), escribi�:\n> \n> [...]\n> >\tnew location for config files\n> \n> can I suggest /etc/postgresql ?\n\nyou can ... but everything related to postgresql has always been designed\nnot to require any special permissions to install, and /etc/postgresql\nwould definitely require root access to install :(\n\n\n",
"msg_date": "Tue, 13 Jun 2000 10:52:57 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "On Tue, 13 Jun 2000, The Hermit Hacker wrote:\n\n> On Tue, 13 Jun 2000, Sergio A. Kessler wrote:\n> \n> > Bruce Momjian <[email protected]> el d�a Tue, 13 Jun 2000 05:05:53 \n> > -0400 (EDT), escribi�:\n> > \n> > [...]\n> > >\tnew location for config files\n> > \n> > can I suggest /etc/postgresql ?\n> \n> you can ... but everything related to postgresql has always been designed\n> not to require any special permissions to install, and /etc/postgresql\n> would definitely require root access to install :(\n\n~postgres/etc ??\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 13 Jun 2000 09:59:42 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "On Tue, 13 Jun 2000, Vince Vielhaber wrote:\n\n> > > >\tnew location for config files\n> > > \n> > > can I suggest /etc/postgresql ?\n> > \n> > you can ... but everything related to postgresql has always been designed\n> > not to require any special permissions to install, and /etc/postgresql\n> > would definitely require root access to install :(\n> \n> ~postgres/etc ??\n\nYou need root access to create a postgres user. What's wrong with just\nkeeping it in $PGDATA and making symlinks whereever you would prefer it?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 13 Jun 2000 16:01:11 +0200 (MET DST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "\nthat one works ...\n\nOn Tue, 13 Jun 2000, Vince Vielhaber wrote:\n\n> On Tue, 13 Jun 2000, The Hermit Hacker wrote:\n> \n> > On Tue, 13 Jun 2000, Sergio A. Kessler wrote:\n> > \n> > > Bruce Momjian <[email protected]> el d�a Tue, 13 Jun 2000 05:05:53 \n> > > -0400 (EDT), escribi�:\n> > > \n> > > [...]\n> > > >\tnew location for config files\n> > > \n> > > can I suggest /etc/postgresql ?\n> > \n> > you can ... but everything related to postgresql has always been designed\n> > not to require any special permissions to install, and /etc/postgresql\n> > would definitely require root access to install :(\n> \n> ~postgres/etc ??\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n> 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 13 Jun 2000 11:31:33 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Vince Vielhaber <[email protected]> writes:\n>> encrypt pg_shadow passwords\n\n> This will be for 7.1? For some reason I thought it was being pushed \n> off to 7.2.\n\nI don't know of anything that would force delaying it --- it's not\ndependent on querytree redesign, for example. The real question is,\ndo we have anyone who's committed to do the work? I heard a lot of\ndiscussion but I didn't hear anyone taking responsibility for it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Jun 2000 10:50:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "On Tue, 13 Jun 2000, Tom Lane wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n> >> encrypt pg_shadow passwords\n> \n> > This will be for 7.1? For some reason I thought it was being pushed \n> > off to 7.2.\n> \n> I don't know of anything that would force delaying it --- it's not\n> dependent on querytree redesign, for example. The real question is,\n> do we have anyone who's committed to do the work? I heard a lot of\n> discussion but I didn't hear anyone taking responsibility for it...\n\nI offered to do the work and I have the md5 routine here and tested on \na number of platforms. But as I said, I thought someone wanted to delay\nit until 7.2, if that's not the case then I'll get to it. There was also\na lack of interest in testing it, but I think we have most platforms \ncovered.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 13 Jun 2000 10:54:27 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
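A concrete sketch of what encrypting pg_shadow entries could look like: store a salted hash rather than the cleartext password, hashing the password together with the user name so that two users with the same password get different entries. The md5() function and the 'md5' prefix convention below are assumptions for illustration only -- no SQL-level md5() existed at the time; the C routine Vince mentions would do the equivalent inside the backend at CREATE/ALTER USER time. The WHERE guard keeps a second run from double-hashing already-converted entries.

    -- hypothetical storage format: the assumed 'md5' prefix marks an
    -- entry as hashed, so remaining cleartext entries stay recognizable
    UPDATE pg_shadow
       SET passwd = 'md5' || md5(passwd || usename)
     WHERE passwd NOT LIKE 'md5%';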
{
"msg_contents": "On Tue, 13 Jun 2000, Ed Loehr wrote:\n\n> Vince Vielhaber wrote:\n> > \n> > > > [...]\n> > > > > new location for config files\n> > > >\n> > > > can I suggest /etc/postgresql ?\n> > >\n> > > you can ... but everything related to postgresql has always been designed\n> > > not to require any special permissions to install, and /etc/postgresql\n> > > would definitely require root access to install :(\n> > \n> > ~postgres/etc ??\n> \n> I would suggest you don't *require* or assume the creation of a postgres\n> user, except as an overridable default.\n\nI *knew* somebody would bring this up. Before I sent that I tried to \ndescribe the intent a few ways and just opted for simple. PostgreSQL\nhas to run as SOMEONE. Substitute that SOMEONE for ~postgres above.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 13 Jun 2000 10:59:02 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> \n> > > [...]\n> > > > new location for config files\n> > >\n> > > can I suggest /etc/postgresql ?\n> >\n> > you can ... but everything related to postgresql has always been designed\n> > not to require any special permissions to install, and /etc/postgresql\n> > would definitely require root access to install :(\n> \n> ~postgres/etc ??\n\nI would suggest you don't *require* or assume the creation of a postgres\nuser, except as an overridable default.\n\nRegards,\nEd Loehr\n",
"msg_date": "Tue, 13 Jun 2000 10:00:52 -0500",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>>>> new location for config files\n>> \n>> can I suggest /etc/postgresql ?\n\n> you can ... but everything related to postgresql has always been designed\n> not to require any special permissions to install, and /etc/postgresql\n> would definitely require root access to install :(\n\nEven more to the point, the config files are always kept in the data\ndirectory so that it's possible to run multiple installations on the\nsame machine. Keeping the config files under /etc (or any other fixed\nlocation) would destroy that capability.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Jun 2000 11:03:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "Vince Vielhaber <[email protected]> writes:\n>> do we have anyone who's committed to do the work? I heard a lot of\n>> discussion but I didn't hear anyone taking responsibility for it...\n\n> I offered to do the work and I have the md5 routine here and tested on \n> a number of platforms. But as I said, I thought someone wanted to delay\n> it until 7.2, if that's not the case then I'll get to it.\n\nFar as I can see, you should go for it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Jun 2000 11:16:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Here is the list I have gotten of open 7.1 items:\n\nThere were a whole bunch of issues about the type system --- automatic\ncoercion rules, default type selection for both numeric and string\nliterals, etc. Not sure how to describe this in five words or less...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Jun 2000 11:53:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> Here is the list I have gotten of open 7.1 items:\n\nI thought that someone was working on\nouter joins\nbetter views (or rewriting the rules system, not sure what the direction was)\nbetter SQL92 compliance\nalso, I think that at some time there was discussion about a better interface\nfor procedures, enabling them to work on several tuples. May be wrong though.\n\nBut if all, or just most, of the items on your list will be finished, it ought\nto be a 8.0 release :-)\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2582\nHowitzvej 75 �ben 14.00-18.00 Email: [email protected]\n2000 Frederiksberg L�rdag 11.00-17.00 Web: www.suse.dk\n",
"msg_date": "Wed, 14 Jun 2000 01:36:38 +0200",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Since there are several people interested in contributing, we should\nlist:\n\n Support multiple simultaneous character sets, per SQL92\n\n - Thomas\n",
"msg_date": "Wed, 14 Jun 2000 01:29:30 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> > \tmemory context cleanup\n> > \tTOAST\n> > \tWAL\n> > \tfmgr redesign\n> > \tencrypt pg_shadow passwords\n> > \tredesign pg_hba.conf password file option\n> > \tnew location for config files\n> \n> \t+ new ACL? (please :-)\n> \n> \n> BTW. --- really cool list.\n\nUpdated TODO.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Jun 2000 22:24:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> \t+ new ACL? (please :-)\n\nUpdated TODO.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Jun 2000 22:24:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> On Tue, 13 Jun 2000, Bruce Momjian wrote:\n> \n> > Here is the list I have gotten of open 7.1 items:\n> > \t\n> > \tbit type\n> > \tinheritance\n> > \tdrop column\n> > \tvacuum index speed\n> > \tcached query plans\n> > \tmemory context cleanup\n> > \tTOAST\n> > \tWAL\n> > \tfmgr redesign\n> > \tencrypt pg_shadow passwords\n> > \tredesign pg_hba.conf password file option\n> \n> Any details?\n\nI would like to remove our pg_passwd script that allows\nusername/passwords to be specified in a file, change that file to lists\nof users, or allow lists of users in pg_hba.conf.\n\n\n> \n> > \tnew location for config files\n> \n> Are you referring to pushing internal files to `$PGDATA/global'?\n\nYes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Jun 2000 22:35:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
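For illustration, the pg_hba.conf side of this proposal might read as below. The first line is the existing 'password' form, whose final argument names a pg_passwd-maintained username/password file; the second is hypothetical syntax for the proposed reading, where that argument instead lists which users may connect this way (no parser accepts it today):

    # existing: last field points at a separate password file
    host  mydb  192.168.0.0  255.255.255.0  password  passwords
    # proposed (hypothetical): last field is a list of allowed users
    host  mydb  192.168.0.0  255.255.255.0  password  alice,bob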
{
"msg_contents": "> that one works ...\n> \n> On Tue, 13 Jun 2000, Vince Vielhaber wrote:\n> \n> > On Tue, 13 Jun 2000, The Hermit Hacker wrote:\n> > \n> > > On Tue, 13 Jun 2000, Sergio A. Kessler wrote:\n> > > \n> > > > Bruce Momjian <[email protected]> el d�a Tue, 13 Jun 2000 05:05:53 \n> > > > -0400 (EDT), escribi�:\n> > > > \n> > > > [...]\n> > > > >\tnew location for config files\n> > > > \n> > > > can I suggest /etc/postgresql ?\n> > > \n> > > you can ... but everything related to postgresql has always been designed\n> > > not to require any special permissions to install, and /etc/postgresql\n> > > would definitely require root access to install :(\n> > \n> > ~postgres/etc ??\n\nRemember, that file has to be specific for each data tree, so it has to\nbe under /data.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Jun 2000 22:38:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> Vince Vielhaber <[email protected]> writes:\n> >> encrypt pg_shadow passwords\n> \n> > This will be for 7.1? For some reason I thought it was being pushed \n> > off to 7.2.\n> \n> I don't know of anything that would force delaying it --- it's not\n> dependent on querytree redesign, for example. The real question is,\n> do we have anyone who's committed to do the work? I heard a lot of\n> discussion but I didn't hear anyone taking responsibility for it...\n\nAgreed. No reason not to be in 7.1.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Jun 2000 22:40:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "I just kept your e-mails. I will make a TODO.detail mailbox with them.\n\n> Bruce Momjian <[email protected]> writes:\n> > Here is the list I have gotten of open 7.1 items:\n> \n> There were a whole bunch of issues about the type system --- automatic\n> coercion rules, default type selection for both numeric and string\n> literals, etc. Not sure how to describe this in five words or less...\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Jun 2000 22:44:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> > Here is the list I have gotten of open 7.1 items:\n> \n> I thought that someone was working on\n> outer joins\n> better views (or rewriting the rules system, not sure what the direction was)\n> better SQL92 compliance\n> also, I think that at some time there was discussion about a better interface\n> for procedures, enabling them to work on several tuples. May be wrong though.\n> \n> But if all, or just most, of the items on your list will be finished, it ought\n> to be a 8.0 release :-)\n> \n\nMost of these are planned for 7.2.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Jun 2000 22:53:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Added to TODO.\n\n> Since there are several people interested in contributing, we should\n> list:\n> \n> Support multiple simultaneous character sets, per SQL92\n> \n> - Thomas\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Jun 2000 22:56:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": " >On Tue, 13 Jun 2000, Bruce Momjian wrote:\n >\n >> Here is the list I have gotten of open 7.1 items:\n\nRolling back a transaction after dropping a table creates a corrupted\ndatabase. (Yes, I know it warns you not to do that, but users are\nfallible and sometimes just plain stupid.) Although the system catalog\nentries are rolled back, the file on disk is permanently destroyed.\n\nI suggest that DROP TABLE in a transaction should not be allowed.\n\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I beseech you therefore, brethren, by the mercies of \n God, that ye present your bodies a living sacrifice, \n holy, acceptable unto God, which is your reasonable \n service.\" Romans 12:1 \n\n\n",
"msg_date": "Wed, 14 Jun 2000 14:13:09 +0100",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Big 7.1 open items "
},
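For anyone who has not been bitten by it, a minimal sequence reproducing the corruption Oliver describes (the table name and the failing statement are arbitrary; any error before commit will do):

    BEGIN;
    DROP TABLE foo;    -- the file backing foo is unlinked right here
    SELECT 1/0;        -- any error aborts the transaction
    COMMIT;            -- ends the aborted transaction with a rollback:
                       -- pg_class lists foo again, but its file is gone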
{
"msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> I suggest that DROP TABLE in a transaction should not be allowed.\n\nI had actually made it do that for a short time early this year,\nand was shouted down. On reflection I have to agree; it's too useful\nto be able to do\n\n\tbegin;\n\tdrop table foo;\n\tcreate table foo(new schema);\n\t...\n\tend;\n\nYou do indeed lose big if you suffer an error partway through, but\nthe answer to that is to fix our file naming conventions so that we\ncan support rollback of drop table.\n\nAlso note the complaints we've been getting about CREATE USER not\nworking inside a transaction block. That is a case where someone\n(Peter IIRC) took the more hard-line approach of emitting an error\ninstead of a warning. I think it was not the right choice to make.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Jun 2000 11:36:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "Tom Lane writes:\n\n> Also note the complaints we've been getting about CREATE USER not\n> working inside a transaction block. That is a case where someone\n> (Peter IIRC) took the more hard-line approach of emitting an error\n> instead of a warning. I think it was not the right choice to make.\n\nProbably. Remember that you can claim your lunch any time. :)\n\nIn all truth, the problem is that the ODBC driver isn't very flexible\nabout putting BEGIN/END blocks around things. Perhaps that is also\nsomething to look at.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 14 Jun 2000 18:36:25 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Oliver Elphick\" <[email protected]> writes:\n> > I suggest that DROP TABLE in a transaction should not be allowed.\n>\n> I had actually made it do that for a short time early this year,\n> and was shouted down. On reflection I have to agree; it's too useful\n> to be able to do\n>\n> begin;\n> drop table foo;\n> create table foo(new schema);\n> ...\n> end;\n>\n> You do indeed lose big if you suffer an error partway through, but\n> the answer to that is to fix our file naming conventions so that we\n> can support rollback of drop table.\n\n Belongs IMHO to the discussion to keep separate what is\n separate (having indices/toast-relations/etc. in separate\n directories and whatnot).\n\n I've never been really happy with the file naming\n conventions. The need of a filesystem entry to have the same\n name of the DB object that is associated with it isn't right.\n I know, some people love to be able to easily identify the\n files with ls(1). OTOH what is that good for?\n\n Well, someone can easily see how big the disk footprint of\n his data is. Whow - what an info. Anything else?\n\n Why not changing the naming to be something like this:\n\n <dbroot>/catalog_tables/pg_...\n <dbroot>/catalog_index/pg_...\n <dbroot>/user_tables/oid_...\n <dbroot>/user_index/oid_...\n <dbroot>/temp_tables/oid_...\n <dbroot>/temp_index/oid_...\n <dbroot>/toast_tables/oid_...\n <dbroot>/toast_index/oid_...\n <dbroot>/whatnot_???/...\n\n This way, it would be much easier to separate all the\n different object types to different physical media. We would\n loose some transparency, but I've allways wondered what\n people USE that for (except for just wanna know). For\n convinience we could implement another little utility that\n tells the object size like\n\n DESCRIBE TABLE/VIEW/whatnot <object-name>\n\n that returns the physical location and storage details of the\n object. And psql could use it to print this info additional\n on the \\d commands. Would give unprivileged users access to\n this info, so be it, it's not a security issue IMHO.\n\n The subdirectory an object goes into has to be controlled by\n the relkind. So we need to tidy up that a little too. I think\n it's worth it.\n\n The objects storage location (the bare file) now would\n contain the OID. So we avoid naming conflicts for temp\n tables, naming conflicts during DROP/CREATE in a transaction\n and all the like.\n\n Comments?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Wed, 14 Jun 2000 22:43:39 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
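Most of the size part of the DESCRIBE Jan proposes can already be derived from the catalogs; a utility (or psql's \d) could presumably wrap something like the query below, the column set here being purely illustrative. What no query can report today is the physical location, which is exactly the piece a relkind-driven directory layout would add.

    -- rough "DESCRIBE": kind and size estimates for one object
    -- (relpages/reltuples are refreshed by VACUUM)
    SELECT relname, relkind, relpages, reltuples
      FROM pg_class
     WHERE relname = 'foo';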
{
"msg_contents": "> Tom Lane wrote:\n> > \"Oliver Elphick\" <[email protected]> writes:\n> > > I suggest that DROP TABLE in a transaction should not be allowed.\n> >\n> > I had actually made it do that for a short time early this year,\n> > and was shouted down. On reflection I have to agree; it's too useful\n> > to be able to do\n> >\n> > begin;\n> > drop table foo;\n> > create table foo(new schema);\n> > ...\n> > end;\n> >\n> > You do indeed lose big if you suffer an error partway through, but\n> > the answer to that is to fix our file naming conventions so that we\n> > can support rollback of drop table.\n> \n> Belongs IMHO to the discussion to keep separate what is\n> separate (having indices/toast-relations/etc. in separate\n> directories and whatnot).\n> \n> I've never been really happy with the file naming\n> conventions. The need of a filesystem entry to have the same\n> name of the DB object that is associated with it isn't right.\n> I know, some people love to be able to easily identify the\n> files with ls(1). OTOH what is that good for?\n\nWell, I have no problem just appending some serial number to the end of\nour existing names. That solves both purposes, no? Seems Vadim is\ngoing to have a new storage manager in 7.2 anyway.\n\nIf/when we lose file name/object mapping, we will have to write\ncommand-line utilities to report the mappings so people can do\nadministration properly. It certainly makes it hard for administrators.\n\n> \n> Well, someone can easily see how big the disk footprint of\n> his data is. Whow - what an info. Anything else?\n> \n> Why not changing the naming to be something like this:\n> \n> <dbroot>/catalog_tables/pg_...\n> <dbroot>/catalog_index/pg_...\n> <dbroot>/user_tables/oid_...\n> <dbroot>/user_index/oid_...\n> <dbroot>/temp_tables/oid_...\n> <dbroot>/temp_index/oid_...\n> <dbroot>/toast_tables/oid_...\n> <dbroot>/toast_index/oid_...\n> <dbroot>/whatnot_???/...\n> \n> This way, it would be much easier to separate all the\n> different object types to different physical media. We would\n> loose some transparency, but I've allways wondered what\n> people USE that for (except for just wanna know). For\n> convinience we could implement another little utility that\n> tells the object size like\n\nYes, we could do that.\n\n> \n> DESCRIBE TABLE/VIEW/whatnot <object-name>\n> \n> that returns the physical location and storage details of the\n> object. And psql could use it to print this info additional\n> on the \\d commands. Would give unprivileged users access to\n> this info, so be it, it's not a security issue IMHO.\n\nYou need something that works from the command line, and something that\nworks if PostgreSQL is not running. How would you restore one file from\na tape. I guess you could bring back the whole thing, then do the\nquery, and move the proper table file back in, but that is a pain.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Jun 2000 19:13:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "At 07:13 PM 6/14/00 -0400, Bruce Momjian wrote:\n\n>> \n>> This way, it would be much easier to separate all the\n>> different object types to different physical media. We would\n>> loose some transparency, but I've allways wondered what\n>> people USE that for (except for just wanna know). For\n>> convinience we could implement another little utility that\n>> tells the object size like\n>\n>Yes, we could do that.\n\nIt's a poor man's substitute for a proper create tablespace on\nstorage 'filesystem' - style dml statement, but it's a step in\nthe right direction.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 14 Jun 2000 16:51:51 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> I've never been really happy with the file naming\n> conventions. The need of a filesystem entry to have the same\n> name of the DB object that is associated with it isn't right.\n> I know, some people love to be able to easily identify the\n> files with ls(1). OTOH what is that good for?\n\nI agree with Jan on this: let's just change the file names over to\nbe OIDs. Then we can have rollbackable DROP and RENAME TABLE easily.\nNaming the files after the logical names of the tables is nice if it\ndoesn't cost anything, but it is *not* worth the trouble to preserve\na relationship between filename and tablename when it is costing us.\nAnd it's costing us big time. That single feature is hurting us on\nfunctionality, robustness, and portability, and for what benefit?\nNot nearly enough. It's time to just let go of it.\n\n> Why not changing the naming to be something like this:\n\n> <dbroot>/catalog_tables/pg_...\n> <dbroot>/catalog_index/pg_...\n> <dbroot>/user_tables/oid_...\n> <dbroot>/user_index/oid_...\n> <dbroot>/temp_tables/oid_...\n> <dbroot>/temp_index/oid_...\n> <dbroot>/toast_tables/oid_...\n> <dbroot>/toast_index/oid_...\n> <dbroot>/whatnot_???/...\n\nI don't see a lot of value in that. Better to do something like\ntablespaces:\n\n\t<dbroot>/<oidoftablespace>/<oidofobject>\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Jun 2000 22:07:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> You need something that works from the command line, and something that\n> works if PostgreSQL is not running. How would you restore one file from\n> a tape.\n\n\"Restore one file from a tape\"? How are you going to do that anyway?\nYou can't save and restore portions of a database like that, because\nof transaction commit status problems. To restore table X correctly,\nyou'd have to restore pg_log as well, and then your other tables are\nhosed --- unless you also restore all of them from the backup. Only\na complete database restore from tape would work, and for that you\ndon't need to tell which file is which. So the above argument is a\nred herring.\n\nI realize it's nice to be able to tell which table file is which by\neyeball, but the price we are paying for that small convenience is\njust too high. Give that up, and we can have rollbackable DROP and\nRENAME now (I'll personally commit to making it happen for 7.1).\nContinue to insist on it, and I don't think we'll *ever* have those\nfeatures in a really robust form. It's just not possible to do\nmultiple file renames atomically.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Jun 2000 22:21:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > You need something that works from the command line, and something that\n> > works if PostgreSQL is not running. How would you restore one file from\n> > a tape.\n> \n> \"Restore one file from a tape\"? How are you going to do that anyway?\n> You can't save and restore portions of a database like that, because\n> of transaction commit status problems. To restore table X correctly,\n> you'd have to restore pg_log as well, and then your other tables are\n> hosed --- unless you also restore all of them from the backup. Only\n> a complete database restore from tape would work, and for that you\n> don't need to tell which file is which. So the above argument is a\n> red herring.\n> \n> I realize it's nice to be able to tell which table file is which by\n> eyeball, but the price we are paying for that small convenience is\n> just too high. Give that up, and we can have rollbackable DROP and\n> RENAME now (I'll personally commit to making it happen for 7.1).\n> Continue to insist on it, and I don't think we'll *ever* have those\n> features in a really robust form. It's just not possible to do\n> multiple file renames atomically.\n> \n\nOK, I am flexible. (Yea, right.) :-)\n\nBut seriously, let me give some background. I used Ingres, that used\nthe VMS file system, but used strange sequential AAAF324 numbers for\ntables. When someone deleted a table, or we were looking at what tables\nwere using disk space, it was impossible to find the Ingres table names\nthat went with the file. There was a system table that showed it, but\nit was poorly documented, and if you deleted the table, there was no way\nto look on the tape to find out which file to restore.\n\nAs far as pg_log, you certainly would not expect to get any information\nback from the time of the backup table to current, so the current pg_log\nwould be just fine.\n\nBasically, I guess we have to do it, but we have to print the proper\nerror messages for cases in the backend we just print the file name. \nAlso, we have to now replace the 'ls -l' command with something that\nwill be meaningful.\n\nRight now, we use 'ps' with args to display backend information, and ls\n-l to show disk information. We are going to lose that here.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Jun 2000 22:28:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> But seriously, let me give some background. I used Ingres, that used\n> the VMS file system, but used strange sequential AAAF324 numbers for\n> tables. When someone deleted a table, or we were looking at what tables\n> were using disk space, it was impossible to find the Ingres table names\n> that went with the file. There was a system table that showed it, but\n> it was poorly documented, and if you deleted the table, there was no way\n> to look on the tape to find out which file to restore.\n\nFair enough, but it seems to me that the answer is to expend some effort\non system admin support tools. We could do a lot in that line with less\neffort than trying to make a fundamentally mismatched filesystem\nrepresentation do what we need.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Jun 2000 22:36:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
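The simplest such support tool is arguably just a saved mapping query; run before each backup and stored alongside it, it records which file on the tape was which table. This sketch assumes the OID-as-filename scheme under discussion:

    -- OID-to-name map for everything in the current database
    SELECT oid, relname, relkind
      FROM pg_class
     ORDER BY oid;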
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > But seriously, let me give some background. I used Ingres, that used\n> > the VMS file system, but used strange sequential AAAF324 numbers for\n> > tables. When someone deleted a table, or we were looking at what tables\n> > were using disk space, it was impossible to find the Ingres table names\n> > that went with the file. There was a system table that showed it, but\n> > it was poorly documented, and if you deleted the table, there was no way\n> > to look on the tape to find out which file to restore.\n> \n> Fair enough, but it seems to me that the answer is to expend some effort\n> on system admin support tools. We could do a lot in that line with less\n> effort than trying to make a fundamentally mismatched filesystem\n> representation do what we need.\n\nThat was my point --- that in doing this change, we are taking on more\nTODO items, that may detract from our main TODO items. I am also\nconcerned that the filename/tablename mapping is supported by so many\nUnix toolks like ls, lsof/fstat, and tar, that we could be in for\nneeding to support tons of utilities to enable administrators to do what\nthey can so easily do now.\n\nEven gdb shows us the filename/tablename in backtraces. We are never\ngoing to be able to reproduce that. I guess I didn't want to bit off\nthat much work until we had a _convincing_ need. I guess I don't\nconsider table schema commands inside transactions and such to be as big\nan items as the utility features we will need to build.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Jun 2000 22:44:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "At 10:28 PM 6/14/00 -0400, Bruce Momjian wrote:\n\n>As far as pg_log, you certainly would not expect to get any information\n>back from the time of the backup table to current, so the current pg_log\n>would be just fine.\n\nIn reality, very few people are going to be interested in restoring\na table in a way that breaks referential integrity and other \nnormal assumptions about what exists in the database. The reality\nis that most people are going to engage in a little time travel\nto a past, consistent backup rather than do as you suggest.\n\nThis is going to be more and more true as Postgres gains more and\nmore acceptance in (no offense intended) the real world.\n\n>Right now, we use 'ps' with args to display backend information, and ls\n>-l to show disk information. We are going to lose that here.\n\nDependence on \"ls -l\" is, IMO, a very weak argument.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 14 Jun 2000 19:46:39 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> That was my point --- that in doing this change, we are taking on more\n> TODO items, that may detract from our main TODO items.\n\nTrue, but they are also TODO items that could be handled by people other\nthan the inner circle of key developers. The actual rejiggering of\ntable-to-filename mapping is going to have to be done by one of the\nsmall number of people who are fully up to speed on backend internals.\nBut we've got a lot more folks who would be able (and, hopefully,\nwilling) to design and code whatever tools are needed to make the\ndbadmin's job easier in the face of the new filesystem layout. I'd\nrather not expend a lot of core time to avoid needing those tools,\nespecially when I feel the old approach is fatally flawed anyway.\n\n> Even gdb shows us the filename/tablename in backtraces. We are never\n> going to be able to reproduce that.\n\nBacktraces from *what*, exactly? 99% of the backend is still going\nto be dealing with the same data as ever. It might be that poking\naround in fd.c will be a little harder, but considering that fd.c\ndoesn't really know or care what the files it's manipulating are\nanyway, I'm not convinced that this is a real issue.\n\n> I guess I don't consider table schema commands inside transactions and\n> such to be as big an items as the utility features we will need to\n> build.\n\nYou've *got* to be kidding. We're constantly seeing complaints about\nthe fact that rolling back DROP or RENAME TABLE fails --- and worse,\nleaves the table in a corrupted/inconsistent state. As far as I can\ntell, that's one of the worst robustness problems we've got left to\nfix. This is a big deal IMHO, and I want it to be fixed and fixed\nright. I don't see how to fix it right if we try to keep physical\nfilenames tied to logical tablenames.\n\nMoreover, that restriction will continue to hurt us if we try to\npreserve it while implementing tablespaces, ANSI schemas, etc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Jun 2000 23:13:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > That was my point --- that in doing this change, we are taking on more\n> > TODO items, that may detract from our main TODO items.\n> \n> True, but they are also TODO items that could be handled by people other\n> than the inner circle of key developers. The actual rejiggering of\n> table-to-filename mapping is going to have to be done by one of the\n> small number of people who are fully up to speed on backend internals.\n> But we've got a lot more folks who would be able (and, hopefully,\n> willing) to design and code whatever tools are needed to make the\n> dbadmin's job easier in the face of the new filesystem layout. I'd\n> rather not expend a lot of core time to avoid needing those tools,\n> especially when I feel the old approach is fatally flawed anyway.\n\nYes, it is clearly fatally flawed. I agree.\n\n> > Even gdb shows us the filename/tablename in backtraces. We are never\n> > going to be able to reproduce that.\n> \n> Backtraces from *what*, exactly? 99% of the backend is still going\n> to be dealing with the same data as ever. It might be that poking\n> around in fd.c will be a little harder, but considering that fd.c\n> doesn't really know or care what the files it's manipulating are\n> anyway, I'm not convinced that this is a real issue.\n\nI was just throwing gdb out as an example. The bigger ones are ls,\nlsof/fstat, and tar.\n\n> > I guess I don't consider table schema commands inside transactions and\n> > such to be as big an items as the utility features we will need to\n> > build.\n> \n> You've *got* to be kidding. We're constantly seeing complaints about\n> the fact that rolling back DROP or RENAME TABLE fails --- and worse,\n> leaves the table in a corrupted/inconsistent state. As far as I can\n> tell, that's one of the worst robustness problems we've got left to\n> fix. This is a big deal IMHO, and I want it to be fixed and fixed\n> right. I don't see how to fix it right if we try to keep physical\n> filenames tied to logical tablenames.\n> \n> Moreover, that restriction will continue to hurt us if we try to\n> preserve it while implementing tablespaces, ANSI schemas, etc.\n> \n\nWell, we did have someone do a test implementation of oid file names,\nand their report was that is looked pretty ugly. However, if people are\nconvinced it has to be done, we can get started. I guess I was waiting\nfor Vadim's storage manager, where the whole idea of separate files is\ngoing to go away anyway, I suspect. We would then have to re-write all\nour admin tools for the new format.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Jun 2000 23:21:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian wrote:\n> >\n> > DESCRIBE TABLE/VIEW/whatnot <object-name>\n> >\n> > that returns the physical location and storage details of the\n> > object. And psql could use it to print this info additional\n> > on the \\d commands. Would give unprivileged users access to\n> > this info, so be it, it's not a security issue IMHO.\n>\n> You need something that works from the command line, and something that\n> works if PostgreSQL is not running. How would you restore one file from\n> a tape. I guess you could bring back the whole thing, then do the\n> query, and move the proper table file back in, but that is a pain.\n\n Think you messed up some basics of PG here.\n\n It's totally useless to restore single files of a PostgreSQL\n database. You could either put back anything below ./data, or\n nothing - the reason is pg_log.\n\n You don't need something that work's if PostgreSQL is not\n running. You cannot restore ONE file from a tape! You can\n restore a PostgreSQL instance (only a complete one - not a\n single DB, nor a single table or any other object). While\n your backup is writing to the tape, each number of backends\n could concurrently modify single blocks of the heap, it's\n indices and pg_log. So what does the tape contain the?\n\n I'd like to ask you, are you sure the backups you're making\n are worth the power consumption of the tape drive? You're\n talking about restoring a file - and sould be aware of the\n fact, that any file based backup would never be able to get\n consistent snapshot of the database, like pg_dump is able to.\n\n As long as you don't take the postmaster down during the\n entire saving of ./data, you aren't in a safe position. And\n the only safe RESTORE is to restore ./data completely or\n nothing. It's not even (easily) possible to initdb and\n restore a single DB from tape (it is, but requires some deep\n knowledge and more than just restoring some files from tape).\n\n YOU REALLY DON'T NEED ANY FILENAMES IN THERE!\n\n The more I think about it, the more I feel these file names,\n easily associatable with the objects they represent, are more\n dangerous than useful in practice. Maybe we should obfuscate\n the entire ./data like Oracle does with it's tablespace\n files. Just that our tablespaces will be directories,\n containing totally cryptic named files.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Thu, 15 Jun 2000 06:15:22 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Tom Lane wrote:\n> [email protected] (Jan Wieck) writes:\n> > Why not changing the naming to be something like this:\n> \n> > <dbroot>/catalog_tables/pg_...\n> > <dbroot>/catalog_index/pg_...\n> > <dbroot>/user_tables/oid_...\n> > <dbroot>/user_index/oid_...\n> > <dbroot>/temp_tables/oid_...\n> > <dbroot>/temp_index/oid_...\n> > <dbroot>/toast_tables/oid_...\n> > <dbroot>/toast_index/oid_...\n> > <dbroot>/whatnot_???/...\n> \n> I don't see a lot of value in that. Better to do something like\n> tablespaces:\n> \n> \t<dbroot>/<oidoftablespace>/<oidofobject>\n\n *Slap* - yes!\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n",
"msg_date": "Thu, 15 Jun 2000 06:20:21 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "On Wed, Jun 14, 2000 at 11:21:15PM -0400, Bruce Momjian wrote:\n> \n> Well, we did have someone do a test implementation of oid file names,\n> and their report was that is looked pretty ugly. \n\nThat someone would be me. Did my mails from this morning fall into a black\nhole? I've got a patch that does either oid filenames or relname_<oid>,\ntake your pick. It doesn't do tablespaces, just leaves the files where\nthey are. TO do relname_<oid>, I add a relphysname field to pg_class.\n\nI'll update it to current and throw it at the PATCHES list this weekend,\nunless someone more central wants to do tablespaces first. I tried\nout rollinging back ALTER TABLE RENAME. Works fine. Biggest problem\nwith it is that I played silly buggers with the relcache for no good\nreason. Hiroshi stripped that out and said it works fone, otherwise. I\nalso haven't touched DROP TABLE yet. The physical file be deleted at\ntransaction commit time, then? Hmm, we're the 'things to do at commit'\nqueue?\n\n> convinced it has to be done, we can get started. I guess I was waiting\n> for Vadim's storage manager, where the whole idea of separate files is\n> going to go away anyway, I suspect. We would then have to re-write all\n> our admin tools for the new format.\n\nAny strong objections to the mixed relname_oid solution? It gets us\neverything oids does, and still lets Bruce use 'ls -l' to find the big\ntables, putting off writing any admin tools that'll need to be rewritten,\nanyway.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Thu, 15 Jun 2000 01:03:12 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> But seriously, let me give some background. I used Ingres, that used\n> the VMS file system, but used strange sequential AAAF324 numbers for\n> tables. When someone deleted a table, or we were looking at what \n> tables were using disk space, it was impossible to find the Ingres \n> table names that went with the file. There was a system table that \n> showed it, but it was poorly documented, and if you deleted the table, \n> there was no way to look on the tape to find out which file to \n> restore.\n\nI had the same experience, but let's put the blame where it belongs: it\nwasn't the filename's fault, it was poor design and support from the\nIngres company.\n\n - Thomas\n",
"msg_date": "Thu, 15 Jun 2000 06:29:29 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n\n> Any strong objections to the mixed relname_oid solution? It gets us\n> everything oids does, and still lets Bruce use 'ls -l' to find the big\n> tables, putting off writing any admin tools that'll need to be rewritten,\n> anyway.\n\nDoesn't relname_oid defeat the purpose of oid file names, which is that\nthey don't change when the table is renamed? Wasn't it going to be oids\nwith a tool to create a symlink of relname -> oid ?\n",
"msg_date": "Thu, 15 Jun 2000 16:56:12 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> Any strong objections to the mixed relname_oid solution?\n\nYes!\n\nYou cannot make it work reliably unless the relname part is the original\nrelname and does not track ALTER TABLE RENAME. IMHO having an obsolete\nrelname in the filename is worse than not having the relname at all;\nit's a recipe for confusion, it means you still need admin tools to tell\nwhich end is really up, and what's worst is you might think you don't.\n\nFurthermore it requires an additional column in pg_class to keep track\nof the original relname, which is a waste of space and effort.\n\nIt also creates a portability risk, or at least fails to remove one,\nsince you are critically dependent on the assumption that the OS\nsupports long filenames --- on a filesystem that truncates names to less\nthan about 45 characters you're in very deep trouble. An OID-only\napproach still works on traditional 14-char-filename Unix filesystems\n(it'd mostly even work on DOS 8+3, though I doubt we care about that).\n\nFinally, one of the reasons I want to go to filenames based only on OID\nis that that'll make life easier for mdblindwrt. Original relname + OID\ndoesn't help, in fact it makes life harder (more shmem space needed to\nkeep track of the filename for each buffer).\n\nCan we *PLEASE JUST LET GO* of this bad idea? No relname in the\nfilename. Period.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Jun 2000 03:11:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Well, we did have someone do a test implementation of oid file names,\n> and their report was that is looked pretty ugly. However, if people are\n> convinced it has to be done, we can get started. I guess I was waiting\n> for Vadim's storage manager, where the whole idea of separate files is\n> going to go away anyway, I suspect. We would then have to re-write all\n> our admin tools for the new format.\n\nI seem to recall him saying that he wanted to go to filename == OID\njust like I'm suggesting. But I agree we probably ought to hold off\ndoing anything until he gets back from Russia and can let us know\nwhether that's still his plan. If he is planning one-huge-file or\nsomething like that, we might as well let these issues go unfixed\nfor one more release cycle.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Jun 2000 03:14:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "On Wed, 14 Jun 2000, Jan Wieck wrote:\n\n> Why not changing the naming to be something like this:\n> \n> <dbroot>/catalog_tables/pg_...\n> <dbroot>/catalog_index/pg_...\n> <dbroot>/user_tables/oid_...\n> <dbroot>/user_index/oid_...\n> <dbroot>/temp_tables/oid_...\n> <dbroot>/temp_index/oid_...\n> <dbroot>/toast_tables/oid_...\n> <dbroot>/toast_index/oid_...\n> <dbroot>/whatnot_???/...\n> \n> This way, it would be much easier to separate all the\n> different object types to different physical media. We would\n> loose some transparency, but I've allways wondered what\n> people USE that for (except for just wanna know). For\n> convinience we could implement another little utility that\n> tells the object size like\n\nWow, I've been advocating this one for how many months now? :) You won't\nget any arguments from me ... \n\n\n",
"msg_date": "Thu, 15 Jun 2000 09:03:31 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "On Wed, 14 Jun 2000, Bruce Momjian wrote:\n\n> > Backtraces from *what*, exactly? 99% of the backend is still going\n> > to be dealing with the same data as ever. It might be that poking\n> > around in fd.c will be a little harder, but considering that fd.c\n> > doesn't really know or care what the files it's manipulating are\n> > anyway, I'm not convinced that this is a real issue.\n> \n> I was just throwing gdb out as an example. The bigger ones are ls,\n> lsof/fstat, and tar.\n\nYou've lost me on this one ... if someone does an lsof of the process, it\nwill still provide them a list of open files ... are you complaining about\nthe extra step required to translate the file name to a \"valid table\"? \n\nOh, one point here ... this whole 'filenaming issue' ... as far as ls is\nconcerned, at least, only affects the superuser, since he's the only one\nthat can go 'ls'ng around i nthe directories ...\n\nAnd, ummm, how hard would it be to have \\d in psql display the \"physical\ntable name\" as part of its output?\n\nSlight tangent here:\n\nOne thing that I think would be great if we could add is some sort of:\n\nSELECT db_name, disk_space;\n\nquery wher a database owner, not the superuser, could see how much disk\nspace their tables are using up ... possible?\n\n",
"msg_date": "Thu, 15 Jun 2000 09:14:29 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
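A fair approximation of that is already available to any user from pg_class, which is per-database and so covers exactly the tables the database owner cares about. The caveats: relpages is only refreshed by VACUUM, and a page is BLCKSZ bytes (8K by default):

    -- biggest objects in the current database, in 8K pages
    SELECT relname, relpages
      FROM pg_class
     ORDER BY relpages DESC;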
{
"msg_contents": "Ross J. Reedstrom wrote:\n> \n> Any strong objections to the mixed relname_oid solution? It gets us\n> everything oids does, and still lets Bruce use 'ls -l' to find the big\n> tables, putting off writing any admin tools that'll need to be rewritten,\n> anyway.\n\nI would object to the mixed name.\n\nConsider:\n\nCREATE TABLE FOO ....\nALTER TABLE FOO RENAME FOO_OLD;\nCREATE TABLE FOO ....\n\nFor the same atomicity reason, rename can't change the\nname of the files. So, which foo_<oid> is the FOO_OLD\nand which is FOO?\n\nIn other words, in the presence of rename, putting\nrelname in the filename is misleading at best.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Thu, 15 Jun 2000 08:28:12 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> Precedence: bulk\n> \n> Bruce Momjian <[email protected]> writes:\n> > But seriously, let me give some background. I used Ingres, that used\n> > the VMS file system, but used strange sequential AAAF324 numbers for\n> > tables. When someone deleted a table, or we were looking at what tables\n> > were using disk space, it was impossible to find the Ingres table names\n> > that went with the file. There was a system table that showed it, but\n> > it was poorly documented, and if you deleted the table, there was no way\n> > to look on the tape to find out which file to restore.\n> \n> Fair enough, but it seems to me that the answer is to expend some effort\n> on system admin support tools. We could do a lot in that line with less\n> effort than trying to make a fundamentally mismatched filesystem\n> representation do what we need.\n\nWe've been an Ingres shop as long as there's been an Ingres. While\nwe've also had the problem Bruce noticed with table names, we've\n*also* used the trivial fix of running a (simple) Report Writer job\neach night, immediately before the backup, that lists all of the\ndatabase tables/indicies and the underlying files.\n\nTrue, if someone drops/recreates a table twice between backups we\ncan't find the intermediate file name, but since we also haven't\nbacked up that filename, this isn't an issue.\n\nAlso, the consistency issue is really not as important as you would\nthink. If you are restoring a table, you want the information in it,\nwhether or not it's consistent with anything else. I've done hundreds\nof table restores (can you say \"modify table to heap\"?) and never once\nhas inconsistency been an issue. Oh, yeah, and we don't shut the\ndatabase down for this, either. (That last isn't my choice, BTW.)\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "Thu, 15 Jun 2000 08:29:02 -0400 (EDT)",
"msg_from": "Brian E Gallew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> > But seriously, let me give some background. I used Ingres, that used\n> > the VMS file system, but used strange sequential AAAF324 numbers for\n> > tables. When someone deleted a table, or we were looking at what \n> > tables were using disk space, it was impossible to find the Ingres \n> > table names that went with the file. There was a system table that \n> > showed it, but it was poorly documented, and if you deleted the table, \n> > there was no way to look on the tape to find out which file to \n> > restore.\n> \n> I had the same experience, but let's put the blame where it belongs: it\n> wasn't the filename's fault, it was poor design and support from the\n> Ingres company.\n\nYes, that certainly was part of the cause. Also, if the PostgreSQL\nfiles are backed up using tar while no database activity is happening,\nthere is no reason the tar restore will not work. You just create a\ntable with the same schema, stop the postmaster, have the backup file\nreplace the newly created table file, and restart the postmaster.\n\nI can't tell you how many times I have said, \"Man, whoever did this\nIngres naming schema was an idiot. Do they know how many problems they\ncaused for us?\"\n\nAlso, Informix standard engine uses the tablename_oid setup for its\ntable names, and it works fine. It grabs the first 8 characters of the\ntable, and plops some unique number on the end of it. Works fine for\nadministrators.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Jun 2000 09:38:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "On Thu, Jun 15, 2000 at 03:11:52AM -0400, Tom Lane wrote:\n> \"Ross J. Reedstrom\" <[email protected]> writes:\n> > Any strong objections to the mixed relname_oid solution?\n> \n> Yes!\n> \n> You cannot make it work reliably unless the relname part is the original\n> relname and does not track ALTER TABLE RENAME. IMHO having an obsolete\n> relname in the filename is worse than not having the relname at all;\n> it's a recipe for confusion, it means you still need admin tools to tell\n> which end is really up, and what's worst is you might think you don't.\n\nThe plan here was to let VACUUM handle renaming the file, since it\nwill already have all the necessary locks. This shortens the window\nof confusion. ALTER TABLE RENAME doesn't happen that often, really - \nthe relname is there just for human consumption, then.\n\n> \n> Furthermore it requires an additional column in pg_class to keep track\n> of the original relname, which is a waste of space and effort.\n> \n\nI actually started down this path thinking about implementing SCHEMA,\nsince tables in the same DB but in different schema can have the same\nrelname, I figured I needed to change that. We'll need something in\npg_class to keep track of what schema a relation is in, instead.\n\n> It also creates a portability risk, or at least fails to remove one,\n> since you are critically dependent on the assumption that the OS\n> supports long filenames --- on a filesystem that truncates names to less\n> than about 45 characters you're in very deep trouble. An OID-only\n> approach still works on traditional 14-char-filename Unix filesystems\n> (it'd mostly even work on DOS 8+3, though I doubt we care about that).\n\nActually, no. Since I store the filename in a name attribute, I used this\nnifty function somebody wrote, makeObjectName, to trim the relname part,\nbut leave the oid. (Yes, I know it's yours ;-)\n\n> \n> Finally, one of the reasons I want to go to filenames based only on OID\n> is that that'll make life easier for mdblindwrt. Original relname + OID\n> doesn't help, in fact it makes life harder (more shmem space needed to\n> keep track of the filename for each buffer).\n\nCan you explain in more detail how this helps? Not by letting the bufmgr\nknow that oid == filename, I hope. We need to improving the abstraction\nof the smgr, not add another violation. Ah, sorry, mdblindwrt _is_\nin the smgr. \n\nHmm, grovelling through that code, I see how it could be simpler if reloid\n== filename. Heck, we even get to save shmem in the buffdesc.blind part,\nsince we only need the dbname in there, now.\n\nHmm, I see I missed the relpath_blind() in my patch - oops. (relpath()\nis always called with RelationGetPhysicalRelationName(), and that's\nwhere I was putting in the relphysname)\n\nHmm, what's all this with functions in catalog.c that are only called by\nsmgr/md.c? seems to me that anything having to do with physical storage\n(like the path!) belongs in the smgr abstraction.\n\n> \n> Can we *PLEASE JUST LET GO* of this bad idea? No relname in the\n> filename. Period.\n> \n\nGee, so dogmatic. No one besides Bruce and Hiroshi discussed this _at\nall_ when I first put up patches two month ago. O.K., I'll do the oids\nonly version (and fix up relpath_blind)\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Thu, 15 Jun 2000 11:45:19 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
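The relname-plus-OID trimming Ross describes is easy to picture with a small sketch in C. This is a simplified stand-in for the backend's makeObjectName(), assuming the old 32-byte NAMEDATALEN; the function and example names are illustrative, not code from the patch:

    /*
     * Simplified stand-in for makeObjectName(): trim the relname so that
     * "relname_oid" fits in NAMEDATALEN, always keeping the OID suffix.
     */
    #include <stdio.h>
    #include <string.h>

    #define NAMEDATALEN 32          /* backend compile-time constant */
    typedef unsigned int Oid;

    static void
    make_physical_relname(char *dst, const char *relname, Oid reloid)
    {
        char suffix[16];
        int  maxrel;

        snprintf(suffix, sizeof(suffix), "_%u", reloid);
        /* leave room for the suffix and the terminating NUL */
        maxrel = NAMEDATALEN - 1 - (int) strlen(suffix);
        snprintf(dst, NAMEDATALEN, "%.*s%s", maxrel, relname, suffix);
    }

    int
    main(void)
    {
        char buf[NAMEDATALEN];

        make_physical_relname(buf, "a_rather_long_user_table_name", 18734);
        printf("%s\n", buf);        /* relname trimmed, OID kept intact */
        return 0;
    }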
{
"msg_contents": "> > Can we *PLEASE JUST LET GO* of this bad idea? No relname in the\n> > filename. Period.\n> > \n> \n> Gee, so dogmatic. No one besides Bruce and Hiroshi discussed this _at\n> all_ when I first put up patches two month ago. O.K., I'll do the oids\n> only version (and fix up relpath_blind)\n\nHold on. I don't think we want that work done yet. Seems even Tom is\nthinking that if Vadim is going to re-do everything later anyway, we may\nbe better with a relname/oid solution that does require additional\nadministration apps.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Jun 2000 15:35:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n> \n> > > Can we *PLEASE JUST LET GO* of this bad idea? No relname in the\n> > > filename. Period.\n> > > \n> > \n> > Gee, so dogmatic. No one besides Bruce and Hiroshi discussed this _at\n> > all_ when I first put up patches two month ago. O.K., I'll do the oids\n> > only version (and fix up relpath_blind)\n> \n> Hold on. I don't think we want that work done yet. Seems even Tom is\n> thinking that if Vadim is going to re-do everything later anyway, we may\n> be better with a relname/oid solution that does require additional\n> administration apps.\n>\n\nHmm,why is naming rule first ?\n\nI've never enphasized naming rule except that it should be unique.\nIt has been my main point to reduce the necessity of naming rule\nas possible. IIRC,by keeping the stored place in pg_class,Ross's\ntrial patch remains only 2 places where naming rule is required. \nSo wouldn't we be free from naming rule(it would not be so difficult\nto change naming rule if the rule is found to be bad) ? \n\nI've also mentioned many times neither relname nor oid is sufficient\nfor the uniqueness. In addiiton neither relname nor oid would be\nnecessary for the uniqueness.\nIMHO,it's bad to rely on the item which is neither necessary nor\nsufficient.\nI proposed relname+unique_id naming once. The unique_id is\nindependent from oid. The relname is only for convinience for\nDBA and so we don't have to change it due to RENAME.\nDb's consistency is much more important than dba's satis-\nfaction.\n\nComments ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Fri, 16 Jun 2000 06:48:21 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items"
},
{
"msg_contents": "> I've also mentioned many times neither relname nor oid is sufficient\n> for the uniqueness. In addiiton neither relname nor oid would be\n> necessary for the uniqueness.\n> IMHO,it's bad to rely on the item which is neither necessary nor\n> sufficient.\n> I proposed relname+unique_id naming once. The unique_id is\n> independent from oid. The relname is only for convinience for\n> DBA and so we don't have to change it due to RENAME.\n> Db's consistency is much more important than dba's satis-\n> faction.\n> \n> Comments ?\n\nI am happy not to rename the file on 'RENAME', but seems no one likes\nthat.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Jun 2000 17:48:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "On Thu, Jun 15, 2000 at 05:48:59PM -0400, Bruce Momjian wrote:\n> > I've also mentioned many times neither relname nor oid is sufficient\n> > for the uniqueness. In addiiton neither relname nor oid would be\n> > necessary for the uniqueness.\n> > IMHO,it's bad to rely on the item which is neither necessary nor\n> > sufficient.\n> > I proposed relname+unique_id naming once. The unique_id is\n> > independent from oid. The relname is only for convinience for\n> > DBA and so we don't have to change it due to RENAME.\n> > Db's consistency is much more important than dba's satis-\n> > faction.\n> > \n> > Comments ?\n> \n> I am happy not to rename the file on 'RENAME', but seems no one likes\n> that.\n\nGood, 'cause that's how I've implemented it so far. Actually, all\nI've done is port my previous patch to current, with one little\nchange: I added a macro RelationGetRealRelationName which does what\nRelationGetPhysicalRelationName used to do: i.e. return the relname with\nno temptable funny business, and used that for the relcache macros. It\npasses all the serial regression tests: I haven't run the parallel tests\nyet. ALTER TABLE RENAME rollsback nicely. I'll need to learn some omre\nabout xacts to get DROP TABLE rolling back.\n\nI'll drop it on PATCHES right now, for comment.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Thu, 15 Jun 2000 17:53:59 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Here's the patch I promised on HACKERS. Comments anyone?\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005",
"msg_date": "Thu, 15 Jun 2000 17:57:38 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "filename patch (was Re: [HACKERS] Big 7.1 open items)"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> On Thu, Jun 15, 2000 at 03:11:52AM -0400, Tom Lane wrote:\n>> \"Ross J. Reedstrom\" <[email protected]> writes:\n>>>> Any strong objections to the mixed relname_oid solution?\n>> \n>> Yes!\n\n> The plan here was to let VACUUM handle renaming the file, since it\n> will already have all the necessary locks. This shortens the window\n> of confusion. ALTER TABLE RENAME doesn't happen that often, really - \n> the relname is there just for human consumption, then.\n\nYeah, I've seen tons of discussion of how if we do this, that, and\nthe other thing, and be prepared to fix up some other things in case\nof crash recovery, we can make it work with filename == relname + OID\n(where relname tracks logical name, at least at some remove).\n\nProbably. Assuming nobody forgets anything.\n\nI'm just trying to point out that that's a huge amount of pretty\ndelicate mechanism. The amount of work required to make it trustworthy\nlooks to me to dwarf the admin tools that Bruce is complaining about.\nAnd we only have a few people competent to do the work. (With all\ndue respect, Ross, if you weren't already aware of the implications\nfor mdblindwrt, I have to wonder what else you missed.)\n\nFilename == OID is so simple, reliable, and straightforward by\ncomparison that I think the decision is a no-brainer.\n\nIf we could afford to sink unlimited time into this one issue then\nit might make sense to do it the hard way, but we have enough\nimportant stuff on our TODO list to keep us all busy for years ---\nI cannot believe that it's an effective use of our time to do this.\n\n\n> Hmm, what's all this with functions in catalog.c that are only called by\n> smgr/md.c? seems to me that anything having to do with physical storage\n> (like the path!) belongs in the smgr abstraction.\n\nYeah, there's a bunch of stuff that should have been implemented by\nadding new smgr entry points, but wasn't. It should be pushed down.\n(I can't resist pointing out that one of those things is physical\nrelation rename, which will go away and not *need* to be pushed down\nif we do it the way I want.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Jun 2000 19:53:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
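To make Tom's shared-memory point concrete: under relname-based naming the per-buffer "blind" tag must carry two full names, while under OID naming the relation identifier is already a fixed-size integer. The structs below are an illustration only, not the actual buf_internals.h declarations:

    #define NAMEDATALEN 32
    typedef unsigned int Oid;

    /* relname-based naming: two full names per buffer in shared memory */
    typedef struct
    {
        char dbname[NAMEDATALEN];
        char relname[NAMEDATALEN];
    } BlindIdByName;                /* 64 bytes per buffer */

    /* OID-based naming: the relation part shrinks to one integer */
    typedef struct
    {
        char dbname[NAMEDATALEN];   /* could shrink to a database OID too */
        Oid  relid;
    } BlindIdByOid;                 /* 36 bytes per buffer */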
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Gee, so dogmatic. No one besides Bruce and Hiroshi discussed this _at\n>> all_ when I first put up patches two month ago. O.K., I'll do the oids\n>> only version (and fix up relpath_blind)\n\n> Hold on. I don't think we want that work done yet. Seems even Tom is\n> thinking that if Vadim is going to re-do everything later anyway, we may\n> be better with a relname/oid solution that does require additional\n> administration apps.\n\nDon't put words in my mouth, please. If we are going to throw the\nwork away later, it'd be foolish to do the much greater amount of\nwork needed to make filename=relname+OID fly than is needed for\nfilename=OID.\n\nHowever, I'm pretty sure I recall Vadim stating that he thought\nfilename=OID would be required for his smgr changes anyway...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Jun 2000 19:57:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n> \n> \"Ross J. Reedstrom\" <[email protected]> writes:\n> > On Thu, Jun 15, 2000 at 03:11:52AM -0400, Tom Lane wrote:\n> >> \"Ross J. Reedstrom\" <[email protected]> writes:\n> >>>> Any strong objections to the mixed relname_oid solution?\n> >> \n> >> Yes!\n> \n> > The plan here was to let VACUUM handle renaming the file, since it\n> > will already have all the necessary locks. This shortens the window\n> > of confusion. ALTER TABLE RENAME doesn't happen that often, really - \n> > the relname is there just for human consumption, then.\n> \n> Yeah, I've seen tons of discussion of how if we do this, that, and\n> the other thing, and be prepared to fix up some other things in case\n> of crash recovery, we can make it work with filename == relname + OID\n> (where relname tracks logical name, at least at some remove).\n>\n\nI've seen little discussion of how to avoid the use of naming rule.\nI've proposed many times that we should keep the information\nwhere the table is stored in our database itself. I've never seen\nclear objections to it. So I could understand my proposal is OK ? \nIsn't it much more important than naming rule ? Under the\nmechanism,we could easily replace bad naming rule.\nAnd I believe that Ross's work is mostly around the mechanism\nnot naming rule. \n\nNow I like neither relname nor oid because it's not sufficient \nfor my purpose.\n \nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 16 Jun 2000 09:28:14 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Now I like neither relname nor oid because it's not sufficient \n> for my purpose.\n\nWe should probably not do much of anything with this issue until\nwe have a clearer understanding of what we want to do about\ntablespaces and schemas.\n\nMy gut feeling is that we will end up with pathnames that look\nsomething like\n\n.../data/base/DBNAME/TABLESPACE/OIDOFRELATION\n\n(with .N attached if a segment of a large relation, of course).\n\nThe TABLESPACE \"name\" should likely be an OID itself, but it wouldn't\nhave to be if you are willing to say that tablespaces aren't renamable.\n(Come to think of it, does anyone care about being able to rename\ndatabases? ;-)) Note that the TABLESPACE will often be a symlink\nto storage on another drive, rather than a plain subdirectory of the\nDBNAME, but that shouldn't be an issue at this level of discussion.\n\nI think that schemas probably don't enter into this. We should instead\nrely on the uniqueness of OIDs to prevent filename collisions. However,\nOIDs aren't really unique: different databases in an installation will\nuse the same OIDs for their system tables. My feeling is that we can\nlive with a restriction like \"you can't store the system tables of\ndifferent databases in the same tablespace\". Alternatively we could\navoid that issue by inverting the pathname order:\n\n.../data/base/TABLESPACE/DBNAME/OIDOFRELATION\n\nNote that in any case, system tables will have to live in a\npredetermined tablespace, since you can't very well look in pg_class\nto find out which tablespace pg_class lives in. Perhaps we should\njust reserve a tablespace per database for system tables and forget\nthe whole issue. If we do that, there's not really any need for\nthe database in the path! Just\n\n.../data/base/TABLESPACE/OIDOFRELATION\n\nwould do fine and help reduce lookup overhead.\n\nBTW, schemas do make things interesting for the other camp:\nis it possible for the same table to be referenced by different\nnames in different schemas? If so, just how useful is it to pick\none of those names arbitrarily for the filename? This is an advanced\nversion of the main objection to using the original relname and not\nupdating it at RENAME TABLE --- sooner or later, the filenames are\ngoing to be more confusing than helpful.\n\nComments? Have I missed something important about schemas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Jun 2000 21:57:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
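A minimal sketch of the hardwired path rule Tom is proposing, assuming a DataDir string and the base/<tablespace OID>/<relation OID> layout; relpath_ts and its signature are hypothetical, not the backend's actual relpath():

    #include <stdio.h>

    typedef unsigned int Oid;

    /* DataDir/base/<tablespace OID>/<relation OID>[.N] */
    static void
    relpath_ts(char *dst, size_t dstlen, const char *datadir,
               Oid tsoid, Oid reloid, int segno)
    {
        if (segno == 0)
            snprintf(dst, dstlen, "%s/base/%u/%u", datadir, tsoid, reloid);
        else
            snprintf(dst, dstlen, "%s/base/%u/%u.%d",
                     datadir, tsoid, reloid, segno);
    }

The point of the hardwired rule is that nothing below the smgr needs a catalog lookup to form this string.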
{
"msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Now I like neither relname nor oid because it's not sufficient \n> > for my purpose.\n> \n> We should probably not do much of anything with this issue until\n> we have a clearer understanding of what we want to do about\n> tablespaces and schemas.\n\nHere is an analysis of our options:\n\n Work required Disadvantages\n----------------------------------------------------------------------------\n\nKeep current system no work rename/create no rollback\n\nrelname/oid but less work new pg_class column,\nno rename change filename not accurate on\n rename\n\nrelname/oid with more work complex code\nrename change during \nvacuum\n\noid filename less work, but confusing to admins\n need admin tools \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Jun 2000 22:24:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Sorry for my previous mail. It was posted by my mistake.\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Now I like neither relname nor oid because it's not sufficient \n> > for my purpose.\n> \n> We should probably not do much of anything with this issue until\n> we have a clearer understanding of what we want to do about\n> tablespaces and schemas.\n> \n> My gut feeling is that we will end up with pathnames that look\n> something like\n>\n> .../data/base/DBNAME/TABLESPACE/OIDOFRELATION\n>\n\nSchema is a logical concept and irrevant to physical location.\nI strongly object your suggestion unless above means *default*\nlocation.\nTablespace is an encapsulation of table allocation and the \nname should be irrevant to the location basically. So above\nseems very bad for me.\n\nAnyway I don't see any advantage in fixed mapping impleme\nntation. After renewal,we should at least have a possibility to\nallocate a specific table in arbitrary separate directory.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 16 Jun 2000 11:43:52 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> \n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > Now I like neither relname nor oid because it's not sufficient \n> > > for my purpose.\n> > \n> > We should probably not do much of anything with this issue until\n> > we have a clearer understanding of what we want to do about\n> > tablespaces and schemas.\n> \n> Here is an analysis of our options:\n> \n> Work required Disadvantages\n> ------------------------------------------------------------------\n> ----------\n> \n> Keep current system no work rename/create \n> no rollback\n> \n> relname/oid but less work new pg_class column,\n> no rename change filename not \n> accurate on\n> rename\n> \n> relname/oid with more work complex code\n> rename change during \n> vacuum\n> \n> oid filename less work, but confusing to admins\n> need admin tools \n>\n\nPlease add my opinion for naming rule.\n\nrelname/unique_id but\tneed some work\t\tnew pg_class column,\t\nno relname change.\tfor unique-id generation\tfilename not relname\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 16 Jun 2000 12:20:16 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Tablespace is an encapsulation of table allocation and the \n> name should be irrevant to the location basically. So above\n> seems very bad for me.\n> Anyway I don't see any advantage in fixed mapping impleme\n> ntation. After renewal,we should at least have a possibility to\n> allocate a specific table in arbitrary separate directory.\n\nCall a \"directory\" a \"tablespace\" and we're on the same page,\naren't we? Actually I'd envision some kind of admin command\n\"CREATE TABLESPACE foo AS /path/to/wherever\". That would make\nappropriate system catalog entries and also create a symlink\nfrom \".../data/base/foo\" (or some such place) to the target\ndirectory. Then when we make a table in that tablespace,\nit's in the right place. Problem solved, no?\n\nIt gets a little trickier if you want to be able to split\nmulti-gig tables across several tablespaces, though, since\nyou couldn't just append \".N\" to the base table path in that\nscenario.\n\nI'd be interested to know what sort of facilities Oracle\nprovides for managing huge tables...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Jun 2000 23:35:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
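At the filesystem level, the admin command Tom envisions might boil down to little more than the sketch below (the catalog side is omitted); the command and the helper function are from the discussion, not an implemented feature:

    #include <stdio.h>
    #include <unistd.h>

    /* "CREATE TABLESPACE foo AS /path/to/wherever", filesystem half */
    static int
    create_tablespace_link(const char *datadir, const char *tsname,
                           const char *target)
    {
        char linkpath[1024];

        snprintf(linkpath, sizeof(linkpath), "%s/base/%s", datadir, tsname);
        if (symlink(target, linkpath) < 0)
        {
            perror("symlink");
            return -1;
        }
        return 0;
    }

Tables made in the tablespace then land on the target volume with no change to the low-level path rule.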
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Please add my opinion for naming rule.\n\n> relname/unique_id but\tneed some work\t\tnew pg_class column,\t\n> no relname change.\tfor unique-id generation\tfilename not relname\n\nWhy is a unique ID better than --- or even different from ---\nusing the relation's OID? It seems pointless to me...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Jun 2000 23:43:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Please add my opinion for naming rule.\n> \n> > relname/unique_id but\tneed some work\t\tnew \n> pg_class column,\t\n> > no relname change.\tfor unique-id generation\tfilename not relname\n> \n> Why is a unique ID better than --- or even different from ---\n> using the relation's OID? It seems pointless to me...\n>\n\nFor example,in the implementation of CLUSTER command,\nwe would need another new file for the target relation in\norder to put sorted rows but don't we want to change the\nOID ? It would be needed for table re-construction generally.\nIf I remember correectly,you once proposed OID+version\nnaming for the cases.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 16 Jun 2000 12:57:44 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
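The OID+version naming Hiroshi recalls is simple to picture: CLUSTER writes the sorted rows into the next version's file, and committing the pg_class row that bumps the version is the atomic switch between files. A hypothetical name generator (the ".vN" convention is illustrative only):

    #include <stdio.h>

    typedef unsigned int Oid;

    /* file for version n of relation reloid, e.g. "16384.v2" */
    static void
    versioned_relname(char *dst, size_t dstlen, Oid reloid, unsigned version)
    {
        snprintf(dst, dstlen, "%u.v%u", reloid, version);
    }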
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Tablespace is an encapsulation of table allocation and the \n> > name should be irrevant to the location basically. So above\n> > seems very bad for me.\n> > Anyway I don't see any advantage in fixed mapping impleme\n> > ntation. After renewal,we should at least have a possibility to\n> > allocate a specific table in arbitrary separate directory.\n> \n> Call a \"directory\" a \"tablespace\" and we're on the same page,\n> aren't we? Actually I'd envision some kind of admin command\n> \"CREATE TABLESPACE foo AS /path/to/wherever\". \n\nYes,I think 'tablespace -> directory' is the most natural\nextension under current file_per_table storage manager.\nIf many_tables_in_a_file storage manager is introduced,we\nmay be able to change the definiiton of TABLESPACE\nto 'tablespace -> files' like Oracle.\n\n> That would make\n> appropriate system catalog entries and also create a symlink\n> from \".../data/base/foo\" (or some such place) to the target\n> directory.\n> Then when we make a table in that tablespace,\n> it's in the right place. Problem solved, no?\n> \n\nI don't like symlink for dbms data files. However it may\nbe OK,If symlink are limited to 'tablespace->directory'\ncorrspondence and all tablespaces(including default\netc) are symlink. It is simple and all debugging would\nbe processed under tablespace_is_symlink environment.\n\n> It gets a little trickier if you want to be able to split\n> multi-gig tables across several tablespaces, though, since\n> you couldn't just append \".N\" to the base table path in that\n> scenario.\n>\n\nThis seems to be not that easy to solve now.\nRoss doesn't change this naming rule for multi-gig\ntables either in his trial.\n \n> I'd be interested to know what sort of facilities Oracle\n> provides for managing huge tables...\n>\n\nIn my knowledge about old Oracle,one TABLESPACE\ncould have many DATAFILEs which could contain\nmany tables.\n \nRegards.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Fri, 16 Jun 2000 14:35:21 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
{
"msg_contents": "Tom Lane wrote:\n\n> > <dbroot>/catalog_tables/pg_...\n> > <dbroot>/catalog_index/pg_...\n> > <dbroot>/user_tables/oid_...\n> > <dbroot>/user_index/oid_...\n> > <dbroot>/temp_tables/oid_...\n> > <dbroot>/temp_index/oid_...\n> > <dbroot>/toast_tables/oid_...\n> > <dbroot>/toast_index/oid_...\n> > <dbroot>/whatnot_???/...\n> \n> I don't see a lot of value in that. Better to do something like\n> tablespaces:\n> I don't see a lot of value in that. Better to do something like\n> tablespaces:\n>\n> <dbroot>/<oidoftablespace>/<oidofobject>\n\nWhat is the benefit of having oidoftablespace in the directory path?\nIsn't tablespace an idea so you can store it somewhere completely\ndifferent?\nOr is there some symlink idea or something?\n",
"msg_date": "Fri, 16 Jun 2000 15:36:04 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> Why is a unique ID better than --- or even different from ---\n>> using the relation's OID? It seems pointless to me...\n\n> For example,in the implementation of CLUSTER command,\n> we would need another new file for the target relation in\n> order to put sorted rows but don't we want to change the\n> OID ? It would be needed for table re-construction generally.\n> If I remember correectly,you once proposed OID+version\n> naming for the cases.\n\nHmm, so you are thinking that the pg_class row for the table would\ninclude this uniqueID, and then committing the pg_class update would\nbe the atomic action that replaces the old table contents with the\nnew? It does have some attraction now that I think about it.\n\nBut there are other ways we could do the same thing. If we want to\nhave tablespaces, there will need to be a tablespace identifier in\neach pg_class row. So we could do CLUSTER in the same way as we'd\nmove a table from one tablespace to another: create the new files in\nthe new tablespace directory, and the commit of the new pg_class row\nwith the new tablespace value is the atomic action that makes the new\nfiles valid and the old files not.\n\nYou will probably say \"but I didn't want to move my table to a new\ntablespace just to cluster it!\" I think we could live with that,\nthough. A tablespace doesn't need to have any existence more concrete\nthan a subdirectory, in my vision of the way things would work. We \ncould do something like making two subdirectories of each place that\nthe dbadmin designates as a \"tablespace\", so that we make two logical\ntablespaces out of what the dbadmin thinks of as one. Then we can\nping-pong between those directories to do things like clustering \"in\nplace\".\n\nBasically I want to keep the bottom-level mechanisms as simple and\nreliable as we possibly can. The fewer concepts are known down at\nthe bottom, the better. If we can keep the pathname constituents\nto just \"tablespace\" and \"relation OID\" we'll be in great shape ---\nbut each additional concept that has to be known down there is\nanother potential problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jun 2000 01:54:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> Why is a unique ID better than --- or even different from ---\n> >> using the relation's OID? It seems pointless to me...\n> \n> > For example,in the implementation of CLUSTER command,\n> > we would need another new file for the target relation in\n> > order to put sorted rows but don't we want to change the\n> > OID ? It would be needed for table re-construction generally.\n> > If I remember correectly,you once proposed OID+version\n> > naming for the cases.\n> \n> Hmm, so you are thinking that the pg_class row for the table would\n> include this uniqueID, \n\nNo,I just include the place where the table is stored(pathname under\ncurrent file_per_table storage manager) in the pg_class row because\nI don't want to rely on table allocating rule(naming rule for current)\nto access existent relation files. This has always been my main point.\nMany_tables_in_a_file storage manager wouldn't be able to live without\nkeeping this kind of infomation.\nThis information(where it is stored) is diffrent from tablespace(where\nto store) information. There was an idea to keep the information into\nopaque entry in pg_class which only a specific storage manager\ncould handle. There was an idea to have a new system table which\nkeeps the information. and so on...\n\n> and then committing the pg_class update would\n> be the atomic action that replaces the old table contents with the\n> new? It does have some attraction now that I think about it.\n> \n> But there are other ways we could do the same thing. If we want to\n> have tablespaces, there will need to be a tablespace identifier in\n> each pg_class row. So we could do CLUSTER in the same way as we'd\n> move a table from one tablespace to another: create the new files in\n> the new tablespace directory, and the commit of the new pg_class row\n> with the new tablespace value is the atomic action that makes the new\n> files valid and the old files not.\n> \n> You will probably say \"but I didn't want to move my table to a new\n> tablespace just to cluster it!\" \n\nYes.\n\n> I think we could live with that,\n> though. A tablespace doesn't need to have any existence more concrete\n> than a subdirectory, in my vision of the way things would work. We \n> could do something like making two subdirectories of each place that\n> the dbadmin designates as a \"tablespace\", so that we make two logical\n> tablespaces out of what the dbadmin thinks of as one. \n\nCertainly we could design TABLESPACE(where to store) as above.\n\n> Then we can\n> ping-pong between those directories to do things like clustering \"in\n> place\".\n>\n\nBut maybe we must keep the directory information where the table was \n*ping-ponged* in (e.g.) pg_class. Is such an implementation cleaner or\nmore extensible than mine(keeping the stored place exactly) ? \n \nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 16 Jun 2000 16:03:06 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Tom Lane wrote:\n>> I don't see a lot of value in that. Better to do something like\n>> tablespaces:\n>> \n>> <dbroot>/<oidoftablespace>/<oidofobject>\n\n> What is the benefit of having oidoftablespace in the directory path?\n> Isn't tablespace an idea so you can store it somewhere completely\n> different?\n> Or is there some symlink idea or something?\n\nExactly --- I'm assuming that the tablespace \"directory\" is likely\nto be a symlink to some other mounted volume. The point here is\nto keep the low-level file access routines from having to know very\nmuch about tablespaces or file organization. In the above proposal,\nall they need to know is the relation's OID and the name (or OID)\nof the tablespace the relation's assigned to; then they can form\na valid path using a hardwired rule. There's still plenty of\nflexibility of organization, but it's not necessary to know that\nwhere the rubber meets the road (eg, when you're down inside mdblindwrt\ntrying to dump a dirty buffer to disk with no spare resources to find\nout anything about the relation the page belongs to...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jun 2000 03:34:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
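Here is roughly what a blind write reduces to under the OID scheme Tom describes: the buffer tag alone yields a pathname and a byte offset, with no catalog lookup. A sketch with error handling trimmed; the path layout follows the proposal above, and BLCKSZ matches the backend's page size:

    #include <sys/types.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define BLCKSZ 8192             /* backend page size */
    typedef unsigned int Oid;

    static int
    blind_write(const char *datadir, Oid tsoid, Oid reloid,
                unsigned blocknum, const char *page)
    {
        char path[1024];
        int  fd;

        snprintf(path, sizeof(path), "%s/base/%u/%u", datadir, tsoid, reloid);
        fd = open(path, O_WRONLY);
        if (fd < 0)
            return -1;
        if (lseek(fd, (off_t) blocknum * BLCKSZ, SEEK_SET) < 0 ||
            write(fd, page, BLCKSZ) != BLCKSZ)
        {
            close(fd);
            return -1;
        }
        return close(fd);
    }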
{
"msg_contents": "Tom Lane wrote:\n>\n> It gets a little trickier if you want to be able to split\n> multi-gig tables across several tablespaces, though, since\n> you couldn't just append \".N\" to the base table path in that\n> scenario.\n>\n> I'd be interested to know what sort of facilities Oracle\n> provides for managing huge tables...\n\n Oracle tablespaces are a collection of 1...n preallocated\n files. Each table then is bound to a tablespace and\n allocates extents (chunks) from those files.\n\n There are some per table attributes that control the extent\n sizes with default values coming from the tablespace. The\n initial extent size, the nextextent and the pctincrease.\n There is a hardcoded limit for the number of extents a table\n can have at all. In Oracle7 it was 512 (or somewhat below -\n don't recall correct). Maybe that's gone with Oracle8, don't\n know.\n\n This storage concept has IMHO a couple of advatages over\n ours.\n\n The tablespace files are preallocated, so there will\n never be a change in block allocation during runtime and\n that's the base for fdatasync() beeing sufficient at\n syncpoints. All what might be inaccurate after a crash is\n the last modified time in the inode, and that's totally\n irrelevant for Oracle. The fsck will never fail, and\n anything is up to Oracle's recovery.\n\n The number of total tablespace files is limited to a\n value that ensures, that the backends can keep them all\n open all the time. It's hard to exceed that limit. A\n typical SAP installation with more than 20,000\n tables/indices doesn't need more than 30 or 40 of them.\n\n It is perfectly prepared for raw devices, since a\n tablespace in a raw device installation is simply an area\n of blocks on a disk.\n\n There are also disadvantages.\n\n You can run out of space even if there are plenty GB's\n free on your disks. You have to create tablespaces\n explicitly.\n\n If you've choosen inadequate extent size parameters, you\n end up with high fragmented tables (slowing down) or get\n stuck with running against maxextents, where only a reorg\n (export/import) helps.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Fri, 16 Jun 2000 14:42:12 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
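Jan's extent parameters translate into simple geometric growth: the first extent is INITIAL, the second is NEXT, and each one after that grows by PCTINCREASE percent until MAXEXTENTS is hit. A toy calculation (the semantics are recalled from Oracle7, so treat them as approximate):

    #include <stdio.h>

    int
    main(void)
    {
        double  initial = 64 * 1024;    /* INITIAL: first extent, bytes */
        double  next = 64 * 1024;       /* NEXT: second extent, bytes */
        int     pctincrease = 50;       /* PCTINCREASE */
        int     maxextents = 10;        /* MAXEXTENTS (Oracle7: ~512 max) */
        double  size = initial;
        double  total = 0;
        int     n;

        for (n = 1; n <= maxextents; n++)
        {
            total += size;
            printf("extent %2d: %10.0f bytes (total %.0f)\n", n, size, total);
            size = (n == 1) ? next : size * (1.0 + pctincrease / 100.0);
        }
        return 0;
    }

The geometric growth is exactly why a badly chosen PCTINCREASE either fragments the table or slams into MAXEXTENTS.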
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> There are also disadvantages.\n\n> You can run out of space even if there are plenty GB's\n> free on your disks. You have to create tablespaces\n> explicitly.\n\nNot to mention the reverse: if I read this right, you have to suck\nup your GB's long in advance of actually needing them. That's OK\nfor a machine that's dedicated to Oracle ... not so OK for smaller\ninstallations, playpens, etc.\n\nI'm not convinced that there's anything fundamentally wrong with\ndoing storage allocation in Unix files the way we have been.\n\n(At least not when we're sitting atop a well-done filesystem,\nwhich may leave the Linux folk out in the cold ;-).)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jun 2000 11:00:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> (At least not when we're sitting atop a well-done filesystem,\n> which may leave the Linux folk out in the cold ;-).)\n\nThose who live in HP houses should not throw stones :))\n\n - Thomas\n",
"msg_date": "Fri, 16 Jun 2000 15:11:27 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> Tom Lane wrote:\n>> It gets a little trickier if you want to be able to split\n>> multi-gig tables across several tablespaces, though, since\n>> you couldn't just append \".N\" to the base table path in that\n>> scenario.\n>> \n>> I'd be interested to know what sort of facilities Oracle\n>> provides for managing huge tables...\n\n> Oracle tablespaces are a collection of 1...n preallocated\n> files. Each table then is bound to a tablespace and\n> allocates extents (chunks) from those files.\n\nOK, to get back to the point here: so in Oracle, tables can't cross\ntablespace boundaries, but a tablespace itself could span multiple\ndisks?\n\nNot sure if I like that better or worse than equating a tablespace\nwith a directory (so, presumably, all the files within it live on\none filesystem) and then trying to make tables able to span\ntablespaces. We will need to do one or the other though, if we want\nto have any significant improvement over the current state of affairs\nfor large tables.\n\nOne way is to play the flip-the-path-ordering game some more,\nand access multiple-segment tables with pathnames like this:\n\n\t.../TABLESPACE/RELATION\t\t-- first or only segment\n\t.../TABLESPACE/N/RELATION\t-- N'th extension segment\n\nThis isn't any harder for md.c to deal with than what we do now,\nbut by making the /N subdirectories be symlinks, the dbadmin could\neasily arrange for extension segments to go on different filesystems.\nAlso, since /N subdirectory symlinks can be added as needed,\nexpanding available space by attaching more disks isn't hard.\n(If the admin hasn't pre-made a /N symlink when it's needed,\nI'd envision the backend just automatically creating a plain\nsubdirectory so that it can extend the table.)\n\nA limitation is that the N'th extension segments of all the relations\nin a given tablespace have to be in the same place, but I don't see\nthat as a major objection. Worst case is you make a separate tablespace\nfor each of your multi-gig relations ... you're probably not going to\nhave a very large number of such relations, so this doesn't seem like\nunmanageable admin complexity.\n\nWe'd still want to create some tools to help the dbadmin with slinging\nall these symlinks around, of course. But I think it's critical to keep\nthe low-level file access protocol simple and reliable, which really\nmeans minimizing the amount of information the backend needs to know to\nfigure out which file to write a page in. With something like the above\nyou only need to know the tablespace name (or more likely OID), the\nrelation OID (+name or not, depending on outcome of other argument),\nand the offset in the table. No worse than now from the software's\npoint of view.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jun 2000 11:46:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
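Under the flipped layout Tom sketches, the block number alone picks the segment directory. A sketch using the backend's RELSEG_SIZE constant (blocks per 1 GB segment); the /N/ placement is the proposal here, not current md.c behavior:

    #include <stdio.h>

    #define RELSEG_SIZE 131072      /* blocks per segment: 1 GB of 8K pages */
    typedef unsigned int Oid;

    /* .../TABLESPACE/RELATION for segment 0, .../TABLESPACE/N/RELATION after */
    static void
    segment_path(char *dst, size_t dstlen, const char *tsdir,
                 Oid reloid, unsigned blocknum)
    {
        unsigned segno = blocknum / RELSEG_SIZE;

        if (segno == 0)
            snprintf(dst, dstlen, "%s/%u", tsdir, reloid);
        else
            snprintf(dst, dstlen, "%s/%u/%u", tsdir, segno, reloid);
    }

Since the /N/ directories can be symlinks, the dbadmin controls where each extension segment lands without md.c knowing anything beyond this rule.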
{
"msg_contents": "> ... But I think it's critical to keep\n> the low-level file access protocol simple and reliable, which really\n> means minimizing the amount of information the backend needs to know \n> to figure out which file to write a page in. With something like the \n> above you only need to know the tablespace name (or more likely OID), \n> the relation OID (+name or not, depending on outcome of other \n> argument), and the offset in the table. No worse than now from the \n> software's point of view.\n> Comments?\n\nI'm probably missing the context a bit, but imho we should try hard to\nstay away from symlinks as the general solution for anything.\n\nSorry for being behind here, but to make sure I'm on the right page:\no tablespaces decouple storage from logical tables\no a database lives in a default tablespace, unless specified\no by default, a table will live in the default tablespace\no (eventually) a table can be split across tablespaces\n\nSome thoughts:\no the ability to split single tables across disks was essential for\nscalability when disks were small. But with RAID, NAS, etc etc isn't\nthat a smaller issue now?\no \"tablespaces\" would implement our less-developed \"with location\"\nfeature, right? Splitting databases, whole indices and whole tables\nacross storage is the biggest win for this work since more users will\nuse the feature.\no location information needs to travel with individual tables anyway.\n",
"msg_date": "Fri, 16 Jun 2000 16:27:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> There are also disadvantages.\n> \n> You can run out of space even if there are plenty GB's\n> free on your disks. You have to create tablespaces\n> explicitly.\n> \n> If you've choosen inadequate extent size parameters, you\n> end up with high fragmented tables (slowing down) or get\n> stuck with running against maxextents, where only a reorg\n> (export/import) helps.\n\nAlso, Tom Lane pointed out to me that file system read-ahead does not\nhelp if your table is spread around in tablespaces.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Jun 2000 12:35:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "On Thu, 15 Jun 2000, Bruce Momjian wrote:\n\n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > Now I like neither relname nor oid because it's not sufficient \n> > > for my purpose.\n> > \n> > We should probably not do much of anything with this issue until\n> > we have a clearer understanding of what we want to do about\n> > tablespaces and schemas.\n> \n> Here is an analysis of our options:\n> \n> Work required Disadvantages\n> ----------------------------------------------------------------------------\n> \n> Keep current system no work rename/create no rollback\n> \n> relname/oid but less work new pg_class column,\n> no rename change filename not accurate on\n> rename\n> \n> relname/oid with more work complex code\n> rename change during \n> vacuum\n> \n> oid filename less work, but confusing to admins\n> need admin tools \n\nMy vote is with Tom on this one ... oid only ... the admin should be able\nto do a quick SELECT on a table to find out the OID->table mapping, and I\nbelieve its already been pointed out that you cant' just restore one file\nanyway, so it kinda negates the \"server isn't running problem\" ...\n\n\n\n",
"msg_date": "Fri, 16 Jun 2000 13:50:37 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
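The "quick SELECT" Marc mentions is about all the admin tooling the OID-only scheme needs day to day. A trivial libpq sketch that dumps the mapping (the connection string is left to the site):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=mydb");
        PGresult *res;
        int       i;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }
        res = PQexec(conn,
                     "SELECT oid, relname FROM pg_class ORDER BY relname");
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            for (i = 0; i < PQntuples(res); i++)
                printf("%-10s %s\n", PQgetvalue(res, i, 0),
                       PQgetvalue(res, i, 1));
        PQclear(res);
        PQfinish(conn);
        return 0;
    }

Of course, this only works while the server is up, which is exactly the objection the relname camp keeps raising.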
{
"msg_contents": "On Thu, 15 Jun 2000, Tom Lane wrote:\n\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Please add my opinion for naming rule.\n> \n> > relname/unique_id but\tneed some work\t\tnew pg_class column,\t\n> > no relname change.\tfor unique-id generation\tfilename not relname\n> \n> Why is a unique ID better than --- or even different from ---\n> using the relation's OID? It seems pointless to me...\n\njust to open up a whole new bucket of worms here, but ... if we do use OID\n(which up until this thought I endorse 100%) ... do we not run a risk if\nwe run out of OIDs? As far as I know, those are still a finite resource,\nno? \n\nor, do we just assume that by the time that comes, everyone will be pretty\nmuch using 64bit machines? :)\n\n\n",
"msg_date": "Fri, 16 Jun 2000 13:52:27 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> ... But I think it's critical to keep\n>> the low-level file access protocol simple and reliable, which really\n>> means minimizing the amount of information the backend needs to know \n>> to figure out which file to write a page in. With something like the \n>> above you only need to know the tablespace name (or more likely OID), \n>> the relation OID (+name or not, depending on outcome of other \n>> argument), and the offset in the table. No worse than now from the \n>> software's point of view.\n>> Comments?\n\n> I'm probably missing the context a bit, but imho we should try hard to\n> stay away from symlinks as the general solution for anything.\n\nWhy?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jun 2000 12:54:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> just to open up a whole new bucket of worms here, but ... if we do use OID\n> (which up until this thought I endorse 100%) ... do we not run a risk if\n> we run out of OIDs? As far as I know, those are still a finite resource,\n> no? \n\nThey are, and there is some risk involved, but OID collisions in the\nsystem tables will cause you just as much headache. There's not only\nthe pg_class row to think of, but the pg_attribute rows, etc etc.\n\nIf you did have an OID collision with an existing table you'd have to\nkeep trying until you got a set of OID assignments with no conflicts.\n(Now that we have unique indexes on the system tables, this should\nwork properly, ie, you will hear about it if you have a conflict.)\nI don't think the physical table names make this noticeably worse.\nOf course we'd better be careful to create table files with O_EXCL,\nso as not to tromp on existing files, but we do that already IIRC.\n\n> or, do we just assume that by the time that comes, everyone will be pretty\n> much using 64bit machines? :)\n\nI think we are not too far away from being able to offer 64-bit OID as\na compile-time option (on machines where there is a 64-bit integer type\nthat is). It's just a matter of someone putting it at the head of their\ntodo list.\n\nBottom line is I'm not real worried about this issue.\n\nBut having said all that, I am coming round to agree with Hiroshi's idea\nanyway. See upcoming message.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jun 2000 13:08:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "At 11:46 AM 6/16/00 -0400, Tom Lane wrote:\n\n>OK, to get back to the point here: so in Oracle, tables can't cross\n>tablespace boundaries,\n\nRight, the construct AFAIK is \"create table/index foo on tablespace ...\"\n\n> but a tablespace itself could span multiple\n>disks?\n\nRight.\n\n>Not sure if I like that better or worse than equating a tablespace\n>with a directory (so, presumably, all the files within it live on\n>one filesystem) and then trying to make tables able to span\n>tablespaces. We will need to do one or the other though, if we want\n>to have any significant improvement over the current state of affairs\n>for large tables.\n\nOracle's way does a reasonable job of isolating the datamodel\nfrom the details of the physical layout.\n\nTake the OpenACS web toolkit, for instance. We could take\neach module's tables and indices and assign them appropriately\nto various dataspaces, then provide a separate .sql files with\nonly \"create tablespace\" statements in there.\n\nBy modifying that one central file, the toolkit installation\ncould be customized to run anything from a small site (one\ndisk with everything on it, ala my own personal webserver at\nbirdnotes.net) or a very large site with many spindles, with\nvarious index and table structures spread out widely hither\nand thither.\n\nGiven that the OpenACS datamodel is nearly 10K lines long (including\nmany comments, of course), being able to customize an installation\nto such a degree by modifying a single file filled with \"create\ntablespaces\" would be very attractive.\n\n>One way is to play the flip-the-path-ordering game some more,\n>and access multiple-segment tables with pathnames like this:\n>\n>\t.../TABLESPACE/RELATION\t\t-- first or only segment\n>\t.../TABLESPACE/N/RELATION\t-- N'th extension segment\n>\n>This isn't any harder for md.c to deal with than what we do now,\n>but by making the /N subdirectories be symlinks, the dbadmin could\n>easily arrange for extension segments to go on different filesystems.\n\nI personally dislike depending on symlinks to move stuff around.\nAmong other things, a pg_dump/restore (and presumably future \nbackup tools?) can't recreate the disk layout automatically.\n\n>We'd still want to create some tools to help the dbadmin with slinging\n>all these symlinks around, of course.\n\nOK, if symlinks are simply an implementation detail hidden from the\ndbadmin, and if the physical structure is kept in the db so it can\nbe rebuilt if necessary automatically, then I don't mind symlinks.\n\n> But I think it's critical to keep\n>the low-level file access protocol simple and reliable, which really\n>means minimizing the amount of information the backend needs to know to\n>figure out which file to write a page in. With something like the above\n>you only need to know the tablespace name (or more likely OID), the\n>relation OID (+name or not, depending on outcome of other argument),\n>and the offset in the table. No worse than now from the software's\n>point of view.\n\nMake the code that creates and otherwise manipulates tablespaces\ndo the work, while keeping the low-level file access protocol simple.\n\nYes, this approach sounds very good to me.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 16 Jun 2000 10:50:23 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "At 04:27 PM 6/16/00 +0000, Thomas Lockhart wrote:\n\n>Sorry for being behind here, but to make sure I'm on the right page:\n>o tablespaces decouple storage from logical tables\n>o a database lives in a default tablespace, unless specified\n>o by default, a table will live in the default tablespace\n>o (eventually) a table can be split across tablespaces\n\nOr tablespaces across filesystems/mountpoints whatever.\n\n>Some thoughts:\n>o the ability to split single tables across disks was essential for\n>scalability when disks were small. But with RAID, NAS, etc etc isn't\n>that a smaller issue now?\n\nYes for size issues, I should think, especially if you have the \nmoney for a large RAID subsystem. But for throughput performance,\ncontrol over which spindles particularly busy tables and indices\ngo on would still seem to be pretty relevant, when they're being\nupdated a lot. In order to minimize seek times.\n\nI really can't say how important this is in reality. Oracle-world\nfolks still talk about this kind of optimization being important,\nbut I'm not personally running any kind of database-backed website\nthat's busy enough or contains enough storage to worry about it.\n\n>o \"tablespaces\" would implement our less-developed \"with location\"\n>feature, right? Splitting databases, whole indices and whole tables\n>across storage is the biggest win for this work since more users will\n>use the feature.\n>o location information needs to travel with individual tables anyway.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 16 Jun 2000 11:14:35 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n>> This isn't any harder for md.c to deal with than what we do now,\n>> but by making the /N subdirectories be symlinks, the dbadmin could\n>> easily arrange for extension segments to go on different filesystems.\n\n> I personally dislike depending on symlinks to move stuff around.\n> Among other things, a pg_dump/restore (and presumably future \n> backup tools?) can't recreate the disk layout automatically.\n\nGood point, we'd need some way of saving/restoring the tablespace\nstructures.\n\n>> We'd still want to create some tools to help the dbadmin with slinging\n>> all these symlinks around, of course.\n\n> OK, if symlinks are simply an implementation detail hidden from the\n> dbadmin, and if the physical structure is kept in the db so it can\n> be rebuilt if necessary automatically, then I don't mind symlinks.\n\nI'm not sure about keeping it in the db --- creates a bit of a\nchicken-and-egg problem doesn't it? Maybe there needs to be a\n\"system database\" that has nailed-down pathnames (no tablespaces\nfor you baby) and contains the critical installation-wide tables\nlike pg_database, pg_user, pg_tablespace. A restore would have\nto restore these tables first anyway.\n\n> Make the code that creates and otherwise manipulates tablespaces\n> do the work, while keeping the low-level file access protocol simple.\n\nRight, that's the bottom line for me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jun 2000 15:00:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> >Some thoughts:\n> >o the ability to split single tables across disks was essential for\n> >scalability when disks were small. But with RAID, NAS, etc etc isn't\n> >that a smaller issue now?\n> \n> Yes for size issues, I should think, especially if you have the \n> money for a large RAID subsystem. But for throughput performance,\n> control over which spindles particularly busy tables and indices\n> go on would still seem to be pretty relevant, when they're being\n> updated a lot. In order to minimize seek times.\n> \n> I really can't say how important this is in reality. Oracle-world\n> folks still talk about this kind of optimization being important,\n> but I'm not personally running any kind of database-backed website\n> that's busy enough or contains enough storage to worry about it.\n\nIt is important when you have a few big tables that must be fast. One\nobjection I have always had to the HP logical volume manager is that it\nis difficult to know what drives are being assigned to each logical\nvolume.\n\nSeems if they don't have RAID, we should allow such drive partitioning.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Jun 2000 15:06:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "On Fri, Jun 16, 2000 at 04:27:22PM +0000, Thomas Lockhart wrote:\n> > ... But I think it's critical to keep\n> > the low-level file access protocol simple and reliable, which really\n> > means minimizing the amount of information the backend needs to know \n> > to figure out which file to write a page in. With something like the \n> > above you only need to know the tablespace name (or more likely OID), \n> > the relation OID (+name or not, depending on outcome of other \n> > argument), and the offset in the table. No worse than now from the \n> > software's point of view.\n> > Comments?\n\nI think the backend needs a per table token that indicates how\nto get at the physical bits of the file. Whether that's a filename\nalone, filename with path, oid, key to a smgr hash table or something\nelse, it's opaque above the smgr routines.\n\nHmm, now I'm thinking, since the tablespace discussion has been reopened,\nthe way to go about coding all this is to reactivate the smgr code: how\nabout I leave the existing md smgr as is, and clone it, call it md2 or\nsomething, and start messing with adding features there?\n\n\n> \n> I'm probably missing the context a bit, but imho we should try hard to\n> stay away from symlinks as the general solution for anything.\n> \n> Sorry for being behind here, but to make sure I'm on the right page:\n> o tablespaces decouple storage from logical tables\n> o a database lives in a default tablespace, unless specified\n> o by default, a table will live in the default tablespace\n> o (eventually) a table can be split across tablespaces\n> \n> Some thoughts:\n> o the ability to split single tables across disks was essential for\n> scalability when disks were small. But with RAID, NAS, etc etc isn't\n> that a smaller issue now?\n> o \"tablespaces\" would implement our less-developed \"with location\"\n> feature, right? Splitting databases, whole indices and whole tables\n> across storage is the biggest win for this work since more users will\n> use the feature.\n> o location information needs to travel with individual tables anyway.\n\nI was juist thinking that that discussion needed some summation.\n\nSome links to historic discussion: \n\nThis one is Vadim saying WAL will need oids names:\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/1999-11/msg00809.html\n\nA longer discussion kicked off by Don Baccus:\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/2000-01/msg00510.html\n\nTom suggesting OIDs to allow rollback:\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/2000-03/msg00119.html\n\n\nMartin Neumann posted an question on dataspaces:\n\n(can't find it in the offical archives: looks like March 2000, 10-29 is\nmissing. here's my copy: don't beat on it! n particular, since I threw\nit together for local access, it's one _big_ index page)\n\nhttp://cooker.ir.rice.edu/postgresql/msg20257.html\n(in that thread is a post where I mention blindwrites and getting rid\nof GetRawDatabaseInfo)\n\nMartin later posted an RFD on tablespaces:\n\nhttp://cooker.ir.rice.edu/postgresql/msg20490.html\n\nHere's Hor�k Daniel with a patch for discussion, implementing dataspaces\non a per database level:\n\nhttp://cooker.ir.rice.edu/postgresql/msg20498.html\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Fri, 16 Jun 2000 14:35:28 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "At 03:00 PM 6/16/00 -0400, Tom Lane wrote:\n\n>> OK, if symlinks are simply an implementation detail hidden from the\n>> dbadmin, and if the physical structure is kept in the db so it can\n>> be rebuilt if necessary automatically, then I don't mind symlinks.\n>\n>I'm not sure about keeping it in the db --- creates a bit of a\n>chicken-and-egg problem doesn't it? \n\nNot if the tablespace creates preceeds the tables stored in them.\n\n> Maybe there needs to be a\n>\"system database\" that has nailed-down pathnames (no tablespaces\n>for you baby) and contains the critical installation-wide tables\n>like pg_database, pg_user, pg_tablespace. A restore would have\n>to restore these tables first anyway.\n\nOh, I see. Yes, when I've looked into this and have thought about\nit I've assumed that there would always be a known starting point\nwhich would contain the installation-wide tables.\n\n From a practical point of view, I don't think that's really a\nproblem.\n\nI've not looked into how Oracle does this, I assume it builds \na system tablespace on one of the initial mount points you give\nit when you install the thing. The paths to the mount points\nare stored in specific files known to Oracle, I think. It's \nbeen over a year (not long enough!) since I've set up Oracle...\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 16 Jun 2000 12:37:36 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "On Thu, Jun 15, 2000 at 07:53:52PM -0400, Tom Lane wrote:\n> \"Ross J. Reedstrom\" <[email protected]> writes:\n> > On Thu, Jun 15, 2000 at 03:11:52AM -0400, Tom Lane wrote:\n> >> \"Ross J. Reedstrom\" <[email protected]> writes:\n> >>>> Any strong objections to the mixed relname_oid solution?\n> >> \n> >> Yes!\n> \n> > The plan here was to let VACUUM handle renaming the file, since it\n> > will already have all the necessary locks. This shortens the window\n> > of confusion. ALTER TABLE RENAME doesn't happen that often, really - \n> > the relname is there just for human consumption, then.\n> \n> Yeah, I've seen tons of discussion of how if we do this, that, and\n> the other thing, and be prepared to fix up some other things in case\n> of crash recovery, we can make it work with filename == relname + OID\n> (where relname tracks logical name, at least at some remove).\n> \n> Probably. Assuming nobody forgets anything.\n\nI agree, it seems a major undertaking, at first glance. And second. Even\nthird. Especially for someone who hasn't 'earned his spurs' yet. as\nit were.\n\n> I'm just trying to point out that that's a huge amount of pretty\n> delicate mechanism. The amount of work required to make it trustworthy\n> looks to me to dwarf the admin tools that Bruce is complaining about.\n> And we only have a few people competent to do the work. (With all\n> due respect, Ross, if you weren't already aware of the implications\n> for mdblindwrt, I have to wonder what else you missed.)\n\nAh, you knew that comment would come back to haunt me (I have a\ntendency to think out loud, even if checking and coming back latter\nwould be better;-) In fact, there's no problem, and never was, since the\nbuffer->blind.relname is filled in via RelationGetPhysicalRelationName,\njust like every other path that requires direct file access. I just\ndidn't remember that I had in fact checked it (it's been a couple months,\nand I just got back from vacation ;-)\n\nActually, Once I re-checked it, the code looked very familiar. I had\nspent time looking at the blind write code in the context of getting\nrid of the only non-startup use of GetRawDatabaseInfo.\n\nAs to missing things: I'm leaning heavily on Bruce's previous\nwork for temp tables, to seperate the two uses of relname, via the\nRelationGetRelationName and RelationGetPhysicalRelationName. There are\n102 uses of the first in the current code (many in elog messages), and\nonly 11 of the second. If I'd had to do the original work of finding\nevery use of relname, and catagorizing it, I agree I'm not (yet) up to\nit, but I have more confidence in Bruce's (already tested) work.\n\n> \n> Filename == OID is so simple, reliable, and straightforward by\n> comparison that I think the decision is a no-brainer.\n> \n\nPerhaps. Changing the label of the file on disk still requires finding\nall the code that assumes it knows what that name is, and changing it.\nSame work.\n\n> If we could afford to sink unlimited time into this one issue then\n> it might make sense to do it the hard way, but we have enough\n> important stuff on our TODO list to keep us all busy for years ---\n> I cannot believe that it's an effective use of our time to do this.\n> \n\nThe joys of Open Development. You've spent a fair amount of time trying\nto convince _me_ not to waste my time. Thanks, but I'm pretty bull headed\nsometimes. 
Since I've already done some of the work, take a look\nat what I've got, and then tell me I'm wasting my time, o.k.?\n\n> \n> > Hmm, what's all this with functions in catalog.c that are only called by\n> > smgr/md.c? seems to me that anything having to do with physical storage\n> > (like the path!) belongs in the smgr abstraction.\n> \n> Yeah, there's a bunch of stuff that should have been implemented by\n> adding new smgr entry points, but wasn't. It should be pushed down.\n> (I can't resist pointing out that one of those things is physical\n> relation rename, which will go away and not *need* to be pushed down\n> if we do it the way I want.)\n> \n\nOh, I agree completely. In fact, as I said to Hiroshi last time this came\nup, I think of the field in pg_class as an opaque token, to be filled in\nby the smgr, and only used by code further up to hand back to the smgr\nroutines. Same should be true of the buffer->blind struct.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n",
"msg_date": "Fri, 16 Jun 2000 16:07:13 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
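To make the logical/physical relname split Ross leans on in the message above concrete, here is a minimal C sketch. Only the two accessor names come from the thread; the struct, field names, and call sites are invented stand-ins for illustration, not the actual backend macros of that era:

```c
/*
 * Minimal sketch of the logical/physical relname split discussed above.
 * Only the two accessor names come from the thread; the struct and the
 * call sites are invented stand-ins, not the actual backend code.
 */
#include <stdio.h>

typedef struct Relation {
    char relname[64];    /* logical name: what users and elog() see   */
    char physname[64];   /* physical name: what the smgr puts on disk */
} Relation;

#define RelationGetRelationName(r)          ((r)->relname)
#define RelationGetPhysicalRelationName(r)  ((r)->physname)

int main(void)
{
    /* hypothetical relation; "mytable_12345" stands in for relname+OID */
    Relation r = { "mytable", "mytable_12345" };

    /* error messages use the logical name ... */
    printf("ERROR: cannot drop \"%s\"\n", RelationGetRelationName(&r));

    /* ... while file access (including blind writes) uses the physical
     * one, so a rename only has to touch the logical side */
    printf("open: base/mydb/%s\n", RelationGetPhysicalRelationName(&r));
    return 0;
}
```

The point of the split is that code paths touching disk never consult the user-visible name, which is what lets ALTER TABLE RENAME defer the physical rename.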
{
"msg_contents": "> (At least not when we're sitting atop a well-done filesystem,\n> which may leave the Linux folk out in the cold ;-).)\n\nExactly what fs of Linux are you talking about? I believe that for a database\nserver, ReiserFS would be a natural choice.\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2582\nHowitzvej 75 �ben 14.00-18.00 Email: [email protected]\n2000 Frederiksberg L�rdag 11.00-17.00 Web: www.suse.dk\n",
"msg_date": "Sat, 17 Jun 2000 01:02:49 +0200",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> [email protected] (Jan Wieck) writes:\n> > There are also disadvantages.\n> \n> > You can run out of space even if there are plenty GB's\n> > free on your disks. You have to create tablespaces\n> > explicitly.\n> \n> Not to mention the reverse: if I read this right, you have to suck\n> up your GB's long in advance of actually needing them. That's OK\n> for a machine that's dedicated to Oracle ... not so OK for smaller\n> installations, playpens, etc.\n>\n\nI've had an anxiety about the way like Oracle's preallocation.\nIt had not been easy for me to estimate the extent size in\nOracle. Maybe it would lose the simplicity of environment\nsettings which is one of the biggest advantage of PostgreSQL.\nIt seems that we should also provide not_preallocated DATAFILE\nwhen many_tables_in_a_file storage manager is introduced.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n \n",
"msg_date": "Sat, 17 Jun 2000 08:11:08 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> I think the backend needs a per table token that indicates how\n> to get at the physical bits of the file. Whether that's a filename\n> alone, filename with path, oid, key to a smgr hash table or something\n> else, it's opaque above the smgr routines.\n\nExcept to the commands that provide the user interface for tablespaces\nand so forth. And there aren't all that many places that deal with\nphysical filenames anyway. It would be a good idea to try to be a\nlittle stricter about this, but I'm not sure you can make the separation\na whole lot cleaner than it is now ... with the exception of the obvious\nbogosities like \"rename table\" being done above the smgr level. (But,\nas I said, I want to see that code go away, not just get moved into\nsmgr...)\n\n> Hmm, now I'm thinking, since the tablespace discussion has been reopened,\n> the way to go about coding all this is to reactivate the smgr code: how\n> about I leave the existing md smgr as is, and clone it, call it md2 or\n> something, and start messing with adding features there?\n\nUm, well, you can't have it both ways. If you're going to change/fix\nthe assumptions of code above the smgr, then you've got to update md\nat the same time to match your new definition of the smgr interface.\nWon't do much good to have a playpen smgr if the \"standard\" one is\nbroken.\n\nOne thing I have been thinking would be a good idea is to take the\nrelcache out of the bufmgr/smgr interfaces. The relcache is a\nhigher-level concept and ought not be known to bufmgr or smgr; they\nought to work with some low-level data structure or token for relations.\nWe might be able to eliminate the whole concept of \"blind write\" if we\ndo that. There are other problems with the relcache dependency: entries\nin relcache can get blown away at inopportune times due to shared cache\ninval, and it doesn't provide a good home for tokens for multiple\n\"versions\" of a relation if we go with the fill-a-new-physical-file\napproach to CLUSTER and so on.\n\nHmm, if you replace relcache in the smgr interfaces with pointers to\nan smgr-maintained data structure, that might be the same thing that\nyou are alluding to above about an smgr hash table.\n\nOne thing *not* to do is add yet a third layer of data structure on\ntop of the ones already maintained in fd.c and md.c. Whatever extra\ndata might be needed here should be added to md.c's tables, I think,\nand then the tokens used in the smgr interface would be pointers into\nthat table.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jun 2000 19:16:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
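A rough sketch of the token idea Tom describes above, assuming (purely for illustration) a new smgr-owned struct; none of these names are claimed to be real backend symbols of that era, though the real smgr was in fact later reworked along roughly these lines:

```c
/*
 * Hedged sketch of the "smgr-maintained token" idea: the bufmgr hands
 * the smgr this token instead of a relcache entry.  All names here are
 * invented for illustration; they are not the actual md.c/fd.c tables.
 */
#include <stdlib.h>

typedef unsigned int Oid;

typedef struct SMgrRelationData
{
    Oid tablespace;   /* which tablespace the files live in */
    Oid relid;        /* identifies the relation on disk    */
    int fd;           /* cached descriptor, -1 if not open  */
} SMgrRelationData;

typedef SMgrRelationData *SMgrRelation;   /* the opaque token */

/*
 * smgropen() hands back the token.  Because the smgr owns this struct,
 * a shared-cache-inval flush of the relcache entry cannot yank it away
 * mid-operation, and "blind writes" stop being a special case: the
 * token already carries everything needed to find the file.
 */
SMgrRelation
smgropen(Oid tablespace, Oid relid)
{
    SMgrRelation reln = malloc(sizeof(SMgrRelationData));

    if (reln == NULL)
        return NULL;
    reln->tablespace = tablespace;
    reln->relid = relid;
    reln->fd = -1;          /* file opened lazily on first access */
    return reln;
}
```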
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> It seems that we should also provide not_preallocated DATAFILE\n> when many_tables_in_a_file storage manager is introduced.\n\nSeveral people in this thread have been talking like a\nsingle-physical-file storage manager is in our future, but I can't\nrecall anyone saying that they were going to do such a thing or even\npresenting reasons why it'd be a good idea.\n\nSeems to me that physical file per relation is considerably better for\nour purposes. It's easier to figure out what's going on for admin and\ndebug work, it means less lock contention among different backends\nappending concurrently to different relations, and it gives the OS a\nbetter shot at doing effective read-ahead on sequential scans.\n\nSo why all the enthusiasm for multi-tables-per-file?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jun 2000 19:30:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > It seems that we should also provide not_preallocated DATAFILE\n> > when many_tables_in_a_file storage manager is introduced.\n> \n> Several people in this thread have been talking like a\n> single-physical-file storage manager is in our future, but I can't\n> recall anyone saying that they were going to do such a thing or even\n> presenting reasons why it'd be a good idea.\n> \n> Seems to me that physical file per relation is considerably better for\n> our purposes. It's easier to figure out what's going on for admin and\n> debug work, it means less lock contention among different backends\n> appending concurrently to different relations, and it gives the OS a\n> better shot at doing effective read-ahead on sequential scans.\n> \n> So why all the enthusiasm for multi-tables-per-file?\n\nNo idea. I thought Vadim mentioned it, but I am not sure anymore. I\ncertainly like our current system.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Jun 2000 20:08:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "\n> > So why all the enthusiasm for multi-tables-per-file?\n\nIt allows you to use raw partitions which stop the OS double buffering\nand wasting half of memory, as well as removing the overhead of indirect\nblocks in the file system.\n",
"msg_date": "Sat, 17 Jun 2000 10:39:16 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> > \n> > So why all the enthusiasm for multi-tables-per-file?\n> \n> No idea. I thought Vadim mentioned it, but I am not sure anymore. I\n> certainly like our current system.\n> \n\nOops,I'm not so enthusiastic for multi_tables_per_file smgr.\nI believe that Ross and I have taken a practical way that doesn't\nbreak current file_per_table smgr.\n\nHowever it seems very natural to take multi_tables_per_file\nsmgr into account when we consider TABLESPACE concept.\nBecause TABLESPACE is an encapsulation,it should have\na possibility to handle multi_tables_per_file smgr IMHO.\n\nRegards. \n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Sat, 17 Jun 2000 18:38:29 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> However it seems very natural to take multi_tables_per_file\n> smgr into account when we consider TABLESPACE concept.\n> Because TABLESPACE is an encapsulation,it should have\n> a possibility to handle multi_tables_per_file smgr IMHO.\n\nOK, I see: you're just saying that the tablespace stuff should be\ndesigned in such a way that it would work with a non-file-per-table\nsmgr. Agreed, that'd be a good check of a clean design, and someday\nwe might need it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Jun 2000 12:11:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> Not to mention the reverse: if I read this right, you have to suck\n> up your GB's long in advance of actually needing them. That's OK\n> for a machine that's dedicated to Oracle ... not so OK for smaller\n> installations, playpens, etc.\n\nTo me it looks like a way to make Oracle work on VMS machines. This is the way\nfiles are allocated on Digital hardware.\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2582\nHowitzvej 75 �ben 14.00-18.00 Email: [email protected]\n2000 Frederiksberg L�rdag 11.00-17.00 Web: www.suse.dk\n",
"msg_date": "Sat, 17 Jun 2000 18:32:06 +0200",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items"
},
{
"msg_contents": "[This followup was posted to comp.databases.postgresql.hackers and a copy \nwas sent to the cited author.]\n\nA few thoughts:\n\n1) There may be reasons why someone might not want to use RAID. \n For instance, suppose one wants to put different tables on different \ndrives so that the seeks for one table doesn't move the drive heads away \nfrom the disk area for another table.\n Also, suppose someone wants to use a particular drive for a particular \npurpose (eg certain indexes) because it is faster at seeking vs another \ndrive that is faster at sustained transfer rates.\n Also, someone may want to span a drive across multiple SCSI \ncontrollers. Most RAID arrays I'm aware of are per SCSI controller. \n I think it is fair to say that there will always be instances where \npeople want to have more control over where stuff goes because they are \nwilling to put the effort into more subtle tuning games. Well, there \nought to be a way.\n\n2) Some OSs do not support symlinks. The ability to list a bunch of \ndevices for where things will go would be of value.\n Also, if you aren't putting your data on a real file system (say on \nraw partitions instead) you are going to need a way to specify that \nanyway.\n\nIn news:<[email protected]>, \[email protected] says...\n> o the ability to split single tables across disks was essential for\n> scalability when disks were small. But with RAID, NAS, etc etc isn't\n> that a smaller issue now?\n> o \"tablespaces\" would implement our less-developed \"with location\"\n> feature, right? Splitting databases, whole indices and whole tables\n> across storage is the biggest win for this work since more users will\n> use the feature.\n> o location information needs to travel with individual tables anyway.\n\n\n \n",
"msg_date": "Sat, 17 Jun 2000 14:52:37 -0700",
"msg_from": "Randall Parker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Tom Lane wrote:\n> [email protected] (Jan Wieck) writes:\n> > There are also disadvantages.\n>\n> > You can run out of space even if there are plenty GB's\n> > free on your disks. You have to create tablespaces\n> > explicitly.\n>\n> Not to mention the reverse: if I read this right, you have to suck\n> up your GB's long in advance of actually needing them. That's OK\n> for a machine that's dedicated to Oracle ... not so OK for smaller\n> installations, playpens, etc.\n\n Right, the design is perfect for a few databases with a more\n or less stable size and schema (slow to medium growth). The\n problem is, that production databases tend to fall into that\n behaviour and that might be a reason for so many people\n asking for Oracle compatibility - they want to do development\n in the high flexible Postgres environment, while running\n their production server under Oracle :-(.\n\n> I'm not convinced that there's anything fundamentally wrong with\n> doing storage allocation in Unix files the way we have been.\n>\n> (At least not when we're sitting atop a well-done filesystem,\n> which may leave the Linux folk out in the cold ;-).)\n\n I'm with you on that, even if I'm one of the Linux loosers.\n The only point that really strikes me is that in our system\n you might end up with a corrupted file system because some\n inode changes didn't make it to the disk before a crash. Even\n if using fsync() instead of fdatasync() (what we cannot use\n at all and that's a pain from the performance PoV). In the\n Oracle world, that could only happen during\n\n ALTER TABLESPACE <tsname> ADD DATAFILE ...\n\n which is a fairly seldom command, issued usually by the DB\n admin (at least it requires admin privileges) and thus\n ensures the \"admin is there and already paying attention\". A\n little detail not to underestimate IMHO.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Sun, 18 Jun 2000 01:23:59 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> > (At least not when we're sitting atop a well-done filesystem,\n> > which may leave the Linux folk out in the cold ;-).)\n>\n> Those who live in HP houses should not throw stones :))\n\n Huh? Up to HPUX-9 they used to have BSD-FFS - even if it was\n a 4.2 BSD one - no?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Sun, 18 Jun 2000 01:27:09 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Tom Lane wrote:\n> [email protected] (Jan Wieck) writes:\n> > Tom Lane wrote:\n> >> It gets a little trickier if you want to be able to split\n> >> multi-gig tables across several tablespaces, though, since\n> >> you couldn't just append \".N\" to the base table path in that\n> >> scenario.\n> >>\n> >> I'd be interested to know what sort of facilities Oracle\n> >> provides for managing huge tables...\n>\n> > Oracle tablespaces are a collection of 1...n preallocated\n> > files. Each table then is bound to a tablespace and\n> > allocates extents (chunks) from those files.\n>\n> OK, to get back to the point here: so in Oracle, tables can't cross\n> tablespace boundaries, but a tablespace itself could span multiple\n> disks?\n\n They can. The path in\n\n ALTER TABLESPACE <tsname> ADD DATAFILE ...\n\n can point to any location the db system has access to.\n\n>\n> Not sure if I like that better or worse than equating a tablespace\n> with a directory (so, presumably, all the files within it live on\n> one filesystem) and then trying to make tables able to span\n> tablespaces. We will need to do one or the other though, if we want\n> to have any significant improvement over the current state of affairs\n> for large tables.\n>\n> One way is to play the flip-the-path-ordering game some more,\n> and access multiple-segment tables with pathnames like this:\n>\n> .../TABLESPACE/RELATION -- first or only segment\n> .../TABLESPACE/N/RELATION -- N'th extension segment\n>\n> [...]\n\n In most cases all objects in one database are bound to one or\n two tablespaces (data and indices). So you do an estimation\n of the size required, create the tablespaces (and probably\n all their extension files), then create the schema and load\n it. The only reason not to do so is if your DB exceeds some\n size where you have to fear not beeing able to finish online\n backups before getting into Online-Relolog stuck. Has to do\n the the online backup procedure of Oracle.\n\n> This isn't any harder for md.c to deal with than what we do now,\n> but by making the /N subdirectories be symlinks, the dbadmin could\n> easily arrange for extension segments to go on different filesystems.\n> Also, since /N subdirectory symlinks can be added as needed,\n> expanding available space by attaching more disks isn't hard.\n> (If the admin hasn't pre-made a /N symlink when it's needed,\n> I'd envision the backend just automatically creating a plain\n> subdirectory so that it can extend the table.)\n\n So the admin allways have to leave enough freespace in the\n default location to keep the DB running until he can take it\n offline, move the autocreated files and create the symlinks.\n What a pain for 24/7 systems.\n\n> We'd still want to create some tools to help the dbadmin with slinging\n> all these symlinks around, of course. But I think it's critical to keep\n> the low-level file access protocol simple and reliable, which really\n> means minimizing the amount of information the backend needs to know to\n> figure out which file to write a page in. With something like the above\n> you only need to know the tablespace name (or more likely OID), the\n> relation OID (+name or not, depending on outcome of other argument),\n> and the offset in the table. No worse than now from the software's\n> point of view.\n\n Exactly the \"low-level file access\" protocol is highly\n complicated in Postgres. 
Because nearly every object needs\n his own file, we need to deal with virtual file descriptors.\n With an Oracle-like tablespace concept and a fixed limit of\n total tablespace files (this time OS or installation\n specific), we could keep them all open all the time. IMHO a\n big win.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Sun, 18 Jun 2000 02:10:09 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
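Jan's closing point above can be sketched as follows; the cap, the names, and the interface are all assumptions made up for the illustration, not a proposed design:

```c
/*
 * Sketch of the "keep every tablespace file open" idea: with a bounded
 * number of tablespace files, the backend could hold all descriptors
 * and index them directly, instead of juggling virtual file
 * descriptors.  MAX_TS_FILES and the function names are illustrative.
 */
#include <fcntl.h>
#include <unistd.h>

#define MAX_TS_FILES 256            /* installation-wide cap (assumed) */

static int ts_fd[MAX_TS_FILES];     /* one slot per tablespace file */
static int ts_nfiles = 0;

/* open a tablespace file once, at ADD DATAFILE time or at startup */
int ts_register(const char *path)
{
    int fd;

    if (ts_nfiles >= MAX_TS_FILES)
        return -1;                  /* tablespace file limit reached */
    fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;
    ts_fd[ts_nfiles] = fd;
    return ts_nfiles++;             /* caller keeps this index */
}

/* every later access is a direct array lookup -- no vfd machinery */
int ts_descriptor(int fileno)
{
    return ts_fd[fileno];
}
```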
{
"msg_contents": "Bruce Momjian wrote:\n> > There are also disadvantages.\n> >\n> > You can run out of space even if there are plenty GB's\n> > free on your disks. You have to create tablespaces\n> > explicitly.\n> >\n> > If you've choosen inadequate extent size parameters, you\n> > end up with high fragmented tables (slowing down) or get\n> > stuck with running against maxextents, where only a reorg\n> > (export/import) helps.\n>\n> Also, Tom Lane pointed out to me that file system read-ahead does not\n> help if your table is spread around in tablespaces.\n\n Not with our HEAP concept. With the Oracle EXTENT concept it\n does pretty good, because they have different block/extent\n sizes. Usually an extent spans multiple blocks, so in the\n case of sequential reads they read each extent of probably\n hundreds of K sequential. And in the case of indexed reads,\n they know the extent and offset of the tuple inside of the\n extent, so they know the exact location of the record inside\n the tablespace to read.\n\n The big problem we allways had (why we need TOAST at all) is\n that the logical blocksize (extent size) of a table is bound\n to your physical blocksize used in the shared cache. This is\n fixed so deeply in the heap storage architecture, that I'm\n scared about it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Sun, 18 Jun 2000 02:20:15 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Don Baccus wrote:\n> At 11:46 AM 6/16/00 -0400, Tom Lane wrote:\n>\n> I personally dislike depending on symlinks to move stuff around.\n> Among other things, a pg_dump/restore (and presumably future\n> backup tools?) can't recreate the disk layout automatically.\n>\n\n Most impact from this one, IMHO.\n\n Not that Oracle tools are able to do it either. But I think\n it's more trivial to recreate a 30+ tablespace layout on the\n disks than to recreate all symlinks for a 20,000+\n tables/indices database like an SAP R/3 one.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Sun, 18 Jun 2000 02:36:01 +0200 (MEST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "\n> Thomas Lockhart wrote:\n> > > (At least not when we're sitting atop a well-done filesystem,\n> > > which may leave the Linux folk out in the cold ;-).)\n> >\n> > Those who live in HP houses should not throw stones :))\n> \n> Huh? Up to HPUX-9 they used to have BSD-FFS - even if it was\n> a 4.2 BSD one - no?\n\nIt's still there, along with VxFS from Veritas.\n\nCiao,\n\nGiles\n",
"msg_date": "Sun, 18 Jun 2000 12:08:57 +1000",
"msg_from": "Giles Lean <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "OK, I have thought about tablespaces, and here is my proposal. Maybe\nthere will some good ideas in my design.\n\nMy feeling is that intelligent use of directories and symlinks can allow\nPostgreSQL to handle tablespaces and allow administrators to use\nsymlinks outside of PostgreSQL and have PostgreSQL honor those changes\nin a reload.\n\nSeems we have three tablespace needs:\n\n\tlocate database in separate disk\n\tlocate tables in separate directory/symlink\n\tlocate secondary extents on different drives\n\nIf we have a new CREATE DATABASE LOCATION command, we can say:\n\n\tCREATE DATABASE LOCATION dbloc IN '/var/private/pgsql';\n\tCREATE DATABASE newdb IN dbloc;\n\nThe first command makes sure /var/private/pgsql exists and is write-able\nby postgres. It then creates a dbloc directory and a symlink:\n\n\tmkdir /var/private/pgsql/dbloc\n\tln -s /var/private/pgsql/dbloc data/base/dbloc\n\nThe CREATE DATABASE command creates data/base/dbloc/newdb and creates\nthe database there. We would have to store the dbloc location in\npg_database.\n\nTo handle placing tables, we can use:\n\n\tCREATE LOCATION tabloc IN '/var/private/pgsql';\n\tCREATE TABLE newtab ... IN tabloc;\n\nThe first command makes sure /var/private/pgsql exists and is write-able\nby postgres. It then creates a directory tabloc in /var/private/pgsql,\nand does a symlink:\n\n\tln -s /var/private/pgsql/tabloc data/base/dbloc/newdb/tabloc\n\nand creates the table in there. These location names have to be stored\nin pg_class.\n\nThe difference betweeen CREATE LOCATION and CREATE DATABASE LOCATION is\nthat the first one puts it in the current database, while the latter\nputs the symlinks in data/base. \n\n(Can we remove data/base and just make it data/?)\n\nI would also allow a simpler CREATE LOCATION tabloc2 which just creates\na directory in the database directory. These can be moved later using\nsymlinks. Of course, CREATE DATABASE LOCATION too.\n\nI haven't figured out extent locations yet. One idea is to allow\nadministrators to create symlinks for tables >1 gig, and to not remove\nthe symlinks when a table shrinks. Only remove the file pointed to by\nthe table, but leave the symlink there so if the table grows again, it\ncan use the symlink. lstat() would allow this.\n\nNow on to preserving this information. My ideas is that PostgreSQL\nshould never remove a directory or symlink in the data/base directory. \nThose represent locations made by the administrator. So, pg_dump with a\n-l option can go through the db directory and output CREATE LOCATION\ncommands for every database, so when reloaded, the locations will be\npreserved, assuming the symlinks point to still-valid directories.\n\nWhat this does allow is someone to create locations during table\npopulation, but to keep them all on the same drive. If they later move\nthings around on the disk using cp and symlinks, this will be preserved\nby pg_dump.\n\nMy problem with many of the tablespace systems is that it requires two\nchanges. One in the file system using symlinks, and another in the\ndatabase to point to the new entries, or it does not preserve them\nacross backups.\n\nIf someone does want to remove a location, they would have to remove all\ntables in the directory, and the base directory and symlink can be\nremoved with DROP LOCATION.\n\nMy solution basically stores locations for databases and tables in the\ndatabase, but does _not_ store information about what locations exist or\nif they are symlinks. 
However, it does allow for preserving of this\ninformation in dumps.\n\nI feel this solution is very flexible. \n\nComments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Jun 2000 23:16:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
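The filesystem half of the CREATE LOCATION command Bruce describes above could look roughly like this in C; the function name, error handling, and fixed buffer sizes are illustrative only, a sketch of the mkdir/ln -s steps he lists rather than a real implementation:

```c
/*
 * Sketch of the filesystem side of the proposed CREATE LOCATION:
 * verify the external directory is writable, create the location
 * directory inside it, and plant the symlink under the database
 * directory.  All names and paths are illustrative.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int create_location(const char *extern_path,  /* '/var/private/pgsql'  */
                    const char *locname,      /* 'tabloc'              */
                    const char *db_dir)       /* 'data/base/.../newdb' */
{
    char locdir[1024], linkpath[1024];

    /* 1. the external directory must exist and be writable by postgres */
    if (access(extern_path, W_OK) != 0)
        return -1;

    /* 2. mkdir /var/private/pgsql/tabloc */
    snprintf(locdir, sizeof(locdir), "%s/%s", extern_path, locname);
    if (mkdir(locdir, 0700) != 0)
        return -1;

    /* 3. ln -s /var/private/pgsql/tabloc data/base/.../newdb/tabloc */
    snprintf(linkpath, sizeof(linkpath), "%s/%s", db_dir, locname);
    if (symlink(locdir, linkpath) != 0)
        return -1;

    return 0;
}
```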
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> Thomas Lockhart wrote:\n>> Those who live in HP houses should not throw stones :))\n\n> Huh? Up to HPUX-9 they used to have BSD-FFS - even if it was\n> a 4.2 BSD one - no?\n\nYeah, the standard HPUX filesystem is still BSD ... and it still runs\nrings around Linux extfs2 in my experience. (I've been informed that\nLinux has better filesystems than extfs2, but that seems to be what\nthe average Linux user is running.) I have a realtime data collection\nprogram that usually wants to write several thousand small files during\nshutdown. The shutdown typically takes about 3 minutes on an HP 715/75,\nupwards of 10 minutes on a Linux box with nominally-faster hardware.\n\nBTW, HP is trying to sell people on using a new journaling filesystem\nthat they claim outperforms BSD, but my few experiments with it\nhaven't encouraged me to pursue it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Jun 2000 01:21:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "On Sun, 18 Jun 2000, Jan Wieck wrote:\n...\n> ALTER TABLESPACE <tsname> ADD DATAFILE ...\n> \n> which is a fairly seldom command, issued usually by the DB\n> admin (at least it requires admin privileges) and thus\n> ensures the \"admin is there and already paying attention\". A\n> little detail not to underestimate IMHO.\n...\nEsp. in the R/3 area this will become no longer be true the more commonly\ncommands like \"AUTOEXTEND\" and \"RESIZE\" are used (automated at worst).\n\nBye!\n----\nMichael Reifenberger\n^.*Plaut.*$, IT, R/3 Basis, GPS\n\n",
"msg_date": "Sun, 18 Jun 2000 12:38:39 +0200 (CEST)",
"msg_from": "Michael Reifenberger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> I haven't figured out extent locations yet. One idea is to allow\n> administrators to create symlinks for tables >1 gig, and to not remove\n> the symlinks when a table shrinks. Only remove the file pointed to by\n> the table, but leave the symlink there so if the table grows again, it\n> can use the symlink. lstat() would allow this.\n\nOK, I have an extent idea. It is:\n\n\tCREATE LOCATION tabloc IN '/var/private/pgsql' EXTENT2\n'/usr/pg'.\n\nThis creates an /extents directory in the location, with extents/2\nsymlinked to /usr/pg:\n\n\tdata/base/mydb/tabloc\n\tdata/base/mydb/tabloc/extents/2\n\nWhen extending a table, it looks for an extents/2 directory and uses\nthat if it exists. Same for extents3. We could even get fancy and\nround-robin through all the extents directories, looping around to the\nbeginning when we run out of them. That sounds nice.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Jun 2000 09:33:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> > I haven't figured out extent locations yet. One idea is to allow\n> > administrators to create symlinks for tables >1 gig, and to not remove\n> > the symlinks when a table shrinks. Only remove the file pointed to by\n> > the table, but leave the symlink there so if the table grows again, it\n> > can use the symlink. lstat() would allow this.\n> \n> OK, I have an extent idea. It is:\n> \n> \tCREATE LOCATION tabloc IN '/var/private/pgsql' EXTENT2\n> '/usr/pg'.\n\nEven better:\n\n\tCREATE LOCATION tabloc IN '/var/private/pgsql' \n\t\tEXTENT '/usr/pg', '/usr1/pg'\n\nThis will create extent/2 and extent/3, and the system can rotate\nextents between the primary storage area, and 2 and 3.\n\nAlso, CREATE INDEX will need a location specification added.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Jun 2000 10:35:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> ... We could even get fancy and\n> round-robin through all the extents directories, looping around to the\n> beginning when we run out of them. That sounds nice.\n\nThat sounds horrible. There's no way to tell which extent directory\nextent N goes into except by scanning the location directory to find\nout how many extent subdirectories there are (so that you can compute\nN modulo number-of-directories). Do you want to pay that price on every\nfile open?\n\nWorse, what happens when you add another extent directory? You can't\nfind your old extents anymore, that's what, because they're not in the\nright place (N modulo number-of-directories just changed). Since the\nextents are presumably on different volumes, you're talking about\nphysical file moves to get them where they should be. You probably\ncan't add a new extent without shutting down the entire database while\nyou reshuffle files --- at the very least you'd need to get exclusive\nlocks on all the tables in that tablespace.\n\nAlso, you'll get filename conflicts from multiple extents of a single\ntable appearing in one of the recycled extent dirs. You could work\naround it by using the non-modulo'd N as part of the final file name,\nbut that just adds more complexity and makes the filename-generation\nmachinery that much more closely tied to this specific way of doing\nthings.\n\nThe right way to do this is that extent N goes into extents subdirectory\nN, period. If there's no such subdirectory, create one on-the-fly as a\nplain subdirectory of the location directory. The dbadmin can easily\ncreate secondary extent symlinks *in advance of their being needed*.\nReorganizing later is much more painful since it requires moving\nphysical files, but I think that'd be true no matter what. At least\nwe should see to it that adding more space in advance of needing it is\npainless.\n\nIt's possible to do it that way (auto-create extent subdir if needed)\nwithout tying the md.c machinery real closely to a specific filename\ncreation procedure: it's just the same sort of thing as install programs\ncustomarily do. \"If you fail to create a file, try creating its\nancestor directory.\" We'd have to think about whether it'd be a good\nidea to allow auto-creation of more than one level of directory; offhand\nit seems that needing to make more than one level is probably a sign of\nan erroneous path, not need for another extent subdirectory.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Jun 2000 12:06:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
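A sketch in C of the deterministic mapping Tom argues for above, including the create-the-ancestor-directory-and-retry trick; the path layout, names, and buffer sizes are illustrative assumptions, not his exact proposal:

```c
/*
 * Sketch of "segment N always lives in extent directory N": if opening
 * fails because the directory is missing, create it as a plain dir and
 * retry -- the dbadmin can pre-create it as a symlink instead.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>

int open_segment(const char *locdir, unsigned relid, int segno)
{
    char path[1024];
    int  fd;

    if (segno == 0)
        snprintf(path, sizeof(path), "%s/%u", locdir, relid);
    else
        snprintf(path, sizeof(path), "%s/extent%d/%u",
                 locdir, segno, relid);

    fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0 && errno == ENOENT && segno > 0) {
        /* extent dir not there yet: auto-create one level and retry */
        char dir[1024];

        snprintf(dir, sizeof(dir), "%s/extent%d", locdir, segno);
        if (mkdir(dir, 0700) == 0)
            fd = open(path, O_RDWR | O_CREAT, 0600);
    }
    return fd;
}
```

Because segment N always maps to the same directory, adding a new extent symlink never relocates existing segments, which is exactly the property the round-robin scheme lacks.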
{
"msg_contents": "If we eliminate the round-robin idea, what did people think of the rest\nof the ideas?\n\n> Bruce Momjian <[email protected]> writes:\n> > ... We could even get fancy and\n> > round-robin through all the extents directories, looping around to the\n> > beginning when we run out of them. That sounds nice.\n> \n> That sounds horrible. There's no way to tell which extent directory\n> extent N goes into except by scanning the location directory to find\n> out how many extent subdirectories there are (so that you can compute\n> N modulo number-of-directories). Do you want to pay that price on every\n> file open?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Jun 2000 18:50:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> > Not to mention the reverse: if I read this right, you have to suck\n> > up your GB's long in advance of actually needing them. That's OK\n> > for a machine that's dedicated to Oracle ... not so OK for smaller\n> > installations, playpens, etc.\n> \n> To me it looks like a way to make Oracle work on VMS machines. This is the way\n> files are allocated on Digital hardware.\n\nAgreed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Jun 2000 19:36:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "At 06:50 PM 6/18/00 -0400, Bruce Momjian wrote:\n>If we eliminate the round-robin idea, what did people think of the rest\n>of the ideas?\n\nWhy invent new syntax when \"create tablespace\" is something a lot\nof folks will recognize?\n\nAnd why not use \"create table ... using ... \"? In other words, \nOracle-compatible for this construct? Sure, Postgres doesn't\nhave to follow Oraclisms but picking an existing contruct means\nat least SOME folks can import a datamodel without having to\nedit it.\n\nDoes your proposal break the smgr abstraction, i.e. does it\npreclude later efforts to (say) implement an (optional) \nraw-device storage manager?\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 18 Jun 2000 16:43:42 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> At 06:50 PM 6/18/00 -0400, Bruce Momjian wrote:\n> >If we eliminate the round-robin idea, what did people think of the rest\n> >of the ideas?\n> \n> Why invent new syntax when \"create tablespace\" is something a lot\n> of folks will recognize?\n> \n> And why not use \"create table ... using ... \"? In other words, \n> Oracle-compatible for this construct? Sure, Postgres doesn't\n> have to follow Oraclisms but picking an existing contruct means\n> at least SOME folks can import a datamodel without having to\n> edit it.\n\nSure, use another syntax. My idea was to use symlinks, and allow their\nmoving using symlinks and preserve them during dump.\n\n> \n> Does your proposal break the smgr abstraction, i.e. does it\n> preclude later efforts to (say) implement an (optional) \n> raw-device storage manager?\n\nSeeing very few want that done, I don't see it as an issue at this\npoint.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Jun 2000 20:08:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "At 08:08 PM 6/18/00 -0400, Bruce Momjian wrote:\n\n>> Does your proposal break the smgr abstraction, i.e. does it\n>> preclude later efforts to (say) implement an (optional) \n>> raw-device storage manager?\n>\n>Seeing very few want that done, I don't see it as an issue at this\n>point.\n\nSorry, I disagree. There's excuse for breaking existing abstractions\nunless there's a compelling reason to do so.\n\nMy question should make it clear I was using a raw-device storage\nmanager as an example. There are other possbilities, like a \nmany-tables-per-file storage manager.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 18 Jun 2000 17:12:22 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> [email protected] (Jan Wieck) writes:\n> > Thomas Lockhart wrote:\n> >> Those who live in HP houses should not throw stones :))\n> \n> > Huh? Up to HPUX-9 they used to have BSD-FFS - even if it was\n> > a 4.2 BSD one - no?\n> \n> Yeah, the standard HPUX filesystem is still BSD ... and it still runs\n> rings around Linux extfs2 in my experience. (I've been informed that\n> Linux has better filesystems than extfs2, but that seems to be what\n> the average Linux user is running.) I have a realtime data collection\n> program that usually wants to write several thousand small files during\n> shutdown. The shutdown typically takes about 3 minutes on an HP 715/75,\n> upwards of 10 minutes on a Linux box with nominally-faster hardware.\n> \n> BTW, HP is trying to sell people on using a new journaling filesystem\n> that they claim outperforms BSD, but my few experiments with it\n> haven't encouraged me to pursue it.\n\nYou should really try the BSD4.4 FFS with soft updates. It re-orders\ndisk flushes to greatly improve performance. It really is great. \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Jun 2000 20:24:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "On Sun, Jun 18, 2000 at 05:12:22PM -0700, Don Baccus wrote:\n> At 08:08 PM 6/18/00 -0400, Bruce Momjian wrote:\n> \n> >> Does your proposal break the smgr abstraction, i.e. does it\n> >> preclude later efforts to (say) implement an (optional) \n> >> raw-device storage manager?\n> >\n> >Seeing very few want that done, I don't see it as an issue at this\n> >point.\n> \n> Sorry, I disagree. There's excuse for breaking existing abstractions\n> unless there's a compelling reason to do so.\n> \n> My question should make it clear I was using a raw-device storage\n> manager as an example. There are other possbilities, like a \n> many-tables-per-file storage manager.\n> \n\nDon, I see Bruce's proposal as implementation details within the sotrage\nmanager. In fact, we should probably implement the tablespace commands\nwith an extention of the smgr api. One different smgr I've been thinking\na little about is the persistent RAM smgr: I've heard there's some\nnew technologies coming up that may make large amounts cheaper, soon.\nAnd there's always PostgreSQL for PalmOS, right? (Hey, IBM's got a Pocket\nDB2, why shouldn't we?)\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n",
"msg_date": "Sun, 18 Jun 2000 19:47:04 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> At 08:08 PM 6/18/00 -0400, Bruce Momjian wrote:\n> \n> >> Does your proposal break the smgr abstraction, i.e. does it\n> >> preclude later efforts to (say) implement an (optional) \n> >> raw-device storage manager?\n> >\n> >Seeing very few want that done, I don't see it as an issue at this\n> >point.\n> \n> Sorry, I disagree. There's excuse for breaking existing abstractions\n> unless there's a compelling reason to do so.\n> \n> My question should make it clear I was using a raw-device storage\n> manager as an example. There are other possbilities, like a \n> many-tables-per-file storage manager.\n\nI agree it is nice to keep things as abstract as possible. I just don't\nknow if the abstraction will cause added complexity.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Jun 2000 20:54:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "My basic proposal is that we optionally allow symlinks when creating\ntablespace directories, and that we interrogate those symlinks during a\ndump so administrators can move tablespaces around without having to\nmodify environment variables or system tables.\n\nI also suggested creating an extent directory to hold extents, like\nextent/2 and extent/3. This will allow administration for smaller sites\nto be simpler.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Jun 2000 23:13:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "At 11:13 PM 6/18/00 -0400, Bruce Momjian wrote:\n>My basic proposal is that we optionally allow symlinks when creating\n>tablespace directories, and that we interrogate those symlinks during a\n>dump so administrators can move tablespaces around without having to\n>modify environment variables or system tables.\n\nIf they can move them around from within the db, they'll have no need to\nmove them around from outside the db. \n\nI don't quite understand your devotion to using filesystem commands\noutside the database to do database administration.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 18 Jun 2000 21:07:48 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I also suggested creating an extent directory to hold extents, like\n> extent/2 and extent/3. This will allow administration for smaller sites\n> to be simpler.\n\nI don't see the value in creating an extra level of directory --- seems\nthat just adds one more Unix directory-lookup cycle to each file open,\nwithout any apparent return. What's wrong with extent directory names\nlike extent2, extent3, etc?\n\nObviously the extent dirnames must be chosen so they can't conflict\nwith table filenames, but that's easily done. For example, if table\nfiles are named like 'OID_xxx' then 'extentN' will never conflict.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Jun 2000 00:25:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> If they can move them around from within the db, they'll have no need to\n> move them around from outside the db. \n> I don't quite understand your devotion to using filesystem commands\n> outside the database to do database administration.\n\nBeing *able* to use filesystem commands to see/fix what's going on is a\ngood thing, particularly from a development/debugging standpoint. But\nI agree we want to have within-the-system admin commands to do the same\nthings.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Jun 2000 00:28:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "At 12:28 AM 6/19/00 -0400, Tom Lane wrote:\n\n>Being *able* to use filesystem commands to see/fix what's going on is a\n>good thing, particularly from a development/debugging standpoint. \n\nOf course it's a crutch for development, but outside of development\ncircles few users will know how to use the OS in regard to the\ndatabase.\n\nAssuming PG takes off. Of course, if it remains the realm of the\ndedicated hard-core hacker, I'm wrong. \n\nI have nothing against preserving the ability to use filesystem\ncommands if there's no significant costs inherent with this approach.\nI'd view the breaking of smgr abstraction as a significant cost (though\nI agree with Ross that it Bruce's proposal shouldn't require that, I\nasked my question to flush Bruce out, if you will, because he's \ndevoted to a particular outside-the-db management model).\n\n> But\n>I agree we want to have within-the-system admin commands to do the same\n>things.\n\nMUST have, I should think.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 18 Jun 2000 21:33:19 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> Don Baccus <[email protected]> writes:\n> > If they can move them around from within the db, they'll have no need to\n> > move them around from outside the db. \n> > I don't quite understand your devotion to using filesystem commands\n> > outside the database to do database administration.\n> \n> Being *able* to use filesystem commands to see/fix what's going on is a\n> good thing, particularly from a development/debugging standpoint. But\n> I agree we want to have within-the-system admin commands to do the same\n> things.\n\nYes, I like to have db commands to do it. I just like to allow things\noutside too, if possible. It also prevents things from getting out of\nsync because the database doesn't need to store the symlink location.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Jun 2000 00:53:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> I'd view the breaking of smgr abstraction as a significant cost\n\nActually, the \"smgr abstraction\" has *been* broken for a long time,\ndue to sloppy implementation of features like relation rename.\nBut I agree we should try to re-establish a clean separation.\n\n>> But\n>> I agree we want to have within-the-system admin commands to do the same\n>> things.\n\n> MUST have, I should think.\n\nNo argument from this quarter. It seems to me that once a PG\ninstallation has been set up, it ought to be possible to do routine\nadmin tasks remotely --- and that means no direct access to the\nserver's filesystem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Jun 2000 01:49:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I also suggested creating an extent directory to hold extents, like\n> > extent/2 and extent/3. This will allow administration for smaller sites\n> > to be simpler.\n> \n> I don't see the value in creating an extra level of directory --- seems\n> that just adds one more Unix directory-lookup cycle to each file open,\n> without any apparent return. What's wrong with extent directory names\n> like extent2, extent3, etc?\n> \n> Obviously the extent dirnames must be chosen so they can't conflict\n> with table filenames, but that's easily done. For example, if table\n> files are named like 'OID_xxx' then 'extentN' will never conflict.\n\nWe could call them extent.2, extent-2, or Extent-2.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Jun 2000 09:28:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> At 12:28 AM 6/19/00 -0400, Tom Lane wrote:\n> \n> >Being *able* to use filesystem commands to see/fix what's going on is a\n> >good thing, particularly from a development/debugging standpoint. \n> \n> Of course it's a crutch for development, but outside of development\n> circles few users will know how to use the OS in regard to the\n> database.\n> \n> Assuming PG takes off. Of course, if it remains the realm of the\n> dedicated hard-core hacker, I'm wrong. \n> \n> I have nothing against preserving the ability to use filesystem\n> commands if there's no significant costs inherent with this approach.\n> I'd view the breaking of smgr abstraction as a significant cost (though\n> I agree with Ross that it Bruce's proposal shouldn't require that, I\n> asked my question to flush Bruce out, if you will, because he's \n> devoted to a particular outside-the-db management model).\n\nThe fact is that symlink information is already stored in the file\nsystem. If we store symlink information in the database too, there\nexists the ability for the two to get out of sync. My point is that I\nthink we can _not_ store symlink information in the database, and query\nthe file system using lstat when required.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Jun 2000 09:30:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
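The lstat() probing Bruce describes could look roughly like this C sketch; the helper name and the data/base path are assumptions for illustration, not actual pg_dump code:

    /*
     * Hypothetical sketch: ask the filesystem where a tablespace
     * directory really lives, instead of cataloguing it in the database.
     */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Returns 1 and fills "target" if "path" is a symlink, else 0. */
    static int
    probe_symlink(const char *path, char *target, size_t targetlen)
    {
        struct stat st;
        ssize_t     len;

        if (lstat(path, &st) < 0 || !S_ISLNK(st.st_mode))
            return 0;               /* missing, or a plain directory */

        len = readlink(path, target, targetlen - 1);
        if (len < 0)
            return 0;
        target[len] = '\0';         /* readlink does not NUL-terminate */
        return 1;
    }

    int
    main(void)
    {
        char target[1024];

        /* "data/base/mydb/tabloc" is an assumed layout, for illustration */
        if (probe_symlink("data/base/mydb/tabloc", target, sizeof(target)))
            printf("tablespace resolves to %s\n", target);
        else
            printf("tablespace is local (no symlink)\n");
        return 0;
    }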
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> \n> The fact is that symlink information is already stored in the file\n> system. If we store symlink information in the database too, there\n> exists the ability for the two to get out of sync. My point is that I\n> think we can _not_ store symlink information in the database, and query\n> the file system using lstat when required.\n>\n\nHmm,this seems pretty confusing to me.\nI don't understand the necessity of symlink.\nDirectory tree,symlink,hard link ... are OS's standard.\nBut I don't think they are fit for dbms management.\n\nPostgreSQL is a database system of course. So\ncouldn't it handle more flexible structure than OS's\ndirectory tree for itself ?\n\nRegards.\n\nHiroshi Inoue\[email protected] \n \n\n",
"msg_date": "Tue, 20 Jun 2000 01:17:14 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items"
},
{
"msg_contents": "> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > \n> > The fact is that symlink information is already stored in the file\n> > system. If we store symlink information in the database too, there\n> > exists the ability for the two to get out of sync. My point is that I\n> > think we can _not_ store symlink information in the database, and query\n> > the file system using lstat when required.\n> >\n> \n> Hmm,this seems pretty confusing to me.\n> I don't understand the necessity of symlink.\n> Directory tree,symlink,hard link ... are OS's standard.\n> But I don't think they are fit for dbms management.\n> \n> PostgreSQL is a database system of course. So\n> couldn't it handle more flexible structure than OS's\n> directory tree for itself ?\n\nYes, but is anyone suggesting a solution that does not work with\nsymlinks? If not, why not do it that way?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Jun 2000 13:35:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n>\n> > > -----Original Message-----\n> > > From: Bruce Momjian [mailto:[email protected]]\n> > >\n> > > The fact is that symlink information is already stored in the file\n> > > system. If we store symlink information in the database too, there\n> > > exists the ability for the two to get out of sync. My point is that I\n> > > think we can _not_ store symlink information in the database,\n> and query\n> > > the file system using lstat when required.\n> > >\n> > Hmm,this seems pretty confusing to me.\n> > I don't understand the necessity of symlink.\n> > Directory tree,symlink,hard link ... are OS's standard.\n> > But I don't think they are fit for dbms management.\n> >\n> > PostgreSQL is a database system of course. So\n> > couldn't it handle more flexible structure than OS's\n> > directory tree for itself ?\n>\n> Yes, but is anyone suggesting a solution that does not work with\n> symlinks? If not, why not do it that way?\n>\n\nMaybe other solutions have been proposed already because\nthere have been so many opinions and proposals.\n\nI've felt TABLE(DATA)SPACE discussion has always been\ndivergent. IMHO,one of the main causes is that various factors\nhave been discussed at once. Shouldn't we make step by step\nconsensus in TABLE(DATA)SPACE discussion ?\n\nIMHO,the first step is to decide the syntax of CREATE TABLE\ncommand not to define TABLE(DATA)SPACE.\n\nComments ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Tue, 20 Jun 2000 14:52:17 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items"
},
{
"msg_contents": "> > Yes, but is anyone suggesting a solution that does not work with\n> > symlinks? If not, why not do it that way?\n> >\n> \n> Maybe other solutions have been proposed already because\n> there have been so many opinions and proposals.\n> \n> I've felt TABLE(DATA)SPACE discussion has always been\n> divergent. IMHO,one of the main causes is that various factors\n> have been discussed at once. Shouldn't we make step by step\n> consensus in TABLE(DATA)SPACE discussion ?\n> \n> IMHO,the first step is to decide the syntax of CREATE TABLE\n> command not to define TABLE(DATA)SPACE.\n> \n> Comments ?\n\nAgreed. Seems we have several issues:\n\n\tfilename contents\n\ttablespace implementation\n\ttablespace directory layout\n\ttablespace commands and syntax\n\nFilename syntax seems to have resolved to\ntablespace/tablename_oid_version or something like that. I think a\nclean solution to keep symlink names in sync with rename is to use hard\nlinks during rename, and during vacuum, if the link count is greater\nthan one, we can scan the directory and remove old files matching the\noid.\n\nI hope we can implement tablespaces using symlinks that can be dumped, but\nthe symlink location does not have to be stored in the database.\n\nSeems we are going to use Extent-2/Extent-3 to store extents under each\ntablespace.\n\nIt also seems we will be using the Oracle tablespace syntax where\nappropriate.\n\nComments?\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Jun 2000 09:40:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
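The hard-link trick for rename that Bruce sketches above might look like this in C, assuming old and new names are on the same filesystem (link() cannot cross devices); the function names are illustrative only:

    #include <sys/stat.h>
    #include <unistd.h>

    /* Rename by adding a hard link; the old name keeps working for any
     * backend that still holds the old name in its relcache. */
    static int
    rename_table_file(const char *oldpath, const char *newpath)
    {
        return link(oldpath, newpath);  /* 0 on success, -1 on error */
    }

    /* At vacuum time: a link count above one means an old name for this
     * file is still lying around and can be found and unlinked. */
    static int
    needs_cleanup(const char *path)
    {
        struct stat st;

        if (stat(path, &st) < 0)
            return 0;
        return st.st_nlink > 1;
    }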
{
"msg_contents": "At 09:40 20/06/00 -0400, Bruce Momjian wrote:\n>\n> [lots of stuff about symlinks]\n>\n\nIt just occurred to me that the symlinks concerns may be short-circuitable,\nif the following are true:\n\n1. most of the desirability is for external 'management' and debugging etc\non 'reasonably' static database designs.\n\n2. metadata changes (specifically renaming tables) occur infrequently.\n\n3. there is no reason why they are desirable *technically* within the\nimplementations being discussed.\n\nIf these are true, then why not create a utility (eg. pg_update_symlinks)\nthat creates the relevant symlinks. It does not matter if they are\noutdated, from an integrity point of view, and for the most part they can\nbe automatically maintained. Internally, postgresql can totally ignore them.\n\nHave I missed something?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 21 Jun 2000 00:20:07 +1000",
"msg_from": "\"Philip J. Warner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> At 09:40 20/06/00 -0400, Bruce Momjian wrote:\n> >\n> > [lots of stuff about symlinks]\n> >\n> \n> It just occurred to me that the symlinks concerns may be short-circuitable,\n> if the following are true:\n> \n> 1. most of the desirability is for external 'management' and debugging etc\n> on 'reasonably' static database designs.\n> \n> 2. metadata changes (specifically renaming tables) occur infrequently.\n> \n> 3. there is no reason why they are desirable *technically* within the\n> implementations being discussed.\n> \n> If these are true, then why not create a utility (eg. pg_update_symlinks)\n> that creates the relevant symlinks. It does not matter if they are\n> outdated, from an integrity point of view, and for the most part they can\n> be automatically maintained. Internally, postgresql can totally ignore them.\n> \n> Have I missed something?\n\nI am a little confused. Are you suggesting that the entire symlink\nthing can be done outside the database? Yes, that is true if we don't\nstore the symlink locations in the database. Of course, the database\nhas to be down to do this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Jun 2000 10:35:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Agreed. Seems we have several issues:\n\n> \tfilename contents\n> \ttablespace implementation\n> \ttablespace directory layout\n> \ttablespace commands and syntax\n\nI think we've agreed that the filename must depend on tablespace,\nfile version, and file segment number in some fashion --- plus\nthe table name/OID of course. Although there's no real consensus\nabout exactly how to construct the name, agreeing on the components\nis still a positive step.\n\nA couple of other areas of contention were:\n\n\trevising smgr interface to be cleaner\n\texactly what to store in pg_class\n\nI don't think there's any quibble about the idea of cleaning up smgr,\nbut we don't have a complete proposal on the table yet either.\n\nAs for the pg_class issue, I still favor storing\n\t(a) OID of tablespace --- not for file access, but so that\n\t associated tablespace-table entry can be looked up\n\t by tablespace management operations\n\t(b) pathname of file as a column of type \"name\", including\n a %d to be replaced by segment #\n\nI think Peter was holding out for storing purely numeric tablespace OID\nand table version in pg_class and having a hardwired mapping to pathname\nsomewhere in smgr. However, I think that doing it that way gains only\nmicro-efficiency compared to passing a \"name\" around, while using the\nname approach buys us flexibility that's needed for at least some of\nthe variants under discussion. Given that the exact filename contents\nare still so contentious, I think it'd be a bad idea to pick an\nimplementation that doesn't allow some leeway as to what the filename\nwill be. A name also has the advantage that it is a single item that\ncan be used to identify the table to smgr, which will help in cleaning\nup the smgr interface.\n\nAs for tablespace layout/implementation, the only real proposal I've\nheard is that there be a subdirectory of the database directory for each\ntablespace, and that that have a subdirectory for each segment (extent)\nof its tables --- where any of these subdirectories could be symlinks\noff to a different filesystem. Some unhappiness was raised about\ndepending on symlinks for this function, but I didn't hear one single\nconcrete reason not to do it, nor an alternative design. Unless someone\ncomes up with a counterproposal, I think that that's what the actual\naccess mechanism will look like. We still need to talk about what we\nwant to store in the SQL-level representation of a tablespace, and what\nsort of tablespace management tools/commands are needed. (Although\n\"try to make it look like Oracle\" seems to be pretty much the consensus\nfor the command level, not all of us know exactly what that means...)\n\nComments? Anything else that we do have consensus on?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Jun 2000 10:36:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
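A sketch of expanding the %d-bearing "name" column Tom proposes into a concrete segment path; the pattern text shown is an assumption, only the %d convention comes from the proposal:

    #include <stdio.h>

    /* "pattern" would come from pg_class (a "name" column) and is
     * trusted catalog data containing exactly one %d for the segment. */
    static void
    segment_path(char *buf, size_t buflen, const char *pattern, int segno)
    {
        snprintf(buf, buflen, pattern, segno);
    }

    /* segment_path(buf, sizeof(buf), "tabloc/mytab_12345_%d", 2)
     * would yield "tabloc/mytab_12345_2". */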
{
"msg_contents": "\"Philip J. Warner\" <[email protected]> writes:\n> If these are true, then why not create a utility (eg. pg_update_symlinks)\n> that creates the relevant symlinks. It does not matter if they are\n> outdated, from an integrity point of view, and for the most part they can\n> be automatically maintained. Internally, postgresql can totally ignore them.\n\nWhat?\n\nI think you are confusing a couple of different things. IIRC, at one\ntime when we were just thinking about ALTER TABLE RENAME, there was\na suggestion that the \"real\" table files be named by table OID, and\nthat there be symlinks to those files named by logical table name as\na crutch (:-)) for admins who wanted to know which table file was which.\nThat could be handled as you've sketched above, but I think the whole\nproposal has fallen by the wayside anyway.\n\nThe current discussion of symlinks is focusing on using directory\nsymlinks, not file symlinks, to represent/implement tablespace layout.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Jun 2000 10:45:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "At 10:35 20/06/00 -0400, Bruce Momjian wrote:\n>> \n>> If these are true, then why not create a utility (eg. pg_update_symlinks)\n>> that creates the relevant symlinks. It does not matter if they are\n>> outdated, from an integrity point of view, and for the most part they can\n>> be automatically maintained. Internally, postgresql can totally ignore\nthem.\n>>\n>I am a little confused. Are you suggesting that the entire symlink\n>thing can be done outside the database? Yes, that is true if we don't\n>store the symlink locations in the database. Of course, the database\n>has to be down to do this.\n\nThe idea was to have postgresql, internally, totally ignore symlinks - use\nOID or whatever is technically best for file names. Then create a\nutility/command to make human-centric symlinks in a known location. The\nsymlinks *could* be updated automatically by postgres, if possible, but\nwould never be used internally. Things like vacuum could report out of date\nsymlinks, and maybe fix them (but probably not).\n\nIt may sound crude, but the only reason for the symlinks is for humans to\n'see what is going on', and in most cases they wont be very volatile.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 21 Jun 2000 00:49:59 +1000",
"msg_from": "\"Philip J. Warner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "At 10:45 20/06/00 -0400, Tom Lane wrote:\n>\n>What?\n>\n...\n>\n>The current discussion of symlinks is focusing on using directory\n>symlinks, not file symlinks, to represent/implement tablespace layout.\n>\n\nOoops. I'll pull my head in again.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 21 Jun 2000 00:53:54 +1000",
"msg_from": "\"Philip J. Warner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> If we have a new CREATE DATABASE LOCATION command, we can say:\n> \n> \tCREATE DATABASE LOCATION dbloc IN '/var/private/pgsql';\n> \tCREATE DATABASE newdb IN dbloc;\n\nWe kind of have this already, with CREATE DATABASE foo WITH LOCATION =\n'bar'; but of course with environment variable kludgery. But it's a start.\n\n> \tmkdir /var/private/pgsql/dbloc\n> \tln -s /var/private/pgsql/dbloc data/base/dbloc\n\nI think the problem with this was that you'd have to do an extra lookup\ninto, say, pg_location to resolve this. Some people are talking about\nblind writes, this is not really blind.\n\n> \tCREATE LOCATION tabloc IN '/var/private/pgsql';\n> \tCREATE TABLE newtab ... IN tabloc;\n\nOkay, so we'd have \"table spaces\" and \"database spaces\". Seems like one\n\"space\" ought to be enough. I was thinking that the database \"space\" would\nserve as a default \"space\" for tables created within it but you could\nstill create tables in other \"spaces\" than where the database really is. In\nfact, the database wouldn't show up at all in the file names anymore,\nwhich may or may not be a good thing.\n\nI think Tom suggested something more or less like this:\n\n$PGDATA/base/tablespace/segment/table\n\n(leaving the details of \"table\" aside for now). pg_class would get a\ncolumn storing the table space somehow, say an oid reference to\npg_location. There would have to be a default tablespace that's created by\ninitdb and it's indicated by oid 0. So if you create a simple little table\n\"foo\" it ends up in\n\n$PGDATA/base/0/0/foo\n\nThat is pretty manageable. Now to create a table space you do\n\nCREATE LOCATION \"name\" AT '/some/where';\n\nwhich would make an entry in pg_location and, similar to how you\nsuggested, create a symlink from\n\n$PGDATA/base/newoid -> /some/where\n\nThen when you create a new table at that new location this gets simply\nnoted in pg_class with an oid reference, the rest works completely\ntransparently and no lookup outside of pg_class required. The system would\ncreate the segment 0 subdirectory automatically.\n\nWhen tables get segmented the system would simply create subdirectories 1,\n2, 3, etc. as needed, just as it created the 0 as needed, no extra code.\n\npg_dump doesn't need to use lstat or whatever at all because the locations\nare catalogued. Administrators don't even need to know about the linking\nbusiness, they just make sure the target directory exists.\n\nTwo more items to ponder:\n\n* per-location transaction logs\n\n* pg_upgrade\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 20 Jun 2000 18:43:35 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
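A sketch of the $PGDATA/base/tablespace/segment/table layout above; RELSEG_SIZE and the exact naming are assumptions for illustration:

    #include <stdio.h>

    #define RELSEG_SIZE 131072      /* blocks per segment; an assumed value */

    typedef unsigned int Oid;

    /* Map (tablespace, table, block) onto the directory scheme; the
     * default tablespace is oid 0, so a new table "foo" starts life
     * in base/0/0/foo and grows into base/0/1/foo, base/0/2/foo, ... */
    static void
    table_segment_path(char *buf, size_t buflen,
                       Oid tablespace, const char *tablename,
                       unsigned int blocknum)
    {
        snprintf(buf, buflen, "base/%u/%u/%s",
                 tablespace, blocknum / RELSEG_SIZE, tablename);
    }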
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > If we have a new CREATE DATABASE LOCATION command, we can say:\n> > \n> > \tCREATE DATABASE LOCATION dbloc IN '/var/private/pgsql';\n> > \tCREATE DATABASE newdb IN dbloc;\n> \n> We kind of have this already, with CREATE DATABASE foo WITH LOCATION =\n> 'bar'; but of course with environment variable kludgery. But it's a start.\n\nYes, I didn't like the environment variable stuff. In fact, I would\nlike to not mention the symlink location anywhere in the database, so it\ncan be changed without changing it in the database.\n\n> \n> > \tmkdir /var/private/pgsql/dbloc\n> > \tln -s /var/private/pgsql/dbloc data/base/dbloc\n> \n> I think the problem with this was that you'd have to do an extra lookup\n> into, say, pg_location to resolve this. Some people are talking about\n> blind writes, this is not really blind.\n\nI was thinking of storing the relfilename as dbloc/mytab32332.\n\n\n> \n> > \tCREATE LOCATION tabloc IN '/var/private/pgsql';\n> > \tCREATE TABLE newtab ... IN tabloc;\n> \n> Okay, so we'd have \"table spaces\" and \"database spaces\". Seems like one\n> \"space\" ought to be enough. I was thinking that the database \"space\" would\n> serve as a default \"space\" for tables created within it but you could\n> still create tables in other \"spaces\" than where the database really is. In\n> fact, the database wouldn't show up at all in the file names anymore,\n> which may or may not be a good thing.\n> \n> I think Tom suggested something more or less like this:\n> \n> $PGDATA/base/tablespace/segment/table\n\nSo you mix tables from different databases in the same tablespace? Seems\nbetter to keep them in separate directories for efficiency and clarity.\n\nWe could use tablespace/dbname/table so that a tablespace would have\na directory for each database that uses the tablespace.\n> \n> (leaving the details of \"table\" aside for now). pg_class would get a\n> column storing the table space somehow, say an oid reference to\n> pg_location. There would have to be a default tablespace that's created by\n> initdb and it's indicated by oid 0. So if you create a simple little table\n> \"foo\" it ends up in\n> \n> $PGDATA/base/0/0/foo\n> \n\nSeems better to use the top directory for 0, and have extents in\nsubdirectories like Extent-2, etc. Easier for administrators and new\npeople.\n\nHowever, one problem is that tables created in a database without a\nlocation are put under pgsql directory. You would have to symlink the\nactual database directory. Maybe that is why I had separate database\nlocations. I realize that is bad.\n\n> That is pretty manageable. Now to create a table space you do\n> \n> CREATE LOCATION \"name\" AT '/some/where';\n> \n> which would make an entry in pg_location and, similar to how you\n> suggested, create a symlink from\n> \n> $PGDATA/base/newoid -> /some/where\n> \n> Then when you create a new table at that new location this gets simply\n> noted in pg_class with an oid reference, the rest works completely\n> transparently and no lookup outside of pg_class required. The system would\n> create the segment 0 subdirectory automatically.\n\n> \n> When tables get segmented the system would simply create subdirectories 1,\n> 2, 3, etc. as needed, just as it created the 0 as needed, no extra code.\n> \n> pg_dump doesn't need to use lstat or whatever at all because the locations\n> are catalogued. Administrators don't even need to know about the linking\n> business, they just make sure the target directory exists.\n\nWhat I was suggesting is not to catalog the symlink locations, but to\nuse lstat when dumping, so that admins can move files around using\nsymlinks and not have to update the database.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Jun 2000 13:53:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> Bruce Momjian <[email protected]> writes:\n> > Agreed. Seems we have several issues:\n> \n> > \tfilename contents\n> > \ttablespace implementation\n> > \ttablespace directory layout\n> > \ttablespace commands and syntax\n>\n\n[snip]\n \n> \n> Comments? Anything else that we do have consensus on?\n>\n\nBefore the details of tablespace implementation,\n\n1) How to change(extend) the syntax of CREATE TABLE\n We only add table(data)space name with some\n keyword ? i.e Do we consider tablespace as an\n abstraction ? \n\nTo confirm our mutual understanding.\n\n2) Is tablespace defined per PostgreSQL's database ?\n3) Is default tablespace defined per database/user or \n for all ?\n\nAFAIK in Oracle,2) global, 3) per user. \n\nRegards.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Wed, 21 Jun 2000 05:59:41 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
{
"msg_contents": "> -----Original Message-----\n> From: Peter Eisentraut\n>\n> Bruce Momjian writes:\n>\n> > If we have a new CREATE DATABASE LOCATION command, we can say:\n> >\n> > \tCREATE DATABASE LOCATION dbloc IN '/var/private/pgsql';\n> > \tCREATE DATABASE newdb IN dbloc;\n>\n> We kind of have this already, with CREATE DATABASE foo WITH LOCATION =\n> 'bar'; but of course with environment variable kludgery. But it's a start.\n>\n> > \tmkdir /var/private/pgsql/dbloc\n> > \tln -s /var/private/pgsql/dbloc data/base/dbloc\n>\n> I think the problem with this was that you'd have to do an extra lookup\n> into, say, pg_location to resolve this. Some people are talking about\n> blind writes, this is not really blind.\n>\n> > \tCREATE LOCATION tabloc IN '/var/private/pgsql';\n> > \tCREATE TABLE newtab ... IN tabloc;\n>\n> Okay, so we'd have \"table spaces\" and \"database spaces\". Seems like one\n> \"space\" ought to be enough.\n\nDoes your \"database space\" correspond to current PostgreSQL's database ?\nAnd is it different from SCHEMA ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Wed, 21 Jun 2000 08:54:51 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items"
},
{
"msg_contents": "At 05:59 21/06/00 +0900, Hiroshi Inoue wrote:\n>\n>Before the details of tablespace implementation,\n>\n>1) How to change(extend) the syntax of CREATE TABLE\n> We only add table(data)space name with some\n> keyword ? i.e Do we consider tablespace as an\n> abstraction ? \n>\n\nIt may be worth considering leaving the CREATE TABLE statement alone.\nDec/RDB uses a new statement entirely to define where a table goes. It's\nactually a *very* complex statement, but the key syntax is:\n\nCREATE STORAGE MAP <map-name> FOR <table-name>\n [PLACEMENT VIA INDEX <index-name>]\n STORE [COLUMNS ([col-name,])]\n [IN <area-name>\n | RANDOMLY ACROSS <area-list>]\n;\n\nwhere <area-name> is the name of a Dec/RDB STORAGE AREA, which is basically\na file that contains one or more tables/indices etc. There are options to\nspecify area choice by column value, fullness, how to store BLOBs etc etc.\n\nI realize that this is way too complex for a first pass, but it gives an\nidea of where you *might* want to go, and hence, possibly, a reason for\nstarting out with something like:\n\nCREATE STORAGE MAP <map-name> for <table-name> STORE IN <area-name>;\n\n\nP.S. I really hope this is more cogent than my last message.\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 21 Jun 2000 11:22:10 +1000",
"msg_from": "\"Philip J. Warner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
{
"msg_contents": "Tom Lane wrote:\n> Some unhappiness was raised about\n> depending on symlinks for this function, but I didn't hear one single\n> concrete reason not to do it, nor an alternative design. \n\nAre symlinks portable?\n",
"msg_date": "Wed, 21 Jun 2000 12:27:45 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> > -----Original Message-----\n> > From: Peter Eisentraut\n> >\n> > Bruce Momjian writes:\n> >\n> > > If we have a new CREATE DATABASE LOCATION command, we can say:\n> > >\n> > > \tCREATE DATABASE LOCATION dbloc IN '/var/private/pgsql';\n> > > \tCREATE DATABASE newdb IN dbloc;\n> >\n> > We kind of have this already, with CREATE DATABASE foo WITH LOCATION =\n> > 'bar'; but of course with environment variable kludgery. But it's a start.\n> >\n> > > \tmkdir /var/private/pgsql/dbloc\n> > > \tln -s /var/private/pgsql/dbloc data/base/dbloc\n> >\n> > I think the problem with this was that you'd have to do an extra lookup\n> > into, say, pg_location to resolve this. Some people are talking about\n> > blind writes, this is not really blind.\n> >\n> > > \tCREATE LOCATION tabloc IN '/var/private/pgsql';\n> > > \tCREATE TABLE newtab ... IN tabloc;\n> >\n> > Okay, so we'd have \"table spaces\" and \"database spaces\". Seems like one\n> > \"space\" ought to be enough.\n> \n> Does your \"database space\" correspond to current PostgreSQL's database ?\n> And is it different from SCHEMA ?\n\nOK, seems I have things a little confused. My whole idea of database\nlocations vs. normal locations is flawed. Here is my new proposal.\n\nFirst, I believe there should be locations defined per database, not\nglobal locations.\n\nI recommend \n\n\tCREATE TABLESPACE tabloc USING '/var/private/pgsql';\n\tCREATE TABLE newtab ... IN tabloc;\n\nand this does:\n\n\tmkdir /var/private/pgsql/dbname\n\tmkdir /var/private/pgsql/dbname/tabloc\n \tln -s /var/private/pgsql/dbname/tabloc data/base/tabloc\n\nI recommend making a dbname in each directory, then putting the\nlocation inside there.\n\nThis allows the same directory to be used for tablespaces by several\ndatabases, and allows databases created in locations without making\nspecial per-database locations.\n\nI can give a more specific proposal if people wish.\n\nComments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Jun 2000 23:45:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> Tom Lane wrote:\n> > Some unhappiness was raised about\n> > depending on symlinks for this function, but I didn't hear one single\n> > concrete reason not to do it, nor an alternative design. \n> \n> Are symlinks portable?\n\nSure, and if the system loading it can not create the required symlinks\nbecause the directories don't exist, it can just skip the symlink step.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Jun 2000 23:46:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I recommend making a dbname in each directory, then putting the\n> location inside there.\n\nThis still seems backwards to me. Why is it better than tablespace\ndirectory inside database directory?\n\nOne significant problem with it is that there's no longer (AFAICS)\na \"default\" per-database directory that corresponds to the current\nworking directory of backends running in that database. Thus,\nfor example, it's not immediately clear where temporary files and\nbackend core-dump files will end up. Also, you've just added an\nessential extra level (if not two) to the pathnames that backends will\nuse to address files.\n\nThere is a great deal to be said for\n\t..../database/tablespace/filename\nwhere .../database/ is the working directory of a backend running in\nthat database, so that the relative pathname used by that backend to\nget to a table is just tablespace/filename. I fail to see any advantage\nin reversing the pathname order. If you see one, enlighten me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jun 2000 00:06:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I recommend making a dbname in each directory, then putting the\n> > location inside there.\n> \n> This still seems backwards to me. Why is it better than tablespace\n> directory inside database directory?\n\nYes, that is what I want too.\n\n> \n> One significant problem with it is that there's no longer (AFAICS)\n> a \"default\" per-database directory that corresponds to the current\n> working directory of backends running in that database. Thus,\n> for example, it's not immediately clear where temporary files and\n> backend core-dump files will end up. Also, you've just added an\n> essential extra level (if not two) to the pathnames that backends will\n> use to address files.\n> \n> There is a great deal to be said for\n> \t..../database/tablespace/filename\n> where .../database/ is the working directory of a backend running in\n> that database, so that the relative pathname used by that backend to\n> get to a table is just tablespace/filename. I fail to see any advantage\n> in reversing the pathname order. If you see one, enlighten me.\n\nYes, agreed. I was thinking this:\n\n\tCREATE TABLESPACE loc USING '/var/pgsql'\n\ndoes:\n\n\tln -s /var/pgsql/dbname/loc data/base/dbname/loc \n\nIn this way, the database has a view of its main directory, plus a /loc\nsubdirectory for the tablespace. In the other location, we have\n/var/pgsql/dbname/loc because this allows different databases to use:\n\n\tCREATE TABLESPACE loc USING '/var/pgsql'\n\nand they do not collide with each other in /var/pgsql. It puts /loc\ninside the dbname that created it. It also allows:\n\n\tCREATE DATABASE loc IN '/var/pgsql'\n\nto work because this does:\n\n\tln -s /var/pgsql/dbname data/base/dbname\n\nSeems we should create the dbname and loc directories for the users\nautomatically in the symlink target to keep things clean. It prevents\nthem from accidentally having two databases point to the same directory.\n\nComments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jun 2000 00:33:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Tom Lane wrote:\n> > > Some unhappiness was raised about\n> > > depending on symlinks for this function, but I didn't hear one single\n> > > concrete reason not to do it, nor an alternative design.\n> >\n> > Are symlinks portable?\n> \n> Sure, and if the system loading it can not create the required symlinks\n> because the directories don't exist, it can just skip the symlink step.\n\nWhat I meant is, would you still be able to create tablespaces on\nsystems without symlinks? That would seem to be a desirable feature.\n",
"msg_date": "Wed, 21 Jun 2000 14:45:01 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> Bruce Momjian <[email protected]> writes:\n> > I recommend making a dbname in each directory, then putting the\n> > location inside there.\n> \n> This still seems backwards to me. Why is it better than tablespace\n> directory inside database directory?\n> \n> One significant problem with it is that there's no longer (AFAICS)\n> a \"default\" per-database directory that corresponds to the current\n> working directory of backends running in that database. Thus,\n> for example, it's not immediately clear where temporary files and\n> backend core-dump files will end up. Also, you've just added an\n> essential extra level (if not two) to the pathnames that backends will\n> use to address files.\n> \n> There is a great deal to be said for\n> \t..../database/tablespace/filename\n\nOK,I seem to have gotten the answer for the question\n Is tablespace defined per PostgreSQL's database ?\n\nYou and Bruce\n 1) tablespace is per database\nPeter seems to have the following idea(?? not sure)\n 2) database = tablespace\nMy opinion\n 3) database and tablespace are relatively irrelevant.\n I assume PostgreSQL's database would correspond \n to the concept of SCHEMA.\n\nIt seems we are different from the first.\nShouldn't we reach an agreement on it in the first place ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Wed, 21 Jun 2000 13:55:01 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> What I meant is, would you still be able to create tablespaces on\n> systems without symlinks? That would seem to be a desirable feature.\n\nAll else being equal, it'd be nice. Since all else is not equal,\nexactly how much sweat are we willing to expend on supporting that\nfeature on such systems --- to the exclusion of other features we\nmight expend the same sweat on, with more widely useful results?\n\nBear in mind that everything will still *work* just fine on such a\nplatform, you just don't have a way to spread the database across\nmultiple filesystems. That's only an issue if the platform has a\nfairly Unixy notion of filesystems ... but no symlinks.\n\nA few messages back someone was opining that we were wasting our time\nthinking about tablespaces at all, because any modern platform can\ncreate disk-spanning filesystems for itself, so applications don't have\nto worry. I don't buy that argument in general, but I'm quite willing\nto quote it for the *very* few systems that are Unixy enough to run\nPostgres in the first place, but not quite Unixy enough to have\nsymlinks.\n\nYou gotta draw the line somewhere at what you will support, and\nthis particular line seems to me to be entirely reasonable and\njustifiable. YMMV...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jun 2000 01:09:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "At 11:22 AM 6/21/00 +1000, Philip J. Warner wrote:\n\n>It may be worth considering leaving the CREATE TABLE statement alone.\n>Dec/RDB uses a new statement entirely to define where a table goes...\n\nIt's worth considering, but on the other hand Oracle users greatly\noutnumber Compaq/RDB users these days...\n\nIf there's no SQL92 guidance for implementing a feature, I'm pretty much in\nfavor of tracking Oracle, whose SQL dialect is rapidly becoming a\nde-facto standard. \n\nI'm not saying I like the fact, Oracle's a pain in the ass. But when\nadopting existing syntax, might as well adopt that of the crushing\nborg.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 20 Jun 2000 22:12:48 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
{
"msg_contents": "At 12:27 PM 6/21/00 +1000, Chris Bitmead wrote:\n>Tom Lane wrote:\n>> Some unhappiness was raised about\n>> depending on symlinks for this function, but I didn't hear one single\n>> concrete reason not to do it, nor an alternative design. \n>\n>Are symlinks portable?\n\nIn today's world? Yeah, I think so.\n\nMy only unhappiness has hinged around the possibility that a new\nstorage scheme might tempt folks to toss aside the smgr abstraction,\nor weaken it.\n\nIt doesn't appear that this will happen. \n\nGiven an adequate smgr abstraction, it doesn't really matter what\nlow-level model is adopted in some sense (i.e. other models might\nbecome available, the implemented model might get replaced, etc -\nwithout breaking backends).\n\nObviously we'll all be using the default model for some time, maybe\nforever, but if mistakes are made maintaining the smgr abstraction\nmeans that replacements are possible. Or kinky substitutes like\nworking with DAFS.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 20 Jun 2000 22:16:50 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> Yes, I didn't like the environment variable stuff. In fact, I would\n> like to not mention the symlink location anywhere in the database, so \n> it can be changed without changing it in the database.\n\nWell, as y'all have noticed, I think there are strong reasons to use\nenvironment variables to manage locations, and that symlinks are a\npotential portability and robustness problem.\n\nAn additional point which has relevance to this whole discussion:\n\nIn the future we may allow system resources such as tables to carry names\nwhich use multi-byte encodings. afaik these encodings are not allowed to\nbe used for physical file names, and even if they were the utility of\nusing standard operating system utilities like ls goes way down.\n\nistm that from a portability and evolutionary standpoint OID-only file\nnames (or at least file names *not* based on relation/class names) is a\nrequirement.\n\nComments?\n\n - Thomas\n",
"msg_date": "Wed, 21 Jun 2000 05:19:29 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> There is a great deal to be said for\n>> ..../database/tablespace/filename\n\n> OK,I seem to have gotten the answer for the question\n> Is tablespace defined per PostgreSQL's database ?\n\nNot necessarily --- the tablespace subdirectories could be symlinks\npointing to the same place (assuming you use OIDs or something to keep\nthe table filenames unique even across databases). This is just an\nimplementation mechanism; it doesn't foreclose the policy decision\nwhether tablespaces are database-local or installation-wide.\n\n(OTOH, pathnames like tablespace/database would pretty much force\ntablespaces to be installation-wide whether you wanted it that way\nor not.)\n\n> My opinion\n> 3) database and tablespace are relatively irrelevant.\n> I assume PostgreSQL's database would correspond \n> to the concept of SCHEMA.\n\nMy inclination is that tablespaces should be installation-wide, but\nI'm not completely sold on it. In any case I could see wanting a\npermissions mechanism that would only allow some databases to have\ntables in a particular tablespace.\n\nWe do need to think more about how traditional Postgres databases\nfit together with SCHEMA. Maybe we wouldn't even need multiple\ndatabases per installation if we had SCHEMA done right.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jun 2000 01:23:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "On Wed, Jun 21, 2000 at 01:23:57AM -0400, Tom Lane wrote:\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> \n> > My opinion\n> > 3) database and tablespace are relatively irrelevant.\n> > I assume PostgreSQL's database would correspond \n> > to the concept of SCHEMA.\n> \n> My inclindation is that tablespaces should be installation-wide, but\n> I'm not completely sold on it. In any case I could see wanting a\n> permissions mechanism that would only allow some databases to have\n> tables in a particular tablespace.\n> \n> We do need to think more about how traditional Postgres databases\n> fit together with SCHEMA. Maybe we wouldn't even need multiple\n> databases per installation if we had SCHEMA done right.\n> \n\nThe important point I think is that tablespaces are about physical\nstorage/namespace, and SCHEMA are about logical namespace: it would make\nsense for tables from multiple schema to live in the same tablespace,\nas well as tables from one schema to be stored in multiple tablespaces.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Wed, 21 Jun 2000 00:45:02 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n\n> The important point I think is that tablespaces are about physical\n> storage/namespace, and SCHEMA are about logical namespace: it would make\n> sense for tables from multiple schema to live in the same tablespace,\n> as well as tables from one schema to be stored in multiple tablespaces.\n\nIf we accept that argument (which sounds good) then wouldn't we have...\n\ndata/base/db1/table1 -> ../../../tablespace/ts1/db1.table1\ndata/base/db1/table2 -> ../../../tablespace/ts1/db1.table2\ndata/tablespace/ts1/db1.table1\ndata/tablespace/ts1/db1.table2\n\nIn other words there is a directory for databases, and a directory for\ntablespaces. Database tables are symlinked to the appropriate\ntablespace. So there is multiple databases per tablespace and multiple\ntablespaces per database.\n",
"msg_date": "Wed, 21 Jun 2000 16:13:47 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "At 22:12 20/06/00 -0700, Don Baccus wrote:\n>At 11:22 AM 6/21/00 +1000, Philip J. Warner wrote:\n>\n>>It may be worth considering leaving the CREATE TABLE statement alone.\n>>Dec/RDB uses a new statement entirely to define where a table goes...\n>\n>It's worth considering, but on the other hand Oracle users greatly\n>outnumber Compaq/RDB users these days...\n\nIt's actually Oracle/Rdb, but I call it Dec/Rdb to distinguish it from\n'Oracle/Oracle'. It was acquired by Oracle, supposedly because Oracle\nwanted their optimizer, management and tuning tools (although that was only\nhearsay). They *say* that they plan to merge the two products.\n\nWhat I was trying to suggest was that the CREATE TABLE statement will get\nvery overloaded, and it might be worth avoiding having to support two\nstorage management syntaxes if/when it becomes desirable to create a\n'storage' statement of some kind.\n\n\n>\n>I'm not saying I like the fact, Oracle's a pain in the ass. But when\n>adopting existing syntax, might as well adopt that of the crushing\n>borg.\n>\n\nOnly if it is a good thing, or part of a real standard. Philosophically,\nwhere possible I would prefer to see statements that are *in* the SQL\nstandard (ie. CREATE TABLE) to be left as unencumbered as possible.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 21 Jun 2000 16:55:58 +1000",
"msg_from": "\"Philip J. Warner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Chris Bitmead\n>\n> \"Ross J. Reedstrom\" wrote:\n>\n> > The important point I think is that tablespaces are about physical\n> > storage/namespace, and SCHEMA are about logical namespace: it would make\n> > sense for tables from multiple schema to live in the same tablespace,\n> > as well as tables from one schema to be stored in multiple tablespaces.\n>\n> If we accept that argument (which sounds good) then wouldn't we have...\n>\n> data/base/db1/table1 -> ../../../tablespace/ts1/db1.table1\n> data/base/db1/table2 -> ../../../tablespace/ts1/db1.table2\n> data/tablespace/ts1/db1.table1\n> data/tablespace/ts1/db1.table2\n>\n\nHmm,is above symlinking business really preferable just because\nit is possible ? Why do we have to be dependent upon directory\ntree representation when we handle db structure ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Wed, 21 Jun 2000 18:37:02 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items"
},
{
"msg_contents": "> > Sure, and if the system loading it can not create the required symlinks\n> > because the directories don't exist, it can just skip the symlink step.\n> \n> What I meant is, would you still be able to create tablespaces on\n> systems without symlinks? That would seem to be a desirable feature.\n\nYou could create tablespaces, but you could not point them at different\ndrives. The issue is that we don't store the symlink location in the\ndatabase, just the tablespace name.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jun 2000 10:55:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yes, agreed. I was thinking this:\n> \tCREATE TABLESPACE loc USING '/var/pgsql'\n> does:\n> \tln -s /var/pgsql/dbname/loc data/base/dbname/loc \n> In this way, the database has a view of its main directory, plus a /loc\n> subdirectory for the tablespace. In the other location, we have\n> /var/pgsql/dbname/loc because this allows different databases to use:\n> \tCREATE TABLESPACE loc USING '/var/pgsql'\n> and they do not collide with each other in /var/pgsql.\n\nBut they don't collide anyway, because the dbname is already unique.\nIsn't the extra subdirectory a waste?\n\nBecause table files will have installation-wide unique names, there's\nno really good reason to have either level of subdirectory; you could\njust make\n\tCREATE TABLESPACE loc USING '/var/pgsql'\ndo\n\tln -s /var/pgsql data/base/dbname/loc \nand it'd still work even if multiple DBs were using the same tablespace.\n\nHowever, forcing creation of a subdirectory does give you the chance to\nmake sure the subdir is owned by postgres and has the right permissions,\nso there's something to be said for that. It might be reasonable to do\n\tmkdir /var/pgsql/dbname\n\tchmod 700 /var/pgsql/dbname\n\tln -s /var/pgsql/dbname data/base/dbname/loc \n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jun 2000 11:07:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
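That mkdir/chmod/symlink sequence, as a minimal C sketch; the example paths are assumptions:

    #include <errno.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* mkdir with mode 0700 covers the mkdir + chmod pair, and the
     * per-database subdirectory keeps the target owned by postgres. */
    static int
    create_tablespace_link(const char *target,   /* /var/pgsql/dbname */
                           const char *linkname) /* data/base/dbname/loc */
    {
        if (mkdir(target, S_IRWXU) < 0 && errno != EEXIST)
            return -1;
        if (symlink(target, linkname) < 0)
            return -1;
        return 0;
    }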
{
"msg_contents": "> At 12:27 PM 6/21/00 +1000, Chris Bitmead wrote:\n> >Tom Lane wrote:\n> >> Some unhappiness was raised about\n> >> depending on symlinks for this function, but I didn't hear one single\n> >> concrete reason not to do it, nor an alternative design. \n> >\n> >Are symlinks portable?\n> \n> In today's world? Yeah, I think so.\n> \n> My only unhappiness has hinged around the possibility that a new\n> storage scheme might temp folks to toss aside the sgmr abstraction,\n> or weaken it.\n> \n> It doesn't appear that this will happen. \n> \n> Given an adequate sgmr abstraction, it doesn't really matter what\n> low-level model is adopted in some sense (i.e. other models might\n> become available, the implemented model might get replaced, etc -\n> without breaking backends).\n> \n> Obviously we'll all be using the default model for some time, maybe\n> forever, but if mistakes are made maintaining the smgr abstraction\n> means that replacements are possible. Or kinky substitutes like\n> working with DAFS.\n\nThe symlink solution where the actual symlink location is not stored\nin the database is certainly abstract. We store that info in the file\nsystem, which is where it belongs. We only query the symlink location\nwhen we need it for database location dumping.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jun 2000 11:08:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> > Yes, I didn't like the environment variable stuff. In fact, I would\n> > like to not mention the symlink location anywhere in the database, so \n> > it can be changed without changing it in the database.\n> \n> Well, as y'all have noticed, I think there are strong reasons to use\n> environment variables to manage locations, and that symlinks are a\n> potential portability and robustness problem.\n\nSorry, disagree. Environment variables are a pain to administer, and\nquite counter-intuitive.\n\nI also don't see any portability or robustness problems. Can you be\nmore specific?\n\n> An additional point which has relevance to this whole discussion:\n> \n> In the future we may allow system resources such as tables to carry names\n> which use multi-byte encodings. afaik these encodings are not allowed to\n> be used for physical file names, and even if they were the utility of\n> using standard operating system utilities like ls goes way down.\n\nThat is really a different issue of file names. File names for multi-byte\ntable names can be made to hold just the oid. We have complete control\nover that because the file name will be in pg_class.\n\n> istm that from a portability and evolutionary standpoint OID-only file\n> names (or at least file names *not* based on relation/class names) is a\n> requirement.\n\nMaybe a requirement at some point for some installations, but I hope not\na general requirement.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jun 2000 11:11:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> There is a great deal to be said for\n> >> ..../database/tablespace/filename\n> \n> > OK,I seem to have gotten the answer for the question\n> > Is tablespace defined per PostgreSQL's database ?\n> \n> Not necessarily --- the tablespace subdirectories could be symlinks\n> pointing to the same place (assuming you use OIDs or something to keep\n> the table filenames unique even across databases). This is just an\n> implementation mechanism; it doesn't foreclose the policy decision\n> whether tablespaces are database-local or installation-wide.\n\nSeems we are better off just auto-creating a directory that matches the\ndbname.\n\n> \n> (OTOH, pathnames like tablespace/database would pretty much force\n> tablespaces to be installation-wide whether you wanted it that way\n> or not.)\n\n\n> \n> > My opinion\n> > 3) database and tablespace are relatively irrelevant.\n> > I assume PostgreSQL's database would correspond \n> > to the concept of SCHEMA.\n> \n> My inclination is that tablespaces should be installation-wide, but\n> I'm not completely sold on it. In any case I could see wanting a\n> permissions mechanism that would only allow some databases to have\n> tables in a particular tablespace.\n\nOne idea is to allow tablespaces defined in template1 to be propagated to\nnewly created directories, with the symlinks adjusted so they use the\nproper dbname in the symlink.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jun 2000 11:19:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> The important point I think is that tablespaces are about physical\n> storage/namespace, and SCHEMA are about logical namespace: it would make\n> sense for tables from multiple schema to live in the same tablespace,\n> as well as tables from one schema to be stored in multiple tablespaces.\n> \n\nIt seems mixing the physical layout and the logical SCHEMA would have\nproblems because people have different reasons for using each feature.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jun 2000 11:21:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> At 22:12 20/06/00 -0700, Don Baccus wrote:\n> >At 11:22 AM 6/21/00 +1000, Philip J. Warner wrote:\n> >\n> >>It may be worth considering leaving the CREATE TABLE statement alone.\n> >>Dec/RDB uses a new statement entirely to define where a table goes...\n> >\n> >It's worth considering, but on the other hand Oracle users greatly\n> >outnumber Compaq/RDB users these days...\n> \n> It's actually Oracle/Rdb, but I call it Dec/Rdb to distinguish it from\n> 'Oracle/Oracle'. It was acquired by Oracle, supposedly because Oracle\n> wanted their optimizer, management and tuning tools (although that was only\n> hearsay). They *say* that they plan to merge the two products.\n> \n> What I was trying to suggest was that the CREATE TABLE statement will get\n> very overloaded, and it might be worth avoiding having to support two\n> storage management syntaxes if/when it becomes desirable to create a\n> 'storage' statement of some kind.\n> \n\nSeems adding tablespace to CREATE TABLE/INDEX/DATABASE is pretty simple.\nDoing it as a separate command seems cumbersome.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jun 2000 11:23:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> Sorry, disagree. Environment variables are a pain to administer, and\n> quite counter-intuitive.\n\nWell, I guess we disagree. But until we have a complete proposed\nsolution, we should leave environment variables on the table, since they\n*do* allow some decoupling of logical and physical storage, and *do*\ngive the administrator some control over resources *that the admin would\nnot otherwise have*.\n\n> > istm that from a portability and evolutionary standpoint OID-only \n> > file names (or at least file names *not* based on relation/class \n> > names) is a requirement.\n> Maybe a requirement at some point for some installations, but I hope \n> not a general requirement.\n\nIf a table name can have characters which are not legal for file names,\nthen how would you propose to support it? If we are doing a\nrestructuring of the storage scheme, this should be taken into account.\n\nlockhart=# create table \"one/two\" (i int);\nERROR: cannot create one/two\n\nWhy not? It demonstrates an unfortunate linkage between file systems and\ndatabase resources.\n\n - Thomas\n",
"msg_date": "Wed, 21 Jun 2000 15:27:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Well, as y'all have noticed, I think there are strong reasons to use\n> environment variables to manage locations, and that symlinks are a\n> potential portability and robustness problem.\n\nReasons? Evidence?\n\n> An additional point which has relevance to this whole discussion:\n> In the future we may allow system resource such as tables to carry names\n> which use multi-byte encodings. afaik these encodings are not allowed to\n> be used for physical file names, and even if they were the utility of\n> using standard operating system utilities like ls goes way down.\n\nGood point, although in one sense a string is a string --- as long as\nwe don't allow embedded nulls in server-side encodings, we could use\nanything that Postgres thought was a name in a filename, and the OS\nshould take it. But if your local ls doesn't show it the way you see\nin Postgres, the usefulness of having the tablename in the filename\ngoes way down.\n\n> istm that from a portability and evolutionary standpoint OID-only file\n> names (or at least file names *not* based on relation/class names) is a\n> requirement.\n\nNo argument from me ;-). I've been looking for compromise positions\nbut I still think that pure numeric filenames are the cleanest solution.\n\nThere's something else that should be taken into account: for WAL, the\nlog will need to record the table file that each insert/delete/update\noperation affects. To do that with the smgr-token-is-a-pathname\napproach I was suggesting yesterday, I think you have to record the\ndatabase name and pathname in each WAL log entry. That's 64 bytes/log\nentry which is a *lot*. If we bit the bullet and restricted ourselves\nto numeric filenames then the log would need just four numeric values:\n\tdatabase OID\n\ttablespace OID\n\trelation OID\n\trelation version number\n(this set of 4 values would also be an smgr file reference token).\n16 bytes/log entry looks much better than 64.\n\nAt the moment I can recall the following opinions:\n\nPure OID filenames: Thomas, Tom, Marc, Peter E.\n\nOID+relname filenames: Bruce\n\nVadim was in the pure-OID camp a few months ago, but I won't presume\nto list him there now since he hasn't been involved in this most\nrecent round of discussions. I'm not sure where anyone else stands...\nbut at least in terms of the core group it's pretty clear where the\nmajority opinion is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jun 2000 11:28:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Yes, agreed. I was thinking this:\n> > \tCREATE TABLESPACE loc USING '/var/pgsql'\n> > does:\n> > \tln -s /var/pgsql/dbname/loc data/base/dbname/loc \n> > In this way, the database has a view of its main directory, plus a /loc\n> > subdirectory for the tablespace. In the other location, we have\n> > /var/pgsql/dbname/loc because this allows different databases to use:\n> > \tCREATE TABLESPACE loc USING '/var/pgsql'\n> > and they do not collide with each other in /var/pgsql.\n> \n> But they don't collide anyway, because the dbname is already unique.\n> Isn't the extra subdirectory a waste?\n\nNot really. Yes, we could put them all in the same directory, but why\nbother. Probably easier to put them in unique directories per database.\nCuts down on directory searches to open file, and allows 'du' to return\nmeaningful numbers per database. If you don't do that, you can't really\ntell what files belong to which databases.\n\n> \n> Because table files will have installation-wide unique names, there's\n> no really good reason to have either level of subdirectory; you could\n> just make\n> \tCREATE TABLESPACE loc USING '/var/pgsql'\n> do\n> \tln -s /var/pgsql data/base/dbname/loc \n> and it'd still work even if multiple DBs were using the same tablespace.\n> \n> However, forcing creation of a subdirectory does give you the chance to\n> make sure the subdir is owned by postgres and has the right permissions,\n> so there's something to be said for that. It might be reasonable to do\n> \tmkdir /var/pgsql/dbname\n> \tchmod 700 /var/pgsql/dbname\n> \tln -s /var/pgsql/dbname data/base/dbname/loc \n\nYes, that is true. My idea is that they may want to create loc1 and\nloc2 which initially point to the same location, but later may be moved.\nFor example, one tablespace for tables, another for indexes. They may\ninitially point to the same directory, but later be split. Seems we\nneed to keep the actual tablespace information relivant by using\ndifferent directories on the other end too.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jun 2000 11:45:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Tom Lane wrote:\n \n> Thomas Lockhart <[email protected]> writes:\n> > Well, as y'all have noticed, I think there are strong reasons to use\n> > environment variables to manage locations, and that symlinks are a\n> > potential portability and robustness problem.\n \n> Reasons? Evidence?\n\nDoes Win32 do symlinks these days? I know Win32 does envvars, and Win32\nis currently a supported platform.\n\nI'm not thrilled with either solution -- envvars have their problems\njust as surely as symlinks do.\n \n> At the moment I can recall the following opinions:\n \n> Pure OID filenames: Thomas, Tom, Marc, Peter E.\n\nFWIW, count me here. I have tried administering my system using the\nfilenames -- and have been bitten. Better admin tools in the PostgreSQL\npackage beat using standard filesystem tools -- the PostgreSQL tools can\nbe WAL-aware, transaction-aware, and can provide consistent results. \nFilesystem tools never will be able to provide consistent results for a\ndatabase system that must remain up 24x7, as many if not most PostgreSQL\ninstallations must.\n \n> OID+relname filenames: Bruce\n\nSorry Bruce -- I understand and am sympathetic to your position, and, at\none time, I agreed with it. But not any more.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 21 Jun 2000 11:48:19 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> FWIW, count me here. I have tried administering my system using the\n> filenames -- and have been bitten. Better admin tools in the PostgreSQL\n> package beat using standard filesystem tools -- the PostgreSQL tools can\n> be WAL-aware, transaction-aware, and can provide consistent results. \n> Filesystem tools never will be able to provide consistent results for a\n> database system that must remain up 24x7, as many if not most PostgreSQL\n> installations must.\n> \n> > OID+relname filenames: Bruce\n> \n> Sorry Bruce -- I understand and am sympathetic to your position, and, at\n> one time, I agreed with it. But not any more.\n\nI thought the most recent proposal was to just throw ~16 chars of the\nfile name on the end of the file name, and that should not be used for\nanything except visibility. WAL would not need to store that. It could\njust grab the file name that matches the oid/sequence number.\n\nIf people don't want table names in the file name, I totally understand,\nand we can move on without them. I have made the best case I can for\ntheir inclusion, but if people are not convinced, then maybe I was\nwrong.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jun 2000 12:03:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yes, that is true. My idea is that they may want to create loc1 and\n> loc2 which initially point to the same location, but later may be moved.\n> For example, one tablespace for tables, another for indexes. They may\n> initially point to the same directory, but later be split.\n\nWell, that opens up a completely different issue, which is what about\nmoving tables from one tablespace to another?\n\nI think the way you appear to be implying above (shut down the server\nso that you can rearrange subdirectories by hand) is the wrong way to\ngo about it. For one thing, lots of people don't want to shut down\ntheir servers completely for that long, but it's difficult to avoid\ndoing so if you want to move files by filesystem commands. For another\nthing, the above approach requires guessing in advance --- maybe long\nin advance --- how you are going to want to repartition your database\nwhen it gets too big for your existing storage.\n\nThe right way to address this problem is to invent a \"move table to\nnew tablespace\" command. This'd be pretty trivial to implement based\non a file-versioning approach: the new version of the pg_class tuple\nhas a new tablespace identifier in it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jun 2000 12:10:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Yes, that is true. My idea is that they may want to create loc1 and\n> > loc2 which initially point to the same location, but later may be moved.\n> > For example, one tablespace for tables, another for indexes. They may\n> > initially point to the same directory, but later be split.\n> \n> Well, that opens up a completely different issue, which is what about\n> moving tables from one tablespace to another?\n\nAre you suggesting that doing dbname/locname is somehow harder to do\nthat? If you are, I don't understand why.\n\nThe general issue of moving tables between tablespaces can be done from\nin the database. I don't think it is reasonable to shut down the db to\ndo that. However, I can see moving tablespaces to different symlinked\nlocations may require a shutdown.\n\n> \n> I think the way you appear to be implying above (shut down the server\n> so that you can rearrange subdirectories by hand) is the wrong way to\n> go about it. For one thing, lots of people don't want to shut down\n> their servers completely for that long, but it's difficult to avoid\n> doing so if you want to move files by filesystem commands. For another\n> thing, the above approach requires guessing in advance --- maybe long\n> in advance --- how you are going to want to repartition your database\n> when it gets too big for your existing storage.\n> \n> The right way to address this problem is to invent a \"move table to\n> new tablespace\" command. This'd be pretty trivial to implement based\n> on a file-versioning approach: the new version of the pg_class tuple\n> has a new tablespace identifier in it.\n\nAgreed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jun 2000 12:14:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Sorry Bruce -- I understand and am sympathetic to your position, and, at\n>> one time, I agreed with it. But not any more.\n\n> I thought the most recent proposal was to just throw ~16 chars of the\n> file name on the end of the file name, and that should not be used for\n> anything except visibility. WAL would not need to store that. It could\n> just grab the file name that matches the oid/sequence number.\n\nBut that's extra complexity in WAL, plus extra complexity in renaming\ntables (if you want the filename to track the logical table name, which\nI expect you would), plus extra complexity in smgr and bufmgr and other\nplaces.\n\nI think people are coming around to the notion that it's better to keep\nthese low-level operations simple, even if we need to expend more work\non high-level admin tools as a result.\n\nBut we do need to remember to expend that effort on tools! Let's not\ndrop the ball on that, folks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jun 2000 12:17:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Well, that opens up a completely different issue, which is what about\n>> moving tables from one tablespace to another?\n\n> Are you suggesting that doing dbname/locname is somehow harder to do\n> that? If you are, I don't understand why.\n\nIt doesn't make it harder, but it still seems pointless to have the\nextra directory level. Bear in mind that if we go with all-OID\nfilenames then you're not going to be looking at \"loc1\" and \"loc2\"\nanyway, but at \"5938171\" and \"8583727\". It's not much of a convenience\nto the admin to see that, so we might as well save a level of directory\nlookup.\n\n> The general issue of moving tables between tablespaces can be done from\n> in the database. I don't think it is reasonable to shut down the db to\n> do that. However, I can see moving tablespaces to different symlinked\n> locations may require a shutdown.\n\nOnly if you insist on doing it outside the database using filesystem\ntools. Another way is to create a new tablespace in the desired new\nlocation, then move the tables one-by-one to that new tablespace.\n\nI suppose either one might be preferable depending on your access\npatterns --- locking your most critical tables while they're being moved\nmight be as bad as a total shutdown.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jun 2000 12:24:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Well, that opens up a completely different issue, which is what about\n> >> moving tables from one tablespace to another?\n> \n> > Are you suggesting that doing dbname/locname is somehow harder to do\n> > that? If you are, I don't understand why.\n> \n> It doesn't make it harder, but it still seems pointless to have the\n> extra directory level. Bear in mind that if we go with all-OID\n> filenames then you're not going to be looking at \"loc1\" and \"loc2\"\n> anyway, but at \"5938171\" and \"8583727\". It's not much of a convenience\n> to the admin to see that, so we might as well save a level of directory\n> lookup.\n\nJust seems easier to have stuff segregates into separate per-db\ndirectories for clarity. Also, as directories get bigger, finding a\nspecific file in there becomes harder. Putting 10 databases all in the\nsame directory seems bad in this regard.\n\n> \n> > The general issue of moving tables between tablespaces can be done from\n> > in the database. I don't think it is reasonable to shut down the db to\n> > do that. However, I can see moving tablespaces to different symlinked\n> > locations may require a shutdown.\n> \n> Only if you insist on doing it outside the database using filesystem\n> tools. Another way is to create a new tablespace in the desired new\n> location, then move the tables one-by-one to that new tablespace.\n> \n> I suppose either one might be preferable depending on your access\n> patterns --- locking your most critical tables while they're being moved\n> might be as bad as a total shutdown.\n\nSeems we are better having the directory be a symlink so we don't have\nsymlink overhead for every file open. Also, symlinks when removed just\nremove symlink and not the file. I don't think we want to be using\nsymlinks for tables if we can avoid it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jun 2000 12:40:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> Are you suggesting that doing dbname/locname is somehow harder to do\n>>>> that? If you are, I don't understand why.\n>> \n>> It doesn't make it harder, but it still seems pointless to have the\n>> extra directory level. Bear in mind that if we go with all-OID\n>> filenames then you're not going to be looking at \"loc1\" and \"loc2\"\n>> anyway, but at \"5938171\" and \"8583727\". It's not much of a convenience\n>> to the admin to see that, so we might as well save a level of directory\n>> lookup.\n\n> Just seems easier to have stuff segregates into separate per-db\n> directories for clarity. Also, as directories get bigger, finding a\n> specific file in there becomes harder. Putting 10 databases all in the\n> same directory seems bad in this regard.\n\nHuh? I wasn't arguing against making a db-specific directory below the\ntablespace point. I was arguing against making *another* directory\nbelow that one.\n\n> I don't think we want to be using\n> symlinks for tables if we can avoid it.\n\nAgreed, but where did that come from? None of these proposals mentioned\nsymlinks for anything but directories, AFAIR.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jun 2000 12:46:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> > Just seems easier to have stuff segregates into separate per-db\n> > directories for clarity. Also, as directories get bigger, finding a\n> > specific file in there becomes harder. Putting 10 databases all in the\n> > same directory seems bad in this regard.\n> \n> Huh? I wasn't arguing against making a db-specific directory below the\n> tablespace point. I was arguing against making *another* directory\n> below that one.\n\nI was suggesting:\n\n\tln -s /var/pgsql/dbname/loc data/base/dbname/loc\n\nI thought you were suggesting:\n\n\tln -s /var/pgsql/dbname data/base/dbname/loc\n\nWith this system:\n\n\tln -s /var/pgsql/dbname data/base/dbname/loc1\n\tln -s /var/pgsql/dbname data/base/dbname/loc2\n\ngo into the same directory, which makes it impossible to move loc1\neasily using the file system. Seems cheap to add the extra directory.\n\n> > I don't think we want to be using\n> > symlinks for tables if we can avoid it.\n> \n> Agreed, but where did that come from? None of these proposals mentioned\n> symlinks for anything but directories, AFAIR.\n\nI thought you mentioned it. Sorry.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jun 2000 13:05:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Tom Lane writes:\n\n> I think Peter was holding out for storing purely numeric tablespace OID\n> and table version in pg_class and having a hardwired mapping to pathname\n> somewhere in smgr. However, I think that doing it that way gains only\n> micro-efficiency compared to passing a \"name\" around, while using the\n> name approach buys us flexibility that's needed for at least some of\n> the variants under discussion.\n\nBut that name can only be a dozen or so characters, contain no slash or\nother funny characters, etc. That's really poor. Then the alternative is\nto have an internal name and an external canonical name. Then you have two\nnames to worry about. Also consider that when you store both the table\nspace oid and the internal name in pg_class you create redundant data.\nWhat if you rename the table space? Do you leave the internal name out of\nsync? Then what good is the internal name? I'm just concerned that we are\ncreating at the table space level problems similar to that we're trying to\nget rid of at the relation and database level.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 21 Jun 2000 20:16:10 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Tom Lane writes:\n> \n> > I think Peter was holding out for storing purely numeric tablespace OID\n> > and table version in pg_class and having a hardwired mapping to pathname\n> > somewhere in smgr. However, I think that doing it that way gains only\n> > micro-efficiency compared to passing a \"name\" around, while using the\n> > name approach buys us flexibility that's needed for at least some of\n> > the variants under discussion.\n> \n> But that name can only be a dozen or so characters, contain no slash or\n> other funny characters, etc. That's really poor. Then the alternative is\n> to have an internal name and an external canonical name. Then you have two\n> names to worry about. Also consider that when you store both the table\n> space oid and the internal name in pg_class you create redundant data.\n> What if you rename the table space? Do you leave the internal name out of\n> sync? Then what good is the internal name? I'm just concerned that we are\n> creating at the table space level problems similar to that we're trying to\n> get rid of at the relation and database level.\n\nAgreed. Having table spaces stored by directories named by oid just\nseems very complicated for no reason.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jun 2000 14:42:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> But that name can only be a dozen or so characters, contain no slash or\n>> other funny characters, etc. That's really poor. Then the alternative is\n>> to have an internal name and an external canonical name. Then you have two\n>> names to worry about. Also consider that when you store both the table\n>> space oid and the internal name in pg_class you create redundant data.\n>> What if you rename the table space? Do you leave the internal name out of\n>> sync? Then what good is the internal name? I'm just concerned that we are\n>> creating at the table space level problems similar to that we're trying to\n>> get rid of at the relation and database level.\n\n> Agreed. Having table spaces stored by directories named by oid just\n> seems very complicated for no reason.\n\nHuh? He just gave you two very good reasons: avoid Unix-derived\nlimitations on the naming of tablespaces (and tables), and avoid\nproblems with renaming tablespaces.\n\nI'm pretty much firmly back in the \"OID and nothing but\" camp.\nOr perhaps I should say \"OID, file version, and nothing but\",\nsince we still need a version number to do CLUSTER etc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jun 2000 17:39:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n>\n> No argument from me ;-). I've been looking for compromise positions\n> but I still think that pure numeric filenames are the cleanest solution.\n>\n> There's something else that should be taken into account: for WAL, the\n> log will need to record the table file that each insert/delete/update\n> operation affects. To do that with the smgr-token-is-a-pathname\n> approach I was suggesting yesterday, I think you have to record the\n> database name and pathname in each WAL log entry. That's 64 bytes/log\n> entry which is a *lot*. If we bit the bullet and restricted ourselves\n> to numeric filenames then the log would need just four numeric values:\n> \tdatabase OID\n> \ttablespace OID\n\nI strongly object to keep tablespace OID for smgr file reference token\nthough we have to keep it for another purpose of cource. I've mentioned\nmany times tablespace(where to store) info should be distinguished from\n*where it is stored* info. Generally tablespace isn't sufficiently\nrestrictive\nfor this purpose. e.g. there was an idea about round-robin. e.g. Oracle's\ntablespace could have pluaral files... etc.\nIMHO,it is misleading to use tablespace OID as (a part of) reference token.\n\n> \trelation OID\n> \trelation version number\n> (this set of 4 values would also be an smgr file reference token).\n> 16 bytes/log entry looks much better than 64.\n>\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Thu, 22 Jun 2000 08:37:42 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> The symlink solution where the actual symlink location is not stored\n> in the database is certainly abstract. We store that info in the file\n> system, which is where it belongs. We only query the symlink location\n> when we need it for database location dumping.\n\nhow would that work? would pg_dump dump the tablespace locations or not?\n",
"msg_date": "Thu, 22 Jun 2000 10:43:20 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> At the moment I can recall the following opinions:\n> \n> Pure OID filenames: Thomas, Tom, Marc, Peter E.\n> \n> OID+relname filenames: Bruce\n>\n\nPlease add my opinion to the list.\n\nUnique-id filename: Hiroshi\n (Unqiue-id is irrelevant to OID/relname).\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 22 Jun 2000 10:15:01 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> > The symlink solution where the actual symlink location is not stored\n> > in the database is certainly abstract. We store that info in the file\n> > system, which is where it belongs. We only query the symlink location\n> > when we need it for database location dumping.\n> \n> how would that work? would pg_dump dump the tablespace locations or not?\n> \n\npg_dump would recreate a CREATE TABLESPACE command:\n\n\tprintf(\"CREATE TABLESPACE %s USING %s\", loc, symloc);\n\nwhere symloc would be SELECT symloc(loc) and return the value into a\nvariable that is used by pg_dump. The backend would do the lstat() and\nreturn the value to the client.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Jun 2000 22:29:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Please add my opinion to the list.\n> Unique-id filename: Hiroshi\n> (Unqiue-id is irrelevant to OID/relname).\n\n\"Unique ID\" is more or less equivalent to \"OID + version number\",\nright?\n\nI was trying earlier to convince myself that a single unique-ID value\nwould be better than OID+version for the smgr interface, because it'd\ncertainly be easier to pass around. I failed to convince myself though,\nand the thing that bothered me was this. Suppose you are trying to\nrecover a corrupted database manually, and the only information you have\nabout which table is which is a somewhat out-of-date listing of OIDs\nversus table names. (Maybe it's out of date because you got it from\nyour last backup tape.) If the files are named OID+version you're not\ngoing to have much trouble seeing which is which, even if some of the\nversions are higher than what was on the tape. But if version-updated\ntables are given entirely new unique IDs, you've got no hope at all of\ntelling which one corresponds to what you had in the listing. Maybe\nyou can tell by looking through the physical file contents, but\ncertainly this way is more fragile from the point of view of data\nrecovery.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Jun 2000 23:27:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> >\n> > > The symlink solution where the actual symlink location is not stored\n> > > in the database is certainly abstract. We store that info in the file\n> > > system, which is where it belongs. We only query the symlink location\n> > > when we need it for database location dumping.\n> >\n> > how would that work? would pg_dump dump the tablespace locations or not?\n> >\n> \n> pg_dump would recreate a CREATE TABLESPACE command:\n> \n> printf(\"CREATE TABLESPACE %s USING %s\", loc, symloc);\n> \n> where symloc would be SELECT symloc(loc) and return the value into a\n> variable that is used by pg_dump. The backend would do the lstat() and\n> return the value to the client.\n\nI'm wondering if pg_dump should store the location of the tablespace. If\nyour machine dies, you get a new machine to re-create the database, you\nmay not want the tablespace in the same spot. And text-editing a\ngigabyte file would be extremely painful.\n",
"msg_date": "Thu, 22 Jun 2000 13:43:56 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> > where symloc would be SELECT symloc(loc) and return the value into a\n> > variable that is used by pg_dump. The backend would do the lstat() and\n> > return the value to the client.\n> \n> I'm wondering if pg_dump should store the location of the tablespace. If\n> your machine dies, you get a new machine to re-create the database, you\n> may not want the tablespace in the same spot. And text-editing a\n> gigabyte file would be extremely painful.\n\nIf the symlink create fails in CREATE TABLESPACE, it just creates an\nordinary directory.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Jun 2000 00:03:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I strongly object to keep tablespace OID for smgr file reference token\n> though we have to keep it for another purpose of cource. I've mentioned\n> many times tablespace(where to store) info should be distinguished from\n> *where it is stored* info.\n\nSure. But this proposal assumes that we're relying on symlinks to\ncarry the information about physical locations corresponding to\ntablespace OIDs. The backend just needs to know enough to access a\nrelation file at a relative pathname like\n\ttablespaceOID/relationOID\n(ignoring version and segment numbers for now). Under the hood,\na symlink for tablespaceOID gets the work done.\n\nCertainly this is not a perfect mechanism. But it is simple, it\nis reliable, it is portable to most of the platforms we care about\n(yeah, I know we have a Win port, but you wouldn't ever recommend\nsomeone to run a *serious* database on it would you?), and in general\nI think the bang-for-the-buck ratio is enormous. I do not want to\nhave to deal with explicit tablespace bookkeeping in the backend,\nbut that seems like what we'd have to do in order to improve on\nsymlinks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Jun 2000 00:29:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "At 01:43 PM 6/22/00 +1000, Chris Bitmead wrote:\n\n>I'm wondering if pg_dump should store the location of the tablespace. If\n>your machine dies, you get a new machine to re-create the database, you\n>may not want the tablespace in the same spot. And text-editing a\n>gigabyte file would be extremely painful.\n\nSo you don't dump your create tablespace statements, recognizing that on\na new machine (due to upgrades or crashing) you might assign them to\ndifferent directories/mount points/whatever. That's the reason for\nwanting to hide physical allocation in tablespaces ... the rest of\nyour datamodel doesn't need to know.\n\nOr you do dump your tablespaces, and knowing the paths assigned\nto various ones set up your new machine accordingly.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 21 Jun 2000 22:41:22 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "At 12:03 AM 6/22/00 -0400, Bruce Momjian wrote:\n\n>If the symlink create fails in CREATE TABLESPACE, it just creates an\n>ordinary directory.\n\nSilent surprises - the earmark of truly professional software ...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 21 Jun 2000 22:51:49 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I strongly object to keep tablespace OID for smgr file reference token\n> > though we have to keep it for another purpose of cource. I've mentioned\n> > many times tablespace(where to store) info should be distinguished from\n> > *where it is stored* info.\n> \n> Sure. But this proposal assumes that we're relying on symlinks to\n> carry the information about physical locations corresponding to\n> tablespace OIDs. The backend just needs to know enough to access a\n> relation file at a relative pathname like\n> \ttablespaceOID/relationOID\n> (ignoring version and segment numbers for now). Under the hood,\n> a symlink for tablespaceOID gets the work done.\n>\n\nI think tablespaceOID is an easy substitution for the purpose.\nI don't like to depend on poor directory tree structure in dbms\neither.. \n \n> Certainly this is not a perfect mechanism. But it is simple, it\n> is reliable, it is portable to most of the platforms we care about\n> (yeah, I know we have a Win port, but you wouldn't ever recommend\n> someone to run a *serious* database on it would you?), and in general\n> I think the bang-for-the-buck ratio is enormous. I do not want to\n> have to deal with explicit tablespace bookkeeping in the backend,\n> but that seems like what we'd have to do in order to improve on\n> symlinks.\n>\n\nI've already mentioned about it 10 times or so but unfortunately\nI see no one on my side yet. \nOK,I've given up the discussion about it. I don't want to waste\nmy time any more.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 22 Jun 2000 14:56:07 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
{
"msg_contents": "At 23:27 21/06/00 -0400, Tom Lane wrote:\n>\"Hiroshi Inoue\" <[email protected]> writes:\n>> Please add my opinion to the list.\n>> Unique-id filename: Hiroshi\n>> (Unqiue-id is irrelevant to OID/relname).\n>\n>I was trying earlier to convince myself that a single unique-ID value\n>would be better than OID+version for the smgr interface, because it'd\n>certainly be easier to pass around. I failed to convince myself though,\n>and the thing that bothered me was this. Suppose you are trying to\n>recover a corrupted database manually, and the only information you have\n>about which table is which is a somewhat out-of-date listing of OIDs\n>versus table names.\n\nThis worries me a little; in the Dec/RDB world it is a very long time since\ndatabase backups were done by copying the files. There is a database\nbackup/restore utility which runs while the database is on-line and makes\nsure a valid snapshot is taken. Backing up storage areas (table spapces)\ncan be done separately by the same utility, and again, it records enough\ninformation to ensure integrity. Maybe the thing to do is write a pg_backup\nutility, which in a first pass could, presumably, be synonymous with pg_dump?\n\nAm I missing something here? Is there a problem with backing up using\n'pg_dump | gzip'?\n\n\n> (Maybe it's out of date because you got it from\n>your last backup tape.) If the files are named OID+version you're not\n>going to have much trouble seeing which is which, even if some of the\n>versions are higher than what was on the tape.\n\nUnfortunately here you hit severe RI problems, unless you use a 'proper'\ndatabase backup.\n\n\n> But if version-updated\n>tables are given entirely new unique IDs, you've got no hope at all of\n>telling which one corresponds to what you had in the listing. Maybe\n>you can tell by looking through the physical file contents, but\n>certainly this way is more fragile from the point of view of data\n>recovery.\n\nIn the Dec/RDB world, one has to very occasionally restore from files (this\nonly happens if multiple prior database backups and after-image journals\nare corrupt). In this case, there is a utility for examining and changing\nstorage area file information. This is probably way over the top for\nPostgreSQL.\n\n[Aside: FWIW, the Dec/RDB storage area files are named by DBAs to be\nsomething meaningful to the DBA (eg. EMPLOYEE_ACHIVE), and can contain one\nof more tables etc. The files are never renamed or moved by the database\nwithout an instruction from the DBA. The 'storage manager' manages the\ndatafiles internally. Usually, tables are allocated in chunks of multiples\nof some file-based buffer size, and the file grows as needed. This allows\nfor disk read-ahead to be useful, while storing multiple tables in one\nfile. As stated in a previous message, tables can also be split across\nstorage areas]\n\nOnce again, I hope I have not missed a fundamental fact...\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 22 Jun 2000 16:31:33 +1000",
"msg_from": "\"Philip J. Warner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "At 13:43 22/06/00 +1000, Chris Bitmead wrote:\n>Bruce Momjian wrote:\n>\n>I'm wondering if pg_dump should store the location of the tablespace. If\n>your machine dies, you get a new machine to re-create the database, you\n>may not want the tablespace in the same spot. And text-editing a\n>gigabyte file would be extremely painful.\n>\n\nThis is a very good point; the way Dec/RDB gets around it is to allow the\n'pg_restore' command to override storage settings when restoring a backup\nfile.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 22 Jun 2000 16:32:56 +1000",
"msg_from": "\"Philip J. Warner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> I'm wondering if pg_dump should store the location of the tablespace. If\n> your machine dies, you get a new machine to re-create the database, you\n> may not want the tablespace in the same spot. And text-editing a\n> gigabyte file would be extremely painful.\n\nMight make sense to store the tablespace setup separately from the bulk\nof the data, but certainly you want some way to dump that info in a\nrestorable form.\n\nI've been thinking lately that the pg_dump shove-it-all-in-one-file\napproach doesn't scale anyway. We ought to start thinking about ways\nto make the standard dump method store schema separately from bulk\ndata, for example. That's offtopic for this thread but ought to be\non the TODO list someplace...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Jun 2000 03:05:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "\"Philip J. Warner\" <[email protected]> writes:\n>> ... the thing that bothered me was this. Suppose you are trying to\n>> recover a corrupted database manually, and the only information you have\n>> about which table is which is a somewhat out-of-date listing of OIDs\n>> versus table names.\n\n> This worries me a little; in the Dec/RDB world it is a very long time since\n> database backups were done by copying the files. There is a database\n> backup/restore utility which runs while the database is on-line and makes\n> sure a valid snapshot is taken. Backing up storage areas (table spapces)\n> can be done separately by the same utility, and again, it records enough\n> information to ensure integrity. Maybe the thing to do is write a pg_backup\n> utility, which in a first pass could, presumably, be synonymous with pg_dump?\n\npg_dump already does the consistent-snapshot trick (it just has to run\ninside a single transaction).\n\n> Am I missing something here? Is there a problem with backing up using\n> 'pg_dump | gzip'?\n\nNone, as long as your ambition extends no further than restoring your\ndata to where it was at your last pg_dump. I was thinking about the\nall-too-common-in-the-real-world scenario where you're hoping to recover\nsome data more recent than your last backup from the fractured shards\nof your database...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Jun 2000 03:17:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "At 03:17 22/06/00 -0400, Tom Lane wrote:\n>\n>> This worries me a little; in the Dec/RDB world it is a very long time since\n>> database backups were done by copying the files. There is a database\n>> backup/restore utility which runs while the database is on-line and makes\n>> sure a valid snapshot is taken. Backing up storage areas (table spapces)\n>> can be done separately by the same utility, and again, it records enough\n>> information to ensure integrity. Maybe the thing to do is write a pg_backup\n>> utility, which in a first pass could, presumably, be synonymous with\npg_dump?\n>\n>pg_dump already does the consistent-snapshot trick (it just has to run\n>inside a single transaction).\n>\n>> Am I missing something here? Is there a problem with backing up using\n>> 'pg_dump | gzip'?\n>\n>None, as long as your ambition extends no further than restoring your\n>data to where it was at your last pg_dump. I was thinking about the\n>all-too-common-in-the-real-world scenario where you're hoping to recover\n>some data more recent than your last backup from the fractured shards\n>of your database...\n>\n\npg_dump is a good basis for any pg_backup utility; perhaps as you indicated\nelsewhere, more carefull formatting of the dump files would make\ntable-based restoration possible. In another response, I also suggested\nallowing overrides of placement information in a restore operation- the\nsimplest approach would be an 'ignore-storage-parameters' flag. Does this\nsound reasonable? If so, then discussion of file-id based on OID needs not\nbe too concerned about how db restoration is done.\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 22 Jun 2000 17:50:15 +1000",
"msg_from": "\"Philip J. Warner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Please add my opinion to the list.\n> > Unique-id filename: Hiroshi\n> > (Unqiue-id is irrelevant to OID/relname).\n> \n> \"Unique ID\" is more or less equivalent to \"OID + version number\",\n> right?\n>\n\nHmm,no one seems to be on my side at this point also.\nOK,I change my mind as follows.\n\n OID except cygwin,unique-id on cygwin\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 22 Jun 2000 20:09:07 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Big 7.1 open items "
},
{
"msg_contents": "> At 01:43 PM 6/22/00 +1000, Chris Bitmead wrote:\n> \n> >I'm wondering if pg_dump should store the location of the tablespace. If\n> >your machine dies, you get a new machine to re-create the database, you\n> >may not want the tablespace in the same spot. And text-editing a\n> >gigabyte file would be extremely painful.\n> \n> So you don't dump your create tablespace statements, recognizing that on\n> a new machine (due to upgrades or crashing) you might assign them to\n> different directories/mount points/whatever. That's the reason for\n> wanting to hide physical allocation in tablespaces ... the rest of\n> your datamodel doesn't need to know.\n> \n> Or you do dump your tablespaces, and knowing the paths assigned\n> to various ones set up your new machine accordingly.\n\nI imagine we will have a -l flag to pg_dump to dump tablespace\nlocations. If they exist on the new machine, we use them. If not, we\ncreate just directories with no symlinks.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Jun 2000 10:35:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> OK,I change my mind as follows.\n> OID except cygwin,unique-id on cygwin\n\nWe don't really want to do that, do we? That's a huge difference in\nbehavior to have in just one port --- especially a port that none of\nthe primary developers use (AFAIK anyway). The cygwin port's normal\nstate of existence will be \"broken\", surely, if we go that way.\n\nBesides which, OID alone doesn't give us a possibility of file\nversioning, and as I commented to Vadim I think we will want that,\nWAL or no WAL. So it seems to me the two viable choices are\nunique-id or OID+version-number. Either way, the file-naming behavior\nshould be the same across all platforms.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Jun 2000 11:27:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items "
},
{
"msg_contents": "> pg_dump is a good basis for any pg_backup utility; perhaps as you indicated\n> elsewhere, more carefull formatting of the dump files would make\n> table-based restoration possible. In another response, I also suggested\n> allowing overrides of placement information in a restore operation- the\n> simplest approach would be an 'ignore-storage-parameters' flag. Does this\n> sound reasonable? If so, then discussion of file-id based on OID needs not\n> be too concerned about how db restoration is done.\n\nMy idea was to make dumping of tablespace locations/symlinks optional. \nBy trying to control it on the load end, you have to basically have some\nway of telling the backend to ignore the symlinks on load. Right now,\npg_dump just creates SQL commands and COPY commands.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Jun 2000 16:11:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "On Wed, 21 Jun 2000, Don Baccus wrote:\n\n> At 01:43 PM 6/22/00 +1000, Chris Bitmead wrote:\n> \n> >I'm wondering if pg_dump should store the location of the tablespace. If\n> >your machine dies, you get a new machine to re-create the database, you\n> >may not want the tablespace in the same spot. And text-editing a\n> >gigabyte file would be extremely painful.\n> \n> So you don't dump your create tablespace statements, recognizing that on\n> a new machine (due to upgrades or crashing) you might assign them to\n> different directories/mount points/whatever. That's the reason for\n> wanting to hide physical allocation in tablespaces ... the rest of\n> your datamodel doesn't need to know.\n> \n> Or you do dump your tablespaces, and knowing the paths assigned\n> to various ones set up your new machine accordingly.\n\nOr, modify pg_dump so that it auto-dumps to two files, one for schema, one\nfor data. then its easier to modify the schema on a large database if\ntablespaces change ...\n\n\n",
"msg_date": "Thu, 22 Jun 2000 19:05:38 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "The Hermit Hacker wrote:\n\n> Or, modify pg_dump so that it auto-dumps to two files, one for schema, one\n> for data. then its easier to modify the schema on a large database if\n> tablespaces change ...\n\nThat's a pretty good idea as an option. But I'd say keep the schema\nseparate from the tablespace locations. And if you're going down that\npath why not create a directory automatically and dump each table into a\nseparate file. On occasion I've had to restore one table by hand-editing\nthe pg_dump, and that's a real pain.\n",
"msg_date": "Fri, 23 Jun 2000 09:55:15 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "At 09:55 23/06/00 +1000, Chris Bitmead wrote:\n>The Hermit Hacker wrote:\n>\n>> Or, modify pg_dump so that it auto-dumps to two files, one for schema, one\n>> for data. then its easier to modify the schema on a large database if\n>> tablespaces change ...\n>\n>That's a pretty good idea as an option. But I'd say keep the schema\n>separate from the tablespace locations. And if you're going down that\n>path why not create a directory automatically and dump each table into a\n>separate file. On occasion I've had to restore one table by hand-editing\n>the pg_dump, and that's a real pain.\n>\n\nHave a look at my message entitled:\n\nProposal: More flexible backup/restore via pg_dump\n\nIt's supposed to address these issues.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 23 Jun 2000 11:52:49 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Bruce Momjian writes:\n> \n> > Here is the list I have gotten of open 7.1 items:\n> \n> > \tnew location for config files\n> \n> I'm on that task now, more or less by accident but I might as well get it\n> done. I'm reorganizing all the file name handling code for pg_hba.conf,\n> pg_indent.conf, pg_control, etc. so they have consistent accessor\n> routines. The DataDir global variable will disappear, you'll have to use\n> GetDataDir().\n> \n\nCan we get agreement to remove our secondary password files, and make\nsomething that makes more sense?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Jun 2000 12:19:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Here is the list I have gotten of open 7.1 items:\n\n> \tnew location for config files\n\nI'm on that task now, more or less by accident but I might as well get it\ndone. I'm reorganizing all the file name handling code for pg_hba.conf,\npg_indent.conf, pg_control, etc. so they have consistent accessor\nroutines. The DataDir global variable will disappear, you'll have to use\nGetDataDir().\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 23 Jun 2000 18:20:26 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Bruce Momjian writes:\n> \n> > Can we get agreement to remove our secondary password files, and make\n> > something that makes more sense?\n> \n> How about this: Normally secondary password files look like\n> \n> username:ABS5SGh1EL6bk\n> \n> We could add the option of making them look like\n> \n> username:+\n> \n> which means \"look into pg_shadow\". That would be fully backward\n> compatible, allows the use of alter user with password, and avoids\n> creating any extra system tables (that would need to be dumped to plain\n> text). And the coding looks very simple.\n\nYes, perfect. In fact, how about:\n\n> username\n\nas doing that. Any username with no colon uses pg_shadow.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 24 Jun 2000 20:59:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Can we get agreement to remove our secondary password files, and make\n> something that makes more sense?\n\nHow about this: Normally secondary password files look like\n\nusername:ABS5SGh1EL6bk\n\nWe could add the option of making them look like\n\nusername:+\n\nwhich means \"look into pg_shadow\". That would be fully backward\ncompatible, allows the use of alter user with password, and avoids\ncreating any extra system tables (that would need to be dumped to plain\ntext). And the coding looks very simple.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sun, 25 Jun 2000 03:00:51 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big 7.1 open items"
}
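Peter's "username:+" scheme above maps naturally onto a small parser. Below is a minimal C sketch of the lookup under discussion; the function name resolve_passwd and its callers are illustrative assumptions, not the actual pg_hba code:

    #include <string.h>

    /*
     * Hedged sketch of the proposal: given one line of a secondary
     * password file, decide where the password comes from.
     *   "user:ABS5SGh1EL6bk"  -> crypted password stored in the file (old style)
     *   "user:+"              -> look into pg_shadow (Peter's extension)
     *   "user"                -> no colon at all also means pg_shadow (Bruce's variant)
     */
    static const char *
    resolve_passwd(const char *line, const char *shadow_passwd)
    {
        const char *colon = strchr(line, ':');

        if (colon == NULL || strcmp(colon + 1, "+") == 0)
            return shadow_passwd;   /* fall back to pg_shadow */
        return colon + 1;           /* crypted password from the file */
    }

As both messages note, the attraction is that old-style lines keep working unchanged while new-style entries pick up ALTER USER ... WITH PASSWORD automatically.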
] |
[
{
"msg_contents": "\n\tHi,\n\n\tI'm sending this to -hackers instead of -users because I think I'm\nasking for a new feature in psql.\n\n\tMy Problem: If I run a sql file with create-table-statements through\npsql, all the column names get automagically lowercased. On the other hand, the\nObject-Relational-Mapping tool I'm using recreates all its objects from a\nResultSet, getting the lowercase names, and compares them with uppercase ones,\nthus failing silently.\n\n\tI'm working around this issue by enclosing the column names in \",\nleading to slightly ugly ddl files (create table USERROLE (\"ROLEID\" serial\nPRIMARY KEY, \"PERMISSION\" varchar);) and the necessity to change the default\ndata files (enclosing column names in insert statements with \").\n\n\tWould it be possible to add a flag to psql, telling it to accept the\ncolumn names as they are in the ddl file?\n\n\n\tRegards,\n\t\tHakan\n\n\n-- \nHakan Tandogan [email protected]\n\nICONSULT Tandogan - Egerer GbR Tel.: +49-9131-9047-0\nMemelstrasse 38 - D-91052 Erlangen Fax.: +49-9131-9047-77\n\n\"Any sufficiently advanced bug is indistinguishable from a feature\"\n",
"msg_date": "Mon, 17 Jan 2000 14:45:42 +0100",
"msg_from": "Hakan Tandogan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Auto-lowercasing of column names?"
},
{
"msg_contents": "This is standard, documented behaviour. No way we can change that. Is\nthere a reason why your front-end tools cannot compare the names\ncase-insensitively?\n\nOn Mon, 17 Jan 2000, Hakan Tandogan wrote:\n\n> \n> \tHi,\n> \n> \tI'm sending this to -hackers instead of -users because I think I'm\n> asking for a new feature in psql.\n> \n> \tMy Problem: If I run a sql file with create-table-statements through\n> psql, all the column names get automagically lowercased. On the other hand, the\n> Object-Relational-Mapping tool I'm using recreates all its objects from a\n> ResultSet, getting the lowercase names, and compares them with uppercase ones,\n> thus failing silently.\n> \n> \tI'm working around this issue by enclosing the column names in \",\n> leading to slightly ugly ddl files (create table USERROLE (\"ROLEID\" serial\n> PRIMARY KEY, \"PERMISSION\" varchar);) and the necessity to change the default\n> data files (enclosing column names in insert statements with \").\n> \n> \tWould it be possible to add a flag to psql, telling it to accept the\n> column names as they are in the ddl file?\n> \n> \n> \tRegards,\n> \t\tHakan\n> \n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 17 Jan 2000 15:06:15 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Auto-lowercasing of column names?"
},
{
"msg_contents": "On Mon, 17 Jan 2000, you wrote:\n> This is standard, documented behaviour. No way we can change that. Is\n\n\tThus, a flag to psql, per default keeping the documented behaviour like\nit is.\n\n> there a reason why your front-end tools cannot compare the names\n> case-insensitively?\n\n\tYes and No. They (the authors) told me to change my ddl scripts. The\nbad part is that I can't modify them automatically (except by writing a full SQL\nparser ;-) ). Thus, I have to watch for changes to the database schemas / data\nand modify my own ddl scripts accordingly. While not a full-time-job by itself,\nthis solutions simply struck me as \"not beautiful (tm)\". Well, I guess I can\nlive a little longer with hand-modifying the ddl files ;-).\n\n\tRegards,\n\t\tHakan\n\n-- \nHakan Tandogan [email protected]\n\nICONSULT Tandogan - Egerer GbR Tel.: +49-9131-9047-0\nMemelstrasse 38 - D-91052 Erlangen Fax.: +49-9131-9047-77\n\n\"Any sufficiently advanced bug is indistinguishable from a feature\"\n",
"msg_date": "Mon, 17 Jan 2000 15:20:32 +0100",
"msg_from": "Hakan Tandogan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Auto-lowercasing of column names?"
}
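The folding Hakan is fighting happens in the backend's lexer, not in psql, which is why a psql flag cannot help: every unquoted identifier is downcased before psql ever sees it. A simplified C illustration of the effect (this is a sketch of the behaviour, not the actual scan.l code):

    #include <ctype.h>

    /* Illustration only: unquoted identifiers such as ROLEID are
     * downcased by the backend before being stored in the catalogs;
     * double-quoted identifiers ("ROLEID") skip this step and keep
     * their case, which is why the workaround with " works. */
    static void
    downcase_identifier(char *ident)
    {
        for (; *ident; ident++)
            *ident = tolower((unsigned char) *ident);
    }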
] |
[
{
"msg_contents": "Hi\n\nJust created a rule and received \"rule plan string too big\". My rule is around\n1000 characters long. I added a debug statement which printed the following from\nbackend/rewrite/rewriteDefine.c\n\n sizeof(FormData_pg_rewrite) = 60, strlen(actionbuf) = 6328, strlen(qualbuf) = 4279, MaxAttrSize = 8104\n\nso my rule expands to 10667 characters.\n\nQuestions I have\n1. Is it feasable to increase MaxAttrSize without upsetting the applecart?\n\n2. Does this limitation not severely restrict setting up of rules? Ie. I\n want to log any update on any field made to a 16 attribute table.\n\n The rule is as follows\n\n create rule log_accounts as on update to accounts\n where new.domain != old.domain or\n new.RegistrationDate != old.RegistrationDate or\n new.RegistrationType != old.RegistrationType or\n new.Amount != old.Amount or\n new.BillingType != old.BillingType or\n new.ContactEmail != old.ContactEmail or\n new.PaperDate != old.PaperDate or\n new.PaymentDate != old.PaymentDate or\n new.InvoiceCount != old.InvoiceCount or\n new.InvoiceNo != old.InvoiceNo or\n new.ContractType != old.ContractType or\n new.Organisation != old.Organisation or\n new.Payed != old.Payed\n do insert into accounts_log values (\n getpgusername(),\n 'now'::text,\n new.domain,\n new.RegistrationDate,\n new.RegistrationType,\n new.Amount,\n new.BillingType,\n new.ContactEmail,\n new.PaperDate,\n new.PaymentDate,\n new.InvoiceCount,\n new.InvoiceNo,\n new.ContractType,\n new.Organisation,\n new.Payed,\n new.PaperContract\n );\n\n3. Is there a better way to do this?\n\nTIA\n--------\nRegards\nTheo\n",
"msg_date": "Mon, 17 Jan 2000 16:28:43 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "DefineQueryRewrite: rule plan string too big."
},
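For context, here is a rough C sketch of the kind of length check in backend/rewrite/rewriteDefine.c that Theo's debug line exposes; the exact condition in the source may differ, and elog() and MaxAttrSize assume the backend headers:

    #include "postgres.h"
    #include <string.h>

    /* Sketch only: the stringified rule plan must fit within a single
     * tuple attribute of pg_rewrite, hence the comparison against
     * MaxAttrSize that produced Theo's error. */
    static void
    check_rule_plan_size(const char *actionbuf, const char *qualbuf)
    {
        if (strlen(actionbuf) > MaxAttrSize || strlen(qualbuf) > MaxAttrSize)
            elog(ERROR, "DefineQueryRewrite: rule plan string too big.");
    }

Theo's follow-up below shows why his rule fit after dropping the WHERE clause: qualbuf went away entirely, leaving only the action string.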
{
"msg_contents": "Theo Kramer wrote:\n> Just created a rule and received \"rule plan string too big\". My rule is around\n> 1000 characters long. I added a debug statement which printed the following from\n> backend/rewrite/rewriteDefine.c\n\nOops forgot to mention: Postgres 6.5.2\n--------\nRegards\nTheo\n",
"msg_date": "Mon, 17 Jan 2000 16:43:04 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DefineQueryRewrite: rule plan string too big."
},
{
"msg_contents": "I wrote:\n> \n> Hi\n> \n> Just created a rule and received \"rule plan string too big\". My rule is around\n> 1000 characters long. I added a debug statement which printed the following from\n> backend/rewrite/rewriteDefine.c\n> ...\n> 3. Is there a better way to do this?\n\nWas being thick... I removed the 'where' clause and now it fits and works.\n--------\nRegards\nTheo\n",
"msg_date": "Mon, 17 Jan 2000 17:03:30 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DefineQueryRewrite: rule plan string too big."
}
] |
[
{
"msg_contents": "Peter, there seem to be problems with the COPY statement when psql is run\nwith redirected stdin.\n\nIf I have a file ${DUMPDIR}/dbdump.product containing:\n\nCOPY product FROM stdin;\n05 \\N 000000 \\N \\N S D9 t f f f POLY BAGS-BLACK . Single f\n...\n\\.\n\n\nand I run this command:\npsql -e -d bray < ${DUMPDIR}/dbdump.product\n\nno error messages are seen.\n\nIf I remove the COPY command from the file and run the COPY frpm inside\npsql, I see the errors:\n\nbray=> copy product from '/tmp/dbdump.product';\nERROR: <unnamed> referential integrity violation - key referenced from \nproduct not found in brandname\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"And, behold, I come quickly; and my reward is with me,\n to give every man according as his work shall be.\" \n Revelation 22:12 \n\n\n",
"msg_date": "Mon, 17 Jan 2000 14:34:06 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql and COPY "
},
{
"msg_contents": "Sorry, I can't reproduce that. I have not been able to keep up with RI, so\nI can't seem to get a usable table configuration together. I used a script\n\nCOPY three FROM STDIN;\n\\N\n\\.\n\non a table CREATE TABLE three ( a int4 not null ) and it gives me proper\nerror messages either way.\n\nI am almost tempted to say that this is a bug in COPY, though I sure don't\nwant to blame anyone before I see it. Could you send me a complete test\ncase please?\n\n\t-Peter\n\n\nOn 2000-01-17, Oliver Elphick mentioned:\n\n> Peter, there seem to be problems with the COPY statement when psql is run\n> with redirected stdin.\n> \n> If I have a file ${DUMPDIR}/dbdump.product containing:\n> \n> COPY product FROM stdin;\n> 05 \\N 000000 \\N \\N S D9 t f f f POLY BAGS-BLACK . Single f\n> ...\n> \\.\n> \n> \n> and I run this command:\n> psql -e -d bray < ${DUMPDIR}/dbdump.product\n> \n> no error messages are seen.\n> \n> If I remove the COPY command from the file and run the COPY frpm inside\n> psql, I see the errors:\n> \n> bray=> copy product from '/tmp/dbdump.product';\n> ERROR: <unnamed> referential integrity violation - key referenced from \n> product not found in brandname\n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 19 Jan 2000 00:28:19 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and COPY "
}
] |
[
{
"msg_contents": "I am defining a table t1 with a NOT NULL field f1 and UNIQUE field f2.\n(it automatically defines t1_f2_key unique index)\n\nI am defining now a new table t2 that inherits t1 table and add some\ncolumns.\n\nThe NOT NULL constraint is preserved for f1 field, the UNIQUE for f2 not\n(the index t2_f2_key) is not defined.\n\nWouldn't be normal that the unique constraint to be inherited also in\nt2?\n\nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n",
"msg_date": "Mon, 17 Jan 2000 22:18:33 +0200",
"msg_from": "Constantin Teodorescu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unique constraint for inherited tables"
}
] |
[
{
"msg_contents": "In my feeble frenzy to enforce a GNU-compliant user interface throughout\nall client programs I have also been starting to stick a\n\nReport bugs to <[email protected]>.\n\ninto the --help output. (RMS would be proud.)\n\nI have seen vague references to this email address being recommended for\nbug reports, but I'm just wondering if this is still active or desirable\nor whatever. Furthermore I'm not sure if this is a mailing list and who\ngets this?\n\nThe current de facto procedure that bug reporters have to subscribe to the\nhackers list is certainly not very user-friendly.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Tue, 18 Jan 2000 01:06:33 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "[email protected]?"
},
{
"msg_contents": "On Tue, 18 Jan 2000, Peter Eisentraut wrote:\n\n> In my feeble frenzy to enforce a GNU-compliant user interface throughout\n> all client programs I have also been starting to stick a\n> \n> Report bugs to <[email protected]>.\n\[email protected] ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 17 Jan 2000 20:08:14 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [email protected]?"
}
] |
[
{
"msg_contents": "On Sunday, I downloaded the newest snapshot of pg_dump, with Tom's fixes\napplied (it no longer dumps core when dumping indexes). However, pg_dump\nis incorrectly dumping argument lists for trigger procedures:\n\ntest=# drop trigger mytrigger on mytable;\nDROP\ntest=# create trigger mytrigger\nbefore insert on mytable\nfor each row execute procedure autoinc ('myfield', 'myseq');\nCREATE\n\n[postgres@ferrari /tmp]$ pg_dump test\n...\nCREATE TRIGGER \"mytrigger\" BEFORE INSERT ON \"mytable\" FOR EACH ROW\nEXECUTE PROCEDURE autoinc ('myfield', 'myfieldmyseq');\n...\n\nNote the second parameter to autoinc() -- it should be 'myseq', not\n'myfieldmyseq'. I suspect someone isn't doing a resetPQExpBuffer()\nsomewhere. Can anyone confirm this?\n\nMike Mascari\n\n\n\n",
"msg_date": "Mon, 17 Jan 2000 19:17:05 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is pg_dump still broken?"
},
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n> Note the second parameter to autoinc() -- it should be 'myseq', not\n> 'myfieldmyseq'. I suspect someone isn't doing a resetPQExpBuffer()\n> somewhere. Can anyone confirm this?\n\nYup, you're right. I note the procedure name isn't getting quoted,\neither. Fix committed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jan 2000 02:31:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Is pg_dump still broken? "
}
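Tom's committed fix isn't shown in the thread, but the bug class Mike guessed at is worth illustrating: a PQExpBuffer reused across loop iterations must be reset each time, or arguments concatenate exactly as observed ('myfieldmyseq'). A hedged C sketch (format_args is a made-up name, not the actual pg_dump function):

    #include "pqexpbuffer.h"        /* PQExpBuffer, resetPQExpBuffer, ... */

    /* Sketch: emit each trigger argument as a separate quoted literal. */
    static void
    format_args(PQExpBuffer buf, char **args, int nargs)
    {
        int i;

        for (i = 0; i < nargs; i++)
        {
            resetPQExpBuffer(buf);  /* without this, arg i still holds args 0..i-1 */
            appendPQExpBuffer(buf, "'%s'", args[i]);
            /* ... append buf->data to the CREATE TRIGGER statement ... */
        }
    }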
] |
[
{
"msg_contents": "Hi all,\n\nI'm trying to implement REINDEX command.\nBecause the command is to repair system indexes,we\ncoundn't rely on system indexes when we call the\ncommand.\n\nI added locally an option of standalone postgres to ignore\nsystem indexes and am add/changing ignore_system_\nindexes stuff.\n\nThere are fairly many places using system indexes. \nProbably I would be able to change them.\nBut is it preferable or possible to force other developers\nto take ignore_system_indexes mode into account ?\nIs it better to limit changes required for REINDEX\ncommand ? \n\nComments ? Better ideas ?\n\nRegards. \n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 18 Jan 2000 10:02:31 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to ignore system indexes"
},
{
"msg_contents": "> I'm trying to implement REINDEX command.\n> Because the command is to repair system indexes,we\n> coundn't rely on system indexes when we call the\n> command.\n> \n> I added locally an option of standalone postgres to ignore\n> system indexes and am add/changing ignore_system_\n> indexes stuff.\n> \n> There are fairly many places using system indexes. \n> Probably I would be able to change them.\n> But is it preferable or possible to force other developers\n> to take ignore_system_indexes mode into account ?\n> Is it better to limit changes required for REINDEX\n> command ? \n\nOne solution is to use pg_upgrade. It allows an initdb and recreate of\nall tables without reload.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jan 2000 23:04:12 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to ignore system indexes"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> \n> > I'm trying to implement REINDEX command.\n> > Because the command is to repair system indexes,we\n> > coundn't rely on system indexes when we call the\n> > command.\n> > \n> > I added locally an option of standalone postgres to ignore\n> > system indexes and am add/changing ignore_system_\n> > indexes stuff.\n> > \n> > There are fairly many places using system indexes. \n> > Probably I would be able to change them.\n> > But is it preferable or possible to force other developers\n> > to take ignore_system_indexes mode into account ?\n> > Is it better to limit changes required for REINDEX\n> > command ? \n> \n> One solution is to use pg_upgrade. It allows an initdb and recreate of\n> all tables without reload.\n> -- \n\nIsn't it a big charge to execute pg_upgrade for a huge database ?\nI have never used pg_upgrade.\nIs pg_upgrade available now ?\nIs pg_upgrade reliable ?\n\nMy design is as follows.\n\npostgres -P test /* I'm using -P as a new option temporarily */.\n\n> reindex database test; (all system indexes of a db)\n> reindex table pg_class; (all indexes of a system table)\n> reindex index pg_index_indexrelid_index; (a system index)\n\nIf we could ignore system indexes,it won't be difficult to implement\nREINDEX command itself..\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n \n",
"msg_date": "Tue, 18 Jan 2000 13:56:04 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] How to ignore system indexes"
},
{
"msg_contents": "> > One solution is to use pg_upgrade. It allows an initdb and recreate of\n> > all tables without reload.\n> > -- \n> \n> Isn't it a big charge to execute pg_upgrade for a huge database ?\n> I have never used pg_upgrade.\n> Is pg_upgrade available now ?\n> Is pg_upgrade reliable ?\n\nIt has been around since 6.3? It allows initdb, recreates the tables,\nthen moves the data files back into place. There is even a manual page.\n\n> \n> My design is as follows.\n> \n> postgres -P test /* I'm using -P as a new option temporarily */.\n> \n> > reindex database test; (all system indexes of a db)\n> > reindex table pg_class; (all indexes of a system table)\n> > reindex index pg_index_indexrelid_index; (a system index)\n> \n> If we could ignore system indexes,it won't be difficult to implement\n> REINDEX command itself..\n\nNot sure how to find all those places.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 00:09:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to ignore system indexes"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> There are fairly many places using system indexes. \n> Probably I would be able to change them.\n> But is it preferable or possible to force other developers\n> to take ignore_system_indexes mode into account ?\n\nIs it really necessary to touch all those places?\n\nSeems to me that if a person needs to rebuild system indexes,\nhe would be firing up a standalone backend and running\nREINDEX --- and darn little else. As long as none of the\nsupport code required by REINDEX insists on using an index,\nit doesn't matter what the rest of the system requires.\n\nYou might even think about doing the reindex in bootstrap mode,\nthough I don't know if that would be easier or harder.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jan 2000 00:09:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to ignore system indexes "
},
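A minimal C sketch of the switch being discussed here; the flag and accessor names are assumptions for illustration, not the interface that actually got committed:

    #include "postgres.h"

    /* Hypothetical sketch: one backend-wide flag, set by a new option
     * of the standalone postgres, consulted at catalog-lookup sites. */
    static bool IgnoreSystemIndexes = false;

    void
    SetIgnoreSystemIndexes(bool ignore)
    {
        IgnoreSystemIndexes = ignore;
    }

    bool
    IsIgnoringSystemIndexes(void)
    {
        /* callers fall back to a sequential heap scan of the catalog
         * instead of an index/syscache lookup when this returns true */
        return IgnoreSystemIndexes;
    }

This matches Tom's point: only the lookup sites actually reached by REINDEX in a standalone backend need to honor the flag, not every catalog access in the system.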
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > \n> > > > One solution is to use pg_upgrade. It allows an initdb and \n> > recreate of\n> > > > all tables without reload.\n> > > > -- \n> > > \n> > > Isn't it a big charge to execute pg_upgrade for a huge database ?\n> > > I have never used pg_upgrade.\n> > > Is pg_upgrade available now ?\n> > > Is pg_upgrade reliable ?\n> > \n> > It has been around since 6.3? It allows initdb, recreates the tables,\n> > then moves the data files back into place. There is even a manual page.\n> >\n> \n> I know the command but does 6.5 have it ?\n\nSure, but it is disabled in 6.5 because we changed the binary table\nformat from 6.4 to 6.5. However, I have already recommended people use\nit who have broken system indexes, and it worked.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 00:47:58 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to ignore system indexes"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> \n> > > One solution is to use pg_upgrade. It allows an initdb and \n> recreate of\n> > > all tables without reload.\n> > > -- \n> > \n> > Isn't it a big charge to execute pg_upgrade for a huge database ?\n> > I have never used pg_upgrade.\n> > Is pg_upgrade available now ?\n> > Is pg_upgrade reliable ?\n> \n> It has been around since 6.3? It allows initdb, recreates the tables,\n> then moves the data files back into place. There is even a manual page.\n>\n\nI know the command but does 6.5 have it ?\n \n> > \n> > My design is as follows.\n> > \n> > postgres -P test /* I'm using -P as a new option temporarily */.\n> > \n> > > reindex database test; (all system indexes of a db)\n> > > reindex table pg_class; (all indexes of a system table)\n> > > reindex index pg_index_indexrelid_index; (a system index)\n> > \n> > If we could ignore system indexes,it won't be difficult to implement\n> > REINDEX command itself..\n> \n> Not sure how to find all those places.\n>\n\nI would only change the stuff required for REINDEX command,\nthough I know almost all those places.\n \nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 18 Jan 2000 14:48:01 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] How to ignore system indexes"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > There are fairly many places using system indexes. \n> > Probably I would be able to change them.\n> > But is it preferable or possible to force other developers\n> > to take ignore_system_indexes mode into account ?\n> \n> Is it really necessary to touch all those places?\n> \n> Seems to me that if a person needs to rebuild system indexes,\n> he would be firing up a standalone backend and running\n> REINDEX --- and darn little else. As long as none of the\n> support code required by REINDEX insists on using an index,\n> it doesn't matter what the rest of the system requires.\n>\n\nOK,I would limit changes only for REINDEX command.\n \n> You might even think about doing the reindex in bootstrap mode,\n> though I don't know if that would be easier or harder.\n>\n\nYes,bootstrap mode is a natural selection. Jan has already tried\nit and there was a problem of time quliafication. I don't know it is\na big obstacle or not. I prefer standalone postgres because\nthere's a possibility to call various SQL commands together\nwith REINDEX command.\nOf cource,time qualification is no longer a problem in standalone\npostgres.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 18 Jan 2000 14:48:19 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] How to ignore system indexes "
},
{
"msg_contents": " -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n>\n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > > -----Original Message-----\n> > > From: Bruce Momjian [mailto:[email protected]]\n> > >\n> > > > > One solution is to use pg_upgrade. It allows an initdb and\n> > > recreate of\n> > > > > all tables without reload.\n> > > > > --\n> > > >\n> > > > Isn't it a big charge to execute pg_upgrade for a huge database ?\n> > > > I have never used pg_upgrade.\n> > > > Is pg_upgrade available now ?\n> > > > Is pg_upgrade reliable ?\n> > >\n> > > It has been around since 6.3? It allows initdb, recreates the tables,\n> > > then moves the data files back into place. There is even a\n> manual page.\n> > >\n> >\n> > I know the command but does 6.5 have it ?\n>\n> Sure, but it is disabled in 6.5 because we changed the binary table\n> format from 6.4 to 6.5. However, I have already recommended people use\n> it who have broken system indexes, and it worked.\n>\n\nIt seems pg_upgrade is too complicated to recover system indexes.\nIn addtion,could pg_upgrade/pg_dump/vacuum etc ... work even when\na critical system index is broken ?\n\nRegards.\n\nHiroshi Inoue\[email protected].\n\n\n",
"msg_date": "Tue, 18 Jan 2000 15:58:23 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] How to ignore system indexes"
},
{
"msg_contents": "> > Sure, but it is disabled in 6.5 because we changed the binary table\n> > format from 6.4 to 6.5. However, I have already recommended people use\n> > it who have broken system indexes, and it worked.\n> >\n> \n> It seems pg_upgrade is too complicated to recover system indexes.\n> In addtion,could pg_upgrade/pg_dump/vacuum etc ... work even when\n> a critical system index is broken ?\n\nYes, pg_dumpall -s may not work with broken system indexes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 13:08:21 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How to ignore system indexes"
}
] |
[
{
"msg_contents": "This message was sent from Geocrawler.com by \"Adam Walczykiewicz\" <[email protected]>\nBe sure to reply to that address.\n\nHi\nIs possible to send a record(more than 8 fields) \nto the function\nin plpgsql then modified it and send back???\nI've tried:... \nCREATE FUNCTION modrec(rec_client) RETURNS \nrec_client)\n(...)\nand call it\n SELECT modrec\n(name::rec_client,age::rec_client..._)\n\nbut it failed. The only method I found\nis to send record as a text and exctract it inside\n,modified and send back as a text.\nThanks for any sugesstions.\nAdam\n\n\nGeocrawler.com - The Knowledge Archive\n",
"msg_date": "Mon, 17 Jan 2000 18:04:55 -0800",
"msg_from": "\"Adam Walczykiewicz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "plpgsql -record in -record out"
}
] |
[
{
"msg_contents": "The multi-byte support in current had been broken for a while due to\nmissing compile flag in Makefile.global.in. I have just committed fix\nfor the problem, and should be ok now.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 18 Jan 2000 12:04:21 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "multi-byte support broken in current"
},
{
"msg_contents": "Huh? MULTIBYTE is defined in include/config.h, where it should be, when\nyou configure with --enable-multibyte. I explicitly removed the\n-DMULTIBYTE option from the compile line. Perhaps the portions of the code\nthat were \"broken\" need to include c.h? The -DMULTIBYTE thing ought to go.\n\nOn 2000-01-18, Tatsuo Ishii mentioned:\n\n> The multi-byte support in current had been broken for a while due to\n> missing compile flag in Makefile.global.in. I have just committed fix\n> for the problem, and should be ok now.\n> --\n> Tatsuo Ishii\n> \n> ************\n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 19 Jan 2000 00:43:36 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] multi-byte support broken in current"
},
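A tiny C illustration of the failure mode Peter describes: with -DMULTIBYTE gone from the compile line, the macro lives only in include/config.h, so a source file that misses the right header silently compiles the single-byte branch with no error at all.

    #include "postgres.h"   /* pulls in c.h and config.h, where
                             * --enable-multibyte defines MULTIBYTE;
                             * a file that skips this include sees the
                             * #else branch silently */

    #ifdef MULTIBYTE
    /* multi-byte aware code path (e.g. in the regex code) */
    #else
    /* single-byte fallback */
    #endif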
{
"msg_contents": "> Huh? MULTIBYTE is defined in include/config.h, where it should be, when\n> you configure with --enable-multibyte. I explicitly removed the\n> -DMULTIBYTE option from the compile line. Perhaps the portions of the code\n> that were \"broken\" need to include c.h? The -DMULTIBYTE thing ought to go.\n\nOh, I see. I reverted back my change to Makefile.global. Sorry for the\nconfusion.\n\nBTW, while running configure, --with-mb is silently ignored. This is\nnot good since users might believe multibyte has been enabled. I have\nadded checking for --with-mb. If it is specified, configure now prints\nan error message and exits.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 19 Jan 2000 10:48:50 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] multi-byte support broken in current"
},
{
"msg_contents": "Further investigation showed that header files where indeed missing in\nsome files, in particular the regex code. Is that what was broken? I fixed\nthat now and removed all the MBFLAGS business. Every file should include\n\"postgres.h\", so it grabs the #define MULTIBYTE 1 from include/config.h.\nIf you got any more problems with this, let me know and I'll help fix it.\n\nBtw., I tried running the multibyte regression tests, that didn't work so\nwell.\n\nOn 2000-01-18, Tatsuo Ishii mentioned:\n\n> The multi-byte support in current had been broken for a while due to\n> missing compile flag in Makefile.global.in. I have just committed fix\n> for the problem, and should be ok now.\n> --\n> Tatsuo Ishii\n> \n> ************\n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 19 Jan 2000 04:00:12 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] multi-byte support broken in current"
},
{
"msg_contents": "> Further investigation showed that header files where indeed missing in\n> some files, in particular the regex code. Is that what was broken? I fixed\n> that now and removed all the MBFLAGS business. Every file should include\n> \"postgres.h\", so it grabs the #define MULTIBYTE 1 from include/config.h.\n> If you got any more problems with this, let me know and I'll help fix it.\n\nI did not see such a problem. I just saw -DMULTIBYTE was missing and\nthought MB was broken (that was a misunderstanding). Anyway I have\ndone a cvs up and seen your fixes working well.\n\n> Btw., I tried running the multibyte regression tests, that didn't work so\n> well.\n\nYes, the regression test (src/test/regress) does not work with\nSQL_ASCII because test cases for it is missing in sql/. However, I'm\nnot certain now including the multibyte test cases in the regression\ntest suite is a good thing. Maybe src/test/mb is only the right place\nfor MB and we should remove MB stuffs from the regression.\n\nBTW, src/test/mb tests are broken due to the changes of psql output.\n\neuc_jp .. CREATE DATABASE\nok\nsjis .. ok\neuc_kr .. CREATE DATABASE\nfailed\neuc_cn .. CREATE DATABASE\nfailed\neuc_tw .. CREATE DATABASE\nfailed\nbig5 .. failed\nunicode .. CREATE DATABASE\nfailed\nmule_internal .. CREATE DATABASE\nfailed\n\nI have fixed for EUC_JP and SJIS cases and am going to fix rest of\nthem. But reading files written in languages that I never understand\nis really hard:-)\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 19 Jan 2000 13:40:54 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] multi-byte support broken in current"
}
] |
[
{
"msg_contents": "Hi,\n\nI have added two built-in functions:\n\tpg_char_to_encoding()\t-- convert encoding string to encoding id\n\tpg_encoding_to_char()\t-- convert encoding id to encoding string\n\nMain purpose for this is to allow psql -l to print encoding names\nrather than encoding ids (sample output from psql shown below) in a\nmultibyte enabled installation.\n\n List of databases\n Database | Owner | Encoding \n------------+---------+----------\n regression | t-ishii | EUC_JP\n template1 | t-ishii | EUC_JP\n test | t-ishii | EUC_JP\n(3 rows)\n\nThis is much more \"user friendly\" IMHO.\n\ninitdb required.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 18 Jan 2000 14:51:14 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_proc.h changed"
},
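A hedged C sketch of the lookup such built-ins need; the table below is truncated and its contents are illustrative, not a copy of the backend's encoding table:

    #include <stddef.h>
    #include <string.h>

    /* Illustration only: the id<->name mapping behind
     * pg_encoding_to_char() / pg_char_to_encoding(). */
    struct enc_entry
    {
        int         id;
        const char *name;
    };

    static const struct enc_entry enc_table[] = {
        {0, "SQL_ASCII"},
        {1, "EUC_JP"},
        {3, "EUC_KR"},
        /* ... */
    };

    static const char *
    encoding_to_name(int id)
    {
        size_t      i;

        for (i = 0; i < sizeof(enc_table) / sizeof(enc_table[0]); i++)
            if (enc_table[i].id == id)
                return enc_table[i].name;
        return "";              /* unknown encoding id */
    }

With the built-in exposed at the SQL level, psql -l can simply apply it to pg_database.encoding instead of printing the raw integer.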
{
"msg_contents": "I've been wondering about that. Great!\n\nOn 2000-01-18, Tatsuo Ishii mentioned:\n\n> Hi,\n> \n> I have added two built-in functions:\n> \tpg_char_to_encoding()\t-- convert encoding string to encoding id\n> \tpg_encoding_to_char()\t-- convert encoding id to encoding string\n> \n> Main purpose for this is to allow psql -l to print encoding names\n> rather than encoding ids (sample output from psql shown below) in a\n> multibyte enabled installation.\n> \n> List of databases\n> Database | Owner | Encoding \n> ------------+---------+----------\n> regression | t-ishii | EUC_JP\n> template1 | t-ishii | EUC_JP\n> test | t-ishii | EUC_JP\n> (3 rows)\n> \n> This is much more \"user friendly\" IMHO.\n> \n> initdb required.\n> --\n> Tatsuo Ishii\n> \n> ************\n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 19 Jan 2000 03:58:37 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_proc.h changed"
}
] |
[
{
"msg_contents": "I got the pgsql jdbc drivers and their names were jdbc6.5-1.1.jar and\njdbc6.5-1.2.jar. OK everything I've read refers to postgresql.jar\n\n\nI've tried adding the location of the files to the class path as well\nas renaming them, one by one to postgresql.jar and adding the file\nname to the classpath. Nothing seems to work. Now I'm not a java pro\nso perhaps something's wrong with my code or my understanding of what\nto do with the .jar file(s). Here's the code.....It compiles but\nnever gets past \"Failed to load postgresql driver\". Would greatly\nappreacieate any assistance........\n\nimport java.sql.*;\n\npublic class SelectApp {\n\tpublic static void main(String args[]) {\n\t\tString url = \"jdbc:postgresql://gina/testdb\";\n\t\t\n\t\ttry {\n\t\t\tClass.forName(\"postgresql.Driver\");\n\t\t}\n\t\tcatch(Exception e) {\n\t\t\tSystem.out.println(\"Failed to load postgresql\ndriver.\");\n\t\t\treturn;\n\t\t}\n\t\tSystem.out.println(\"Loaded driver successfully\");\n\t\ttry {\n\t\t\tConnection con =\nDriverManager.getConnection(url, \"\", \"\");\n\t\t\tStatement select = con.createStatement();\n\t\t\tResultSet result = select.executeQuery(\"select\n* from cities\");\n\n\t\t\tSystem.out.println(\"Got results:\");\n\t\t\twhile(result.next()) { // process results one\nrow at a time\n\t\t\t\tint key = result.getInt(1);\n\t\t\t\tString val = result.getString(2);\n\n\t\t\t\tSystem.out.println(\"key = \" + key);\n\t\t\t\tSystem.out.println(\"val = \" + val);\n\t\t\t}\n\t\t\tselect.close();\n\t\t\tcon.close();\n\t\t}\n\t\tcatch(Exception e) {\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n}\n\n",
"msg_date": "Tue, 18 Jan 2000 06:39:28 GMT",
"msg_from": "[email protected] (Micheal H.)",
"msg_from_op": true,
"msg_subject": "jdbc question"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm trying to implement REINDEX command.\n\nREINDEX operation itself is available everywhere and\nI've thought about applying it to VACUUM.\n.\nMy plan is as follows.\n\nAdd a new option to force index recreation in vacuum\nand if index recreation is specified.\n\n 1) invalidate all indexes of the target table\n 2) vacuum the target table(heap table only)\n 3) internal commit and truncation\n 4) recreate and validate all indexes of the table.\n\nThe problem is how to invalidate/validate indexes.\nOf cource natural way is to drop/create indexes but the\ndefinition of indexes would be lost in case of abort/crash.\nNow I'm inclined to use relhasindex of pg_class to\nvalidate/invalidate indexes of a table at once.\n \nI remember many people have referred to index recreation\nin vacuum.\n\nAny comment would be greatly appreciated.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 18 Jan 2000 18:18:52 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index recreation in vacuum"
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hi all,\n> \n> I'm trying to implement REINDEX command.\n> \n> REINDEX operation itself is available everywhere and\n> I've thought about applying it to VACUUM.\n\nThat is a good idea. Vacuuming of indexes can be very slow.\n\n> .\n> My plan is as follows.\n> \n> Add a new option to force index recreation in vacuum\n> and if index recreation is specified.\n\nCouldn't we auto-recreate indexes based on the number of tuples moved by\nvacuum, or do we update indexes as we move them?\n\n> \n> 1) invalidate all indexes of the target table\n> 2) vacuum the target table(heap table only)\n> 3) internal commit and truncation\n> 4) recreate and validate all indexes of the table.\n> \n> The problem is how to invalidate/validate indexes.\n> Of cource natural way is to drop/create indexes but the\n> definition of indexes would be lost in case of abort/crash.\n\nMy idea would be to create a new index that is a random index name. \nThen, do rename(), which is an atomic OS operation putting the new index\nfile in place of the old name. Seems that would work well.\n\n> Now I'm inclined to use relhasindex of pg_class to\n> validate/invalidate indexes of a table at once.\n\nThere are a few calls to CatalogIndexInsert() that know the system table they\nare using and know it has indexes, so it does not check that field. You\ncould add cases for that.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 13:21:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> >\n> > The problem is how to invalidate/validate indexes.\n> > Of cource natural way is to drop/create indexes but the\n> > definition of indexes would be lost in case of abort/crash.\n>\n> My idea would be to create a new index that is a random index name.\n> Then, do rename(), which is an atomic OS operation putting the new index\n> file in place of the old name. Seems that would work well.\n\nYes, but it can cause disk space problem for very large indices.\nMoreover, you need firts unlink old index file than do rename(),\nit is not atomic.\n\n May be better way is to create tmp file containing index description,\nundestandable for vacuum.\n\n--\nDmitry Samersoff, DM\\S\[email protected] http://devnull.wplus.net\n* There will come soft rains\n\n\n",
"msg_date": "Tue, 18 Jan 2000 22:30:34 +0300",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "[Charset koi8-r unsupported, filtering to ASCII...]\n> Bruce Momjian wrote:\n> \n> > >\n> > > The problem is how to invalidate/validate indexes.\n> > > Of cource natural way is to drop/create indexes but the\n> > > definition of indexes would be lost in case of abort/crash.\n> >\n> > My idea would be to create a new index that is a random index name.\n> > Then, do rename(), which is an atomic OS operation putting the new index\n> > file in place of the old name. Seems that would work well.\n> \n> Yes, but it can cause disk space problem for very large indices.\n\nWell, one would hope you have enough disk space free for that.\n\n> Moreover, you need firts unlink old index file than do rename(),\n> it is not atomic.\n\n\n rename - change the name of a file\n\n int\n rename(const char *from, const char *to);\n\n...\n Rename() causes the link named from to be renamed as to. If to exists, it\n is first removed. Both from and to must be of the same type (that is,\n both directories or both non-directories), and must reside on the same\n file system.\n\n> \n> May be better way is to create tmp file containing index description,\n> undestandable for vacuum.\n\nThat would work too. pg_dump call for just the index, and run it though\na pg_exec_desc() call to recreate.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 14:36:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
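A small C sketch of the swap Bruce is describing; the function and path names are invented for illustration. The point of the quoted man page is that rename() removes an existing target itself, so no separate unlink() is needed and there is no window in which neither file exists:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Sketch: build the new index under a temporary name, then
     * atomically put it in place of the old one.  rename() replaces
     * the target in a single operation on the same filesystem. */
    static int
    swap_index_file(const char *tmp_path, const char *index_path)
    {
        if (rename(tmp_path, index_path) != 0)
        {
            fprintf(stderr, "rename(%s, %s) failed: %s\n",
                    tmp_path, index_path, strerror(errno));
            return -1;
        }
        return 0;
    }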
{
"msg_contents": "> > My idea would be to create a new index that is a random index name.\n> > Then, do rename(), which is an atomic OS operation putting the new index\n> > file in place of the old name. Seems that would work well.\n> \n> Yes, but it can cause disk space problem for very large indices.\n> Moreover, you need firts unlink old index file than do rename(),\n> it is not atomic.\n> \n> May be better way is to create tmp file containing index description,\n> undestandable for vacuum.\n\nThe beauty of doing a temp index while keeping the old one is that you\ncan recover right away, and maybe allow the old index to be used while\nyou vacuum?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 14:42:24 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> [Charset koi8-r unsupported, filtering to ASCII...]\n> > Bruce Momjian wrote:\n> >\n> > > >\n> > > > The problem is how to invalidate/validate indexes.\n> > > > Of cource natural way is to drop/create indexes but the\n> > > > definition of indexes would be lost in case of abort/crash.\n> > >\n> > > My idea would be to create a new index that is a random index name.\n> > > Then, do rename(), which is an atomic OS operation putting the new index\n> > > file in place of the old name. Seems that would work well.\n> >\n> > Yes, but it can cause disk space problem for very large indices.\n>\n> Well, one would hope you have enough disk space free for that.\n\nAt least noticed by vacuum\n\n> ...\n> Rename() causes the link named from to be renamed as to. If to exists, it\n> is first removed. Both from and to must be of the same type (that is,\n\nOk. I agree.\n\n--\nDmitry Samersoff, DM\\S\[email protected] http://devnull.wplus.net\n* there will come soft rains\n\n\n",
"msg_date": "Tue, 18 Jan 2000 23:24:44 +0300",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "> > Well, one would hope you have enough disk space free for that.\n> \n> At least noticed by vacuum\n> \n> > ...\n> > Rename() causes the link named from to be renamed as to. If to exists, it\n> > is first removed. Both from and to must be of the same type (that is,\n> \n> Ok. I agree.\n> \n\nYou start to think this way when you start looking for conflicting\nsituations.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 15:45:47 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> The beauty of doing a temp index while keeping the old one is that you\n> can recover right away, and maybe allow the old index to be used while\n> you vacuum?\n\nHuh? You've got the whole table locked exclusively for the duration\nof the vacuum anyway.\n\nIn fact, the instant that vacuum does its internal commit, the old index\ncontents are actually *wrong*, and there is no possible value in keeping\nthem after that. Might as well blow them away and recover the disk\nspace for use in constructing the new indexes.\n\nAlso, I agree with Dmitry's concern about peak disk space usage. If\nwe are rebuilding large btree indexes then we are going to see about a\n2X-normal peak usage anyway, for the sort temp file and the new index.\nMaking it 3X instead is just asking for trouble. Especially since,\nif you fail to rebuild the index, you are left with a corrupt index;\nit doesn't agree with the vacuumed table...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jan 2000 17:32:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > The beauty of doing a temp index while keeping the old one is that you\n> > can recover right away, and maybe allow the old index to be used while\n> > you vacuum?\n> \n> Huh? You've got the whole table locked exclusively for the duration\n> of the vacuum anyway.\n> \n> In fact, the instant that vacuum does its internal commit, the old index\n> contents are actually *wrong*, and there is no possible value in keeping\n> them after that. Might as well blow them away and recover the disk\n> space for use in constructing the new indexes.\n\nOh, I thought the vacuum itself would use the index during processing.\n\n> \n> Also, I agree with Dmitry's concern about peak disk space usage. If\n> we are rebuilding large btree indexes then we are going to see about a\n> 2X-normal peak usage anyway, for the sort temp file and the new index.\n> Making it 3X instead is just asking for trouble. Especially since,\n> if you fail to rebuild the index, you are left with a corrupt index;\n> it doesn't agree with the vacuumed table...\n\nOK.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 17:36:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n>\n> > Bruce Momjian <[email protected]> writes:\n> > > The beauty of doing a temp index while keeping the old one is that you\n> > > can recover right away, and maybe allow the old index to be used while\n> > > you vacuum?\n> >\n> > Huh? You've got the whole table locked exclusively for the duration\n> > of the vacuum anyway.\n> >\n> > In fact, the instant that vacuum does its internal commit, the old index\n> > contents are actually *wrong*, and there is no possible value in keeping\n> > them after that. Might as well blow them away and recover the disk\n> > space for use in constructing the new indexes.\n>\n> Oh, I thought the vacuum itself would use the index during processing.\n>\n\nIt's a big charge for vacuum to keep consistency between heap table and\nindexes. The main point of index recreation in vacuum is to invalidate the\nindexes of the target table. Temp indexes or renaming technique is no\nlonger needed once indexes are invalidated.\n\nOnce again,how to invalidate/validate indexes ?\n\nI want to avoid dropping indexes because the definition is lost and\n'commit' is needed internally.\n\nMy proposal is to use relhasindex of pg_class.\nHow about ?\n relhasindex is true -- all indexes of the table are valid if the table has\n\t\t\t indexes.\n relhasindex is false -- either the table has no indexes or all indexes\n\t\t\tof the table are invalid\n\nCREATE INDEX/DROP INDEX/DROP TABLE/VACUUM/REINDEX\nwould be able to ignore relhasindex.\n\nAm I misusing relhasindex ?\n\nIf reindexing vacuum crashes,indexes of the target table would be invalid.\nTo recover indexes there would be 2 ways.\n1) vacuum again\n2) reindex the table\n\nNote that we would be able to REINDEX user tables under postmaster.\n\nComments ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n\n",
"msg_date": "Wed, 19 Jan 2000 09:35:58 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Index recreation in vacuum"
},
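A hedged C sketch of the pg_class update Hiroshi proposes; the function name and the exact calls are assumptions modeled on backend idioms of that era, not committed code:

    #include "postgres.h"
    #include "access/heapam.h"
    #include "catalog/catname.h"
    #include "catalog/pg_class.h"
    #include "utils/syscache.h"

    /* Illustrative sketch: validate/invalidate all indexes of a table
     * at once by flipping pg_class.relhasindex. */
    void
    set_relhasindex(Oid relid, bool hasindex)
    {
        Relation    pg_class;
        HeapTuple   tuple;

        pg_class = heap_openr(RelationRelationName, RowExclusiveLock);
        tuple = SearchSysCacheTupleCopy(RELOID, ObjectIdGetDatum(relid),
                                        0, 0, 0);
        ((Form_pg_class) GETSTRUCT(tuple))->relhasindex = hasindex;
        heap_update(pg_class, &tuple->t_self, tuple, NULL);
        /* the indexes on pg_class itself would need updating here too,
         * unless system indexes are being ignored */
        heap_close(pg_class, RowExclusiveLock);
    }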
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> \n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > Hi all,\n> > \n> > I'm trying to implement REINDEX command.\n> > \n> > REINDEX operation itself is available everywhere and\n> > I've thought about applying it to VACUUM.\n> \n> That is a good idea. Vacuuming of indexes can be very slow.\n> \n> > .\n> > My plan is as follows.\n> > \n> > Add a new option to force index recreation in vacuum\n> > and if index recreation is specified.\n> \n> Couldn't we auto-recreate indexes based on the number of tuples moved by\n> vacuum,\n\nYes,we could probably do it. But I'm not sure the availability of new\nvacuum.\n\nNew vacuum would give us a big advantage that\n1) Much faster than current if vacuum remove/moves many tuples.\n2) Does shrink index files\n\nBut in case of abort/crash\n1) couldn't choose index scan for the table\n2) unique constraints of the table would be lost\n\nI don't know how people estimate this disadvantage.\n \n> \n> > Now I'm inclined to use relhasindex of pg_class to\n> > validate/invalidate indexes of a table at once.\n> \n> There are a few calls to CatalogIndexInsert() that know the \n> system table they\n> are using and know it has indexes, so it does not check that field. You\n> could add cases for that.\n>\n\nI think there aren't so many places to check.\nI would examine it if my idea is OK.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Wed, 19 Jan 2000 10:13:40 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "> > > Add a new option to force index recreation in vacuum\n> > > and if index recreation is specified.\n> > \n> > Couldn't we auto-recreate indexes based on the number of tuples moved by\n> > vacuum,\n> \n> Yes,we could probably do it. But I'm not sure the availability of new\n> vacuum.\n> \n> New vacuum would give us a big advantage that\n> 1) Much faster than current if vacuum remove/moves many tuples.\n> 2) Does shrink index files\n> \n> But in case of abort/crash\n> 1) couldn't choose index scan for the table\n> 2) unique constraints of the table would be lost\n> \n> I don't know how people estimate this disadvantage.\n\nThat's why I was recommending rename(). The actual window of\nvunerability goes from perhaps hours to fractions of a second.\n\nIn fact, if I understand this right, you could make the vulerability\nzero by just performing the rename as one operation.\n\nIn fact, for REINDEX cases where you don't have a lock on the entire\ntable as you do in vacuum, you could reindex the table with a simple\nread-lock on the base table and index, and move the new index into place\nwith the users seeing no change. Only people traversing the index\nduring the change would have a problem. You just need an exclusive\naccess on the index for the duration of the rename() so no one is\ntraversing the index during the rename().\n\nDestroying the index and recreating opens a large time span that there\nis no index, and you have to jury-rig something so people don't try to\nuse the index. With rename() you just put the new index in place with\none operation. Just don't let people traverse the index during the\nchange. The pointers to the heap tuples is the same in both indexes.\n\nIn fact, with WAL, we will allow multiple physical files for the same\ntable by appending the table oid to the file name. In this case, the\nold index could be deleted by rename, and people would continue to use\nthe old index until they deleted the open file pointers. Not sure how\nthis works in practice because new tuples would not be inserted into the\nold copy of the index.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 20:50:50 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "> > I don't know how people estimate this disadvantage.\n> \n> That's why I was recommending rename(). The actual window of\n> vunerability goes from perhaps hours to fractions of a second.\n> \n> In fact, if I understand this right, you could make the vulerability\n> zero by just performing the rename as one operation.\n> \n> In fact, for REINDEX cases where you don't have a lock on the entire\n> table as you do in vacuum, you could reindex the table with a simple\n> read-lock on the base table and index, and move the new index into place\n> with the users seeing no change. Only people traversing the index\n> during the change would have a problem. You just need an exclusive\n> access on the index for the duration of the rename() so no one is\n> traversing the index during the rename().\n> \n> Destroying the index and recreating opens a large time span that there\n> is no index, and you have to jury-rig something so people don't try to\n> use the index. With rename() you just put the new index in place with\n> one operation. Just don't let people traverse the index during the\n> change. The pointers to the heap tuples is the same in both indexes.\n> \n> In fact, with WAL, we will allow multiple physical files for the same\n> table by appending the table oid to the file name. In this case, the\n> old index could be deleted by rename, and people would continue to use\n> the old index until they deleted the open file pointers. Not sure how\n> this works in practice because new tuples would not be inserted into the\n> old copy of the index.\n\nMaybe I am all wrong here. Maybe most of the advantage of rename() are\nmeaningless with reindex using during vacuum, which is the most\nimportant use of reindex.\n\nLet's look at index using during vacuum. Right now, how does vacuum\nhandle indexes when it moves a tuple? Does it do each index update as\nit moves a tuple? Is that why it is so slow?\n\nIf we don't do that and vacuum fails, what state is the table left in? \nIf we don't update the index for every tuple, the index is invalid in a\nvacuum failure. rename() is not going to help us here. It keeps the\nold index around, but the index is invalid anyway, right?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 21:04:11 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n>\n> > > I don't know how people estimate this disadvantage.\n> >\n> > That's why I was recommending rename(). The actual window of\n> > vunerability goes from perhaps hours to fractions of a second.\n> >\n> > In fact, if I understand this right, you could make the vulerability\n> > zero by just performing the rename as one operation.\n> >\n> > In fact, for REINDEX cases where you don't have a lock on the entire\n> > table as you do in vacuum, you could reindex the table with a simple\n> > read-lock on the base table and index, and move the new index into place\n> > with the users seeing no change. Only people traversing the index\n> > during the change would have a problem. You just need an exclusive\n> > access on the index for the duration of the rename() so no one is\n> > traversing the index during the rename().\n> >\n> > Destroying the index and recreating opens a large time span that there\n> > is no index, and you have to jury-rig something so people don't try to\n> > use the index. With rename() you just put the new index in place with\n> > one operation. Just don't let people traverse the index during the\n> > change. The pointers to the heap tuples is the same in both indexes.\n> >\n> > In fact, with WAL, we will allow multiple physical files for the same\n> > table by appending the table oid to the file name. In this case, the\n> > old index could be deleted by rename, and people would continue to use\n> > the old index until they deleted the open file pointers. Not sure how\n> > this works in practice because new tuples would not be inserted into the\n> > old copy of the index.\n>\n> Maybe I am all wrong here. Maybe most of the advantage of rename() are\n> meaningless with reindex using during vacuum, which is the most\n> important use of reindex.\n>\n> Let's look at index using during vacuum. Right now, how does vacuum\n> handle indexes when it moves a tuple? Does it do each index update as\n> it moves a tuple? Is that why it is so slow?\n>\n\nYes,I believe so. It's necessary to keep consistency between heap\ntable and indexes even in case of abort/crash.\nAs far as I see,it has been a big charge for vacuum.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Wed, 19 Jan 2000 11:23:55 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "> > > In fact, for REINDEX cases where you don't have a lock on the entire\n> > > table as you do in vacuum, you could reindex the table with a simple\n> > > read-lock on the base table and index, and move the new index into place\n> > > with the users seeing no change. Only people traversing the index\n> > > during the change would have a problem. You just need an exclusive\n> > > access on the index for the duration of the rename() so no one is\n> > > traversing the index during the rename().\n> > >\n> > > Destroying the index and recreating opens a large time span that there\n> > > is no index, and you have to jury-rig something so people don't try to\n> > > use the index. With rename() you just put the new index in place with\n> > > one operation. Just don't let people traverse the index during the\n> > > change. The pointers to the heap tuples is the same in both indexes.\n> > >\n> > > In fact, with WAL, we will allow multiple physical files for the same\n> > > table by appending the table oid to the file name. In this case, the\n> > > old index could be deleted by rename, and people would continue to use\n> > > the old index until they deleted the open file pointers. Not sure how\n> > > this works in practice because new tuples would not be inserted into the\n> > > old copy of the index.\n> >\n> > Maybe I am all wrong here. Maybe most of the advantage of rename() are\n> > meaningless with reindex using during vacuum, which is the most\n> > important use of reindex.\n> >\n> > Let's look at index using during vacuum. Right now, how does vacuum\n> > handle indexes when it moves a tuple? Does it do each index update as\n> > it moves a tuple? Is that why it is so slow?\n> >\n> \n> Yes,I believe so. It's necessary to keep consistency between heap\n> table and indexes even in case of abort/crash.\n> As far as I see,it has been a big charge for vacuum.\n\nOK, how about making a copy of the heap table before starting vacuum,\nmoving all the tuples in that copy, create new index, and then move the\nnew heap and indexes over the old version. We already have an exclusive\nlock on the table. That would be 100% reliable, with the disadvantage\nof using 2x the disk space. Seems like a big win.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 21:45:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "> > Let's look at index using during vacuum. Right now, how does vacuum\n> > handle indexes when it moves a tuple? Does it do each index update as\n> > it moves a tuple? Is that why it is so slow?\n> >\n> \n> Yes,I believe so. It's necessary to keep consistency between heap\n> table and indexes even in case of abort/crash.\n> As far as I see,it has been a big charge for vacuum.\n\nIn fact, maybe we just need to look at the ability to recreate the\nentire table/index in one big function. We could do a sequential scan\nof the table and if we find > X number of rows that are expired, we\ndecide to do a full recreate of the table with all new indexes vs.\ndoing a vacuum. This seems to be the core of what the REINDEX function\nis doing anyway. \n\nIn fact, I wonder if we should enable a % parameter to VACUUM, so vacuum\ndoes something only of X% of the disk space will be saved by the vacuum.\nCurrently if someone deletes the first row of a able, every row is moved\nto save a few bytes of disk space. That is certainly a waste, and\ntelling people they have to vacuum every night is probably a waste in\nmost cases too, but we don't give administrators the ability to control\nwhen a vacuum is a good idea.\n\nWe could get ALTER TABLE DROP COLUMN working too by recreating the table\nwithout the dropped column.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 21:58:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "> I heard from someone that old vacuum had been like so.\n> Probably 2x disk space for big tables was a big disadvantage.\n\nThat's interesting.\n\n> \n> In addition,rename(),unlink(),mv aren't preferable for transaction\n> control as far as I see. We couldn't avoid inconsistency using\n> those OS functions.\n\nI disagree. Vacuum can't be rolled back anyway in the sense you can\nbring back expire tuples, though I have no idea why you would want to.\n\nYou have an exclusive lock on the table. Putting new heap/indexes in\nplace that match and have no expired tuples seems like it can not fail\nin any situation.\n\nOf course, the buffers of the old table have to be marked as invalid,\nbut with an exclusive lock, that is not a problem. I am sure we do that\nanyway�in vacuum.\n\n> We have to wait the change of relation file naming if copying\n> vacuum is needed.\n> Under the spec we need not rename(),mv etc.\n\nSorry, I don't agree, yet...\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 22:08:25 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> > >\n> > > Maybe I am all wrong here. Maybe most of the advantage of\n> rename() are\n> > > meaningless with reindex using during vacuum, which is the most\n> > > important use of reindex.\n> > >\n> > > Let's look at index using during vacuum. Right now, how does vacuum\n> > > handle indexes when it moves a tuple? Does it do each index update as\n> > > it moves a tuple? Is that why it is so slow?\n> > >\n> >\n> > Yes,I believe so. It's necessary to keep consistency between heap\n> > table and indexes even in case of abort/crash.\n> > As far as I see,it has been a big charge for vacuum.\n>\n> OK, how about making a copy of the heap table before starting vacuum,\n> moving all the tuples in that copy, create new index, and then move the\n> new heap and indexes over the old version. We already have an exclusive\n> lock on the table. That would be 100% reliable, with the disadvantage\n> of using 2x the disk space. Seems like a big win.\n>\n\nI heard from someone that old vacuum had been like so.\nProbably 2x disk space for big tables was a big disadvantage.\n\nIn addition,rename(),unlink(),mv aren't preferable for transaction\ncontrol as far as I see. We couldn't avoid inconsistency using\nthose OS functions.\nWe have to wait the change of relation file naming if copying\nvacuum is needed.\nUnder the spec we need not rename(),mv etc.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n\n",
"msg_date": "Wed, 19 Jan 2000 12:10:32 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "> In addition,rename(),unlink(),mv aren't preferable for transaction\n> control as far as I see. We couldn't avoid inconsistency using\n> those OS functions.\n> We have to wait the change of relation file naming if copying\n> vacuum is needed.\n> Under the spec we need not rename(),mv etc.\n\nAre you worried the system may crash in the middle of renaming one\ntable, but not the indexes. That would be a serious problem.\n\nI see now. I can't think of a way around that. The rename() itself is\natomic.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 22:22:00 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> > > Yes,I believe so. It's necessary to keep consistency between heap\n> > > table and indexes even in case of abort/crash.\n> > > As far as I see,it has been a big charge for vacuum.\n> >\n> > OK, how about making a copy of the heap table before starting vacuum,\n> > moving all the tuples in that copy, create new index, and then move the\n> > new heap and indexes over the old version. We already have an exclusive\n> > lock on the table. That would be 100% reliable, with the disadvantage\n> > of using 2x the disk space. Seems like a big win.\n> >\n> \n> I heard from someone that old vacuum had been like so.\n> Probably 2x disk space for big tables was a big disadvantage.\n\nYes, It is critical.\n\nHow about sequence like this:\n\n* Drop indices (keeping somewhere index descriptions)\n* vacuuming table\n* recreate indices\n\nIf something crash, user have been noticed \nto re-run vacuum or recreate indices by hand \nwhen system restarts.\n\nI use script like described above for vacuuming\n - it really increase vacuum performance for large table.\n\n\n-- \nDmitry Samersoff, DM\\S\[email protected] http://devnull.wplus.net\n* there will come soft rains\n",
"msg_date": "Thu, 20 Jan 2000 00:29:01 +0300",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "> > I heard from someone that old vacuum had been like so.\n> > Probably 2x disk space for big tables was a big disadvantage.\n> \n> Yes, It is critical.\n> \n> How about sequence like this:\n> \n> * Drop indices (keeping somewhere index descriptions)\n> * vacuuming table\n> * recreate indices\n> \n> If something crash, user have been noticed \n> to re-run vacuum or recreate indices by hand \n> when system restarts.\n> \n> I use script like described above for vacuuming\n> - it really increase vacuum performance for large table.\n\nWe need two things:\n\n\tauto-create index on startup\n\tallow vacuum to run only if number of tuples superceeded > X %\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Jan 2000 16:32:25 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> We need two things:\n> \n\n> auto-create index on startup\n\nIMHO, It have to be controlled by user, because creating large index \ncan take a number of hours. Sometimes it's better to live without\nindices\nat all, and then build it by hand after workday end.\n\n\n-- \nDmitry Samersoff, DM\\S\[email protected] http://devnull.wplus.net\n* there will come soft rains\n",
"msg_date": "Thu, 20 Jan 2000 00:41:26 +0300",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
{
"msg_contents": "[Charset koi8-r unsupported, filtering to ASCII...]\n> Bruce Momjian wrote:\n> > \n> > We need two things:\n> > \n> \n> > auto-create index on startup\n> \n> IMHO, It have to be controlled by user, because creating large index \n> can take a number of hours. Sometimes it's better to live without\n> indices\n> at all, and then build it by hand after workday end.\n\nOK, full circle time. That is why I recommended making a separate new\nheap and index and using rename() to move them into place once the\nvacuum is completed. In a failure during vacuum, the failed vacuum files\nshould be just removed on startup. No downtime, and index is in place.\n\nAlso, I thought about how to do rename() of multiple tables atomically. \nMy idea would be to have a pg_startup table that contains information\nabout what operations should be performed on startup. You could write\nto the file in an atomic action, and if there was a failure, on startup,\nthe file could be read and the operations performed. We would basically\nbe using our own transaction system to guarantee file system atomicity.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Jan 2000 16:51:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index recreation in vacuum"
},
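A rough sketch of that "record intentions, then act" journal idea, assuming a plain text file rather than a real pg_startup table; every name here is made up for illustration:

/*
 * Write the list of pending renames to a journal, force it to disk,
 * perform the renames, then remove the journal.  If we crash partway
 * through, startup code that finds the journal can simply redo the
 * renames; redoing an already-completed rename just fails harmlessly
 * because the source file is gone.
 */
#include <stdio.h>
#include <unistd.h>

static int
rename_group(const char *journal, const char *pairs[][2], int n)
{
    FILE *fp = fopen(journal, "w");
    int i;

    if (fp == NULL)
        return -1;
    for (i = 0; i < n; i++)
        fprintf(fp, "%s %s\n", pairs[i][0], pairs[i][1]);
    fflush(fp);
    fsync(fileno(fp));          /* journal must hit disk first */
    fclose(fp);

    for (i = 0; i < n; i++)
        rename(pairs[i][0], pairs[i][1]);   /* each rename is atomic */

    unlink(journal);            /* all done: forget the intentions */
    return 0;
}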
{
"msg_contents": "> -----Original Message-----\n> From: Dmitry Samersoff [mailto:[email protected]]\n>\n> Hiroshi Inoue wrote:\n> > > > Yes,I believe so. It's necessary to keep consistency between heap\n> > > > table and indexes even in case of abort/crash.\n> > > > As far as I see,it has been a big charge for vacuum.\n> > >\n> > > OK, how about making a copy of the heap table before starting vacuum,\n> > > moving all the tuples in that copy, create new index, and\n> then move the\n> > > new heap and indexes over the old version. We already have\n> an exclusive\n> > > lock on the table. That would be 100% reliable, with the disadvantage\n> > > of using 2x the disk space. Seems like a big win.\n> > >\n> >\n> > I heard from someone that old vacuum had been like so.\n> > Probably 2x disk space for big tables was a big disadvantage.\n>\n> Yes, It is critical.\n>\n> How about sequence like this:\n>\n> * Drop indices (keeping somewhere index descriptions)\n> * vacuuming table\n> * recreate indices\n>\n\nYes,my idea is almost same.\nI won't drop indices but make them invisible in a sense.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Thu, 20 Jan 2000 08:19:26 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Index recreation in vacuum"
}
] |
[
{
"msg_contents": "I just tried recompiling the latest source for the first time in weeks, but\nit appears I cannot compile pgsql. My source should be up-to-date since I\nuse cvsup everytime I go online. But during compile I get:\n\ngcc -I../../interfaces/libpq -I../../include -I../../backend -O2 -Wall -Wmissing-prototypes -c -o common.o common.c\nIn file included from ../../include/postgres.h:41,\n from ../../interfaces/libpq/pqsignal.h:20,\n from common.c:23:\n../../include/utils/mcxt.h:25: syntax error before `MemoryContext'\nmake[2]: *** [common.o] Error 1\n\nIs this a problem on my site? \n\nMichael \n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 18 Jan 2000 13:58:40 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cannot compile psql"
},
{
"msg_contents": "> I just tried recompiling the latest source for the first time in weeks, but\n> it appears I cannot compile pgsql. My source should be up-to-date since I\n> use cvsup everytime I go online. But during compile I get:\n> \n> gcc -I../../interfaces/libpq -I../../include -I../../backend -O2 -Wall -Wmissing-prototypes -c -o common.o common.c\n> In file included from ../../include/postgres.h:41,\n> from ../../interfaces/libpq/pqsignal.h:20,\n> from common.c:23:\n> ../../include/utils/mcxt.h:25: syntax error before `MemoryContext'\n> make[2]: *** [common.o] Error 1\n> \n> Is this a problem on my site? \n\nI assume you have configured --with-mb=something *before*\nmake. --with-mb does not seem to work anymore. Try:\n\nmake distclean\n./configure --enable-multibyte[=something]\n\nSeems we have to warn if --with-mb is used, otherwise multibyte users\nwould complain about it.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 18 Jan 2000 22:54:33 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cannot compile psql"
},
{
"msg_contents": "> I just tried recompiling the latest source for the first time in weeks, but\n> it appears I cannot compile pgsql. My source should be up-to-date since I\n> use cvsup everytime I go online. But during compile I get:\n\nSorry, my previous mail was wrong.\n\nYou are compiling psql, right? I do not see any problem on my Linux\nbox (based on RH 5.2). Maybe make distclean and re-configure would\nhelp.\n--\nTatsuo Ishii\n\n",
"msg_date": "Tue, 18 Jan 2000 23:01:30 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cannot compile psql"
},
{
"msg_contents": "On Tue, Jan 18, 2000 at 11:01:30PM +0900, Tatsuo Ishii wrote:\n> Sorry, my previous mail was wrong.\n\nNo problem.\n\n> You are compiling psql, right? I do not see any problem on my Linux\n\nYes, as part of the complete source tree.\n\n> box (based on RH 5.2). Maybe make distclean and re-configure would\n> help.\n\nI will try it and tell you. :-)\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 18 Jan 2000 16:31:06 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Cannot compile psql"
},
{
"msg_contents": "Michael Meskes <[email protected]> writes:\n> ../../include/utils/mcxt.h:25: syntax error before `MemoryContext'\n\nThat line reads\n\nextern DLLIMPORT MemoryContext CurrentMemoryContext;\n\nI'll bet you are trying to compile with a Windoze-oriented config.h\nthat causes DLLIMPORT to expand to something nonempty. Try\nreconfiguring.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jan 2000 10:51:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cannot compile psql "
},
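For anyone following along, the kind of conditional definition under discussion looks roughly like this (a simplified sketch; the exact test in include/c.h may differ, and SomeSharedVariable is a hypothetical example):

/* Sketch: DLLIMPORT should expand to nothing everywhere except under
 * Cygwin, where variables imported from a DLL need the __declspec
 * decoration. */
#ifdef __CYGWIN32__
#define DLLIMPORT __declspec(dllimport)
#else
#define DLLIMPORT
#endif

extern DLLIMPORT int SomeSharedVariable;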
{
"msg_contents": "On Tue, Jan 18, 2000 at 11:01:30PM +0900, Tatsuo Ishii wrote:\n> You are compiling psql, right? I do not see any problem on my Linux\n> box (based on RH 5.2). Maybe make distclean and re-configure would\n> help.\n\nUnfortunately no. It didn't help.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 18 Jan 2000 17:52:25 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Cannot compile psql"
},
{
"msg_contents": "On 2000-01-18, Tatsuo Ishii mentioned:\n\n> I assume you have configured --with-mb=something *before*\n> make. --with-mb does not seem to work anymore. Try:\n> \n> make distclean\n> ./configure --enable-multibyte[=something]\n> \n> Seems we have to warn if --with-mb is used, otherwise multibyte users\n> would complain about it.\n\nI wanted to do that, but as soon as you put that option back in it shows\nup in --help output, which is against the point of it being deprecated.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 19 Jan 2000 03:59:50 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cannot compile psql"
},
{
"msg_contents": "On Tue, Jan 18, 2000 at 10:51:48AM -0500, Tom Lane wrote:\n> Michael Meskes <[email protected]> writes:\n> > ../../include/utils/mcxt.h:25: syntax error before `MemoryContext'\n> \n> That line reads\n> \n> extern DLLIMPORT MemoryContext CurrentMemoryContext;\n\nYes. Commenting out this line removes the message but of course gives some\ncompiler errors.\n\n> I'll bet you are trying to compile with a Windoze-oriented config.h\n\nShouldn't have that since I only run PostgreSQL in Linux.\n\n> that causes DLLIMPORT to expand to something nonempty. Try\n> reconfiguring.\n\nI tried make distclean followed by a new configure to no avail.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 19 Jan 2000 08:45:46 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Cannot compile psql"
},
{
"msg_contents": "Michael Meskes <[email protected]> writes:\n> On Tue, Jan 18, 2000 at 10:51:48AM -0500, Tom Lane wrote:\n>> I'll bet you are trying to compile with a Windoze-oriented config.h\n>> that causes DLLIMPORT to expand to something nonempty. Try\n>> reconfiguring.\n\n> I tried make distclean followed by a new configure to no avail.\n\nHmm. Look at the bottom of include/c.h --- it should be impossible\nfor DLLIMPORT to be nonempty unless __CYGWIN32__ is defined. So where\nis that definition coming from?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jan 2000 09:55:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cannot compile psql "
},
{
"msg_contents": "On Wed, Jan 19, 2000 at 09:55:45AM -0500, Tom Lane wrote:\n> Hmm. Look at the bottom of include/c.h --- it should be impossible\n> for DLLIMPORT to be nonempty unless __CYGWIN32__ is defined. So where\n> is that definition coming from?\n\nI have no idea. But I will check. Also I think that this include file is\nused elsewhere too. But the error message does come up only once.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 19 Jan 2000 18:01:54 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cannot compile psql"
},
{
"msg_contents": "Michael Meskes <[email protected]> writes:\n> I have no idea. But I will check. Also I think that this include file is\n> used elsewhere too. But the error message does come up only once.\n\nOh, that's interesting, because there are a dozen or so files that\nreference DLLIMPORT. Maybe you should be looking at what is included\nby the file with the error (and not by any other files...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jan 2000 17:29:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cannot compile psql "
}
] |
[
{
"msg_contents": "I've just send a patch to gram.y to allow FETCH without FROM/IN to the\npatches list. Since I do not have the complete source checked out I cannot\ncommit it myself.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 18 Jan 2000 14:01:14 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "FETCH syntax"
}
] |
[
{
"msg_contents": "It seems I'm not allowed to send to the patches list. Does it make sense to\nhave this list being closed?\n\n----- Forwarded message from [email protected] -----\n\nFrom: [email protected]\nDate: Tue, 18 Jan 2000 08:04:35 -0500 (EST)\nTo: [email protected], [email protected]\nSubject: BOUNCE [email protected]: Non-member submission from [Michael Meskes <[email protected]>] \n\n>From bouncefilter Tue Jan 18 08:04:25 2000\nReceived: from mail.ngi.de ([195.243.0.227])\n\tby hub.org (8.9.3/8.9.3) with ESMTP id IAA33568\n\tfor <[email protected]>; Tue, 18 Jan 2000 08:03:25 -0500 (EST)\n\t(envelope-from [email protected])\nReceived: from p3E9C1460.dip0.t-ipconnect.de ([email protected] [62.156.20.96])\n\tby mail.ngi.de (8.9.3/8.9.3) with ESMTP id OAA31906\n\tfor <[email protected]>; Tue, 18 Jan 2000 14:05:08 +0100 (CET)\nReceived: (from michael@localhost)\n\tby tanja.fam-meskes.de (8.9.3/8.9.3/Debian 8.9.3-6) id OAA01598\n\tfor [email protected]; Tue, 18 Jan 2000 14:00:17 +0100\nDate: Tue, 18 Jan 2000 14:00:17 +0100\nFrom: Michael Meskes <[email protected]>\nTo: PostgreSQL Patches <[email protected]>\nSubject: FETCH syntax\nMessage-ID: <[email protected]>\nMime-Version: 1.0\nContent-Type: multipart/mixed; boundary=\"AhhlLboLdkugWU4S\"\nUser-Agent: Mutt/1.0i\n\n...\n\n----- End forwarded message -----\n\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 18 Jan 2000 16:27:03 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "patches list"
},
{
"msg_contents": "On Tue, 18 Jan 2000, Michael Meskes wrote:\n\n> It seems I'm not allowed to send to the patches list. Does it make sense to\n> have this list being closed?\n\nsubscribe to pgsql-loophole and you'll be able to post to any of the\nlists.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 18 Jan 2000 13:23:59 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] patches list"
},
{
"msg_contents": "On Tue, Jan 18, 2000 at 01:23:59PM -0500, Vince Vielhaber wrote:\n> subscribe to pgsql-loophole and you'll be able to post to any of the\n> lists.\n\nHmm, I though I was on that list. Maybe my mail was send from the wrong\naddress. I usually do my PostgreSQL stuff from my postgres.org account.\nUnfortunately I cannot check because I deleted it.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 19 Jan 2000 08:47:10 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] patches list"
}
] |
[
{
"msg_contents": "Since this patch is not big I send it here instead. I do not have the\ncomplete source checked out so I cannot commit it myself.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!",
"msg_date": "Tue, 18 Jan 2000 16:29:29 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "FETCH syntax"
},
{
"msg_contents": "Applied.\n\n> Since this patch is not big I send it here instead. I do not have the\n> complete source checked out so I cannot commit it myself.\n> \n> Michael\n> -- \n> Michael Meskes | Go SF 49ers!\n> Th.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\n> Tel.: (+49) 2431/72651 | Use Debian GNU/Linux!\n> Email: [email protected] | Use PostgreSQL!\n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 14:07:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FETCH syntax"
}
] |
[
{
"msg_contents": "\n> Yes, that's about the sum of it. Why not the links? I think \n> that it's an elegant way of designing the whole thing.\n\nThe only problem with symlinks is, that it does not solve the \n\"too many files in one directory to give optimal performance\"\nproblem for those that have tons of tables.\n\nAndreas\n",
"msg_date": "Tue, 18 Jan 2000 17:15:06 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed (filepath as\n\tsymlink)"
},
{
"msg_contents": "> > Yes, that's about the sum of it. Why not the links? I think \n> > that it's an elegant way of designing the whole thing.\n> \n> The only problem with symlinks is, that it does not solve the \n> \"too many files in one directory to give optimal performance\"\n> problem for those that have tons of tables.\n> \n> Andreas\n\nIs that really a problem on modern operating systems? We could actually\nhash the file names into directory buckets and access them that way, and\nhave one directory that old symlinks to the hashed files.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 13:26:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed (filepath as\n\tsymlink)"
},
{
"msg_contents": "On Tue, 18 Jan 2000, Bruce Momjian wrote:\n\n> > > Yes, that's about the sum of it. Why not the links? I think \n> > > that it's an elegant way of designing the whole thing.\n> > \n> > The only problem with symlinks is, that it does not solve the \n> > \"too many files in one directory to give optimal performance\"\n> > problem for those that have tons of tables.\n> > \n> > Andreas\n> \n> Is that really a problem on modern operating systems? We could actually\n> hash the file names into directory buckets and access them that way, and\n> have one directory that old symlinks to the hashed files.\n\nPersonally, except in *exceptional* circumstances, I hate symlinks...that\nwas one of my first projects whenI started working at th eocal University,\nwas to get rid of the garbage \"revision control\" system they had for\npackages installed on the system ...\n\nIMHO, there have been several methods of doing this, some easier then\nothers, wihtout having to use symlinks for them ... can we *please* avoid\nusing them?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 18 Jan 2000 15:04:00 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed (filepath as\n\tsymlink)"
},
{
"msg_contents": "> > > > Yes, that's about the sum of it. Why not the links? I think\n> > > > that it's an elegant way of designing the whole thing.\n> > > The only problem with symlinks is, that it does not solve the\n> > > \"too many files in one directory to give optimal performance\"\n> > > problem for those that have tons of tables.\n> > Is that really a problem on modern operating systems? We could actually\n> > hash the file names into directory buckets and access them that way, and\n> > have one directory that old symlinks to the hashed files.\n> \n> imho symlinks is exactly the wrong way to head on this. If the system\n> needs to know the true location of something, then it may as well\n> refer to that location explicitly. Our storage manager should learn\n> how to deal with explicit locations, and we shouldn't implement this\n> just as a patch on the table creation code.\n> \n\nOK, no symlinks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 21:22:38 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed (filepath as \n\tsymlink)"
},
{
"msg_contents": "> > > Yes, that's about the sum of it. Why not the links? I think\n> > > that it's an elegant way of designing the whole thing.\n> > The only problem with symlinks is, that it does not solve the\n> > \"too many files in one directory to give optimal performance\"\n> > problem for those that have tons of tables.\n> Is that really a problem on modern operating systems? We could actually\n> hash the file names into directory buckets and access them that way, and\n> have one directory that old symlinks to the hashed files.\n\nimho symlinks is exactly the wrong way to head on this. If the system\nneeds to know the true location of something, then it may as well\nrefer to that location explicitly. Our storage manager should learn\nhow to deal with explicit locations, and we shouldn't implement this\njust as a patch on the table creation code.\n\nMy $.02...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 19 Jan 2000 02:26:43 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed (filepath as \n\tsymlink)"
},
{
"msg_contents": "At 02:26 AM 1/19/00 +0000, Thomas Lockhart wrote:\n\n>imho symlinks is exactly the wrong way to head on this. If the system\n>needs to know the true location of something, then it may as well\n>refer to that location explicitly. Our storage manager should learn\n>how to deal with explicit locations, and we shouldn't implement this\n>just as a patch on the table creation code.\n\nThat's my feeling too.\n\nWe could document the query needed to list where all tables\nare located, grouped by tablespace.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 18 Jan 2000 18:27:25 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed (filepath\n\tas symlink)"
}
] |
[
{
"msg_contents": "> The Windows makefile for psql is still in prehistoric state. Is there\n> someone who uses Visual C++ or whatever it was that could update it?\n> Otherwise psql will no longer be supported on non-cygwin systems!\n\nOk. I've done this, patch to follow on the patches list within a couple of\nminutes.\n\n//Magnus\n",
"msg_date": "Tue, 18 Jan 2000 18:30:17 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Help from a Windows programmer needed!"
}
] |
[
{
"msg_contents": "A while ago I played around with gperf (GNU perfect hash function\ngenerator), abusing the keyword lookup in parser/keyword.c as playground.\nNow before I delete this I was wondering if this would perhaps be of use\nto the general public. I don't know how huge the speed advantage of this\nis, I'm sure the parser/scanner speed is the least of our problems. But I\nthunk especially ecpg could benefit from this. Btw., gperf is used by GCC,\nso it's not a toy.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 19 Jan 2000 00:38:18 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "gperf anyone?"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> A while ago I played around with gperf (GNU perfect hash function\n> generator), abusing the keyword lookup in parser/keyword.c as playground.\n> Now before I delete this I was wondering if this would perhaps be of use\n> to the general public. I don't know how huge the speed advantage of this\n> is, I'm sure the parser/scanner speed is the least of our problems. But I\n> thunk especially ecpg could benefit from this. Btw., gperf is used by GCC,\n> so it's not a toy.\n\nkeywords are a fixed array, with a binary search to find a match. Could\ngperf be faster? We also can not distribute GNU code.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 19:36:00 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone?"
},
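For reference, the fixed-array binary search mentioned above amounts to something like this trimmed-down sketch (the keywords and token numbers here are placeholders, not the real keywords.c table):

#include <string.h>
#include <stddef.h>

typedef struct ScanKeyword
{
    char   *name;       /* keyword, lower case */
    int     value;      /* token number (placeholder values below) */
} ScanKeyword;

/* The table must stay in strcmp() order for the binary search. */
static ScanKeyword keywords[] = {
    {"delete", 1}, {"insert", 2}, {"select", 3}, {"update", 4},
};

static ScanKeyword *
keyword_lookup(const char *text)
{
    int low = 0;
    int high = sizeof(keywords) / sizeof(keywords[0]) - 1;

    while (low <= high)
    {
        int mid = (low + high) / 2;
        int cmp = strcmp(text, keywords[mid].name);

        if (cmp == 0)
            return &keywords[mid];
        else if (cmp < 0)
            high = mid - 1;
        else
            low = mid + 1;
    }
    return NULL;        /* not a keyword; treat as identifier */
}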
{
"msg_contents": "On Wed, 19 Jan 2000, Peter Eisentraut wrote:\n\n> A while ago I played around with gperf (GNU perfect hash function\n> generator), abusing the keyword lookup in parser/keyword.c as playground.\n> Now before I delete this I was wondering if this would perhaps be of use\n> to the general public. I don't know how huge the speed advantage of this\n> is, I'm sure the parser/scanner speed is the least of our problems. But I\n> thunk especially ecpg could benefit from this. Btw., gperf is used by GCC,\n> so it's not a toy.\n\nOkay, that post told me absolutely nothing :( What would we use it\nfor? What is its purpose?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 18 Jan 2000 20:39:25 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone?"
},
{
"msg_contents": "* Bruce Momjian <[email protected]> [000118 17:14] wrote:\n> [Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> > A while ago I played around with gperf (GNU perfect hash function\n> > generator), abusing the keyword lookup in parser/keyword.c as playground.\n> > Now before I delete this I was wondering if this would perhaps be of use\n> > to the general public. I don't know how huge the speed advantage of this\n> > is, I'm sure the parser/scanner speed is the least of our problems. But I\n> > thunk especially ecpg could benefit from this. Btw., gperf is used by GCC,\n> > so it's not a toy.\n> \n> keywords are a fixed array, with a binary search to find a match. Could\n> gperf be faster? \n\nyes:\n\n~ % gperf \n/* starting time is 21:10:49 */\npostgresql \nreally\nkicks\nbutt\n/* C code produced by gperf version 2.1 (K&R C version) */\n/* Command-line: gperf */\n\n\n\n#define MIN_WORD_LENGTH 4\n#define MAX_WORD_LENGTH 10\n#define MIN_HASH_VALUE 4\n#define MAX_HASH_VALUE 10\n/*\n 4 keywords\n 7 is the maximum key range\n*/\n\nstatic int\nhash (str, len)\n register char *str;\n register unsigned int len;\n{\n static unsigned char hash_table[] =\n {\n 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,\n 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,\n 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,\n 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,\n 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,\n 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,\n 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,\n 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,\n 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,\n 10, 10, 10, 10, 10, 10, 10, 10, 0, 10,\n 10, 10, 10, 10, 10, 10, 10, 0, 0, 10,\n 10, 10, 0, 10, 0, 0, 0, 10, 10, 10,\n 10, 0, 10, 10, 10, 10, 10, 10,\n };\n return len + hash_table[str[len - 1]] + hash_table[str[0]];\n}\n\nchar *\nin_word_set (str, len)\n register char *str;\n register unsigned int len;\n{\n\n static char * wordlist[] =\n {\n \"\", \"\", \"\", \"\", \n \"butt\", \n \"kicks\", \n \"really\", \n \"\", \"\", \"\", \n \"postgresql\", \n };\n\n if (len <= MAX_WORD_LENGTH && len >= MIN_WORD_LENGTH)\n {\n register int key = hash (str, len);\n\n if (key <= MAX_HASH_VALUE && key >= MIN_HASH_VALUE)\n {\n register char *s = wordlist[key];\n\n if (*s == *str && !strcmp (str + 1, s + 1))\n return s;\n }\n }\n return 0;\n}\n/* ending time is 21:10:58 */\n\nA perfect hash should be much faster at the trivial expense of some space.\n\n>From the distribution:\n While teaching a data structures course at University of California,\n Irvine, I developed a program called GPERF that generates perfect hash\n functions for sets of key words. A perfect hash function is simply:\n \n A hash function and a data structure that allows\n recognition of a key word in a set of words using\n exactly 1 probe into the data structure.\n\n\n> We also can not distribute GNU code.\n\nI'm pretty sure that the code the gperf outputs is not covered under the\nGPL, just gperf itself.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n",
"msg_date": "Tue, 18 Jan 2000 17:29:34 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone?"
},
{
"msg_contents": "At 07:36 PM 1/18/00 -0500, Bruce Momjian wrote:\n>[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n>> A while ago I played around with gperf (GNU perfect hash function\n>> generator), abusing the keyword lookup in parser/keyword.c as playground.\n>> Now before I delete this I was wondering if this would perhaps be of use\n>> to the general public. I don't know how huge the speed advantage of this\n>> is, I'm sure the parser/scanner speed is the least of our problems. But I\n>> thunk especially ecpg could benefit from this. Btw., gperf is used by GCC,\n>> so it's not a toy.\n>\n>keywords are a fixed array, with a binary search to find a match. Could\n>gperf be faster? We also can not distribute GNU code.\n\nI wondered about this last, i.e. the use of GNU code since Postgres\nis licensed differently.\n\nThe reality is that looking up keywords form a tiny fraction of the\ntime spent by any language system I can think of. The current binary\nsearch on a fixed array might be faster, might be slower than a perfect\nhash on a particular machine depending on the calculation done to\ndo the hashing.\n\nWhether faster or slower, though, I can't imagine either method taking\nnoticably more than 0% of the total time to process a query, even the\nmost simple queries.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 18 Jan 2000 17:49:28 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone?"
},
{
"msg_contents": "> A perfect hash should be much faster at the trivial expense of some space.\n> \n> >From the distribution:\n> While teaching a data structures course at University of California,\n> Irvine, I developed a program called GPERF that generates perfect hash\n> functions for sets of key words. A perfect hash function is simply:\n> \n> A hash function and a data structure that allows\n> recognition of a key word in a set of words using\n> exactly 1 probe into the data structure.\n> \n> \n> > We also can not distribute GNU code.\n> \n> I'm pretty sure that the code the gperf outputs is not covered under the\n> GPL, just gperf itself.\n> \n\nCan you run our keywords.c using our method and gperf and see if there\nis any speed difference?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 20:52:17 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone?"
},
{
"msg_contents": "> At 07:36 PM 1/18/00 -0500, Bruce Momjian wrote:\n> I wondered about this last, i.e. the use of GNU code since Postgres\n> is licensed differently.\n\nAFAIK this is no worse than using flex or bison --- the source code of\ngperf is GPL'ed, but its output is not.\n\nDon Baccus <[email protected]> writes:\n> Whether faster or slower, though, I can't imagine either method taking\n> noticably more than 0% of the total time to process a query, even the\n> most simple queries.\n\nI agree with Don that the performance benefit is likely to be\nunmeasurable. Still, there could be a win: we currently have to modify\nkeywords.c by hand every time we have to add/delete a keyword. Does\ngperf offer any aid for maintaining the keyword list? If so, that'd\nbe sufficient reason to switch to it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jan 2000 23:40:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone? "
},
{
"msg_contents": "* Tom Lane <[email protected]> [000118 21:10] wrote:\n> > At 07:36 PM 1/18/00 -0500, Bruce Momjian wrote:\n> > I wondered about this last, i.e. the use of GNU code since Postgres\n> > is licensed differently.\n> \n> AFAIK this is no worse than using flex or bison --- the source code of\n> gperf is GPL'ed, but its output is not.\n> \n> Don Baccus <[email protected]> writes:\n> > Whether faster or slower, though, I can't imagine either method taking\n> > noticably more than 0% of the total time to process a query, even the\n> > most simple queries.\n> \n> I agree with Don that the performance benefit is likely to be\n> unmeasurable. Still, there could be a win: we currently have to modify\n> keywords.c by hand every time we have to add/delete a keyword. Does\n> gperf offer any aid for maintaining the keyword list? If so, that'd\n> be sufficient reason to switch to it...\n\nany minimal performance boost shows over time, unfortunatly using gperf\nwill require that you either:\n\na) require gperf to be installed on a system that compiles postgresql\nb) manually maintain the gperf compiled files in your CVS repo\n (sort of like syscalls in FreeBSD)\n\nOption B is not that bad at the expense of additional contributor\noverhead.\n\nI hope to be able to present some soft of bench to determine\nif gperf is worth the additional effort of maintainance in the\nnear future.\n\nin the meanwhile, happy hacking. :)\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n",
"msg_date": "Tue, 18 Jan 2000 21:32:49 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone?"
},
{
"msg_contents": "On Wed, Jan 19, 2000 at 12:38:18AM +0100, Peter Eisentraut wrote:\n> is, I'm sure the parser/scanner speed is the least of our problems. But I\n> thunk especially ecpg could benefit from this. Btw., gperf is used by GCC,\n\nWhy do you think ecpg would benefit more from it than the backend? :-)\nBoth parser are mostly the same. In fact running ecpg appears to use a lot\nless time than the subsequent gcc run.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 19 Jan 2000 08:42:05 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone?"
},
{
"msg_contents": "On Tue, Jan 18, 2000 at 09:32:49PM -0800, Alfred Perlstein wrote:\n> any minimal performance boost shows over time, unfortunatly using gperf\n> will require that you either:\n> \n> a) require gperf to be installed on a system that compiles postgresql\n> b) manually maintain the gperf compiled files in your CVS repo\n> (sort of like syscalls in FreeBSD)\n\nSounds like the way we handle the bison/fley files. Why not do this for\ngperf as well.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 19 Jan 2000 08:43:37 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone?"
},
{
"msg_contents": "At 11:40 PM 1/18/00 -0500, Tom Lane wrote:\n\n>I agree with Don that the performance benefit is likely to be\n>unmeasurable. Still, there could be a win: we currently have to modify\n>keywords.c by hand every time we have to add/delete a keyword. Does\n>gperf offer any aid for maintaining the keyword list? If so, that'd\n>be sufficient reason to switch to it...\n\nIf so, yeah, it might make sense. Without looking at the existing\ncode, though, the existing \"binary search on a fixed array\" makes\nme think of a list of keywords in alphabetical order. If true,\nentering new keywords in alphabetical order doesn't seem like a terrible\nburden on the implementor. The resulting list is probably more readable\nif kept alphabetical anyway...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 19 Jan 2000 10:47:35 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone? "
},
{
"msg_contents": "On 2000-01-18, Bruce Momjian mentioned:\n\n> Can you run our keywords.c using our method and gperf and see if there\n> is any speed difference?\n\nIt seems to have a speed advantage of about 2.5. But in practice that\nmeans that 1 million words take half a second. It's not a big deal to me,\nI was just wondering before I throw it out. I guess it really only makes a\ndifference for compilers, which operate on 1000+ lines.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 19 Jan 2000 21:11:35 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] gperf anyone?"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On 2000-01-18, Bruce Momjian mentioned:\n> \n> > Can you run our keywords.c using our method and gperf and see if there\n> > is any speed difference?\n> \n> It seems to have a speed advantage of about 2.5. But in practice that\n> means that 1 million words take half a second. It's not a big deal to me,\n> I was just wondering before I throw it out. I guess it really only makes a\n> difference for compilers, which operate on 1000+ lines.\n> \n\nThe big difference may be that the compiler has variables/types that are\nadded dynamically while running, while our list is static. Insert time\nfor our types is zero because we don't add SQL keywords at runtime.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Jan 2000 15:12:25 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone?"
},
{
"msg_contents": "On 2000-01-18, Tom Lane mentioned:\n\n> I agree with Don that the performance benefit is likely to be\n> unmeasurable. Still, there could be a win: we currently have to modify\n> keywords.c by hand every time we have to add/delete a keyword. Does\n> gperf offer any aid for maintaining the keyword list? If so, that'd\n> be sufficient reason to switch to it...\n\nThat's a good point. It would allow you much more ordering freedom. The\nfile is attached for review. Of course adding/deleting keywords would now\nrequire gperf. :(\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden",
"msg_date": "Wed, 19 Jan 2000 21:19:54 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] gperf anyone? "
},
{
"msg_contents": "On Wed, 19 Jan 2000, Bruce Momjian wrote:\n\n> [Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> > On 2000-01-18, Bruce Momjian mentioned:\n> > \n> > > Can you run our keywords.c using our method and gperf and see if there\n> > > is any speed difference?\n> > \n> > It seems to have a speed advantage of about 2.5. But in practice that\n> > means that 1 million words take half a second. It's not a big deal to me,\n> > I was just wondering before I throw it out. I guess it really only makes a\n> > difference for compilers, which operate on 1000+ lines.\n> > \n> \n> The big difference may be that the compiler has variables/types that are\n> added dynamically while running, while our list is static. Insert time\n> for our types is zero because we don't add SQL keywords at runtime.\n\nI'm curious...does it hurt us any to do this? Like, will it slow things\ndown? Is the end result cleaner, for neglible speed improvements?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 19 Jan 2000 16:22:02 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone?"
},
{
"msg_contents": "* The Hermit Hacker <[email protected]> [000119 12:51] wrote:\n> On Wed, 19 Jan 2000, Bruce Momjian wrote:\n> \n> > [Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> > > On 2000-01-18, Bruce Momjian mentioned:\n> > > \n> > > > Can you run our keywords.c using our method and gperf and see if there\n> > > > is any speed difference?\n> > > \n> > > It seems to have a speed advantage of about 2.5. But in practice that\n> > > means that 1 million words take half a second. It's not a big deal to me,\n> > > I was just wondering before I throw it out. I guess it really only makes a\n> > > difference for compilers, which operate on 1000+ lines.\n> > > \n> > \n> > The big difference may be that the compiler has variables/types that are\n> > added dynamically while running, while our list is static. Insert time\n> > for our types is zero because we don't add SQL keywords at runtime.\n> \n> I'm curious...does it hurt us any to do this? Like, will it slow things\n> down? Is the end result cleaner, for neglible speed improvements?\n\nYou can pick up a copy of my test at:\n\nhttp://www.freebsd.org/~alfred/gperf.tgz\n\nIt should compile two programs 'gperf' and 'norm', I was able to get\nalmost a 100% speed improvement:\n\n~/gperf % time ./gperf \n./gperf 0.49s user 0.00s system 100% cpu 0.489 total\n~/gperf % time ./norm \n./norm 0.91s user 0.00s system 99% cpu 0.918 total\n\nOne thing you'll want to consider is the overall application of this,\nif the potentially sparse tables that gperf creates causes us to fault\nin extra cache pages it may not be so cool.\n\nI'm also pretty sure someone more proficient with hash tables and gperf\nin particular could get better results than I did, I sort of guessed\nat gperf command line switches until one seemed to work.\n\nalso... to accomplish the gperf testing I do a strlen on each word,\nover and over to simulate the need for it (as gperf needs that)\nhowever if the strlen is already available in the parser at this\ntime, i'm pretty sure it would be even faster.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n",
"msg_date": "Wed, 19 Jan 2000 13:12:28 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone?"
},
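For anyone who wants to reproduce this kind of timing without the tarball, a bare-bones harness looks something like the following sketch; in_word_set() stands in for whichever lookup flavor is being measured, and the word list is whatever keyword set you feed it:

#include <stdio.h>
#include <string.h>
#include <time.h>

/* Provided elsewhere: the gperf-generated (or binary search) lookup. */
extern char *in_word_set(char *str, unsigned int len);

static void
bench(char *words[], int nwords, int rounds)
{
    clock_t start = clock();
    int r, i;

    /* Hammer the lookup; strlen is recomputed each time, matching
     * the test described in the message above. */
    for (r = 0; r < rounds; r++)
        for (i = 0; i < nwords; i++)
            in_word_set(words[i], strlen(words[i]));

    printf("%.3f seconds\n",
           (double) (clock() - start) / CLOCKS_PER_SEC);
}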
{
"msg_contents": "On Wed, 19 Jan 2000, Alfred Perlstein wrote:\n\n> also... to accomplish the gperf testing I do a strlen on each word,\n> over and over to simulate the need for it (as gperf needs that)\n> however if the strlen is already available in the parser at this\n> time, i'm pretty sure it would be even faster.\n\nPrecisely, that was the idea.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 20 Jan 2000 12:31:26 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone?"
},
{
"msg_contents": "On Wed, 19 Jan 2000, Bruce Momjian wrote:\n\n> [Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> > On 2000-01-18, Bruce Momjian mentioned:\n> > \n> > > Can you run our keywords.c using our method and gperf and see if there\n> > > is any speed difference?\n> > \n> > It seems to have a speed advantage of about 2.5. But in practice that\n> > means that 1 million words take half a second. It's not a big deal to me,\n> > I was just wondering before I throw it out. I guess it really only makes a\n> > difference for compilers, which operate on 1000+ lines.\n> > \n> \n> The big difference may be that the compiler has variables/types that are\n> added dynamically while running, while our list is static. Insert time\n> for our types is zero because we don't add SQL keywords at runtime.\n\nNo, compiler don't do this either. This is specifically for keyword\nlookup. The whole idea is that you have one set of keywords that hardly\never changes (such as for programming languages), then you take one\nafternoon off, play around with the different options until you have a\nlookup function which processes your particular keyword set fastest.\n\nAgain, this is not a big deal to me, I just did it to play around. In any\ncase it seems to run faster, but I wasn't sure if people wanted to bother.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 20 Jan 2000 12:34:07 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone?"
},
{
"msg_contents": "At 12:34 PM 1/20/00 +0100, Peter Eisentraut wrote:\n\n>No, compiler don't do this either. This is specifically for keyword\n>lookup. The whole idea is that you have one set of keywords that hardly\n>ever changes (such as for programming languages), then you take one\n>afternoon off, play around with the different options until you have a\n>lookup function which processes your particular keyword set fastest.\n\n>Again, this is not a big deal to me, I just did it to play around. In any\n>case it seems to run faster, but I wasn't sure if people wanted to bother.\n\nThe binary search could very easily be sped up, actually (I just peeked).\n\nIf you'd care to e-mail me your two test cases, the one using the\noutput of gperf and the one using the current binary search, I'd\nbe more than willing to demonstrate.\n\nI have nothing against speeding things up, though again identifying\nkeywords takes a vanishingly small part of the time required to\nexecute a query (or compile a program, for that matter). I more\nor less dislike adding dependencies to external tools when it\ncan be avoided, though. \n\nI just built a new PC to do linux-based development on and it\nwould be fun to have your benchmark program to compare my \nnew and old linux boxes anyway :) (P 200 classic vs. P500E, guess\nwhich will win?)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 20 Jan 2000 07:38:38 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gperf anyone?"
}
] |
[
{
"msg_contents": "\nCan I ask how our big open items for 7.0 are doing:\n\n\tLong tuples/TOAST(Jan)\n\tForiegn keys/action buffer(Jan)\n\tUnify date/time types(Thomas)\n\tOuter Joins(Thomas)\n\nI am only asking to know if we should continue with the planned Feb 1\nbeta?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jan 2000 22:41:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Status on 7.0"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Can I ask how our big open items for 7.0 are doing:\n> \n> Long tuples/TOAST(Jan)\n> Foriegn keys/action buffer(Jan)\n> Unify date/time types(Thomas)\n> Outer Joins(Thomas)\n\nWill we also have a possibility to ALTER constraints (NULL, CHECK, FOREIGN\nKEY)\n\nAFAIK, we can currently only change UNIQUE (by dropping the UNIQUE index),\n\n--------------\nHannu\n",
"msg_date": "Wed, 19 Jan 2000 10:11:28 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on 7.0"
},
{
"msg_contents": "> Can I ask how our big open items for 7.0 are doing:\n> Unify date/time types(Thomas)\n\nWill do this after getting the outer join syntax going...\n\n> Outer Joins(Thomas)\n\nI've finally gotten back to work on this in the last couple of days.\nRemember, at the moment I've only committed to getting *syntax*, but\nthat we were hoping to push this into the planner/optimizer/executor\nsoon after.\n\n> I am only asking to know if we should continue with the planned Feb 1\n> beta?\n\nI'll be able to do the date/time unification once my parser is back\ntogether and stable (no point in doing it earlier because I'll need a\nstable parser to follow up on problem reports). Not sure about making\nFeb 1, though I'll guess that the date/time unification is a couple of\ndays work once I start.\n\nRemember that I'll need two weeks or so to package the docs before\nfinal release, so will need some padding at the end of beta without\ntoo much distraction from other problem reports :(\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 19 Jan 2000 15:27:41 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status on 7.0"
},
{
"msg_contents": "> Can I ask how our big open items for 7.0 are doing:\n\nistm that pg_dump could benefit greatly if it translated internal\nPostgres type names to the SQL92-standard names. For example, int4 ->\ninteger, int8 -> bigint, etc. This would be analogous to the\ntranslation we do when parsing queries in the backend, converting\n(e.g.) integer -> int4.\n\nThis feature would make it a bit easier to move databases around, esp.\naway from Postgres for those who have to...\n\nAnyone interested in looking at this? If not, can you add it to the\nToDo Bruce?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 19 Jan 2000 15:40:11 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status on 7.0"
},
{
"msg_contents": "On 2000-01-18, Bruce Momjian mentioned:\n\n> \n> Can I ask how our big open items for 7.0 are doing:\n> \n> \tLong tuples/TOAST(Jan)\n> \tForiegn keys/action buffer(Jan)\n> \tUnify date/time types(Thomas)\n> \tOuter Joins(Thomas)\n> \n> I am only asking to know if we should continue with the planned Feb 1\n> beta?\n\nAre we going to yank DISTINCT ON?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Thu, 20 Jan 2000 18:55:04 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on 7.0"
},
{
"msg_contents": "On 2000-01-19, Thomas Lockhart mentioned:\n\n> istm that pg_dump could benefit greatly if it translated internal\n> Postgres type names to the SQL92-standard names. For example, int4 ->\n> integer, int8 -> bigint, etc. This would be analogous to the\n> translation we do when parsing queries in the backend, converting\n> (e.g.) integer -> int4.\n\nI certainly think this is a good idea, but I don't consider the proposed\napproach too good. The reason is that the next thing you'd have to do is\nfix up psql as well, creating one more source of inconsistency. Not to\nmention other similar applications which we don't have any control over,\nsuch as pgbash.\n\nWhat I'd suggest -- and the 7.0 release is certainly one of the better\ntimes to do this -- is to change the catalog entries to \"integer\",\n\"bigint\", etc. and instead do the translation of the \"deprecated\" (or\n\"traditional\", if you like) types \"int4\", \"int8\", etc. into the standard\nones. As far as I can see this would require only two changes for each\ndatatype (gram.y:xlateSqlType(), and pg_type.h), so this could be done\ntransparently to the rest of the code. And client applications don't need\nto know this at all.\n\nIs there a problem with this I'm not seeing (other than perhaps affective\nattachment to Postgres'isms)? This almost seems too trivial to not have\nbeen done already.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n",
"msg_date": "Thu, 20 Jan 2000 18:55:28 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Status on 7.0"
},
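For readers unfamiliar with the mechanism named above: xlateSqlType() maps standard type names onto internal ones at parse time. A rough sketch of that kind of one-way translation follows; the mapping table here is abbreviated, and the real function in gram.y covers more names:

```c
#include <string.h>

/* abbreviated illustration of parser-side type-name translation */
static const char *const xlate[][2] = {
    {"integer",  "int4"},
    {"smallint", "int2"},
    {"bigint",   "int8"},
    {"real",     "float4"},
};

static const char *xlateSqlType(const char *name)
{
    size_t i;

    for (i = 0; i < sizeof(xlate) / sizeof(xlate[0]); i++)
        if (strcmp(name, xlate[i][0]) == 0)
            return xlate[i][1];
    return name;                /* unknown names pass through unchanged */
}
```

Peter's proposal amounts to flipping the direction of this table: store the standard names in the catalog and translate the traditional ones on input.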
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On 2000-01-18, Bruce Momjian mentioned:\n> \n> > \n> > Can I ask how our big open items for 7.0 are doing:\n> > \n> > \tLong tuples/TOAST(Jan)\n> > \tForiegn keys/action buffer(Jan)\n> > \tUnify date/time types(Thomas)\n> > \tOuter Joins(Thomas)\n> > \n> > I am only asking to know if we should continue with the planned Feb 1\n> > beta?\n> \n> Are we going to yank DISTINCT ON?\n\nI don't show DISTINCT ON as being worked on by anyone.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jan 2000 13:11:17 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Status on 7.0"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On 2000-01-19, Thomas Lockhart mentioned:\n> \n> > istm that pg_dump could benefit greatly if it translated internal\n> > Postgres type names to the SQL92-standard names. For example, int4 ->\n> > integer, int8 -> bigint, etc. This would be analogous to the\n> > translation we do when parsing queries in the backend, converting\n> > (e.g.) integer -> int4.\n> \n> I certainly think this is a good idea, but I don't consider the proposed\n> approach too good. The reason is that the next thing you'd have to do is\n> fix up psql as well, creating one more source of inconsistency. Not to\n> mention other similar applications which we don't have any control over,\n> such as pgbash.\n> \n> What I'd suggest -- and the 7.0 release is certainly one of the better\n> times to do this -- is to change the catalog entries to \"integer\",\n> \"bigint\", etc. and instead do the translation of the \"deprecated\" (or\n> \"traditional\", if you like) types \"int4\", \"int8\", etc. into the standard\n> ones. As far as I can see this would require only two changes for each\n> datatype (gram.y:xlateSqlType(), and pg_type.h), so this could be done\n> transparently to the rest of the code. And client applications don't need\n> to know this at all.\n> \n> Is there a problem with this I'm not seeing (other than perhaps affective\n> attachment to Postgres'isms)? This almost seems too trivial to not have\n> been done already.\n\n[The big incomatibility alarm goes off.]\n\nWe have to discuss if this is a good idea. I am not sure.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jan 2000 13:12:55 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Status on 7.0"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Are we going to yank DISTINCT ON?\n>\n> I don't show DISTINCT ON as being worked on by anyone.\n\nI proposed removing it, but hadn't gotten around to doing anything\nabout it. Partly, I've been waiting to see if anyone would complain\nthat it should stay...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jan 2000 17:54:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on 7.0 "
},
{
"msg_contents": "> What I'd suggest -- and the 7.0 release is certainly one of the better\n> times to do this -- is to change the catalog entries to \"integer\",\n> \"bigint\", etc. and instead do the translation of the \"deprecated\" (or\n> \"traditional\", if you like) types \"int4\", \"int8\", etc. into the standard\n> ones. As far as I can see this would require only two changes for each\n> datatype (gram.y:xlateSqlType(), and pg_type.h), so this could be done\n> transparently to the rest of the code. And client applications don't need\n> to know this at all.\n> Is there a problem with this I'm not seeing (other than perhaps affective\n> attachment to Postgres'isms)? This almost seems too trivial to not have\n> been done already.\n\nOne reason this hasn't been done is that we would lose the\n(occasional) correspondence between internal names and external names,\nwhich isn't a very good reason.\n\nI've also thought that we might implement some \"by reference\" types to\nreplace the \"by value\" types we have now (and use the SQL92 names for\nthem). But Tom Lane has talked about fixing up the internal problems\nwe have with passing info about NULLs with \"by value\" types, so that\nmight be a bad way to head. However, the downside to eliminating \"by\nvalue\"s (extra footprint?) might be offset by being able to remove\ncode which has to handle both cases (extra speed?).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 21 Jan 2000 03:36:32 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Status on 7.0"
},
{
"msg_contents": "> One reason this hasn't been done is that we would lose the\n> (occasional) correspondence between internal names and external names,\n> which isn't a very good reason.\n> \n> I've also thought that we might implement some \"by reference\" types to\n> replace the \"by value\" types we have now (and use the SQL92 names for\n> them). But Tom Lane has talked about fixing up the internal problems\n> we have with passing info about NULLs with \"by value\" types, so that\n> might be a bad way to head. However, the downside to eliminating \"by\n> value\"s (extra footprint?) might be offset by being able to remove\n> code which has to handle both cases (extra speed?).\n\nI have added a TODO item:\n\n\t* add pg_dump option to dump type names as standard ANSI types\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jan 2000 22:52:13 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Status on 7.0"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> Can I ask how our big open items for 7.0 are doing:\n>\n> Long tuples/TOAST(Jan)\n> Foriegn keys/action buffer(Jan)\n> Unify date/time types(Thomas)\n> Outer Joins(Thomas)\n>\n> I am only asking to know if we should continue with the planned Feb 1\n> beta?\n\n I know now that I have to do all the missing FOREIGN KEY stuff\n myself. And thats not the file buffering alone. We cannot ship\n FOREIGN KEY if pg_dump cannot handle it.\n\n Up to now, nothing I did for TOAST changed anything visible. So it\n has to wait for FK.\n\n Currently, I'm a little busy, so I don't expect TOAST for 7.0\n anymore - sorry.\n\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n\n",
"msg_date": "Fri, 21 Jan 2000 09:39:03 +0100",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status on 7.0"
},
{
"msg_contents": "> I know now that I have to do all the missing FOREIGN KEY stuff\n> myself. And thats not the file buffering alone. We cannot ship\n> FOREIGN KEY if pg_dump cannot handle it.\n> \n> Up to now, nothing I did for TOAST changed anything visible. So it\n> has to wait for FK.\n> \n> Currently, I'm a little busy, so I don't expect TOAST for 7.0\n> anymore - sorry.\n\nThat's fine. Better to say it now and take the pressure of yourself.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Jan 2000 10:11:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Status on 7.0"
},
{
"msg_contents": "On 2000-01-20, Bruce Momjian mentioned:\n\n> > Are we going to yank DISTINCT ON?\n> \n> I don't show DISTINCT ON as being worked on by anyone.\n\nI think Tom mentioned several times that he would like to yank it because\na) it's non-standard\nb) it's conceptually flawed\nc) there's a standard alternative\nd) it doesn't work too well\ne) people keep trying to use it\n\nJust making sure no one forgets ...\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 22 Jan 2000 15:24:36 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on 7.0"
},
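For context on point (c) above: a typical DISTINCT ON query can be rewritten in standard SQL. A sketch over an invented emp table follows — the parenthesised DISTINCT ON spelling shown is the later form, and the rewrite is not exactly equivalent when there are ties:

```sql
-- the non-standard form under discussion: one "top" row per dept
SELECT DISTINCT ON (dept) dept, name, salary
FROM emp
ORDER BY dept, salary DESC;

-- a standard-conforming rewrite using a correlated subquery
-- (on a salary tie this returns all tied rows, not just one)
SELECT dept, name, salary
FROM emp e
WHERE salary = (SELECT max(salary) FROM emp e2 WHERE e2.dept = e.dept);
```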
{
"msg_contents": "On 2000-01-21, Thomas Lockhart mentioned:\n\n> > What I'd suggest -- and the 7.0 release is certainly one of the better\n> > times to do this -- is to change the catalog entries to \"integer\",\n> > \"bigint\", etc. and instead do the translation of the \"deprecated\" (or\n> > \"traditional\", if you like) types \"int4\", \"int8\", etc. into the standard\n> > ones.\n\n> I've also thought that we might implement some \"by reference\" types to\n> replace the \"by value\" types we have now (and use the SQL92 names for\n> them). But Tom Lane has talked about fixing up the internal problems\n> we have with passing info about NULLs with \"by value\" types, so that\n> might be a bad way to head. However, the downside to eliminating \"by\n> value\"s (extra footprint?) might be offset by being able to remove\n> code which has to handle both cases (extra speed?).\n\nWell, rather than creating a huge potential hazard for everyone two weeks\nbefore beta I'm going to settle for a cheaper solution (for now). There\nare just too many subtleties that one would have to address early in a\ndevel cycle, so I'll put that on the the Forget-me-not list for 7.1.\n\nInstead I'd suggest extending the idea of gram.y's xlateSqlType to two\nfunctions provided by the backend\n\ntype_sql_to_internal\ntype_internal_to_sql\n\nwhich psql and pg_dump could use. Once we switch some or all datatypes\nover, this would be the only place we'd need to change -- until it's an\nempty function at the end.\n\nComments? Better ideas?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sun, 23 Jan 2000 02:30:43 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Status on 7.0"
},
{
"msg_contents": "> Well, rather than creating a huge potential hazard for everyone two weeks\n> before beta I'm going to settle for a cheaper solution (for now). There\n> are just too many subtleties that one would have to address early in a\n> devel cycle, so I'll put that on the the Forget-me-not list for 7.1.\n\nRight.\n\n> Instead I'd suggest extending the idea of gram.y's xlateSqlType to two\n> functions provided by the backend\n> type_sql_to_internal\n> type_internal_to_sql\n> which psql and pg_dump could use. Once we switch some or all datatypes\n> over, this would be the only place we'd need to change -- until it's an\n> empty function at the end.\n\nSounds good to me. Unless we embed this knowledge in a table\nsomewhere, which perhaps we should have done originally. But then we\nwould have lots of overhead on queries...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 24 Jan 2000 15:58:09 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Status on 7.0"
},
{
"msg_contents": "Already added to TODO list.\n\n\n> > Can I ask how our big open items for 7.0 are doing:\n> \n> istm that pg_dump could benefit greatly if it translated internal\n> Postgres type names to the SQL92-standard names. For example, int4 ->\n> integer, int8 -> bigint, etc. This would be analogous to the\n> translation we do when parsing queries in the backend, converting\n> (e.g.) integer -> int4.\n> \n> This feature would make it a bit easier to move databases around, esp.\n> away from Postgres for those who have to...\n> \n> Anyone interested in looking at this? If not, can you add it to the\n> ToDo Bruce?\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 2 Jun 2000 12:01:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Status on 7.0"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > istm that pg_dump could benefit greatly if it translated internal\n> > Postgres type names to the SQL92-standard names. For example, int4 ->\n> > integer, int8 -> bigint, etc. This would be analogous to the\n> > translation we do when parsing queries in the backend, converting\n> > (e.g.) integer -> int4.\n\nI once proposed to create a function `format_type' or some such which\nwould take an internal type name and a modifier and format it to some\ncanonical representation. I recall that was well received and I'm still\ninterested in that, but I'd let the function manager changes die down\nfirst.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sun, 4 Jun 2000 00:44:46 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Status on 7.0"
}
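The proposed `format_type' did not exist at the time of writing; the signature below is assumed purely to illustrate how clients like psql and pg_dump might use such a function instead of translating type names themselves:

```sql
-- hypothetical usage: render each column's type in its canonical form
SELECT attname, format_type(atttypid, atttypmod) AS type
  FROM pg_attribute
 WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'foo')
   AND attnum > 0;
```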
] |
[
{
"msg_contents": "I hope to get H1B visa Feb 4 and leave for\nSan-Francisco on Feb 6...\nI'll be in Krasnoyarsk till Feb 2.\n\nGood luck!\n\nVadim\n",
"msg_date": "Wed, 19 Jan 2000 11:57:22 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Well..."
},
{
"msg_contents": "> I hope to get H1B visa Feb 4 and leave for\n> San-Francisco on Feb 6...\n> I'll be in Krasnoyarsk till Feb 2.\n\nThat means you are going to spend 2 or 3 months in US, then come back\nin April or May? I just think it would be really nice if you could\nstop over Japan...\n--\nTatsuo Ishii\n\n",
"msg_date": "Sat, 22 Jan 2000 00:25:43 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Well..."
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> > I hope to get H1B visa Feb 4 and leave for\n> > San-Francisco on Feb 6...\n> > I'll be in Krasnoyarsk till Feb 2.\n> \n> That means you are going to spend 2 or 3 months in US, then come back\n\nI'm going to spend ~3 years in USA -:)\n\n> in April or May? I just think it would be really nice if you could\n> stop over Japan...\n\nThanks! I would be glad to visit Japan sometime.\n\nVadim\n",
"msg_date": "Sat, 22 Jan 2000 15:29:40 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Well..."
},
{
"msg_contents": "> > That means you are going to spend 2 or 3 months in US, then come back\n> \n> I'm going to spend ~3 years in USA -:)\n\nOh, I didn't know that.\n\n> > in April or May? I just think it would be really nice if you could\n> > stop over Japan...\n> \n> Thanks! I would be glad to visit Japan sometime.\n\nPlease let me know if you would have such a chance...\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 23 Jan 2000 11:11:03 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Well..."
}
] |
[
{
"msg_contents": "I recently installed postgresql-{server,devel,jdbc,odbc,perl}-6.5.3-2\nand thought I see it referred to alot there was not a\n/usr/doc/postgresql-6.5.3/examples directory included anywhere.\n\nI just want a jdbc driver. I got the jdk1.1.8 that I'm running on my\nwin95 machine and I'd like to create a class that I can run anywhere.\n\npostgresql-jdbc-6.5.3-2 did include jdbc6.5-1.2.jar and\njdbc6.5-1.1.jar based on rpm -ql postgresql-jdbc. \n\nCan I get a clue as to where to get, preferrably a precompiled jdbc\ndriver and perhaps an indication as to how to use it. I've seen\nplaces where the driver for postgresql is postgresql.Driver and others\nwhere the name is postgresql95.Driver.\n\n\tThanks for the help, \n\nI'm trying to help myself but I seem to be in need of a little nudge, \n",
"msg_date": "Wed, 19 Jan 2000 06:55:13 GMT",
"msg_from": "[email protected] (Micheal H.)",
"msg_from_op": true,
"msg_subject": "examples not included"
}
] |
[
{
"msg_contents": "> > Hi!\n> > \n> > Here is a patch to bring both libpq and psql to a state \n> where it compiles on\n> > win32 (native) again. A lot of things have changed, and I \n> have not been able\n> > to keep up with them all, so it has been broken for quite a while.\n> > After this patch, at least it compiles. It also talks \n> \"basic talk\" to the\n> > server, but I have not yet tested all things. Sending \n> queries, and using\n> > e.g. \\d or \\dt works fine. The rest will have to be tested further. \n> > It also bumps the version on libpq.dll to 7.0.\n> \n> Shouldn't the library version number be 2.1?\n\nIt probably should. But the previous one (the one that is out with 6.5.x) is\nversioned 6.5. If we switched back to 2.1, then any sane installation\nprogram would refuse to install that DLL if the 6.5 DLL was already\ninstalled. Which would be bad.\n\n//Magnus\n",
"msg_date": "Wed, 19 Jan 2000 09:42:53 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PATCHES] Patch for Win32 support"
}
] |
[
{
"msg_contents": "The latest source does not compile on Solaris 7 due to\na missing include from a modified file.\n\nHere is a patch to fix it:-\n\nKeith.\n\n*** src/backend/utils/init/miscinit.c.orig Sun Jan 16 12:23:42 2000\n--- src/backend/utils/init/miscinit.c Sun Jan 16 12:27:01 2000\n***************\n*** 18,23 ****\n--- 18,24 ----\n #include <signal.h>\n #include <sys/stat.h>\n #include <sys/file.h>\n+ #include <fcntl.h>\n #include <unistd.h>\n #include <grp.h>\n #include <pwd.h>\n\n",
"msg_date": "Wed, 19 Jan 2000 08:42:59 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Solaris breakage - lastest CVS"
},
{
"msg_contents": "Applied.\n\n> The latest source does not compile on Solaris 7 due to\n> a missing include from a modified file.\n> \n> Here is a patch to fix it:-\n> \n> Keith.\n> \n> *** src/backend/utils/init/miscinit.c.orig Sun Jan 16 12:23:42 2000\n> --- src/backend/utils/init/miscinit.c Sun Jan 16 12:27:01 2000\n> ***************\n> *** 18,23 ****\n> --- 18,24 ----\n> #include <signal.h>\n> #include <sys/stat.h>\n> #include <sys/file.h>\n> + #include <fcntl.h>\n> #include <unistd.h>\n> #include <grp.h>\n> #include <pwd.h>\n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Jan 2000 09:01:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solaris breakage - lastest CVS"
}
] |
[
{
"msg_contents": "What is the current state of regression testing on the CVS tree? Are\nthe regression tests performed once in a while, or routinely? If\nperformed once in a while, perhaps the following would idea would be of\ninterest:\n\nI am kicking around the idea of using one of my old machines (a P90) to\nrun nightly regression tests on the CVS tree. The idea would be to use\nvmware to set up multiple virtual machines, one for each OS or\ndistribution (Linux clib, Linux glib, FreeBSD, etc. etc.). Perhaps\neven multiple versions of the same OS/dist (RH6.0, RH6.1, SuSE 6.1, SuSE\n6.2) to catch subtle changes between versions if they are known to be a\nproblem.\n\nThat way, I could conduct a daily automatic regression test on them all.\n\nA program (probably python) would combine the outputs into an HTML table\nof OS version vs. test result. All developers would then be able to\nkeep an eye on the effects their changes have on other platforms.\n\nWould this be of value , or waste of effort ????\n\nSteve\n\n\nPS\n\nLooks like I am going to be offline for about 3 weeks so will not be\nable to pick up replies to this question until I get back. If you\nreply, be sure to Cc [email protected] so I don't lose the\nreply.\n\n",
"msg_date": "Wed, 19 Jan 2000 02:23:34 -0800",
"msg_from": "Stephen Birch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Daily regression testing via vmware - useful?"
},
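A minimal sketch of the result-collation step Stephen describes; the platform list, file layout, and pass/fail marker are all invented for illustration:

```python
import os

platforms = ["redhat-6.0", "redhat-6.1", "suse-6.1", "freebsd-3.2"]
results = {}
for p in platforms:
    path = os.path.join("results", p, "regress.out")
    if not os.path.exists(path):
        results[p] = "no run"          # that VM never produced output
    elif "failed" in open(path).read():
        results[p] = "FAILED"
    else:
        results[p] = "ok"

# emit one HTML table of platform vs. result
out = open("regress.html", "w")
out.write("<table border=1><tr><th>platform</th><th>result</th></tr>\n")
for p in platforms:
    out.write("<tr><td>%s</td><td>%s</td></tr>\n" % (p, results[p]))
out.write("</table>\n")
out.close()
```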
{
"msg_contents": "Stephen Birch <[email protected]> writes:\n> What is the current state of regression testing on the CVS tree? Are\n> the regression tests performed once in a while, or routinely?\n\nI make a practice of running them whenever I pull a cvs update (once or\ntwice a week usually), and before I commit any changes. I hope other\ndevelopers run them fairly routinely as well.\n\n> I am kicking around the idea of using one of my old machines (a P90) to\n> run nightly regression tests on the CVS tree.\n\nI think this might be a good thing to do, even though as MikeA points\nout, coverage of only Intel-based platforms isn't all that impressive.\n(At least not to those of us who use non-Intel platforms.) If the\nidea seems to work out, perhaps other people could be persuaded to\ncontribute cycles on non-Intel boxes.\n\nThere are other open-source projects running automated regression tests\nthis way; Mozilla is probably the most visible example. As far as I've\nheard, it's been useful for them. If you can set it up without too much\nwork, I'd say give it a try, and we'll find out whether it helps us or\nnot. We can always drop the experiment if it seems not to help.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jan 2000 10:50:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Daily regression testing via vmware - useful? "
}
] |
[
{
"msg_contents": "\n> > > Yes, that's about the sum of it. Why not the links? I think \n> > > that it's an elegant way of designing the whole thing.\n> > \n> > The only problem with symlinks is, that it does not solve the \n> > \"too many files in one directory to give optimal performance\"\n> > problem for those that have tons of tables.\n> > \n> > Andreas\n> \n> Is that really a problem on modern operating systems?\n\nYes, in this regard most OS's (not only Unix) are quite old in their design.\n(A . file, that has a sorted list of files and stats)\nThe problem starts to get severe at about 3000 - 10000 files,\ndepending on the average filename length.\nChange one file --> write whole dirfile.\nDepending on the system simple file create or fstat\ndrops to 3 files per second or worse.\n\nOnce the files are in cache, the performance is usually\ngood again.\n\nBut imho 300 ms for a temp file create is way too much.\n\nMaybe we could only put the temp files in a different directory.\nThey are where performance matters.\nIf a normal table create takes a few seconds that is not a real problem.\n\n> We could actually\n> hash the file names into directory buckets and access them \n> that way, and have one directory that old symlinks to the hashed files.\n\nYou can't have the one directory. It makes no difference whether \nthat dir has 5000 symlinks or 5000 files performance wise.\n\nAndreas\n",
"msg_date": "Wed, 19 Jan 2000 11:46:19 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] [hackers]development suggestion needed (filepath as\n\tsymlink)"
}
] |
[
{
"msg_contents": "Well, I'm hoping to have the 7.0 version of JDBC ready before next\nWednesday (26th). Also, I've just agreed with Assaf to add his\nextensions in on Monday as both of us are around (just got to sort out\nthe time zones).\n\nThe only bit that will need sorting out afterwards is handling the\ndate/time changes, but I can wait until that's settled, but it has to be\ndone before 7.0.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: Wednesday, January 19, 2000 3:42 AM\nTo: Thomas G. Lockhart; Jan Wieck\nCc: PostgreSQL-development\nSubject: [HACKERS] Status on 7.0\n\n\n\nCan I ask how our big open items for 7.0 are doing:\n\n\tLong tuples/TOAST(Jan)\n\tForiegn keys/action buffer(Jan)\n\tUnify date/time types(Thomas)\n\tOuter Joins(Thomas)\n\nI am only asking to know if we should continue with the planned Feb 1\nbeta?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n\n************\n",
"msg_date": "Wed, 19 Jan 2000 11:30:25 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Status on 7.0"
}
] |
[
{
"msg_contents": "I suspect that it would be of value, however, remember that you are\nconstrained to those OSes that run on Intel chips, and at the moment, I\nbelieve, only a subset of those. I know that Linux can host winnt, win2k,\nlinux, Solaris, and possibly one or two others. I'm not sure how well\nthings like BSD would do, although they would probably do OK. However, you\nwon't be able to test under HP-UX 11, for instance, unless you can get for\nIntel, which I don't think you can.\n\nMikeA\n\n\n>> -----Original Message-----\n>> From: Stephen Birch [mailto:[email protected]]\n>> Sent: Wednesday, January 19, 2000 12:24 PM\n>> To: PostgreSQL-hackers\n>> Subject: [HACKERS] Daily regression testing via vmware - useful?\n>> \n>> \n>> What is the current state of regression testing on the CVS tree? Are\n>> the regression tests performed once in a while, or routinely? If\n>> performed once in a while, perhaps the following would idea \n>> would be of\n>> interest:\n>> \n>> I am kicking around the idea of using one of my old machines \n>> (a P90) to\n>> run nightly regression tests on the CVS tree. The idea \n>> would be to use\n>> vmware to set up multiple virtual machines, one for each OS or\n>> distribution (Linux clib, Linux glib, FreeBSD, etc. etc.). Perhaps\n>> even multiple versions of the same OS/dist (RH6.0, RH6.1, \n>> SuSE 6.1, SuSE\n>> 6.2) to catch subtle changes between versions if they are \n>> known to be a\n>> problem.\n>> \n>> That way, I could conduct a daily automatic regression test \n>> on them all.\n>> \n>> A program (probably python) would combine the outputs into \n>> an HTML table\n>> of OS version vs. test result. All developers would then be able to\n>> keep an eye on the effects their changes have on other platforms.\n>> \n>> Would this be of value , or waste of effort ????\n>> \n>> Steve\n>> \n>> \n>> PS\n>> \n>> Looks like I am going to be offline for about 3 weeks so will not be\n>> able to pick up replies to this question until I get back. If you\n>> reply, be sure to Cc [email protected] so I \n>> don't lose the\n>> reply.\n>> \n>> \n>> ************\n>> \n",
"msg_date": "Wed, 19 Jan 2000 13:31:27 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Daily regression testing via vmware - useful?"
}
] |
[
{
"msg_contents": ">> Maybe we could only put the temp files in a different directory.\n>> They are where performance matters.\n>> If a normal table create takes a few seconds that is not a \n>> real problem.\nOnce you have the ability to create tablespaces, you can modify the temp\ntable thingy a little to create temporary tables in the temp tablespace. If\nthere is no explicit temp tablespace defined, then it defaults to the system\ntablespace (which is where it goes now anyway).\nWhat this means is that tablespaces must have a flag indicating whether or\nnot they are the temp tablespace or not (only one per database). Also, it's\nhandy to have a default tablespace for each user, so that tables are created\nin whichever tablespace is the default for that user, unless they explicitly\nstate which tablespace to use.\n\n>> \n>> > We could actually\n>> > hash the file names into directory buckets and access them \n>> > that way, and have one directory that old symlinks to the \n>> > hashed files.\nI don't think this is necessary, because if you have a system that requires\nthis kind of action, then the administrator can create a temp tablespace\nwhich is used for all the temporary tables, and spread the rest of the\ntables and indices among the remaining tablespaces.\n\n\nMikeA\n\n\n\n",
"msg_date": "Wed, 19 Jan 2000 13:40:04 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] [hackers]development suggestion needed (filepath as\n\tsymlink)"
},
{
"msg_contents": "\"Ansley, Michael\" wrote:\n> \n> >> Maybe we could only put the temp files in a different directory.\n> >> They are where performance matters.\n> >> If a normal table create takes a few seconds that is not a\n> >> real problem.\n> Once you have the ability to create tablespaces, you can modify the temp\n> table thingy a little to create temporary tables in the temp tablespace. If\n> there is no explicit temp tablespace defined, then it defaults to the system\n> tablespace (which is where it goes now anyway).\n\nIMHO, It's good idea placing all temporary files (like pg_sort.tmp etc)\ninto separate subdirectory.\n\n\n> >> > We could actually\n> >> > hash the file names into directory buckets and access them\n> >> > that way, and have one directory that old symlinks to the\n> >> > hashed files.\n> I don't think this is necessary, because if you have a system that requires\n> this kind of action, then the administrator can create a temp tablespace\n> which is used for all the temporary tables, and spread the rest of the\n> tables and indices among the remaining tablespaces.\n\nAccording to my practice, one byte hash improve mail performance twice\nfor 5000 mailboxes. It doesn't significantly increase mail performance\nfor 1000 mailboxes.\n\nI'm hard to belive postgres data directory with 5000 files ;-))\n\n\n-- \nDmitry Samersoff, DM\\S\[email protected] http://devnull.wplus.net\n* there will come soft rains\n",
"msg_date": "Thu, 20 Jan 2000 00:03:56 +0300",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [hackers]development suggestion needed (filepath\n\tassymlink)"
}
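To make the one-byte-hash point concrete, here is a sketch of bucketing relation files by the first byte of their name. The path layout is invented for illustration; Postgres did not lay its files out this way:

```c
#include <stdio.h>

/* invented layout: base/<dbname>/<bucket>/<relname>; one byte of the
   relation name picks the bucket, which is enough to split one huge
   directory into many small ones (cf. the mailbox example above) */
static void rel_path(char *buf, size_t buflen,
                     const char *dbname, const char *relname)
{
    snprintf(buf, buflen, "base/%s/%c/%s", dbname, relname[0], relname);
}

int main(void)
{
    char path[256];

    rel_path(path, sizeof(path), "mydb", "customer");
    puts(path);                 /* prints: base/mydb/c/customer */
    return 0;
}
```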
] |
[
{
"msg_contents": "I'm sending this to both the hackers and interfaces lists as this\naffects the 7.0 release and an interface.\n\nOk, up until now the driver has operated under a base package of\npostgresql. This has worked fine so far but technically breaks Sun's\nrules on package naming. The rule is that any organisations package\nnames begins with their domain name. This prevents two different package\nnames from clashing.\n\nIe: My own classes always begin with uk.org.retep as my own domain is\nretep.org.uk. The classes I write here begin with uk.gov.maidstone.\n\nNow, what I'm thinking is that as the 7.0 driver isn't going to be\ncompatible with earlier backends (mainly due to the core changes like\ndate/time handling, but there are others), I'm proposing to change our\nbase package name from postgresql to org.postgresql so that we comply\nwith this rule (which has been around since Java first came out).\n\nAll this involves in the source is to create an empty directory called\norg, and move the original postgresql directory into it. Then each .java\nfile will need org. prefixed to the package name.\n\nThe down side, is that any existing source that uses the driver will\nneed amending so that either the Class.forName() line reads:\n\n\tClass.forName(\"org.postgresql\");\n\nor if it's supplied as a parameter (which is my prefered way) the org.\nadded.\n\nNow because of this downside, I want to see what everyone thinks about\nmaking this change before I do it, as I have a lot of things to do to\nthe source to implement it, but it would be better to do it now,\nespecially as it's the first new major release since JDBC was included.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n",
"msg_date": "Wed, 19 Jan 2000 11:52:36 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proposed change to the JDBC driver"
},
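For existing applications, the change amounts to one string. A sketch of what a caller would look like after the renaming — the exact driver class name and connection URL are assumed for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class LoadDriver {
    public static void main(String[] args) throws Exception {
        // was: Class.forName("postgresql.Driver");
        Class.forName("org.postgresql.Driver");

        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "user", "password");
        con.close();
    }
}
```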
{
"msg_contents": "\nAs I haven't seen any replies to this, can I assume no body objects to me\nmaking this fairly major change to the driver?\n\nI need to know sometime in the next few hours, as I'm about to fit the\npieces together.\n\nPeter\n\nOn Wed, 19 Jan 2000, Peter Mount wrote:\n> I'm sending this to both the hackers and interfaces lists as this\n> affects the 7.0 release and an interface.\n> \n> Ok, up until now the driver has operated under a base package of\n> postgresql. This has worked fine so far but technically breaks Sun's\n> rules on package naming. The rule is that any organisations package\n> names begins with their domain name. This prevents two different package\n> names from clashing.\n> \n> Ie: My own classes always begin with uk.org.retep as my own domain is\n> retep.org.uk. The classes I write here begin with uk.gov.maidstone.\n> \n> Now, what I'm thinking is that as the 7.0 driver isn't going to be\n> compatible with earlier backends (mainly due to the core changes like\n> date/time handling, but there are others), I'm proposing to change our\n> base package name from postgresql to org.postgresql so that we comply\n> with this rule (which has been around since Java first came out).\n> \n> All this involves in the source is to create an empty directory called\n> org, and move the original postgresql directory into it. Then each .java\n> file will need org. prefixed to the package name.\n> \n> The down side, is that any existing source that uses the driver will\n> need amending so that either the Class.forName() line reads:\n> \n> \tClass.forName(\"org.postgresql\");\n> \n> or if it's supplied as a parameter (which is my prefered way) the org.\n> added.\n> \n> Now because of this downside, I want to see what everyone thinks about\n> making this change before I do it, as I have a lot of things to do to\n> the source to implement it, but it would be better to do it now,\n> especially as it's the first new major release since JDBC was included.\n> \n> Peter\n> \n> -- \n> Peter Mount\n> Enterprise Support\n> Maidstone Borough Council\n> Any views stated are my own, and not those of Maidstone Borough Council.\n> \n> \n> \n> ************\n> \n\n--\n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Mon, 24 Jan 2000 17:33:58 +0000 (GMT)",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Proposed change to the JDBC driver"
},
{
"msg_contents": "Peter Mount <[email protected]> writes:\n> As I haven't seen any replies to this, can I assume no body objects to me\n> making this fairly major change to the driver?\n\nAFAICS, it's not \"major\" in the sense of a functionality change, it's\njust that each calling app will need to replace \"postgresql\" with\n\"org.postgresql\"?\n\nIf you're gonna do it, 7.0 seems like the right time to do it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 24 Jan 2000 13:06:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Proposed change to the JDBC driver "
},
{
"msg_contents": "On Mon, 24 Jan 2000, Tom Lane wrote:\n\n> Peter Mount <[email protected]> writes:\n> > As I haven't seen any replies to this, can I assume no body objects to me\n> > making this fairly major change to the driver?\n> \n> AFAICS, it's not \"major\" in the sense of a functionality change, it's\n> just that each calling app will need to replace \"postgresql\" with\n> \"org.postgresql\"?\n\nCorrect.\n\n> If you're gonna do it, 7.0 seems like the right time to do it.\n\nThat's exactly why I wanted to do it now.\n\nPeter\n\n--\n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Mon, 24 Jan 2000 18:18:24 +0000 (GMT)",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Proposed change to the JDBC driver "
}
] |
[
{
"msg_contents": "I'm planning on using VMWare to allow me to have different version JVM's\nrunning when I start testing JDBC fully next week. It's easier to have\njust one JVM on an installation.\n\nAs for BSD, I thought I saw some screen shots somewhere with it running?\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Ansley, Michael [mailto:[email protected]]\nSent: Wednesday, January 19, 2000 11:31 AM\nTo: 'Stephen Birch'; PostgreSQL-hackers\nSubject: RE: [HACKERS] Daily regression testing via vmware - useful?\n\n\nI suspect that it would be of value, however, remember that you are\nconstrained to those OSes that run on Intel chips, and at the moment, I\nbelieve, only a subset of those. I know that Linux can host winnt,\nwin2k,\nlinux, Solaris, and possibly one or two others. I'm not sure how well\nthings like BSD would do, although they would probably do OK. However,\nyou\nwon't be able to test under HP-UX 11, for instance, unless you can get\nfor\nIntel, which I don't think you can.\n\nMikeA\n\n\n>> -----Original Message-----\n>> From: Stephen Birch [mailto:[email protected]]\n>> Sent: Wednesday, January 19, 2000 12:24 PM\n>> To: PostgreSQL-hackers\n>> Subject: [HACKERS] Daily regression testing via vmware - useful?\n>> \n>> \n>> What is the current state of regression testing on the CVS tree? Are\n>> the regression tests performed once in a while, or routinely? If\n>> performed once in a while, perhaps the following would idea \n>> would be of\n>> interest:\n>> \n>> I am kicking around the idea of using one of my old machines \n>> (a P90) to\n>> run nightly regression tests on the CVS tree. The idea \n>> would be to use\n>> vmware to set up multiple virtual machines, one for each OS or\n>> distribution (Linux clib, Linux glib, FreeBSD, etc. etc.). Perhaps\n>> even multiple versions of the same OS/dist (RH6.0, RH6.1, \n>> SuSE 6.1, SuSE\n>> 6.2) to catch subtle changes between versions if they are \n>> known to be a\n>> problem.\n>> \n>> That way, I could conduct a daily automatic regression test \n>> on them all.\n>> \n>> A program (probably python) would combine the outputs into \n>> an HTML table\n>> of OS version vs. test result. All developers would then be able to\n>> keep an eye on the effects their changes have on other platforms.\n>> \n>> Would this be of value , or waste of effort ????\n>> \n>> Steve\n>> \n>> \n>> PS\n>> \n>> Looks like I am going to be offline for about 3 weeks so will not be\n>> able to pick up replies to this question until I get back. If you\n>> reply, be sure to Cc [email protected] so I \n>> don't lose the\n>> reply.\n>> \n>> \n>> ************\n>> \n\n************\n",
"msg_date": "Wed, 19 Jan 2000 12:14:17 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Daily regression testing via vmware - useful?"
}
] |
[
{
"msg_contents": "Now that I've modified the code so that casting to a specific length\nactually works --- ie you can do\n\tx :: char(7)\n\tCAST (y AS numeric(40,6))\nand get the expected results --- I am starting to worry that there\nmay be unwanted side-effects. The reason is that the system by default\ninterprets \"char\" as \"char(1)\" and \"numeric\" as \"numeric(30,6)\".\nSo if you just write \"x::char\" you will now get truncation to one\ncharacter, which did not use to happen. Another distressing example\nis\nregression=# select '123456789012345678901234567890.12'::numeric;\nERROR: overflow on numeric ABS(value) >= 10^29 for field with precision 30 scale 6\nwhich I think is arguably a violation of the SQL standard --- it says\npretty clearly that the precision and scale of a numeric constant are\nwhatever is implicit in the number of digits.\n\nI am inclined to think that in the context of a cast, we shouldn't\nenforce a coercion to default length, but should only coerce if a length\nis explicitly specified. This would change the behavior of \"x::char\"\nback to what it was.\n\nI think this could be done by having gram.y insert -1 as the default\ntypmod for a \"char\" or \"numeric\" Typename. The rest of the system\nalready interprets such a typmod as specifying no particular length\nconstraint. Then, to preserve the rule that\n\tcreate table foo (bar char);\ncreates a char(1) field, analyze.c would have to be responsible for\ninserting the appropriate default length in place of -1 when processing\na column definition.\n\nComments? Better ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jan 2000 10:35:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Should cast to CHAR or NUMERIC enforce default length limit?"
},
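A sketch of the behaviour Tom proposes, with the expected results as comments (illustrative, not actual psql output):

```sql
-- explicit length in the cast: coercion applies
SELECT 'abcdefg'::char(3);                     -- yields 'abc'
SELECT CAST(1234.5678 AS numeric(6,2));        -- yields 1234.57

-- bare type name in a cast: no default length forced on the value
SELECT 'abcdefg'::char;                        -- stays 'abcdefg'
SELECT '123456789012345678901234567890.12'::numeric;  -- no overflow error

-- but a column definition still gets the default filled in
CREATE TABLE foo (bar char);                   -- bar is char(1)
```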
{
"msg_contents": "> I think this could be done by having gram.y insert -1 as the default\n> typmod for a \"char\" or \"numeric\" Typename. The rest of the system\n> already interprets such a typmod as specifying no particular length\n> constraint. Then, to preserve the rule that\n> create table foo (bar char);\n> creates a char(1) field, analyze.c would have to be responsible for\n> inserting the appropriate default length in place of -1 when processing\n> a column definition.\n\nSounds good. My first inclination was to work this out in gram.y,\nwhich you could do pretty easily for TYPECAST rules, but perhaps it\nwould be better to isolate *all* of the default size settings to a\nseparate routine called from the appropriate place. Eventually, this\ncould even be a tunable parameter settable during the session (e.g.\n\"SET DEFAULT PRECISION NUMERIC ...\").\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 19 Jan 2000 16:24:40 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Should cast to CHAR or NUMERIC enforce default length\n\tlimit?"
},
{
"msg_contents": "> Now that I've modified the code so that casting to a specific length\n> actually works --- ie you can do\n> \tx :: char(7)\n> \tCAST (y AS numeric(40,6))\n> and get the expected results --- I am starting to worry that there\n> may be unwanted side-effects. The reason is that the system by default\n> interprets \"char\" as \"char(1)\" and \"numeric\" as \"numeric(30,6)\".\n> So if you just write \"x::char\" you will now get truncation to one\n> character, which did not use to happen. Another distressing example\n> is\n> regression=# select '123456789012345678901234567890.12'::numeric;\n> ERROR: overflow on numeric ABS(value) >= 10^29 for field with precision 30 scale 6\n> which I think is arguably a violation of the SQL standard --- it says\n> pretty clearly that the precision and scale of a numeric constant are\n> whatever is implicit in the number of digits.\n\nYes, this is distressing.\n\n> \n> I am inclined to think that in the context of a cast, we shouldn't\n> enforce a coercion to default length, but should only coerce if a length\n> is explicitly specified. This would change the behavior of \"x::char\"\n> back to what it was.\n> \n> I think this could be done by having gram.y insert -1 as the default\n> typmod for a \"char\" or \"numeric\" Typename. The rest of the system\n> already interprets such a typmod as specifying no particular length\n> constraint. Then, to preserve the rule that\n> \tcreate table foo (bar char);\n> creates a char(1) field, analyze.c would have to be responsible for\n> inserting the appropriate default length in place of -1 when processing\n> a column definition.\n\nSounds good to me.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Jan 2000 12:24:41 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Should cast to CHAR or NUMERIC enforce default length\n\tlimit?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> analyze.c would have to be responsible for\n>> inserting the appropriate default length in place of -1 when processing\n>> a column definition.\n\n> Sounds good. My first inclination was to work this out in gram.y,\n> which you could do pretty easily for TYPECAST rules,\n\nI thought about that, but since the Typename production itself can't\ndo the right thing (it doesn't know its context), you'd have to patch\nup after it in either TYPECAST or column definition rules. On the\nwhole, fixing things that the grammar can't easily get right seems like\nit belongs in analyze.c.\n\n> Eventually, this\n> could even be a tunable parameter settable during the session (e.g.\n> \"SET DEFAULT PRECISION NUMERIC ...\").\n\nAs I'm envisioning it, the only thing the default will be used for is\nsubstituting a default precision into a \"CREATE TABLE foo (bar numeric)\"\ncommand. So adjusting the default on-the-fly doesn't seem all that\nuseful --- it wouldn't affect previously-created tables.\n\nI did speculate about the idea of not enforcing a default precision at\nall. If we allowed the table to be created with atttypmod = -1, then\nthe effect would be that numeric values would get stored with whatever\nprecision appeared in the source value or came out of a calculation.\nSuch a numeric column would have no specific precision/scale, so it'd\nbe more like variable-width \"text\" than fixed-width \"char\". I'm not\nsure if this is useful, and I am sure that it wouldn't be very\nstandard...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jan 2000 12:33:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Should cast to CHAR or NUMERIC enforce default length\n\tlimit?"
}
] |
[
{
"msg_contents": "\n FreeBSD 3.2\n Postgtres 6.5 and 6.5.3\n\n I have 6.5 up and running. I am trying to bring up a second server,\n(6.5.3) on a different port. I have configured it into the compile process\nand also tried it from the command line. As soon as I start the\nsecond postmaster process, the main DB server starts refusing\nconnactions.\n The second server is in a chroot'ed environment, so shared libs,\ndata, etc is _absolutely_ separate.\n Is shared memory my problem? How could I (temporarily) hack\nthis until our compatability testing is complete?\n\n\n--\n\"The opposite of a profound truth may well be another profound truth.\"\n - Bohr\n\n\n\n",
"msg_date": "Wed, 19 Jan 2000 16:24:59 GMT",
"msg_from": "Kurt Seel <[email protected]>",
"msg_from_op": true,
"msg_subject": "running two servers on one machine"
},
{
"msg_contents": "Kurt Seel <[email protected]> writes:\n> FreeBSD 3.2\n> Postgtres 6.5 and 6.5.3\n\n> I have 6.5 up and running. I am trying to bring up a second server,\n> (6.5.3) on a different port. I have configured it into the compile process\n> and also tried it from the command line. As soon as I start the\n> second postmaster process, the main DB server starts refusing\n> connactions.\n\nI run two servers all the time, one production and one test. (Right now\nI actually have three going on this machine.) AFAIK all you have to do\nis make sure they are running with different data directories (-D switch\nto postmaster) and port numbers (-p switch). You can configure those\nitems at configure time, but it's not really necessary to do so.\n\nWhat exactly is the behavior --- what error messages do you see in the\nmain server's log? Can you connect to the alternate server? Does it\nshow any error messages?\n\n> The second server is in a chroot'ed environment, so shared libs,\n> data, etc is _absolutely_ separate.\n> Is shared memory my problem? How could I (temporarily) hack\n> this until our compatability testing is complete?\n\nchroot is certainly not necessary, and I doubt shared memory is the\nissue either, unless maybe you are exceeding your kernel's configuration\nlimit. (But that should mean that the second postmaster is unable to\nstart...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jan 2000 12:43:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] running two servers on one machine "
}
] |
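The setup Tom describes needs nothing beyond distinct data directories and ports; a sketch of the two invocations (the paths and port numbers are placeholders, not values taken from the thread):

    postmaster -D /usr/local/pgsql/data  -p 5432 &
    postmaster -D /usr/local/pgsql/data2 -p 5433 &
    psql -p 5433 template1    # connect to the second server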
[
{
"msg_contents": "subscribe\n",
"msg_date": "Wed, 19 Jan 2000 17:31:16 +0100",
"msg_from": "Sevo Stille <[email protected]>",
"msg_from_op": true,
"msg_subject": "(no subject)"
}
] |
[
{
"msg_contents": "Trying to load a 6 January pg_dumpall file with today's postgresql gives\nmany\n\ninvalid command \\N\n\nprobably because\nPQexec: you gotta get out of a COPY state yourself.\ndb.out4:11621: invalid command \\.\n\nSomehow psql is out of sync and thinks the \\N isn't within a COPY block.\nThe work around was to redump as insert statements..\n\nIt's tricky (for me) to debug as \"stdin;\" file not found..\n\nHow do you get of a COPY state yourself?\n\nCheers,\n\nPatrick\n",
"msg_date": "Wed, 19 Jan 2000 19:24:54 +0000",
"msg_from": "Patrick Welche <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump disaster"
},
{
"msg_contents": "Patrick Welche <[email protected]> writes:\n> Trying to load a 6 January pg_dumpall file with today's postgresql gives\n> many\n> invalid command \\N\n> probably because\n> PQexec: you gotta get out of a COPY state yourself.\n> db.out4:11621: invalid command \\.\n> Somehow psql is out of sync and thinks the \\N isn't within a COPY block.\n\nThe answer appears to be that Perlstein's \"nonblocking mode\" patches\nhave broken psql copy, and doubtless a lot of other applications as\nwell, because pqPutBytes no longer feels any particular compulsion\nto actually send the data it's been handed. (Moreover, if it does\ndo only a partial send, there is no way to discover how much it sent;\nwhile its callers might be blamed for not having checked for an error\nreturn, they'd have no way to recover anyhow.)\n\nI thought these patches should not have been applied without more\npeer review, and now I'm sure of it. I recommend reverting 'em.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jan 2000 00:10:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump disaster "
},
{
"msg_contents": "> Patrick Welche <[email protected]> writes:\n> > Trying to load a 6 January pg_dumpall file with today's postgresql gives\n> > many\n> > invalid command \\N\n> > probably because\n> > PQexec: you gotta get out of a COPY state yourself.\n> > db.out4:11621: invalid command \\.\n> > Somehow psql is out of sync and thinks the \\N isn't within a COPY block.\n> \n> The answer appears to be that Perlstein's \"nonblocking mode\" patches\n> have broken psql copy, and doubtless a lot of other applications as\n> well, because pqPutBytes no longer feels any particular compulsion\n> to actually send the data it's been handed. (Moreover, if it does\n> do only a partial send, there is no way to discover how much it sent;\n> while its callers might be blamed for not having checked for an error\n> return, they'd have no way to recover anyhow.)\n> \n> I thought these patches should not have been applied without more\n> peer review, and now I'm sure of it. I recommend reverting 'em.\n\nCan we give the submitter a few days to address the issue?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Jan 2000 00:49:34 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump disaster"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I thought these patches should not have been applied without more\n>> peer review, and now I'm sure of it. I recommend reverting 'em.\n\n> Can we give the submitter a few days to address the issue?\n\nSure, we have plenty of time. But I don't think the problem can be\nfixed without starting over. He's taken routines that had two possible\nreturn conditions (\"Success\" and \"Hard failure: he's dead, Jim\") and\nadded a third condition (\"I didn't do what I was supposed to do, or\nperhaps only some of what I was supposed to do, because I was afraid\nof blocking\"). If dealing with that third condition could be hidden\nentirely inside libpq, that'd be OK, but the entire point of this\nset of changes is to bounce control back to the application rather\nthan block. Therefore, what we are looking at is a fundamental change\nin the API of existing routines, which is *guaranteed* to break existing\napplications. (Worse, it breaks them subtly: the code will compile,\nand may even work under light load.)\n\nI think the correct approach is to leave the semantics of the existing\nexported routines alone, and add parallel entry points with new\nsemantics to be used by applications that need to avoid blocking.\nThat's what we've done for queries, and later for connecting, and it\nhasn't broken anything.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jan 2000 01:09:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump disaster "
},
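The distinction Tom is drawing can be made concrete with a three-valued result type; this is an illustration of the API shape under discussion, not code from libpq:

    /* The old entry points can only say "worked" or "dead"; a nonblocking
     * variant needs a third answer that the caller can act on. */
    typedef enum
    {
        PQSEND_ERROR = -1,      /* hard failure: the connection is dead */
        PQSEND_OK = 0,          /* all data accepted */
        PQSEND_WOULDBLOCK = 1   /* nothing queued; retry after select() */
    } PQsendResult;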
{
"msg_contents": "* Bruce Momjian <[email protected]> [000120 22:13] wrote:\n> > Patrick Welche <[email protected]> writes:\n> > > Trying to load a 6 January pg_dumpall file with today's postgresql gives\n> > > many\n> > > invalid command \\N\n> > > probably because\n> > > PQexec: you gotta get out of a COPY state yourself.\n> > > db.out4:11621: invalid command \\.\n> > > Somehow psql is out of sync and thinks the \\N isn't within a COPY block.\n> > \n> > The answer appears to be that Perlstein's \"nonblocking mode\" patches\n> > have broken psql copy, and doubtless a lot of other applications as\n> > well, because pqPutBytes no longer feels any particular compulsion\n> > to actually send the data it's been handed. (Moreover, if it does\n> > do only a partial send, there is no way to discover how much it sent;\n> > while its callers might be blamed for not having checked for an error\n> > return, they'd have no way to recover anyhow.)\n> > \n> > I thought these patches should not have been applied without more\n> > peer review, and now I'm sure of it. I recommend reverting 'em.\n> \n> Can we give the submitter a few days to address the issue?\n\nTom is wrong in his assessment, this is exactly the reason I brought\nthe patches in.\n\npqPutBytes _never_ felt any compulsion to flush the buffer to the backend,\nor at least not since I started using it. The only change I made was\nfor send reservations to be made for non-blocking connections, nothing\ninside of postgresql uses non-blocking connections.\n\npqPutBytes from revision 1.33: (plus my commentary)\n\nstatic int\npqPutBytes(const char *s, size_t nbytes, PGconn *conn)\n{\n\tsize_t avail = Max(conn->outBufSize - conn->outCount, 0);\n\n/* while the amount to send is greater than the output buffer */\n\twhile (nbytes > avail)\n\t{\n/* copy as much as we can into the buffer */\n\t\tmemcpy(conn->outBuffer + conn->outCount, s, avail);\n\t\tconn->outCount += avail;\n\t\ts += avail;\n\t\tnbytes -= avail;\n/* try to flush it.... */\n\t\tif (pqFlush(conn))\n\t\t\treturn EOF;\n\t\tavail = conn->outBufSize;\n\t}\n\n/* XXX: ok, we have enough room... where is the flush? */\n\tmemcpy(conn->outBuffer + conn->outCount, s, nbytes);\n\tconn->outCount += nbytes;\n\n\treturn 0;\n}\n\nI may have introduced bugs elsewhere, but i doubt it was in pqPutBytes.\n\nThis is the exact problem I was describing and why I felt that routines\nthat checked for data needed to flush beforehand, there may have been\ndata that still needed to be sent.\n\nI'll be reviewing my own changes again to make sure I haven't broken\nsomething else that could be causing this.\n\nThe implications of this is trully annoying, exporting the socket to\nthe backend to the client application causes all sorts of problems because\nthe person select()'ing on the socket sees that it's 'clear' but yet\nall thier data has not been sent...\n\n-Alfred\n\n\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n",
"msg_date": "Fri, 21 Jan 2000 01:03:05 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump disaster"
},
{
"msg_contents": "On Fri, 21 Jan 2000, Bruce Momjian wrote:\n\n> > I thought these patches should not have been applied without more\n> > peer review, and now I'm sure of it. I recommend reverting 'em.\n> \n> Can we give the submitter a few days to address the issue?\n\nConsidering that we haven't even gone BETA yet, I *heavily* second this\n... Alfred, so far as I've seen, has a) spent alot of time on these\npatches and b) tried to address any concerns as they've been presented\nconcerning them...\n\nIMHO, if we hadn't commit'd the patches, we wouldn't have found the bug,\nand getting feedback on the pathes without applying them, so far, has been\nabout as painful as pulling teeth ...\n\n>From what I've seen, nobody has been spending much time in libpq *anyway*,\nso it isn't as if reverting them if we have to is a big issue ... \n\nBut, on the other side of hte coin ... Alfred, we need relatively quick\nturnaround on fixing this, as libpq is kinda crucial :)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 21 Jan 2000 09:57:06 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump disaster"
},
{
"msg_contents": "> On Fri, 21 Jan 2000, Bruce Momjian wrote:\n> \n> > > I thought these patches should not have been applied without more\n> > > peer review, and now I'm sure of it. I recommend reverting 'em.\n> > \n> > Can we give the submitter a few days to address the issue?\n> \n> Considering that we haven't even gone BETA yet, I *heavily* second this\n> ... Alfred, so far as I've seen, has a) spent alot of time on these\n> patches and b) tried to address any concerns as they've been presented\n> concerning them...\n> \n> IMHO, if we hadn't commit'd the patches, we wouldn't have found the bug,\n> and getting feedback on the pathes without applying them, so far, has been\n> about as painful as pulling teeth ...\n\nYes, he has worked very hard on this. That's why I want to give him\nsome time to address the issues. In fact, he already has replied, and I\nam sure a dialog will resolve the issue.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Jan 2000 10:18:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump disaster"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n>>>> The answer appears to be that Perlstein's \"nonblocking mode\" patches\n>>>> have broken psql copy, and doubtless a lot of other applications as\n>>>> well, because pqPutBytes no longer feels any particular compulsion\n>>>> to actually send the data it's been handed. (Moreover, if it does\n>>>> do only a partial send, there is no way to discover how much it sent;\n>>>> while its callers might be blamed for not having checked for an error\n>>>> return, they'd have no way to recover anyhow.)\n\n> pqPutBytes _never_ felt any compulsion to flush the buffer to the backend,\n> or at least not since I started using it.\n\nSorry, I was insufficiently careful about my wording. It's true that\npqPutBytes doesn't worry about actually flushing the data out to the\nbackend. (It shouldn't, either, since it is typically called with small\nfragments of a message rather than complete messages.) It did, however,\ntake care to *accept* all the data it was given and ensure that the data\nwas queued in the output buffer. As the code now stands, it's\nimpossible to tell whether all the passed data was queued or not, or how\nmuch of it was queued. This is a fundamental design error, because the\ncaller has no way to discover what to do after a failure return (nor\neven a way to tell if it was a hard failure or just I-won't-block).\nMoreover, no existing caller of PQputline thinks it should have to worry\nabout looping around the call, so even if you put in a usable return\nconvention, existing apps would still be broken.\n\nSimilarly, PQendcopy is now willing to return without having gotten\nthe library out of the COPY state, but the caller can't easily tell\nwhat to do about it --- nor do existing callers believe that they\nshould have to do anything about it.\n\n> The implications of this is trully annoying, exporting the socket to\n> the backend to the client application causes all sorts of problems because\n> the person select()'ing on the socket sees that it's 'clear' but yet\n> all thier data has not been sent...\n\nYeah, the original set of exported routines was designed without any\nthought of handling a nonblock mode. But you aren't going to be able\nto fix them this way. There will need to be a new set of entry points\nthat add a concept of \"operation not complete\" to their API, and apps\nthat want to avoid blocking will need to call those instead. Compare\nwhat's been done for connecting (PQconnectPoll) and COPY TO STDOUT\n(PQgetlineAsync).\n\nIt's possible that things were broken before you got to them --- there\nhave been several sets of not-very-carefully-reviewed patches to libpq\nduring the current development cycle, so someone else may have created\nthe seeds of the problem. However, we weren't seeing failures in psql\nbefore this week...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jan 2000 10:44:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump disaster "
},
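To see why an "operation not complete" answer matters, here is a sketch of the loop a nonblocking caller would need around such an entry point; pq_flush_status() is a hypothetical three-way flush (-1 error, 0 done, 1 partial) and is an assumption, not an existing libpq call:

    #include <sys/select.h>
    #include "libpq-fe.h"

    extern int pq_flush_status(PGconn *conn);   /* hypothetical: -1/0/1 */

    /* Drain pending output without ever blocking inside libpq. */
    static int
    drain_output(PGconn *conn)
    {
        int rc;

        while ((rc = pq_flush_status(conn)) > 0)
        {
            fd_set wfds;

            FD_ZERO(&wfds);
            FD_SET(PQsocket(conn), &wfds);
            /* wait until the socket can accept more data, then retry */
            if (select(PQsocket(conn) + 1, NULL, &wfds, NULL, NULL) < 0)
                return -1;      /* select() itself failed */
        }
        return rc;              /* 0 on success, -1 on hard error */
    }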
{
"msg_contents": "* Tom Lane <[email protected]> [000121 08:14] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> >>>> The answer appears to be that Perlstein's \"nonblocking mode\" patches\n> >>>> have broken psql copy, and doubtless a lot of other applications as\n> >>>> well, because pqPutBytes no longer feels any particular compulsion\n> >>>> to actually send the data it's been handed. (Moreover, if it does\n> >>>> do only a partial send, there is no way to discover how much it sent;\n> >>>> while its callers might be blamed for not having checked for an error\n> >>>> return, they'd have no way to recover anyhow.)\n> \n> > pqPutBytes _never_ felt any compulsion to flush the buffer to the backend,\n> > or at least not since I started using it.\n> \n> Sorry, I was insufficiently careful about my wording. It's true that\n> pqPutBytes doesn't worry about actually flushing the data out to the\n> backend. (It shouldn't, either, since it is typically called with small\n> fragments of a message rather than complete messages.) It did, however,\n> take care to *accept* all the data it was given and ensure that the data\n> was queued in the output buffer. As the code now stands, it's\n> impossible to tell whether all the passed data was queued or not, or how\n> much of it was queued. This is a fundamental design error, because the\n> caller has no way to discover what to do after a failure return (nor\n> even a way to tell if it was a hard failure or just I-won't-block).\n> Moreover, no existing caller of PQputline thinks it should have to worry\n> about looping around the call, so even if you put in a usable return\n> convention, existing apps would still be broken.\n> \n> Similarly, PQendcopy is now willing to return without having gotten\n> the library out of the COPY state, but the caller can't easily tell\n> what to do about it --- nor do existing callers believe that they\n> should have to do anything about it.\n> \n> > The implications of this is trully annoying, exporting the socket to\n> > the backend to the client application causes all sorts of problems because\n> > the person select()'ing on the socket sees that it's 'clear' but yet\n> > all thier data has not been sent...\n> \n> Yeah, the original set of exported routines was designed without any\n> thought of handling a nonblock mode. But you aren't going to be able\n> to fix them this way. There will need to be a new set of entry points\n> that add a concept of \"operation not complete\" to their API, and apps\n> that want to avoid blocking will need to call those instead. Compare\n> what's been done for connecting (PQconnectPoll) and COPY TO STDOUT\n> (PQgetlineAsync).\n> \n> It's possible that things were broken before you got to them --- there\n> have been several sets of not-very-carefully-reviewed patches to libpq\n> during the current development cycle, so someone else may have created\n> the seeds of the problem. However, we weren't seeing failures in psql\n> before this week...\n\nWe both missed it, and yes it was my fault. All connections are\nbehaving as if PQsetnonblocking(conn, TRUE) have been called on them.\n\nThe original non-blocking patches did something weird, they seemed\nto _always_ stick the socket into non-blocking mode. This would\nactivate my non-blocking stuff for all connections.\n\nI assumed the only code that called the old makenonblocking function\nwas setup to handle this, unfortunatly it's not and you know what they\nsay about assumptions. 
:(\n\nI should have a fix tonight.\n\nsorry,\n-Alfred\n\n",
"msg_date": "Fri, 21 Jan 2000 13:46:38 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump disaster"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> We both missed it, and yes it was my fault. All connections are\n> behaving as if PQsetnonblocking(conn, TRUE) have been called on them.\n> The original non-blocking patches did something weird, they seemed\n> to _always_ stick the socket into non-blocking mode. This would\n> activate my non-blocking stuff for all connections.\n\nYes, the present state of the code seems to activate nonblocking socket\nmode all the time; possibly we could band-aid our way back to a working\npsql by turning off nonblock mode by default. But this doesn't address\nthe fact that the API of these routines cannot support nonblock mode\nwithout being redesigned.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jan 2000 17:05:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump disaster "
},
{
"msg_contents": "* Tom Lane <[email protected]> [000121 14:35] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > We both missed it, and yes it was my fault. All connections are\n> > behaving as if PQsetnonblocking(conn, TRUE) have been called on them.\n> > The original non-blocking patches did something weird, they seemed\n> > to _always_ stick the socket into non-blocking mode. This would\n> > activate my non-blocking stuff for all connections.\n> \n> Yes, the present state of the code seems to activate nonblocking socket\n> mode all the time; possibly we could band-aid our way back to a working\n> psql by turning off nonblock mode by default. But this doesn't address\n> the fact that the API of these routines cannot support nonblock mode\n> without being redesigned.\n\nThese patches revert the default setting of the non-block flag back\nto the old way connections were done. Since i'm unable to reproduce\nthis bug I'm hoping people can _please_ give me feedback.\n\nThis is just a first shot at fixing the issue, I'll supply changes\nto the docs if this all goes well (that you must explicitly set the\nblocking status after a connect/disconnect)\n\nI'm a bit concerned about busy looping because the connection is\nleft in a non-blocking state after the connect, however most of\nthe code performs select() and checks for EWOULDBLOCK/EAGAIN so it\nmight not be an issue.\n\nThanks for holding off on backing out the changes.\n\nSummary:\ndon't set the nonblock flag during connections\nPQsetnonblocking doesn't fiddle with socket flags anymore as the library\n seems to be setup to deal with the socket being in non-block mode at\n all times\nturn off the nonblock flag if/when the connection is torn down to ensure\n that a reconnect behaves like it used to.\n\n\nIndex: fe-connect.c\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.114\ndiff -u -c -I$Header: -r1.114 fe-connect.c\ncvs diff: conflicting specifications of output style\n*** fe-connect.c\t2000/01/23 01:27:39\t1.114\n--- fe-connect.c\t2000/01/23 08:56:17\n***************\n*** 391,397 ****\n \tPGconn\t *conn;\n \tchar\t *tmp;\t/* An error message from some service we call. */\n \tbool\t\terror = FALSE;\t/* We encountered an error. */\n- \tint\t\t\ti;\n \n \tconn = makeEmptyPGconn();\n \tif (conn == NULL)\n--- 391,396 ----\n***************\n*** 586,591 ****\n--- 585,614 ----\n }\n \n /* ----------\n+ * connectMakeNonblocking -\n+ * Make a connection non-blocking.\n+ * Returns 1 if successful, 0 if not.\n+ * ----------\n+ */\n+ static int\n+ connectMakeNonblocking(PGconn *conn)\n+ {\n+ #ifndef WIN32\n+ \tif (fcntl(conn->sock, F_SETFL, O_NONBLOCK) < 0)\n+ #else\n+ \tif (ioctlsocket(conn->sock, FIONBIO, &on) != 0)\n+ #endif\n+ \t{\n+ \t\tprintfPQExpBuffer(&conn->errorMessage,\n+ \t\t\t\t\t\t \"connectMakeNonblocking -- fcntl() failed: errno=%d\\n%s\\n\",\n+ \t\t\t\t\t\t errno, strerror(errno));\n+ \t\treturn 0;\n+ \t}\n+ \n+ \treturn 1;\n+ }\n+ \n+ /* ----------\n * connectNoDelay -\n * Sets the TCP_NODELAY socket option.\n * Returns 1 if successful, 0 if not.\n***************\n*** 755,761 ****\n \t * Ewan Mellor <[email protected]>.\n \t * ---------- */\n #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n! 
\tif (PQsetnonblocking(conn, TRUE) != 0)\n \t\tgoto connect_errReturn;\n #endif\t\n \n--- 778,784 ----\n \t * Ewan Mellor <[email protected]>.\n \t * ---------- */\n #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n! \tif (connectMakeNonblocking(conn) == 0)\n \t\tgoto connect_errReturn;\n #endif\t\n \n***************\n*** 868,874 ****\n \t/* This makes the connection non-blocking, for all those cases which forced us\n \t not to do it above. */\n #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n! \tif (PQsetnonblocking(conn, TRUE) != 0)\n \t\tgoto connect_errReturn;\n #endif\t\n \n--- 891,897 ----\n \t/* This makes the connection non-blocking, for all those cases which forced us\n \t not to do it above. */\n #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n! \tif (connectMakeNonblocking(conn) == 0)\n \t\tgoto connect_errReturn;\n #endif\t\n \n***************\n*** 1785,1790 ****\n--- 1808,1820 ----\n \t\t(void) pqPuts(\"X\", conn);\n \t\t(void) pqFlush(conn);\n \t}\n+ \n+ \t/* \n+ \t * must reset the blocking status so a possible reconnect will work\n+ \t * don't call PQsetnonblocking() because it will fail if it's unable\n+ \t * to flush the connection.\n+ \t */\n+ \tconn->nonblocking = FALSE;\n \n \t/*\n \t * Close the connection, reset all transient state, flush I/O buffers.\nIndex: fe-exec.c\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-exec.c,v\nretrieving revision 1.87\ndiff -u -c -I$Header: -r1.87 fe-exec.c\ncvs diff: conflicting specifications of output style\n*** fe-exec.c\t2000/01/18 06:09:24\t1.87\n--- fe-exec.c\t2000/01/23 08:56:29\n***************\n*** 2116,2122 ****\n int\n PQsetnonblocking(PGconn *conn, int arg)\n {\n- \tint\tfcntlarg;\n \n \targ = (arg == TRUE) ? 1 : 0;\n \t/* early out if the socket is already in the state requested */\n--- 2116,2121 ----\n***************\n*** 2131,2174 ****\n \t * _from_ or _to_ blocking mode, either way we can block them.\n \t */\n \t/* if we are going from blocking to non-blocking flush here */\n! \tif (!pqIsnonblocking(conn) && pqFlush(conn))\n! \t\treturn (-1);\n! \n! \n! #ifdef USE_SSL\n! \tif (conn->ssl)\n! \t{\n! \t\tprintfPQExpBuffer(&conn->errorMessage,\n! \t\t\t\"PQsetnonblocking() -- not supported when using SSL\\n\");\n! \t\treturn (-1);\n! \t}\n! #endif /* USE_SSL */\n! \n! #ifndef WIN32\n! \tfcntlarg = fcntl(conn->sock, F_GETFL, 0);\n! \tif (fcntlarg == -1)\n! \t\treturn (-1);\n! \n! \tif ((arg == TRUE && \n! \t\tfcntl(conn->sock, F_SETFL, fcntlarg | O_NONBLOCK) == -1) ||\n! \t\t(arg == FALSE &&\n! \t\tfcntl(conn->sock, F_SETFL, fcntlarg & ~O_NONBLOCK) == -1)) \n! #else\n! \tfcntlarg = arg;\n! \tif (ioctlsocket(conn->sock, FIONBIO, &fcntlarg) != 0)\n! #endif\n! \t{\n! \t\tprintfPQExpBuffer(&conn->errorMessage,\n! \t\t\t\"PQsetblocking() -- unable to set nonblocking status to %s\\n\",\n! \t\t\targ == TRUE ? \"TRUE\" : \"FALSE\");\n \t\treturn (-1);\n- \t}\n \n \tconn->nonblocking = arg;\n- \n- \t/* if we are going from non-blocking to blocking flush here */\n- \tif (pqIsnonblocking(conn) && pqFlush(conn))\n- \t\treturn (-1);\n \n \treturn (0);\n }\n--- 2130,2139 ----\n \t * _from_ or _to_ blocking mode, either way we can block them.\n \t */\n \t/* if we are going from blocking to non-blocking flush here */\n! 
\tif (pqFlush(conn))\n \t\treturn (-1);\n \n \tconn->nonblocking = arg;\n \n \treturn (0);\n }\n\n--\n-Alfred Perlstein - [[email protected]|[email protected]]\n",
"msg_date": "Sat, 22 Jan 2000 21:14:27 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "pg_dump possible fix,\n\tneed testers. (was: Re: [HACKERS] pg_dump disaster)"
},
{
"msg_contents": "\nPatch appled. And thanks very much for the patch. I am sorry you were\ngiven a bad time about your previous patch. We will try to not let it\nhappen again.\n\n\n\n> \n> These patches revert the default setting of the non-block flag back\n> to the old way connections were done. Since i'm unable to reproduce\n> this bug I'm hoping people can _please_ give me feedback.\n> \n> This is just a first shot at fixing the issue, I'll supply changes\n> to the docs if this all goes well (that you must explicitly set the\n> blocking status after a connect/disconnect)\n> \n> I'm a bit concerned about busy looping because the connection is\n> left in a non-blocking state after the connect, however most of\n> the code performs select() and checks for EWOULDBLOCK/EAGAIN so it\n> might not be an issue.\n> \n> Thanks for holding off on backing out the changes.\n> \n> Summary:\n> don't set the nonblock flag during connections\n> PQsetnonblocking doesn't fiddle with socket flags anymore as the library\n> seems to be setup to deal with the socket being in non-block mode at\n> all times\n> turn off the nonblock flag if/when the connection is torn down to ensure\n> that a reconnect behaves like it used to.\n> \n> \n> Index: fe-connect.c\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-connect.c,v\n> retrieving revision 1.114\n> diff -u -c -I$Header: -r1.114 fe-connect.c\n> cvs diff: conflicting specifications of output style\n> *** fe-connect.c\t2000/01/23 01:27:39\t1.114\n> --- fe-connect.c\t2000/01/23 08:56:17\n> ***************\n> *** 391,397 ****\n> \tPGconn\t *conn;\n> \tchar\t *tmp;\t/* An error message from some service we call. */\n> \tbool\t\terror = FALSE;\t/* We encountered an error. */\n> - \tint\t\t\ti;\n> \n> \tconn = makeEmptyPGconn();\n> \tif (conn == NULL)\n> --- 391,396 ----\n> ***************\n> *** 586,591 ****\n> --- 585,614 ----\n> }\n> \n> /* ----------\n> + * connectMakeNonblocking -\n> + * Make a connection non-blocking.\n> + * Returns 1 if successful, 0 if not.\n> + * ----------\n> + */\n> + static int\n> + connectMakeNonblocking(PGconn *conn)\n> + {\n> + #ifndef WIN32\n> + \tif (fcntl(conn->sock, F_SETFL, O_NONBLOCK) < 0)\n> + #else\n> + \tif (ioctlsocket(conn->sock, FIONBIO, &on) != 0)\n> + #endif\n> + \t{\n> + \t\tprintfPQExpBuffer(&conn->errorMessage,\n> + \t\t\t\t\t\t \"connectMakeNonblocking -- fcntl() failed: errno=%d\\n%s\\n\",\n> + \t\t\t\t\t\t errno, strerror(errno));\n> + \t\treturn 0;\n> + \t}\n> + \n> + \treturn 1;\n> + }\n> + \n> + /* ----------\n> * connectNoDelay -\n> * Sets the TCP_NODELAY socket option.\n> * Returns 1 if successful, 0 if not.\n> ***************\n> *** 755,761 ****\n> \t * Ewan Mellor <[email protected]>.\n> \t * ---------- */\n> #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n> ! \tif (PQsetnonblocking(conn, TRUE) != 0)\n> \t\tgoto connect_errReturn;\n> #endif\t\n> \n> --- 778,784 ----\n> \t * Ewan Mellor <[email protected]>.\n> \t * ---------- */\n> #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n> ! \tif (connectMakeNonblocking(conn) == 0)\n> \t\tgoto connect_errReturn;\n> #endif\t\n> \n> ***************\n> *** 868,874 ****\n> \t/* This makes the connection non-blocking, for all those cases which forced us\n> \t not to do it above. */\n> #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n> ! 
\tif (PQsetnonblocking(conn, TRUE) != 0)\n> \t\tgoto connect_errReturn;\n> #endif\t\n> \n> --- 891,897 ----\n> \t/* This makes the connection non-blocking, for all those cases which forced us\n> \t not to do it above. */\n> #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n> ! \tif (connectMakeNonblocking(conn) == 0)\n> \t\tgoto connect_errReturn;\n> #endif\t\n> \n> ***************\n> *** 1785,1790 ****\n> --- 1808,1820 ----\n> \t\t(void) pqPuts(\"X\", conn);\n> \t\t(void) pqFlush(conn);\n> \t}\n> + \n> + \t/* \n> + \t * must reset the blocking status so a possible reconnect will work\n> + \t * don't call PQsetnonblocking() because it will fail if it's unable\n> + \t * to flush the connection.\n> + \t */\n> + \tconn->nonblocking = FALSE;\n> \n> \t/*\n> \t * Close the connection, reset all transient state, flush I/O buffers.\n> Index: fe-exec.c\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-exec.c,v\n> retrieving revision 1.87\n> diff -u -c -I$Header: -r1.87 fe-exec.c\n> cvs diff: conflicting specifications of output style\n> *** fe-exec.c\t2000/01/18 06:09:24\t1.87\n> --- fe-exec.c\t2000/01/23 08:56:29\n> ***************\n> *** 2116,2122 ****\n> int\n> PQsetnonblocking(PGconn *conn, int arg)\n> {\n> - \tint\tfcntlarg;\n> \n> \targ = (arg == TRUE) ? 1 : 0;\n> \t/* early out if the socket is already in the state requested */\n> --- 2116,2121 ----\n> ***************\n> *** 2131,2174 ****\n> \t * _from_ or _to_ blocking mode, either way we can block them.\n> \t */\n> \t/* if we are going from blocking to non-blocking flush here */\n> ! \tif (!pqIsnonblocking(conn) && pqFlush(conn))\n> ! \t\treturn (-1);\n> ! \n> ! \n> ! #ifdef USE_SSL\n> ! \tif (conn->ssl)\n> ! \t{\n> ! \t\tprintfPQExpBuffer(&conn->errorMessage,\n> ! \t\t\t\"PQsetnonblocking() -- not supported when using SSL\\n\");\n> ! \t\treturn (-1);\n> ! \t}\n> ! #endif /* USE_SSL */\n> ! \n> ! #ifndef WIN32\n> ! \tfcntlarg = fcntl(conn->sock, F_GETFL, 0);\n> ! \tif (fcntlarg == -1)\n> ! \t\treturn (-1);\n> ! \n> ! \tif ((arg == TRUE && \n> ! \t\tfcntl(conn->sock, F_SETFL, fcntlarg | O_NONBLOCK) == -1) ||\n> ! \t\t(arg == FALSE &&\n> ! \t\tfcntl(conn->sock, F_SETFL, fcntlarg & ~O_NONBLOCK) == -1)) \n> ! #else\n> ! \tfcntlarg = arg;\n> ! \tif (ioctlsocket(conn->sock, FIONBIO, &fcntlarg) != 0)\n> ! #endif\n> ! \t{\n> ! \t\tprintfPQExpBuffer(&conn->errorMessage,\n> ! \t\t\t\"PQsetblocking() -- unable to set nonblocking status to %s\\n\",\n> ! \t\t\targ == TRUE ? \"TRUE\" : \"FALSE\");\n> \t\treturn (-1);\n> - \t}\n> \n> \tconn->nonblocking = arg;\n> - \n> - \t/* if we are going from non-blocking to blocking flush here */\n> - \tif (pqIsnonblocking(conn) && pqFlush(conn))\n> - \t\treturn (-1);\n> \n> \treturn (0);\n> }\n> --- 2130,2139 ----\n> \t * _from_ or _to_ blocking mode, either way we can block them.\n> \t */\n> \t/* if we are going from blocking to non-blocking flush here */\n> ! \tif (pqFlush(conn))\n> \t\treturn (-1);\n> \n> \tconn->nonblocking = arg;\n> \n> \treturn (0);\n> }\n> \n> --\n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 23 Jan 2000 00:25:57 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump possible fix, need testers. (was: Re: [HACKERS] pg_dump\n\tdisaster)"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> I really hope the originator of the problem report will get back to\n> us to make sure it's settled.\n\n> *poking Patrick Welche*\n\nUm, I didn't have any trouble at all reproducing Patrick's complaint.\npg_dump any moderately large table (I used tenk1 from the regress\ndatabase) and try to load the script with psql. Kaboom.\n\nAlso, I still say that turning off nonblock mode by default is only\na band-aid: this code *will fail* whenever nonblock mode is enabled,\nbecause it does not return enough info to the caller to allow the\ncaller to do the right thing. If you haven't seen it fail, that\nonly proves that you haven't actually stressed it to the point of\nexercising the buffer-overrun code paths.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 Jan 2000 00:53:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump possible fix,\n\tneed testers. (was: Re: [HACKERS] pg_dump disaster)"
},
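For anyone wanting to reproduce the failure Tom describes, the recipe amounts to the following; the dump file and target database names are placeholders, not values from the thread:

    pg_dump -t tenk1 regression > tenk1.sql
    createdb crashtest
    psql crashtest < tenk1.sql     # watch for "invalid command \N"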
{
"msg_contents": "* Bruce Momjian <[email protected]> [000122 21:50] wrote:\n>\n> >\n> > These patches revert the default setting of the non-block flag back\n> > to the old way connections were done. Since i'm unable to reproduce\n> > this bug I'm hoping people can _please_ give me feedback.\n> \n> Patch appled. And thanks very much for the patch. I am sorry you were\n> given a bad time about your previous patch. We will try to not let it\n> happen again.\n\nI really hope the originator of the problem report will get back to\nus to make sure it's settled.\n\n*poking Patrick Welche*\n\n:)\n\nthanks,\n-Alfred\n\n",
"msg_date": "Sat, 22 Jan 2000 22:02:56 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump possible fix,\n\tneed testers. (was: Re: [HACKERS] pg_dump disaster)"
},
{
"msg_contents": "* Tom Lane <[email protected]> [000122 22:17] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > I really hope the originator of the problem report will get back to\n> > us to make sure it's settled.\n> \n> > *poking Patrick Welche*\n> \n> Um, I didn't have any trouble at all reproducing Patrick's complaint.\n> pg_dump any moderately large table (I used tenk1 from the regress\n> database) and try to load the script with psql. Kaboom.\n\nCan you try it on sources from before my initial patches were\napplied, and from before the initial non-blocking connections\npatches from Ewan Mellor were applied?\n\n>From my point of view none of my code should be affecting those that\ndon't explicitly use PQsetnonblocking(), which is nothing besides\nthe application i'm developing in-house.\n\nI'm currently investigating that possibility.\n\nthanks,\n-Alfred\n",
"msg_date": "Sat, 22 Jan 2000 23:05:11 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump possible fix,\n\tneed testers. (was: Re: [HACKERS] pg_dump disaster)"
},
{
"msg_contents": ">> Um, I didn't have any trouble at all reproducing Patrick's complaint.\n>> pg_dump any moderately large table (I used tenk1 from the regress\n>> database) and try to load the script with psql. Kaboom.\n\n> This is after or before my latest patch?\n\nBefore. I haven't updated since yesterday...\n\n> I can't seem to reproduce this problem,\n\nOdd. Maybe there is something different about the kernel's timing of\nmessage sending on your platform. I see it very easily on HPUX 10.20,\nand Patrick sees it very easily on whatever he's using (netbsd I think).\nYou might try varying the situation a little, say\n\tpsql mydb <dumpfile\n\tpsql -f dumpfile mydb\n\tpsql mydb\n\t\t\\i dumpfile\nand the same with -h localhost (to get a TCP/IP connection instead of\nUnix domain). At the moment (pre-patch) I see failures with the\nfirst two of these, but not with the \\i method. -h doesn't seem to\nmatter for me, but it might for you.\n\n> Telling me something is wrong without giving suggestions on how\n> to fix it, nor direct pointers to where it fails doesn't help me\n> one bit. You're not offering constructive critism, you're not\n> even offering valid critism, you're just waving your finger at\n> \"problems\" that you say exist but don't pin down to anything specific.\n\nI have been explaining it as clearly as I could. Let's try it\none more time.\n\n> I spent hours looking over what I did to pqFlush and pqPutnBytes\n> because of what you said earlier when all the bug seems to have\n> come down to is that I missed that the socket is set to non-blocking\n> in all cases now.\n\nLetting the socket mode default to blocking will hide the problems from\nexisting clients that don't care about non-block mode. But people who\ntry to actually use the nonblock mode are going to see the same kinds of\nproblems that psql is exhibiting.\n\n> The old sequence of events that happened was as follows:\n\n> user sends data almost filling the output buffer...\n> user sends another line of text overflowing the buffer...\n> pqFlush is invoked blocking the user until the output pipe clears...\n> and repeat.\n\nRight.\n\n> The nonblocking code allows sends to fail so the user can abort\n> sending stuff to the backend in order to process other work:\n\n> user sends data almost filling the output buffer...\n> user sends another line of text that may overflow the buffer...\n> pqFlush is invoked, \n> if the pipe can't be cleared an error is returned allowing the user to\n> retry the send later.\n> if the flush succeeds then more data is queued and success is returned\n\nBut you haven't thought through the mechanics of the \"error is returned\nallowing the user to retry\" code path clearly enough. Let's take\npqPutBytes for an example. If it returns EOF, is that a hard error or\ndoes it just mean that the application needs to wait a while? The\napplication *must* distinguish these cases, or it will do the wrong\nthing: for example, if it mistakes a hard error for \"wait a while\",\nthen it will wait forever without making any progress or producing\nan error report.\n\nYou need to provide a different return convention that indicates\nwhat happened, say\n\tEOF (-1)\t=> hard error (same as old code)\n\t0\t\t=> OK\n\t1\t\t=> no data was queued due to risk of blocking\nAnd you need to guarantee that the application knows what the state is\nwhen the can't-do-it-yet return is made; note that I specified \"no data\nwas queued\" above. 
If pqPutBytes might queue some of the data before\nreturning 1, the application is in trouble again. While you apparently\nforesaw that in recoding pqPutBytes, your code doesn't actually work.\nThere is the minor code bug that you fail to update \"avail\" after the\nfirst pqFlush call, and the much more fundamental problem that you\ncannot guarantee to have queued all or none of the data. Think about\nwhat happens if the passed nbytes is larger than the output buffer size.\nYou may pass the first pqFlush successfully, then get into the loop and\nget a won't-block return from pqFlush in the loop. What then?\nYou can't simply refuse to support the case nbytes > bufsize at all,\nbecause that will cause application failures as well (too long query\nsends it into an infinite loop trying to queue data, most likely).\n\nA possible answer is to specify that a return of +N means \"N bytes\nremain unqueued due to risk of blocking\" (after having queued as much\nas you could). This would put the onus on the caller to update his\npointers/counts properly; propagating that into all the internal uses\nof pqPutBytes would be no fun. (Of course, so far you haven't updated\n*any* of the internal callers to behave reasonably in case of a\nwon't-block return; PQfn is just one example.)\n\nAnother possible answer is to preserve pqPutBytes' old API, \"queue or\nbust\", by the expedient of enlarging the output buffer to hold whatever\nwe can't send immediately. This is probably more attractive, even\nthough a long query might suck up a lot of space that won't get\nreclaimed as long as the connection lives. If you don't do this then\nyou are going to have to make a lot of ugly changes in the internal\ncallers to deal with won't-block returns. Actually, a bulk COPY IN\nwould probably be the worst case --- the app could easily load data into\nthe buffer far faster than it could be sent. It might be best to extend\nPQputline to have a three-way return and add code there to limit the\ngrowth of the output buffer, while allowing all internal callers to\nassume that the buffer is expanded when they need it.\n\npqFlush has the same kind of interface design problem: the same EOF code\nis returned for either a hard error or can't-flush-yet, but it would be\ndisastrous to treat those cases alike. You must provide a 3-way return\ncode.\n\nFurthermore, the same sort of 3-way return code convention will have to\npropagate out through anything that calls pqFlush (with corresponding\ndocumentation updates). pqPutBytes can be made to hide a pqFlush won't-\nblock return by trying to enlarge the output buffer, but in most other\nplaces you won't have a choice except to punt it back to the caller.\n\nPQendcopy has the same interface design problem. It used to be that\n(unless you passed a null pointer) PQendcopy would *guarantee* that\nthe connection was no longer in COPY state on return --- by resetting\nit, if necessary. So the return code was mainly informative; the\napplication didn't have to do anything different if PQendcopy reported\nfailure. But now, a nonblocking application does need to pay attention\nto whether PQendcopy completed or not --- and you haven't provided a way\nfor it to tell. If 1 is returned, the connection might still be in\nCOPY state, or it might not (PQendcopy might have reset it). 
If the\napplication doesn't distinguish these cases then it will fail.\n\nI also think that you want to take a hard look at the automatic \"reset\"\nbehavior upon COPY failure, since a PQreset call will block the\napplication until it finishes. Really, what is needed to close down a\nCOPY safely in nonblock mode is a pair of entry points along the line of\n\"PQendcopyStart\" and \"PQendcopyPoll\", with API conventions similar to\nPQresetStart/PQresetPoll. This gives you the ability to do the reset\n(if one is necessary) without blocking the application. PQendcopy\nitself will only be useful to blocking applications.\n\n> I'm sorry if they don't work for some situations other than COPY IN,\n> but it's functionality that I needed and I expect to be expanded on\n> by myself and others that take interest in nonblocking operation.\n\nI don't think that the nonblock code is anywhere near production quality\nat this point. It may work for you, if you don't stress it too hard and\nnever have a communications failure; but I don't want to see us ship it\nas part of Postgres unless these issues get addressed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 Jan 2000 12:55:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump possible fix,\n\tneed testers. (was: Re: [HACKERS] pg_dump disaster)"
},
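The "queue or bust" option Tom proposes, preserving pqPutBytes' old promise by growing the buffer instead of failing, might look roughly like this. Field names follow the patches quoted above and the usual fe-misc.c context (with <stdlib.h>/<string.h> assumed); it is a sketch, not the committed fix:

    /* Queue nbytes of data, enlarging the output buffer if a nonblocking
     * flush could not make room.  Never blocks, never drops data. */
    static int
    pqPutBytes(const char *s, size_t nbytes, PGconn *conn)
    {
        size_t avail = conn->outBufSize - conn->outCount;

        if (nbytes > avail)
        {
            /* try to drain first; on a nonblocking socket this may send
             * only part of the queue, which is fine here */
            (void) pqFlush(conn);
            avail = conn->outBufSize - conn->outCount;
        }
        if (nbytes > avail)
        {
            /* still no room: grow the buffer rather than block */
            size_t  newsize = conn->outBufSize;
            char   *newbuf;

            while (newsize - conn->outCount < nbytes)
                newsize *= 2;
            newbuf = realloc(conn->outBuffer, newsize);
            if (newbuf == NULL)
                return EOF;     /* out of memory: hard failure */
            conn->outBuffer = newbuf;
            conn->outBufSize = newsize;
        }
        memcpy(conn->outBuffer + conn->outCount, s, nbytes);
        conn->outCount += nbytes;
        return 0;
    }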
{
"msg_contents": "* Tom Lane <[email protected]> [000123 10:19] wrote:\n> >> Um, I didn't have any trouble at all reproducing Patrick's complaint.\n> >> pg_dump any moderately large table (I used tenk1 from the regress\n> >> database) and try to load the script with psql. Kaboom.\n> \n> > This is after or before my latest patch?\n> \n> Before. I haven't updated since yesterday...\n> \n> > I can't seem to reproduce this problem,\n> \n> Odd. Maybe there is something different about the kernel's timing of\n> message sending on your platform. I see it very easily on HPUX 10.20,\n> and Patrick sees it very easily on whatever he's using (netbsd I think).\n> You might try varying the situation a little, say\n> \tpsql mydb <dumpfile\n> \tpsql -f dumpfile mydb\n> \tpsql mydb\n> \t\t\\i dumpfile\n> and the same with -h localhost (to get a TCP/IP connection instead of\n> Unix domain). At the moment (pre-patch) I see failures with the\n> first two of these, but not with the \\i method. -h doesn't seem to\n> matter for me, but it might for you.\n\nOk, the latest patch I posted fixes that, with and without the -h\nflag, at least for me.\n\n> > Telling me something is wrong without giving suggestions on how\n> > to fix it, nor direct pointers to where it fails doesn't help me\n> > one bit. You're not offering constructive critism, you're not\n> > even offering valid critism, you're just waving your finger at\n> > \"problems\" that you say exist but don't pin down to anything specific.\n> \n> I have been explaining it as clearly as I could. Let's try it\n> one more time.\n\nWhat I needed was the above steps to validate that the problem is fixed.\n\n> > I spent hours looking over what I did to pqFlush and pqPutnBytes\n> > because of what you said earlier when all the bug seems to have\n> > come down to is that I missed that the socket is set to non-blocking\n> > in all cases now.\n> \n> Letting the socket mode default to blocking will hide the problems from\n> existing clients that don't care about non-block mode. But people who\n> try to actually use the nonblock mode are going to see the same kinds of\n> problems that psql is exhibiting.\n\nThere is no non-block mode, there's the old mode, and the _real_\nnon-blocking mode that I'm trying to get working.\n\n> > The old sequence of events that happened was as follows:\n> \n> > user sends data almost filling the output buffer...\n> > user sends another line of text overflowing the buffer...\n> > pqFlush is invoked blocking the user until the output pipe clears...\n> > and repeat.\n> \n> Right.\n\nYou agree that it's somewhat broken to do that right?\n\n> \n> > The nonblocking code allows sends to fail so the user can abort\n> > sending stuff to the backend in order to process other work:\n> \n> > user sends data almost filling the output buffer...\n> > user sends another line of text that may overflow the buffer...\n> > pqFlush is invoked, \n> > if the pipe can't be cleared an error is returned allowing the user to\n> > retry the send later.\n> > if the flush succeeds then more data is queued and success is returned\n> \n> But you haven't thought through the mechanics of the \"error is returned\n> allowing the user to retry\" code path clearly enough. Let's take\n> pqPutBytes for an example. If it returns EOF, is that a hard error or\n> does it just mean that the application needs to wait a while? 
The\n> application *must* distinguish these cases, or it will do the wrong\n> thing: for example, if it mistakes a hard error for \"wait a while\",\n> then it will wait forever without making any progress or producing\n> an error report.\n> \n> You need to provide a different return convention that indicates\n> what happened, say\n> \tEOF (-1)\t=> hard error (same as old code)\n> \t0\t\t=> OK\n> \t1\t\t=> no data was queued due to risk of blocking\n> And you need to guarantee that the application knows what the state is\n> when the can't-do-it-yet return is made; note that I specified \"no data\n> was queued\" above. If pqPutBytes might queue some of the data before\n> returning 1, the application is in trouble again. While you apparently\n> foresaw that in recoding pqPutBytes, your code doesn't actually work.\n> There is the minor code bug that you fail to update \"avail\" after the\n> first pqFlush call, and the much more fundamental problem that you\n> cannot guarantee to have queued all or none of the data. Think about\n> what happens if the passed nbytes is larger than the output buffer size.\n> You may pass the first pqFlush successfully, then get into the loop and\n> get a won't-block return from pqFlush in the loop. What then?\n> You can't simply refuse to support the case nbytes > bufsize at all,\n> because that will cause application failures as well (too long query\n> sends it into an infinite loop trying to queue data, most likely).\n\nI don't have to think about this too much (nbytes > conn->outBufSize),\nsee: http://www.postgresql.org/docs/programmer/libpq-chapter4142.htm\nthe Caveats section for libpq:\n\n Caveats\n\n The query buffer is 8192 bytes long, and queries over that length\n will be rejected.\n\nIf I can enforce this limit then i'm fine, also there's nothing\nstopping me from temporarily realloc()'ing the send buffer, or\nchaining another sendbuffer if the first fills mid query, it would\nbe easy to restrict the application to only a single overcommit of\nthe send buffer, or a single circular buffer to avoid memory\ncopies.\n\n> A possible answer is to specify that a return of +N means \"N bytes\n> remain unqueued due to risk of blocking\" (after having queued as much\n> as you could). This would put the onus on the caller to update his\n> pointers/counts properly; propagating that into all the internal uses\n> of pqPutBytes would be no fun. (Of course, so far you haven't updated\n> *any* of the internal callers to behave reasonably in case of a\n> won't-block return; PQfn is just one example.)\n\nNo way dude. :) I would like to get started on a PQfnstart()/PQfnpoll\ninterface soon.\n\n> Another possible answer is to preserve pqPutBytes' old API, \"queue or\n> bust\", by the expedient of enlarging the output buffer to hold whatever\n> we can't send immediately. This is probably more attractive, even\n> though a long query might suck up a lot of space that won't get\n> reclaimed as long as the connection lives. If you don't do this then\n> you are going to have to make a lot of ugly changes in the internal\n> callers to deal with won't-block returns. Actually, a bulk COPY IN\n> would probably be the worst case --- the app could easily load data into\n> the buffer far faster than it could be sent. 
It might be best to extend\n> PQputline to have a three-way return and add code there to limit the\n> growth of the output buffer, while allowing all internal callers to\n> assume that the buffer is expanded when they need it.\n\nIt's not too difficult to only allow one over-commit into the buffer,\nand enforcing the 8k limit, what do you think about that?\n\n> pqFlush has the same kind of interface design problem: the same EOF code\n> is returned for either a hard error or can't-flush-yet, but it would be\n> disastrous to treat those cases alike. You must provide a 3-way return\n> code.\n> \n> Furthermore, the same sort of 3-way return code convention will have to\n> propagate out through anything that calls pqFlush (with corresponding\n> documentation updates). pqPutBytes can be made to hide a pqFlush won't-\n> block return by trying to enlarge the output buffer, but in most other\n> places you won't have a choice except to punt it back to the caller.\n\nI'm not sure about this, the old pqFlush would reset the connection\nif it went bad, (I was suprised to see that it didn't) it doesn't do\nthis so it can read the dying words from the backend, imo all that's\nneeded is a:\n\tconn->status = CONNECTION_BAD;\n\nbefore returning EOF in pqFlush()\n\npqReadData will happily read from pgconn marked 'CONNECTION_BAD' and\nuser applications can then QPstatus() and reset the connection.\n\n> PQendcopy has the same interface design problem. It used to be that\n> (unless you passed a null pointer) PQendcopy would *guarantee* that\n> the connection was no longer in COPY state on return --- by resetting\n> it, if necessary. So the return code was mainly informative; the\n> application didn't have to do anything different if PQendcopy reported\n> failure. But now, a nonblocking application does need to pay attention\n> to whether PQendcopy completed or not --- and you haven't provided a way\n> for it to tell. If 1 is returned, the connection might still be in\n> COPY state, or it might not (PQendcopy might have reset it). If the\n> application doesn't distinguish these cases then it will fail.\n\nPQstatus will allow you to determine if the connection has gone to\nCONNECTION_BAD.\n\n> I also think that you want to take a hard look at the automatic \"reset\"\n> behavior upon COPY failure, since a PQreset call will block the\n> application until it finishes. Really, what is needed to close down a\n> COPY safely in nonblock mode is a pair of entry points along the line of\n> \"PQendcopyStart\" and \"PQendcopyPoll\", with API conventions similar to\n> PQresetStart/PQresetPoll. This gives you the ability to do the reset\n> (if one is necessary) without blocking the application. PQendcopy\n> itself will only be useful to blocking applications.\n\nI'd be willing to work on fixing this up, but currently other issues\nyou've mentioned seem higher on the priority list.\n\n> > I'm sorry if they don't work for some situations other than COPY IN,\n> > but it's functionality that I needed and I expect to be expanded on\n> > by myself and others that take interest in nonblocking operation.\n> \n> I don't think that the nonblock code is anywhere near production quality\n> at this point. 
It may work for you, if you don't stress it too hard and\n> never have a communications failure; but I don't want to see us ship it\n> as part of Postgres unless these issues get addressed.\n\nI'd really appreciate if it was instead left as undocumented until we\nhave it completed.\n\nDoing that allows people like myself to see that work is in progress\nto provide this functionality and possibly contribute to polishing\nit up and expanding its usefullness instead of giving up entirely\nor starting from scratch.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n",
"msg_date": "Sun, 23 Jan 2000 17:32:54 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump possible fix,\n\tneed testers. (was: Re: [HACKERS] pg_dump disaster)"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> There is no non-block mode, there's the old mode, and the _real_\n> non-blocking mode that I'm trying to get working.\n\nAh --- well, I guess we can agree that it doesn't work yet ;-)\n\n>>>> user sends data almost filling the output buffer...\n>>>> user sends another line of text overflowing the buffer...\n>>>> pqFlush is invoked blocking the user until the output pipe clears...\n>>>> and repeat.\n>> \n>> Right.\n\n> You agree that it's somewhat broken to do that right?\n\nIt's only broken for a client that doesn't want to block --- but,\nclearly, for such a client we can't do it that way. I like the way\nthat fe-connect.c has been rewritten over the past couple of months:\nprovide a \"start\" and a \"poll\" function for each operation, and then\nimplement the old blocking-style function as \"start\" followed by a\nloop around the \"poll\" function. Note (just to beat the drum one\nmore time) that neither the start nor the poll function has the\nsame API as the old-style function.\n\n>> You can't simply refuse to support the case nbytes > bufsize at all,\n>> because that will cause application failures as well (too long query\n>> sends it into an infinite loop trying to queue data, most likely).\n\n> I don't have to think about this too much (nbytes > conn->outBufSize),\n> see: http://www.postgresql.org/docs/programmer/libpq-chapter4142.htm\n> the Caveats section for libpq:\n\n> The query buffer is 8192 bytes long, and queries over that length\n> will be rejected.\n\nIs that still there? Well, I'm sorry if you relied on that statement,\nbut it's obsolete. Postgres no longer has any fixed limit on the\nlength of queries, and it won't do for libpq to re-introduce one.\n\n(Quick grep: yipes, that statement is indeed still in the docs, in\nmore than one place even. Thanks for pointing this out.)\n\n> If I can enforce this limit then i'm fine, also there's nothing\n> stopping me from temporarily realloc()'ing the send buffer, or\n> chaining another sendbuffer if the first fills mid query, it would\n> be easy to restrict the application to only a single overcommit of\n> the send buffer, or a single circular buffer to avoid memory\n> copies.\n\nI think reallocing the send buffer is perfectly acceptable. I'm not\nquite following what you mean by \"only one overcommit\", though.\n\n>> PQendcopy has the same interface design problem. It used to be that\n>> (unless you passed a null pointer) PQendcopy would *guarantee* that\n>> the connection was no longer in COPY state on return --- by resetting\n>> it, if necessary. So the return code was mainly informative; the\n>> application didn't have to do anything different if PQendcopy reported\n>> failure. But now, a nonblocking application does need to pay attention\n>> to whether PQendcopy completed or not --- and you haven't provided a way\n>> for it to tell. If 1 is returned, the connection might still be in\n>> COPY state, or it might not (PQendcopy might have reset it). If the\n>> application doesn't distinguish these cases then it will fail.\n\n> PQstatus will allow you to determine if the connection has gone to\n> CONNECTION_BAD.\n\nNeither of the states that we need to distinguish will be CONNECTION_BAD.\n\nI suppose the application could look at the asyncStatus to see if it is\nCOPY_IN/COPY_OUT, but I think that would be very poor software design:\nasyncStatus is intended to be private to libpq and readily redefinable.\n(It's not presently visible in libpq-fe.h at all.) 
If we start\nexporting it as public data we will regret it, IMHO. Much better to\nprovide a new exported function whose API is appropriate for this purpose.\n\n>> I don't think that the nonblock code is anywhere near production quality\n>> at this point. It may work for you, if you don't stress it too hard and\n>> never have a communications failure; but I don't want to see us ship it\n>> as part of Postgres unless these issues get addressed.\n\n> I'd really appreciate if it was instead left as undocumented until we\n> have it completed.\n\nI'd be willing to consider that if I were sure that it couldn't break\nany ordinary blocking-mode cases. Right now I don't have much\nconfidence about that...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 Jan 2000 23:54:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump possible fix,\n\tneed testers. (was: Re: [HACKERS] pg_dump disaster)"
},
{
"msg_contents": "On Sat, Jan 22, 2000 at 10:02:56PM -0800, Alfred Perlstein wrote:\n> \n> I really hope the originator of the problem report will get back to\n> us to make sure it's settled.\n> \n> *poking Patrick Welche*\n> \n> :)\n\nMorning all!\n\nThings are still not so good for me. The pg_dumpall > file, psql < file did\nwork, but later:\n\nnewnham=> select * from crsids,\"tblPerson\" where\nnewnham-> crsids.crsid != \"tblPerson\".\"CRSID\";\nBackend sent B message without prior T\nD21Enter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself.\n>> \n\nwhich smells like a similar problem. (Note that this is a join. Straight\nselects didn't cause problems)\n\nWhile running that query, I ran the regression tests, so the ERRORs in the\nlog are OK, but just in case, here are the last two lines before the above\nmessage:\n\npostmaster: dumpstatus:\n sock 5\n\nAfter typing \\. at the prompt\n\nUnknown protocol character 'M' read from backend. (The protocol character\nis the first character the backend sends in response to a query it\nreceives).\nPQendcopy: resetting connection\nAsynchronous NOTIFY 'ndropoulou' from backend with pid '1818589281'\nreceived.\nAsynchronous NOTIFY 'ndropoulou' from backend with pid '1818589281'\nreceived.\n\nand in the log:\n\npq_flush: send() failed: Broken pipe\nFATAL: pq_endmessage failed: errno=32\npq_flush: send() failed: Broken pipe\n...\n\n\"ndropoulou\" is an incomplete piece of select * data.\n\n(New also, though probably unrelated: the sanity check fails with number of\nindex tuples exactly half number in heap - not equal)\n\nFor any more info, just ask!\n\nCheers,\n\nPatrick\n",
"msg_date": "Mon, 24 Jan 2000 12:30:00 +0000",
"msg_from": "Patrick Welche <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump possible fix, need testers."
},
{
"msg_contents": "Patrick Welche <[email protected]> writes:\n> Things are still not so good for me. The pg_dumpall > file, psql < file did\n> work, but later:\n\n> newnham=> select * from crsids,\"tblPerson\" where\nnewnham-> crsids.crsid != \"tblPerson\".\"CRSID\";\n> Backend sent B message without prior T\n> D21Enter data to be copied followed by a newline.\n> End with a backslash and a period on a line by itself.\n>>> \n\n> which smells like a similar problem. (Note that this is a join. Straight\n> selects didn't cause problems)\n\nBizarre. Obviously, the frontend and backend have gotten out of sync,\nbut it's not too clear who's to blame. Since you also mention\n\n> (New also, though probably unrelated: the sanity check fails with number of\n> index tuples exactly half number in heap - not equal)\n\nI think that you may have some subtle platform-specific problems in the\nbackend.\n\nWhat exactly is your platform/compiler/configuration, anyway?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 24 Jan 2000 12:29:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pg_dump possible fix, need testers. "
},
{
"msg_contents": "On Mon, Jan 24, 2000 at 12:29:43PM -0500, Tom Lane wrote:\n> Patrick Welche <[email protected]> writes:\n> > Things are still not so good for me. The pg_dumpall > file, psql < file did\n> > work, but later:\n> \n> > newnham=> select * from crsids,\"tblPerson\" where\n> newnham-> crsids.crsid != \"tblPerson\".\"CRSID\";\n> > Backend sent B message without prior T\n> > D21Enter data to be copied followed by a newline.\n> > End with a backslash and a period on a line by itself.\n> >>> \n> \n> > which smells like a similar problem. (Note that this is a join. Straight\n> > selects didn't cause problems)\n> \n> Bizarre. Obviously, the frontend and backend have gotten out of sync,\n> but it's not too clear who's to blame. Since you also mention\n> \n> > (New also, though probably unrelated: the sanity check fails with number of\n> > index tuples exactly half number in heap - not equal)\n> \n> I think that you may have some subtle platform-specific problems in the\n> backend.\n> \n> What exactly is your platform/compiler/configuration, anyway?\n\nNetBSD-1.4P/i386 / egcs-2.91.66 19990314 (egcs-1.1.2 release)\nconfigure --enable-debug\n\nThis morning I cvs'd made, installed, moved data->data2, initdb, started\nup postmaster, reloaded data (this worked!), then tried the join. It's a\nbig one, so I thought I might as well stress it at the same time, and did\na regression test.\n\nAnything I could try to narrow the problem down?\n\nCheers,\n\nPatrick\n",
"msg_date": "Mon, 24 Jan 2000 17:59:21 +0000",
"msg_from": "Patrick Welche <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: pg_dump possible fix, need testers."
},
{
"msg_contents": "Patrick Welche <[email protected]> writes:\n> This morning I cvs'd made, installed, moved data->data2, initdb, started\n> up postmaster, reloaded data (this worked!), then tried the join. It's a\n> big one, so I thought I might as well stress it at the same time, and did\n> a regression test.\n\n> Anything I could try to narrow the problem down?\n\nHmm. Why don't you try running the parallel regression test, to see\nif that blows up too?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 24 Jan 2000 13:20:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pg_dump possible fix, need testers. "
},
{
"msg_contents": "Earlier I wrote:\n> (New also, though probably unrelated: the sanity check fails with number of\n> index tuples exactly half number in heap - not equal)\n \nTL> I think that you may have some subtle platform-specific problems in the\nTL> backend.\n\nOn Mon, Jan 24, 2000, Tom Lane wrote:\nTL> Hmm. Why don't you try running the parallel regression test, to see\nTL> if that blows up too?\n\nRerunning the ordinary regression \"runtest\", the sanity_check passes. The\ndifference being that this time I wasn't running a select at the same time.\nThe parallel test \"runcheck\" fails on different parts at different times eg:\n\n test select_into ... FAILED\nbecause\n! psql: connection to database \"regression\" failed - Backend startup failed\n\n(BTW in resultmap, I need the .*-.*-netbsd rather than just netbsd, I think\nit's because config.guess returns i386-unknown-netbsd1.4P, and just netbsd\nwould imply the string starts with netbsd)\n\n3 times in a row now, gmake runtest on its own is fine, gmake runtest with a\nconcurrent join select makes sanity_check fail with\n\n+ NOTICE: RegisterSharedInvalid: SI buffer overflow\n+ NOTICE: InvalidateSharedInvalid: cache state reset\n+ NOTICE: Index onek_stringu1: NUMBER OF INDEX' TUPLES (1000) IS NOT THE SAME AS HEAP' (2000).\n+ Recreate the index.\n+ NOTICE: Index onek_hundred: NUMBER OF INDEX' TUPLES (1000) IS NOT THE SAME AS\n HEAP' (2000).\n+ Recreate the index.\n\nand the join will still get itself confused\n\nselect * from crsids,\"tblPerson\" where\nnewnham-> crsids.crsid != \"tblPerson\".\"CRSID\";\nBackend sent B message without prior T\nD21Enter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself.\n>> \\.\nUnknown protocol character 'M' read from backend. (The protocol character\nis the first character the backend sends in response to a query it\nreceives).\nPQendcopy: resetting connection\nAsynchronous NOTIFY 'ndropoulou' from backend with pid '1818589281'\nreceived.\nAsynchronous NOTIFY 'ndropoulou' from backend with pid '1818589281'\nreceived.\n\nThen plenty of\n\npq_flush: send() failed: Broken pipe\nFATAL: pq_endmessage failed: errno=32\n\nkeep on happening even though the connection is apparently dropped and psql\nexited. Running a regression test during this fails sanity_check. Restart\npostmaster, and the sanity_check passes. Run the join, and all breaks.\n\nAh - some joins work. The above join works if I replace * by \"Surname\". It\nshould return 750440 rows. It seems to just be a matter of quantity of data\ngoing down the connection. A * row contains 428 varchars worth and about 10\nnumbers. Just in case it's just me, I'll build a new kernel (when in\nkern_proc doubt..)\n\nCheers,\n\nPatrick\n",
"msg_date": "Mon, 24 Jan 2000 23:31:19 +0000",
"msg_from": "Patrick Welche <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: pg_dump possible fix, need testers."
},
{
"msg_contents": "On Mon, Jan 24, 2000 at 03:49:26PM -0800, Alfred Perlstein wrote:\n> I just ran the regression tests as best as I know how:\n> \n> ~/pgcvs/pgsql/src/test/regress % gmake runcheck\n> ~/pgcvs/pgsql/src/test/regress % grep FAIL run_check.out \n> test int2 ... FAILED\n> test int4 ... FAILED\n> test float8 ... FAILED\n> sequential test geometry ... FAILED\n> ~/pgcvs/pgsql/src/test/regress %\n> \n> no int2/int4? yipes!\n\nNot to worry, those will be differences in error message wording, but\n\n> I ran it 10 more times and one time I got:\n> test constraints ... FAILED\n\nWhat did this error come from? (cf regression.diffs)\n\n> but i got no weird parse errors or anything from the backend.\n> \n> Have you been able to find any weirdness with the fix I posted,\n> or is this more likely an issue with Patrick Welche's setup?\n\nI'm not sure: on the one hand, that evil join of mine returns the entire\ncontents of a table, and the connection gets confused. Smaller joins work.\nMaybe it doesn't happen to you because you don't put in such a useless\nselect (What do you want 750440 rows for?) On the other hand vacuum analyze\ntable_name doesn't work for me but obviously does for everyone else, so at\nleast something is wrong with my setup.\n\nCheers,\n\nPatrick\n",
"msg_date": "Mon, 24 Jan 2000 23:38:05 +0000",
"msg_from": "Patrick Welche <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: pg_dump possible fix, need testers."
},
{
"msg_contents": "* Tom Lane <[email protected]> [000124 10:54] wrote:\n> Patrick Welche <[email protected]> writes:\n> > This morning I cvs'd made, installed, moved data->data2, initdb, started\n> > up postmaster, reloaded data (this worked!), then tried the join. It's a\n> > big one, so I thought I might as well stress it at the same time, and did\n> > a regression test.\n> \n> > Anything I could try to narrow the problem down?\n> \n> Hmm. Why don't you try running the parallel regression test, to see\n> if that blows up too?\n\nI just ran the regression tests as best as I know how:\n\n~/pgcvs/pgsql/src/test/regress % gmake runcheck\n~/pgcvs/pgsql/src/test/regress % grep FAIL run_check.out \n test int2 ... FAILED\n test int4 ... FAILED\n test float8 ... FAILED\nsequential test geometry ... FAILED\n~/pgcvs/pgsql/src/test/regress %\n\nno int2/int4? yipes!\n\nI ran it 10 more times and one time I got:\n test constraints ... FAILED\n\nbut i got no weird parse errors or anything from the backend.\n\nHave you been able to find any weirdness with the fix I posted,\nor is this more likely an issue with Patrick Welche's setup?\n\n-Alfred\n",
"msg_date": "Mon, 24 Jan 2000 15:49:26 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pg_dump possible fix, need testers."
},
{
"msg_contents": "* Patrick Welche <[email protected]> [000124 16:02] wrote:\n> On Mon, Jan 24, 2000 at 03:49:26PM -0800, Alfred Perlstein wrote:\n> > I just ran the regression tests as best as I know how:\n> > \n> > ~/pgcvs/pgsql/src/test/regress % gmake runcheck\n> > ~/pgcvs/pgsql/src/test/regress % grep FAIL run_check.out \n> > test int2 ... FAILED\n> > test int4 ... FAILED\n> > test float8 ... FAILED\n> > sequential test geometry ... FAILED\n> > ~/pgcvs/pgsql/src/test/regress %\n> > \n> > no int2/int4? yipes!\n> \n> Not to worry, those will be differences in error message wording, but\n> \n> > I ran it 10 more times and one time I got:\n> > test constraints ... FAILED\n> \n> What did this error come from? (cf regression.diffs)\n> \n> > but i got no weird parse errors or anything from the backend.\n> > \n> > Have you been able to find any weirdness with the fix I posted,\n> > or is this more likely an issue with Patrick Welche's setup?\n> \n> I'm not sure: on the one hand, that evil join of mine returns the entire\n> contents of a table, and the connection gets confused. Smaller joins work.\n> Maybe it doesn't happen to you because you don't put in such a useless\n> select (What do you want 750440 rows for?) On the other hand vacuum analyze\n> table_name doesn't work for me but obviously does for everyone else, so at\n> least something is wrong with my setup.\n\nwhoa whoa whoa... I just updated my snapshot to today's code and lot\nmore seems broken:\n\n10 runs of the regression test:\n\n test aggregates ... FAILED\n test alter_table ... FAILED\n test btree_index ... FAILED\n test create_misc ... FAILED\n test create_operator ... FAILED\n test float8 ... FAILED\n test hash_index ... FAILED\n test int2 ... FAILED\n test int4 ... FAILED\n test limit ... FAILED\n test portals ... FAILED\n test portals_p2 ... FAILED\n test random ... FAILED\n test rules ... FAILED\n test select_distinct ... FAILED\n test select_distinct_on ... FAILED\n test select_views ... FAILED\n test transactions ... FAILED\n test triggers ... FAILED\nsequential test create_type ... FAILED\nsequential test create_view ... FAILED\nsequential test geometry ... FAILED\nsequential test select ... FAILED\n\nI'm going to see what happens if i revert my changes \nto libpq completely, but before updating from and old\nrepo of 2-3 daysall i get are the int/float single constraints\nerror.\n\nruntests and regression diffs are at:\nhttp://www.freebsd.org/~alfred/postgresql/\n\n-Alfred\n",
"msg_date": "Mon, 24 Jan 2000 17:30:49 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pg_dump possible fix, need testers."
},
{
"msg_contents": "* Alfred Perlstein <[email protected]> [000124 17:40] wrote:\n> * Patrick Welche <[email protected]> [000124 16:02] wrote:\n> > On Mon, Jan 24, 2000 at 03:49:26PM -0800, Alfred Perlstein wrote:\n> > > I just ran the regression tests as best as I know how:\n> > > \n> > > ~/pgcvs/pgsql/src/test/regress % gmake runcheck\n> > > ~/pgcvs/pgsql/src/test/regress % grep FAIL run_check.out \n> > > test int2 ... FAILED\n> > > test int4 ... FAILED\n> > > test float8 ... FAILED\n> > > sequential test geometry ... FAILED\n> > > ~/pgcvs/pgsql/src/test/regress %\n> > > \n> > > no int2/int4? yipes!\n> > \n> > Not to worry, those will be differences in error message wording, but\n> > \n> > > I ran it 10 more times and one time I got:\n> > > test constraints ... FAILED\n> > \n> > What did this error come from? (cf regression.diffs)\n> > \n> > > but i got no weird parse errors or anything from the backend.\n> > > \n> > > Have you been able to find any weirdness with the fix I posted,\n> > > or is this more likely an issue with Patrick Welche's setup?\n> > \n> > I'm not sure: on the one hand, that evil join of mine returns the entire\n> > contents of a table, and the connection gets confused. Smaller joins work.\n> > Maybe it doesn't happen to you because you don't put in such a useless\n> > select (What do you want 750440 rows for?) On the other hand vacuum analyze\n> > table_name doesn't work for me but obviously does for everyone else, so at\n> > least something is wrong with my setup.\n> \n> whoa whoa whoa... I just updated my snapshot to today's code and lot\n> more seems broken:\n\nbecause I forgot to clean & rebuild in the regression dir... oops.\n\nPatrick, I'll have a delta shortly that totally backs out my non-blocking\nchanges for you to test.\n\n-Alfred\n",
"msg_date": "Mon, 24 Jan 2000 18:32:14 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pg_dump possible fix, need testers."
},
{
"msg_contents": "Patrick Welche <[email protected]> writes:\n> Rerunning the ordinary regression \"runtest\", the sanity_check passes. The\n> difference being that this time I wasn't running a select at the same time.\n> The parallel test \"runcheck\" fails on different parts at different times eg:\n\n> test select_into ... FAILED\n> because\n> ! psql: connection to database \"regression\" failed - Backend startup failed\n\nDo you see these failures just from running the parallel regress tests,\nwithout anything else going on? (Theoretically, since the parallel test\nscript is running its own private postmaster, whatever you might be\ndoing under other postmasters shouldn't affect it. But you know the\ndifference between theory and practice...)\n\n> (BTW in resultmap, I need the .*-.*-netbsd rather than just netbsd, I think\n> it's because config.guess returns i386-unknown-netbsd1.4P, and just netbsd\n> would imply the string starts with netbsd)\n\nRight. My oversight --- fix committed.\n\n> 3 times in a row now, gmake runtest on its own is fine, gmake runtest with a\n> concurrent join select makes sanity_check fail with\n\n> + NOTICE: RegisterSharedInvalid: SI buffer overflow\n> + NOTICE: InvalidateSharedInvalid: cache state reset\n> + NOTICE: Index onek_stringu1: NUMBER OF INDEX' TUPLES (1000) IS NOT THE SAME AS HEAP' (2000).\n> + Recreate the index.\n> + NOTICE: Index onek_hundred: NUMBER OF INDEX' TUPLES (1000) IS NOT THE SAME AS\n> HEAP' (2000).\n> + Recreate the index.\n\n> Ah - some joins work. ... It seems to just be a matter of quantity of data\n> going down the connection.\n\nHmm. I betcha that the critical factor here is how much *time* the\noutside join takes, not exactly how much data it emits.\n\nIf that backend is tied up for long enough then it will cause the SI\nbuffer to overflow, just as you show above. In theory, that shouldn't\ncause any problems beyond the appearance of the overflow/cache reset\nNOTICEs (and in fact it did not the last time I tried running things\nwith a very small SI buffer). It looks like we have recently broken\nsomething in SI overrun handling.\n\n(In other words, I don't think this has anything to do with Alfred's\nchanges ... he'll be glad to hear that ;-). Hiroshi may be on the\nhook though. I'm going to rebuild with a small SI buffer and see\nif it breaks.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 24 Jan 2000 21:39:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pg_dump possible fix, need testers. "
},
{
"msg_contents": "Can someone remind me of where we left this?\n\n\n> >> Um, I didn't have any trouble at all reproducing Patrick's complaint.\n> >> pg_dump any moderately large table (I used tenk1 from the regress\n> >> database) and try to load the script with psql. Kaboom.\n> \n> > This is after or before my latest patch?\n> \n> Before. I haven't updated since yesterday...\n> \n> > I can't seem to reproduce this problem,\n> \n> Odd. Maybe there is something different about the kernel's timing of\n> message sending on your platform. I see it very easily on HPUX 10.20,\n> and Patrick sees it very easily on whatever he's using (netbsd I think).\n> You might try varying the situation a little, say\n> \tpsql mydb <dumpfile\n> \tpsql -f dumpfile mydb\n> \tpsql mydb\n> \t\t\\i dumpfile\n> and the same with -h localhost (to get a TCP/IP connection instead of\n> Unix domain). At the moment (pre-patch) I see failures with the\n> first two of these, but not with the \\i method. -h doesn't seem to\n> matter for me, but it might for you.\n> \n> > Telling me something is wrong without giving suggestions on how\n> > to fix it, nor direct pointers to where it fails doesn't help me\n> > one bit. You're not offering constructive critism, you're not\n> > even offering valid critism, you're just waving your finger at\n> > \"problems\" that you say exist but don't pin down to anything specific.\n> \n> I have been explaining it as clearly as I could. Let's try it\n> one more time.\n> \n> > I spent hours looking over what I did to pqFlush and pqPutnBytes\n> > because of what you said earlier when all the bug seems to have\n> > come down to is that I missed that the socket is set to non-blocking\n> > in all cases now.\n> \n> Letting the socket mode default to blocking will hide the problems from\n> existing clients that don't care about non-block mode. But people who\n> try to actually use the nonblock mode are going to see the same kinds of\n> problems that psql is exhibiting.\n> \n> > The old sequence of events that happened was as follows:\n> \n> > user sends data almost filling the output buffer...\n> > user sends another line of text overflowing the buffer...\n> > pqFlush is invoked blocking the user until the output pipe clears...\n> > and repeat.\n> \n> Right.\n> \n> > The nonblocking code allows sends to fail so the user can abort\n> > sending stuff to the backend in order to process other work:\n> \n> > user sends data almost filling the output buffer...\n> > user sends another line of text that may overflow the buffer...\n> > pqFlush is invoked, \n> > if the pipe can't be cleared an error is returned allowing the user to\n> > retry the send later.\n> > if the flush succeeds then more data is queued and success is returned\n> \n> But you haven't thought through the mechanics of the \"error is returned\n> allowing the user to retry\" code path clearly enough. Let's take\n> pqPutBytes for an example. If it returns EOF, is that a hard error or\n> does it just mean that the application needs to wait a while? 
The\n> application *must* distinguish these cases, or it will do the wrong\n> thing: for example, if it mistakes a hard error for \"wait a while\",\n> then it will wait forever without making any progress or producing\n> an error report.\n> \n> You need to provide a different return convention that indicates\n> what happened, say\n> \tEOF (-1)\t=> hard error (same as old code)\n> \t0\t\t=> OK\n> \t1\t\t=> no data was queued due to risk of blocking\n> And you need to guarantee that the application knows what the state is\n> when the can't-do-it-yet return is made; note that I specified \"no data\n> was queued\" above. If pqPutBytes might queue some of the data before\n> returning 1, the application is in trouble again. While you apparently\n> foresaw that in recoding pqPutBytes, your code doesn't actually work.\n> There is the minor code bug that you fail to update \"avail\" after the\n> first pqFlush call, and the much more fundamental problem that you\n> cannot guarantee to have queued all or none of the data. Think about\n> what happens if the passed nbytes is larger than the output buffer size.\n> You may pass the first pqFlush successfully, then get into the loop and\n> get a won't-block return from pqFlush in the loop. What then?\n> You can't simply refuse to support the case nbytes > bufsize at all,\n> because that will cause application failures as well (too long query\n> sends it into an infinite loop trying to queue data, most likely).\n> \n> A possible answer is to specify that a return of +N means \"N bytes\n> remain unqueued due to risk of blocking\" (after having queued as much\n> as you could). This would put the onus on the caller to update his\n> pointers/counts properly; propagating that into all the internal uses\n> of pqPutBytes would be no fun. (Of course, so far you haven't updated\n> *any* of the internal callers to behave reasonably in case of a\n> won't-block return; PQfn is just one example.)\n> \n> Another possible answer is to preserve pqPutBytes' old API, \"queue or\n> bust\", by the expedient of enlarging the output buffer to hold whatever\n> we can't send immediately. This is probably more attractive, even\n> though a long query might suck up a lot of space that won't get\n> reclaimed as long as the connection lives. If you don't do this then\n> you are going to have to make a lot of ugly changes in the internal\n> callers to deal with won't-block returns. Actually, a bulk COPY IN\n> would probably be the worst case --- the app could easily load data into\n> the buffer far faster than it could be sent. It might be best to extend\n> PQputline to have a three-way return and add code there to limit the\n> growth of the output buffer, while allowing all internal callers to\n> assume that the buffer is expanded when they need it.\n> \n> pqFlush has the same kind of interface design problem: the same EOF code\n> is returned for either a hard error or can't-flush-yet, but it would be\n> disastrous to treat those cases alike. You must provide a 3-way return\n> code.\n> \n> Furthermore, the same sort of 3-way return code convention will have to\n> propagate out through anything that calls pqFlush (with corresponding\n> documentation updates). pqPutBytes can be made to hide a pqFlush won't-\n> block return by trying to enlarge the output buffer, but in most other\n> places you won't have a choice except to punt it back to the caller.\n> \n> PQendcopy has the same interface design problem. 
It used to be that\n> (unless you passed a null pointer) PQendcopy would *guarantee* that\n> the connection was no longer in COPY state on return --- by resetting\n> it, if necessary. So the return code was mainly informative; the\n> application didn't have to do anything different if PQendcopy reported\n> failure. But now, a nonblocking application does need to pay attention\n> to whether PQendcopy completed or not --- and you haven't provided a way\n> for it to tell. If 1 is returned, the connection might still be in\n> COPY state, or it might not (PQendcopy might have reset it). If the\n> application doesn't distinguish these cases then it will fail.\n> \n> I also think that you want to take a hard look at the automatic \"reset\"\n> behavior upon COPY failure, since a PQreset call will block the\n> application until it finishes. Really, what is needed to close down a\n> COPY safely in nonblock mode is a pair of entry points along the line of\n> \"PQendcopyStart\" and \"PQendcopyPoll\", with API conventions similar to\n> PQresetStart/PQresetPoll. This gives you the ability to do the reset\n> (if one is necessary) without blocking the application. PQendcopy\n> itself will only be useful to blocking applications.\n> \n> > I'm sorry if they don't work for some situations other than COPY IN,\n> > but it's functionality that I needed and I expect to be expanded on\n> > by myself and others that take interest in nonblocking operation.\n> \n> I don't think that the nonblock code is anywhere near production quality\n> at this point. It may work for you, if you don't stress it too hard and\n> never have a communications failure; but I don't want to see us ship it\n> as part of Postgres unless these issues get addressed.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 29 Sep 2000 22:29:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump possible fix, need testers. (was: Re: pg_dump\n disaster)"
},
{
"msg_contents": "* Bruce Momjian <[email protected]> [000929 19:30] wrote:\n> Can someone remind me of where we left this?\n\nI really haven't figured a correct way to deal with the output buffer.\nI'll try to consider ways to deal with this.\n\n-Alfred\n\n",
"msg_date": "Fri, 29 Sep 2000 20:28:53 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump possible fix, need testers. (was: Re: pg_dump disaster)"
},
{
"msg_contents": "* Bruce Momjian <[email protected]> [000929 19:32] wrote:\n> Can someone remind me of where we left this?\n\n[snip]\n\nSorry for the really long follow-up, but this was posted a long time\nago and I wanted to refresh everyone's memory as to the problem.\n\nThe problem is that when doing a COPY from stdin using libpq and \nmy non-blocking query settings there is a big problem, namely that\nthe size of the query is limited to the size of the output buffer.\n\nThe reason is that I couldn't figure any way to determine what the\napplication was doing, basically, if i'm sending a query I want to\nexpand the output buffer to accomidate the new query, however if\nI'm doing a COPY out, I would like to be informed if the backend\nis lagging behind.\n\nThe concept is that someone doing queries needs to be retrieving\nresults from the database backend between queries, however someone\ndoing a COPY doesn't until they issue the terminating \\\\. line.\n\nI'm pretty sure I know what to do now, it's pretty simple actually,\nI can examine the state of the connection, if it's in PGASYNC_COPY_IN\nthen I don't grow the buffer, I inform the application that the \ndata will block, if it's no PGASYNC_COPY_IN I allow the buffer to grow\nprotecting the application from blocking.\n\nTom, I'd like to know what do you think before running off to implement\nthis.\n\nthanks,\n-Alfred\n\n> > Letting the socket mode default to blocking will hide the problems from\n> > existing clients that don't care about non-block mode. But people who\n> > try to actually use the nonblock mode are going to see the same kinds of\n> > problems that psql is exhibiting.\n> > \n> > > The old sequence of events that happened was as follows:\n> > \n> > > user sends data almost filling the output buffer...\n> > > user sends another line of text overflowing the buffer...\n> > > pqFlush is invoked blocking the user until the output pipe clears...\n> > > and repeat.\n> > \n> > Right.\n> > \n> > > The nonblocking code allows sends to fail so the user can abort\n> > > sending stuff to the backend in order to process other work:\n> > \n> > > user sends data almost filling the output buffer...\n> > > user sends another line of text that may overflow the buffer...\n> > > pqFlush is invoked, \n> > > if the pipe can't be cleared an error is returned allowing the user to\n> > > retry the send later.\n> > > if the flush succeeds then more data is queued and success is returned\n> > \n> > But you haven't thought through the mechanics of the \"error is returned\n> > allowing the user to retry\" code path clearly enough. Let's take\n> > pqPutBytes for an example. If it returns EOF, is that a hard error or\n> > does it just mean that the application needs to wait a while? The\n> > application *must* distinguish these cases, or it will do the wrong\n> > thing: for example, if it mistakes a hard error for \"wait a while\",\n> > then it will wait forever without making any progress or producing\n> > an error report.\n> > \n> > You need to provide a different return convention that indicates\n> > what happened, say\n> > \tEOF (-1)\t=> hard error (same as old code)\n> > \t0\t\t=> OK\n> > \t1\t\t=> no data was queued due to risk of blocking\n> > And you need to guarantee that the application knows what the state is\n> > when the can't-do-it-yet return is made; note that I specified \"no data\n> > was queued\" above. If pqPutBytes might queue some of the data before\n> > returning 1, the application is in trouble again. 
While you apparently\n> > foresaw that in recoding pqPutBytes, your code doesn't actually work.\n> > There is the minor code bug that you fail to update \"avail\" after the\n> > first pqFlush call, and the much more fundamental problem that you\n> > cannot guarantee to have queued all or none of the data. Think about\n> > what happens if the passed nbytes is larger than the output buffer size.\n> > You may pass the first pqFlush successfully, then get into the loop and\n> > get a won't-block return from pqFlush in the loop. What then?\n> > You can't simply refuse to support the case nbytes > bufsize at all,\n> > because that will cause application failures as well (too long query\n> > sends it into an infinite loop trying to queue data, most likely).\n> > \n> > A possible answer is to specify that a return of +N means \"N bytes\n> > remain unqueued due to risk of blocking\" (after having queued as much\n> > as you could). This would put the onus on the caller to update his\n> > pointers/counts properly; propagating that into all the internal uses\n> > of pqPutBytes would be no fun. (Of course, so far you haven't updated\n> > *any* of the internal callers to behave reasonably in case of a\n> > won't-block return; PQfn is just one example.)\n> > \n> > Another possible answer is to preserve pqPutBytes' old API, \"queue or\n> > bust\", by the expedient of enlarging the output buffer to hold whatever\n> > we can't send immediately. This is probably more attractive, even\n> > though a long query might suck up a lot of space that won't get\n> > reclaimed as long as the connection lives. If you don't do this then\n> > you are going to have to make a lot of ugly changes in the internal\n> > callers to deal with won't-block returns. Actually, a bulk COPY IN\n> > would probably be the worst case --- the app could easily load data into\n> > the buffer far faster than it could be sent. It might be best to extend\n> > PQputline to have a three-way return and add code there to limit the\n> > growth of the output buffer, while allowing all internal callers to\n> > assume that the buffer is expanded when they need it.\n> > \n> > pqFlush has the same kind of interface design problem: the same EOF code\n> > is returned for either a hard error or can't-flush-yet, but it would be\n> > disastrous to treat those cases alike. You must provide a 3-way return\n> > code.\n> > \n> > Furthermore, the same sort of 3-way return code convention will have to\n> > propagate out through anything that calls pqFlush (with corresponding\n> > documentation updates). pqPutBytes can be made to hide a pqFlush won't-\n> > block return by trying to enlarge the output buffer, but in most other\n> > places you won't have a choice except to punt it back to the caller.\n> > \n> > PQendcopy has the same interface design problem. It used to be that\n> > (unless you passed a null pointer) PQendcopy would *guarantee* that\n> > the connection was no longer in COPY state on return --- by resetting\n> > it, if necessary. So the return code was mainly informative; the\n> > application didn't have to do anything different if PQendcopy reported\n> > failure. But now, a nonblocking application does need to pay attention\n> > to whether PQendcopy completed or not --- and you haven't provided a way\n> > for it to tell. If 1 is returned, the connection might still be in\n> > COPY state, or it might not (PQendcopy might have reset it). 
If the\n> > application doesn't distinguish these cases then it will fail.\n> > \n> > I also think that you want to take a hard look at the automatic \"reset\"\n> > behavior upon COPY failure, since a PQreset call will block the\n> > application until it finishes. Really, what is needed to close down a\n> > COPY safely in nonblock mode is a pair of entry points along the line of\n> > \"PQendcopyStart\" and \"PQendcopyPoll\", with API conventions similar to\n> > PQresetStart/PQresetPoll. This gives you the ability to do the reset\n> > (if one is necessary) without blocking the application. PQendcopy\n> > itself will only be useful to blocking applications.\n> > \n> > > I'm sorry if they don't work for some situations other than COPY IN,\n> > > but it's functionality that I needed and I expect to be expanded on\n> > > by myself and others that take interest in nonblocking operation.\n> > \n> > I don't think that the nonblock code is anywhere near production quality\n> > at this point. It may work for you, if you don't stress it too hard and\n> > never have a communications failure; but I don't want to see us ship it\n> > as part of Postgres unless these issues get addressed.\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ************\n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Thu, 12 Oct 2000 11:35:11 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump possible fix, need testers. (was: Re: pg_dump disaster)"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> I'm pretty sure I know what to do now, it's pretty simple actually,\n> I can examine the state of the connection, if it's in PGASYNC_COPY_IN\n> then I don't grow the buffer, I inform the application that the \n> data will block, if it's no PGASYNC_COPY_IN I allow the buffer to grow\n> protecting the application from blocking.\n\n From what I recall of the prior discussion, it seemed that a state-based\napproach probably isn't the way to go. The real issue is how many\nroutines are you going to have to change to deal with a three-way return\nconvention; you want to minimize the number of places that have to cope\nwith that. IIRC the idea was to let pqPutBytes grow the buffer so that\nits callers didn't need to worry about a \"sorry, won't block\" return\ncondition. If you feel that growing the buffer is inappropriate for a\nspecific caller, then probably the right answer is for that particular\ncaller to make an extra check to see if the buffer will overflow, and\nrefrain from calling pqPutBytes if it doesn't like what will happen.\n\nIf you make pqPutByte's behavior state-based, then callers that aren't\nexpecting a \"won't block\" return will fail (silently :-() in some states.\nWhile you might be able to get away with that for PGASYNC_COPY_IN state\nbecause not much of libpq is expected to be exercised in that state,\nit strikes me as an awfully fragile coding convention. I think you will\nregret that choice eventually, if you make it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2000 15:14:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump possible fix, need testers. (was: Re: pg_dump disaster) "
},
{
"msg_contents": "* Tom Lane <[email protected]> [001012 12:14] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > I'm pretty sure I know what to do now, it's pretty simple actually,\n> > I can examine the state of the connection, if it's in PGASYNC_COPY_IN\n> > then I don't grow the buffer, I inform the application that the \n> > data will block, if it's no PGASYNC_COPY_IN I allow the buffer to grow\n> > protecting the application from blocking.\n> \n> >From what I recall of the prior discussion, it seemed that a state-based\n> approach probably isn't the way to go. The real issue is how many\n> routines are you going to have to change to deal with a three-way return\n> convention; you want to minimize the number of places that have to cope\n> with that. IIRC the idea was to let pqPutBytes grow the buffer so that\n> its callers didn't need to worry about a \"sorry, won't block\" return\n> condition. If you feel that growing the buffer is inappropriate for a\n> specific caller, then probably the right answer is for that particular\n> caller to make an extra check to see if the buffer will overflow, and\n> refrain from calling pqPutBytes if it doesn't like what will happen.\n> \n> If you make pqPutByte's behavior state-based, then callers that aren't\n> expecting a \"won't block\" return will fail (silently :-() in some states.\n> While you might be able to get away with that for PGASYNC_COPY_IN state\n> because not much of libpq is expected to be exercised in that state,\n> it strikes me as an awfully fragile coding convention. I think you will\n> regret that choice eventually, if you make it.\n\nIt is a somewhat fragile change, but much less intrusive than anything\nelse I could think of. It removes the three-way return value from\nthe send code for everything except when in a COPY IN state where\nit's the application's job to handle it. If there would be a\nblocking condition we do as the non-blocking code handles it except\ninstead of blocking we buffer it in it's entirety.\n\nMy main question was wondering if there's any cases where we might \n\"go nuts\" sending data to the backend with the exception of the \nCOPY code?\n\n-- or --\n\nI could make a function to check the buffer space and attempt to\nflush the buffer (in a non-blocking manner) to be called from\nPQputline and PQputnbytes if the connection is non-blocking.\n\nHowever since those are external functions my question is if you\nknow of any other uses for those function besideds when COPY'ing\ninto the backend?\n\nSince afaik I'm the only schmoe using my non-blocking stuff,\nrestricting the check for bufferspace to non-blocking connections\nwouldn't break any APIs from my PoV.\n\nHow should I proceed?\n\n--\n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Thu, 12 Oct 2000 12:32:43 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump possible fix, need testers. (was: Re: pg_dump disaster)"
},
{
"msg_contents": "\nI have added this email to TODO.detail and a mention in the TODO list.\n\n> >> Um, I didn't have any trouble at all reproducing Patrick's complaint.\n> >> pg_dump any moderately large table (I used tenk1 from the regress\n> >> database) and try to load the script with psql. Kaboom.\n> \n> > This is after or before my latest patch?\n> \n> Before. I haven't updated since yesterday...\n> \n> > I can't seem to reproduce this problem,\n> \n> Odd. Maybe there is something different about the kernel's timing of\n> message sending on your platform. I see it very easily on HPUX 10.20,\n> and Patrick sees it very easily on whatever he's using (netbsd I think).\n> You might try varying the situation a little, say\n> \tpsql mydb <dumpfile\n> \tpsql -f dumpfile mydb\n> \tpsql mydb\n> \t\t\\i dumpfile\n> and the same with -h localhost (to get a TCP/IP connection instead of\n> Unix domain). At the moment (pre-patch) I see failures with the\n> first two of these, but not with the \\i method. -h doesn't seem to\n> matter for me, but it might for you.\n> \n> > Telling me something is wrong without giving suggestions on how\n> > to fix it, nor direct pointers to where it fails doesn't help me\n> > one bit. You're not offering constructive critism, you're not\n> > even offering valid critism, you're just waving your finger at\n> > \"problems\" that you say exist but don't pin down to anything specific.\n> \n> I have been explaining it as clearly as I could. Let's try it\n> one more time.\n> \n> > I spent hours looking over what I did to pqFlush and pqPutnBytes\n> > because of what you said earlier when all the bug seems to have\n> > come down to is that I missed that the socket is set to non-blocking\n> > in all cases now.\n> \n> Letting the socket mode default to blocking will hide the problems from\n> existing clients that don't care about non-block mode. But people who\n> try to actually use the nonblock mode are going to see the same kinds of\n> problems that psql is exhibiting.\n> \n> > The old sequence of events that happened was as follows:\n> \n> > user sends data almost filling the output buffer...\n> > user sends another line of text overflowing the buffer...\n> > pqFlush is invoked blocking the user until the output pipe clears...\n> > and repeat.\n> \n> Right.\n> \n> > The nonblocking code allows sends to fail so the user can abort\n> > sending stuff to the backend in order to process other work:\n> \n> > user sends data almost filling the output buffer...\n> > user sends another line of text that may overflow the buffer...\n> > pqFlush is invoked, \n> > if the pipe can't be cleared an error is returned allowing the user to\n> > retry the send later.\n> > if the flush succeeds then more data is queued and success is returned\n> \n> But you haven't thought through the mechanics of the \"error is returned\n> allowing the user to retry\" code path clearly enough. Let's take\n> pqPutBytes for an example. If it returns EOF, is that a hard error or\n> does it just mean that the application needs to wait a while? 
The\n> application *must* distinguish these cases, or it will do the wrong\n> thing: for example, if it mistakes a hard error for \"wait a while\",\n> then it will wait forever without making any progress or producing\n> an error report.\n> \n> You need to provide a different return convention that indicates\n> what happened, say\n> \tEOF (-1)\t=> hard error (same as old code)\n> \t0\t\t=> OK\n> \t1\t\t=> no data was queued due to risk of blocking\n> And you need to guarantee that the application knows what the state is\n> when the can't-do-it-yet return is made; note that I specified \"no data\n> was queued\" above. If pqPutBytes might queue some of the data before\n> returning 1, the application is in trouble again. While you apparently\n> foresaw that in recoding pqPutBytes, your code doesn't actually work.\n> There is the minor code bug that you fail to update \"avail\" after the\n> first pqFlush call, and the much more fundamental problem that you\n> cannot guarantee to have queued all or none of the data. Think about\n> what happens if the passed nbytes is larger than the output buffer size.\n> You may pass the first pqFlush successfully, then get into the loop and\n> get a won't-block return from pqFlush in the loop. What then?\n> You can't simply refuse to support the case nbytes > bufsize at all,\n> because that will cause application failures as well (too long query\n> sends it into an infinite loop trying to queue data, most likely).\n> \n> A possible answer is to specify that a return of +N means \"N bytes\n> remain unqueued due to risk of blocking\" (after having queued as much\n> as you could). This would put the onus on the caller to update his\n> pointers/counts properly; propagating that into all the internal uses\n> of pqPutBytes would be no fun. (Of course, so far you haven't updated\n> *any* of the internal callers to behave reasonably in case of a\n> won't-block return; PQfn is just one example.)\n> \n> Another possible answer is to preserve pqPutBytes' old API, \"queue or\n> bust\", by the expedient of enlarging the output buffer to hold whatever\n> we can't send immediately. This is probably more attractive, even\n> though a long query might suck up a lot of space that won't get\n> reclaimed as long as the connection lives. If you don't do this then\n> you are going to have to make a lot of ugly changes in the internal\n> callers to deal with won't-block returns. Actually, a bulk COPY IN\n> would probably be the worst case --- the app could easily load data into\n> the buffer far faster than it could be sent. It might be best to extend\n> PQputline to have a three-way return and add code there to limit the\n> growth of the output buffer, while allowing all internal callers to\n> assume that the buffer is expanded when they need it.\n> \n> pqFlush has the same kind of interface design problem: the same EOF code\n> is returned for either a hard error or can't-flush-yet, but it would be\n> disastrous to treat those cases alike. You must provide a 3-way return\n> code.\n> \n> Furthermore, the same sort of 3-way return code convention will have to\n> propagate out through anything that calls pqFlush (with corresponding\n> documentation updates). pqPutBytes can be made to hide a pqFlush won't-\n> block return by trying to enlarge the output buffer, but in most other\n> places you won't have a choice except to punt it back to the caller.\n> \n> PQendcopy has the same interface design problem. 
It used to be that\n> (unless you passed a null pointer) PQendcopy would *guarantee* that\n> the connection was no longer in COPY state on return --- by resetting\n> it, if necessary. So the return code was mainly informative; the\n> application didn't have to do anything different if PQendcopy reported\n> failure. But now, a nonblocking application does need to pay attention\n> to whether PQendcopy completed or not --- and you haven't provided a way\n> for it to tell. If 1 is returned, the connection might still be in\n> COPY state, or it might not (PQendcopy might have reset it). If the\n> application doesn't distinguish these cases then it will fail.\n> \n> I also think that you want to take a hard look at the automatic \"reset\"\n> behavior upon COPY failure, since a PQreset call will block the\n> application until it finishes. Really, what is needed to close down a\n> COPY safely in nonblock mode is a pair of entry points along the line of\n> \"PQendcopyStart\" and \"PQendcopyPoll\", with API conventions similar to\n> PQresetStart/PQresetPoll. This gives you the ability to do the reset\n> (if one is necessary) without blocking the application. PQendcopy\n> itself will only be useful to blocking applications.\n> \n> > I'm sorry if they don't work for some situations other than COPY IN,\n> > but it's functionality that I needed and I expect to be expanded on\n> > by myself and others that take interest in nonblocking operation.\n> \n> I don't think that the nonblock code is anywhere near production quality\n> at this point. It may work for you, if you don't stress it too hard and\n> never have a communications failure; but I don't want to see us ship it\n> as part of Postgres unless these issues get addressed.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 08:39:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Libpq async issues"
},
{
"msg_contents": "* Bruce Momjian <[email protected]> [010124 07:58] wrote:\n> \n> I have added this email to TODO.detail and a mention in the TODO list.\n\nThe bug mentioned here is long gone, however the problem with\nissuing non-blocking COPY commands is still present (8k limit on\nbuffer size). I hope to get to fix this sometime soon, but you\nshouldn't worry about the \"normal\" path.\n\nThere's also a bug with PQCopyEnd(sp?) where it can still block because\nit automagically calls into a routine that select()'s waiting for data.\n\nIt's on my TODO list as well, but a little behind a several thousand\nline server I'm almost complete with.\n\n-Alfred\n\n> \n> > >> Um, I didn't have any trouble at all reproducing Patrick's complaint.\n> > >> pg_dump any moderately large table (I used tenk1 from the regress\n> > >> database) and try to load the script with psql. Kaboom.\n> > \n> > > This is after or before my latest patch?\n> > \n> > Before. I haven't updated since yesterday...\n> > \n> > > I can't seem to reproduce this problem,\n> > \n> > Odd. Maybe there is something different about the kernel's timing of\n> > message sending on your platform. I see it very easily on HPUX 10.20,\n> > and Patrick sees it very easily on whatever he's using (netbsd I think).\n> > You might try varying the situation a little, say\n> > \tpsql mydb <dumpfile\n> > \tpsql -f dumpfile mydb\n> > \tpsql mydb\n> > \t\t\\i dumpfile\n> > and the same with -h localhost (to get a TCP/IP connection instead of\n> > Unix domain). At the moment (pre-patch) I see failures with the\n> > first two of these, but not with the \\i method. -h doesn't seem to\n> > matter for me, but it might for you.\n> > \n> > > Telling me something is wrong without giving suggestions on how\n> > > to fix it, nor direct pointers to where it fails doesn't help me\n> > > one bit. You're not offering constructive critism, you're not\n> > > even offering valid critism, you're just waving your finger at\n> > > \"problems\" that you say exist but don't pin down to anything specific.\n> > \n> > I have been explaining it as clearly as I could. Let's try it\n> > one more time.\n> > \n> > > I spent hours looking over what I did to pqFlush and pqPutnBytes\n> > > because of what you said earlier when all the bug seems to have\n> > > come down to is that I missed that the socket is set to non-blocking\n> > > in all cases now.\n> > \n> > Letting the socket mode default to blocking will hide the problems from\n> > existing clients that don't care about non-block mode. 
But people who\n> > try to actually use the nonblock mode are going to see the same kinds of\n> > problems that psql is exhibiting.\n> > \n> > > The old sequence of events that happened was as follows:\n> > \n> > > user sends data almost filling the output buffer...\n> > > user sends another line of text overflowing the buffer...\n> > > pqFlush is invoked blocking the user until the output pipe clears...\n> > > and repeat.\n> > \n> > Right.\n> > \n> > > The nonblocking code allows sends to fail so the user can abort\n> > > sending stuff to the backend in order to process other work:\n> > \n> > > user sends data almost filling the output buffer...\n> > > user sends another line of text that may overflow the buffer...\n> > > pqFlush is invoked, \n> > > if the pipe can't be cleared an error is returned allowing the user to\n> > > retry the send later.\n> > > if the flush succeeds then more data is queued and success is returned\n> > \n> > But you haven't thought through the mechanics of the \"error is returned\n> > allowing the user to retry\" code path clearly enough. Let's take\n> > pqPutBytes for an example. If it returns EOF, is that a hard error or\n> > does it just mean that the application needs to wait a while? The\n> > application *must* distinguish these cases, or it will do the wrong\n> > thing: for example, if it mistakes a hard error for \"wait a while\",\n> > then it will wait forever without making any progress or producing\n> > an error report.\n> > \n> > You need to provide a different return convention that indicates\n> > what happened, say\n> > \tEOF (-1)\t=> hard error (same as old code)\n> > \t0\t\t=> OK\n> > \t1\t\t=> no data was queued due to risk of blocking\n> > And you need to guarantee that the application knows what the state is\n> > when the can't-do-it-yet return is made; note that I specified \"no data\n> > was queued\" above. If pqPutBytes might queue some of the data before\n> > returning 1, the application is in trouble again. While you apparently\n> > foresaw that in recoding pqPutBytes, your code doesn't actually work.\n> > There is the minor code bug that you fail to update \"avail\" after the\n> > first pqFlush call, and the much more fundamental problem that you\n> > cannot guarantee to have queued all or none of the data. Think about\n> > what happens if the passed nbytes is larger than the output buffer size.\n> > You may pass the first pqFlush successfully, then get into the loop and\n> > get a won't-block return from pqFlush in the loop. What then?\n> > You can't simply refuse to support the case nbytes > bufsize at all,\n> > because that will cause application failures as well (too long query\n> > sends it into an infinite loop trying to queue data, most likely).\n> > \n> > A possible answer is to specify that a return of +N means \"N bytes\n> > remain unqueued due to risk of blocking\" (after having queued as much\n> > as you could). This would put the onus on the caller to update his\n> > pointers/counts properly; propagating that into all the internal uses\n> > of pqPutBytes would be no fun. (Of course, so far you haven't updated\n> > *any* of the internal callers to behave reasonably in case of a\n> > won't-block return; PQfn is just one example.)\n> > \n> > Another possible answer is to preserve pqPutBytes' old API, \"queue or\n> > bust\", by the expedient of enlarging the output buffer to hold whatever\n> > we can't send immediately. 
This is probably more attractive, even\n> > though a long query might suck up a lot of space that won't get\n> > reclaimed as long as the connection lives. If you don't do this then\n> > you are going to have to make a lot of ugly changes in the internal\n> > callers to deal with won't-block returns. Actually, a bulk COPY IN\n> > would probably be the worst case --- the app could easily load data into\n> > the buffer far faster than it could be sent. It might be best to extend\n> > PQputline to have a three-way return and add code there to limit the\n> > growth of the output buffer, while allowing all internal callers to\n> > assume that the buffer is expanded when they need it.\n> > \n> > pqFlush has the same kind of interface design problem: the same EOF code\n> > is returned for either a hard error or can't-flush-yet, but it would be\n> > disastrous to treat those cases alike. You must provide a 3-way return\n> > code.\n> > \n> > Furthermore, the same sort of 3-way return code convention will have to\n> > propagate out through anything that calls pqFlush (with corresponding\n> > documentation updates). pqPutBytes can be made to hide a pqFlush won't-\n> > block return by trying to enlarge the output buffer, but in most other\n> > places you won't have a choice except to punt it back to the caller.\n> > \n> > PQendcopy has the same interface design problem. It used to be that\n> > (unless you passed a null pointer) PQendcopy would *guarantee* that\n> > the connection was no longer in COPY state on return --- by resetting\n> > it, if necessary. So the return code was mainly informative; the\n> > application didn't have to do anything different if PQendcopy reported\n> > failure. But now, a nonblocking application does need to pay attention\n> > to whether PQendcopy completed or not --- and you haven't provided a way\n> > for it to tell. If 1 is returned, the connection might still be in\n> > COPY state, or it might not (PQendcopy might have reset it). If the\n> > application doesn't distinguish these cases then it will fail.\n> > \n> > I also think that you want to take a hard look at the automatic \"reset\"\n> > behavior upon COPY failure, since a PQreset call will block the\n> > application until it finishes. Really, what is needed to close down a\n> > COPY safely in nonblock mode is a pair of entry points along the line of\n> > \"PQendcopyStart\" and \"PQendcopyPoll\", with API conventions similar to\n> > PQresetStart/PQresetPoll. This gives you the ability to do the reset\n> > (if one is necessary) without blocking the application. PQendcopy\n> > itself will only be useful to blocking applications.\n> > \n> > > I'm sorry if they don't work for some situations other than COPY IN,\n> > > but it's functionality that I needed and I expect to be expanded on\n> > > by myself and others that take interest in nonblocking operation.\n> > \n> > I don't think that the nonblock code is anywhere near production quality\n> > at this point. It may work for you, if you don't stress it too hard and\n> > never have a communications failure; but I don't want to see us ship it\n> > as part of Postgres unless these issues get addressed.\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ************\n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Wed, 24 Jan 2001 08:47:20 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Libpq async issues"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> * Bruce Momjian <[email protected]> [010124 07:58] wrote:\n>> I have added this email to TODO.detail and a mention in the TODO list.\n\n> The bug mentioned here is long gone,\n\nAu contraire, the misdesign is still there. The nonblock-mode code\nwill *never* be reliable under stress until something is done about\nthat, and that means fairly extensive code and API changes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Jan 2001 11:59:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Libpq async issues "
},
{
"msg_contents": "* Tom Lane <[email protected]> [010124 10:27] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > * Bruce Momjian <[email protected]> [010124 07:58] wrote:\n> >> I have added this email to TODO.detail and a mention in the TODO list.\n> \n> > The bug mentioned here is long gone,\n> \n> Au contraire, the misdesign is still there. The nonblock-mode code\n> will *never* be reliable under stress until something is done about\n> that, and that means fairly extensive code and API changes.\n\nThe \"bug\" is the one mentioned in the first paragraph of the email\nwhere I broke _blocking_ connections for a short period.\n\nI still need to fix async connections for myself (and of course\ncontribute it back), but I just haven't had the time. If anyone\nelse wants it fixed earlier they can wait for me to do it, do it\nthemself, contract me to do it or hope someone else comes along\nto fix it.\n\nI'm thinking that I'll do what you said and have seperate paths\nfor writing/reading to the socket and API's to do so that give\nthe user the option of a boundry, basically:\n\n buffer this, but don't allow me to write until it's flushed\n\nwhich would allow for larger than 8k COPY rows to go into the\nbackend.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Wed, 24 Jan 2001 10:33:42 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Libpq async issues"
}
] |
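For reference, the three-way return convention argued for in the thread above can be sketched in a few lines of C. This is an illustration only -- the struct and function names (my_conn, my_flush) are made up for the sketch and are not libpq's actual internals:

/*
 * Hypothetical nonblocking flush with a three-way result:
 *   -1 => hard error (connection is broken)
 *    0 => all buffered data has been sent
 *   +1 => no progress possible right now; caller must retry later
 */
#include <errno.h>
#include <string.h>
#include <unistd.h>

struct my_conn { int sock; char *out_buf; int out_count; };

static int my_flush(struct my_conn *conn)
{
    while (conn->out_count > 0)
    {
        ssize_t sent = write(conn->sock, conn->out_buf, conn->out_count);
        if (sent < 0)
        {
            if (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR)
                return 1;       /* can't flush yet; not a hard error */
            return -1;          /* hard failure */
        }
        /* drop the sent bytes from the front of the buffer */
        conn->out_count -= (int) sent;
        memmove(conn->out_buf, conn->out_buf + sent, conn->out_count);
    }
    return 0;                   /* buffer fully drained */
}

Because the caller can now tell "retry later" apart from "connection dead", it can wait on select() and call my_flush() again, instead of either looping forever or giving up on a live connection -- exactly the distinction the plain EOF return could not express.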
[
{
"msg_contents": "Hi All,\n\nI know that the whole \"id\" thing in initdb is a can of worms.\nWe have pg_id but we don't want to use it, every system has \na different version/variety of whoami/\"who am i\"/id.\n\nWe don't need/want to use the unix ID of the user, but we do\nwhen we can. The whole thing is a mess!!\n\nOne thing we need to do is change the default, \"if all else\nfails\" id from 0 to something else, as 0 is specifically \ndisallowed and causes an \"Abort\" in postgres when bootstrapping.\n\nHere's a patch to set it to 1. ( failing any better suggestion)\n\nKeith.\n\n*** ./src/bin/initdb/initdb.sh.orig Wed Jan 19 08:58:13 2000\n--- ./src/bin/initdb/initdb.sh Wed Jan 19 10:02:44 2000\n***************\n*** 129,135 ****\n # fail, and in that case the argument _must_ be the name of the \neffective\n # user.\n POSTGRES_SUPERUSERNAME=\"$EffectiveUser\"\n! POSTGRES_SUPERUSERID=\"`id -u 2>/dev/null || echo 0`\"\n \n while [ \"$#\" -gt 0 ]\n do\n--- 129,135 ----\n # fail, and in that case the argument _must_ be the name of the \neffective\n # user.\n POSTGRES_SUPERUSERNAME=\"$EffectiveUser\"\n! POSTGRES_SUPERUSERID=\"`id -u 2>/dev/null || echo 1`\"\n \n while [ \"$#\" -gt 0 ]\n do\n\n",
"msg_date": "Wed, 19 Jan 2000 23:41:21 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "initdb problems on Solaris"
},
{
"msg_contents": "Keith Parks <[email protected]> writes:\n> One thing we need to do is change the default, \"if all else\n> fails\" id from 0 to something else, as 0 is specifically \n> disallowed and causes an \"Abort\" in postgres when bootstrapping.\n\nActually, I see no reason why the superuser's Postgres ID number\nshouldn't default to 0. If there's code in there to reject that,\nisn't it doing the wrong thing?\n\nThe postmaster and backend can and should refuse to run with an\neffective Unix userid of 0 (root), but that doesn't mean that\na Postgres ID of 0 is insecure, does it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jan 2000 19:05:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] initdb problems on Solaris "
},
{
"msg_contents": "On Wed, 19 Jan 2000, Keith Parks wrote:\n\n> Hi All,\n> \n> I know that the whole \"id\" thing in initdb is a can of worms.\n> We have pg_id but we don't want to use it, every system has \n> a different version/variety of whoami/\"who am i\"/id.\n> \n> We don't need/want to use the unix ID of the user, but we do\n> when we can. The whole thing is a mess!!\n\nCan you be more specific in regards to what doesn't work, what needs\nfixing, etc.?\n\n> \n> One thing we need to do is change the default, \"if all else\n> fails\" id from 0 to something else, as 0 is specifically \n> disallowed and causes an \"Abort\" in postgres when bootstrapping.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 20 Jan 2000 12:18:11 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] initdb problems on Solaris"
},
{
"msg_contents": "Peter - \nIf you're going to look at the id for initdb, please consider the\nNT port as well. This may be more of a problem with cygwin than pgsql,\nbut pg_id didn't work correctly, and I had to hack on the initdb script\nto get it to to run, with a 6.5.3 install, recently. \n\nRoss\n\nOn Thu, Jan 20, 2000 at 12:18:11PM +0100, Peter Eisentraut wrote:\n> On Wed, 19 Jan 2000, Keith Parks wrote:\n> \n> > Hi All,\n> > \n> > I know that the whole \"id\" thing in initdb is a can of worms.\n> > We have pg_id but we don't want to use it, every system has \n> > a different version/variety of whoami/\"who am i\"/id.\n> > \n> > We don't need/want to use the unix ID of the user, but we do\n> > when we can. The whole thing is a mess!!\n> \n> Can you be more specific in regards to what doesn't work, what needs\n> fixing, etc.?\n> \n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n",
"msg_date": "Thu, 20 Jan 2000 14:28:00 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] initdb problems on Solaris"
},
{
"msg_contents": "On 2000-01-20, Ross J. Reedstrom mentioned:\n\n> Peter - \n> If you're going to look at the id for initdb, please consider the\n> NT port as well. This may be more of a problem with cygwin than pgsql,\n> but pg_id didn't work correctly, and I had to hack on the initdb script\n> to get it to to run, with a 6.5.3 install, recently. \n\nOh dear, I just wrote a new pg_id thingy that fulfills all my initdb\nwishes (no id, not whoami vs 'who am i', no Solaris stupidities), but\nwhat's now? How does NT/Cygwin handle it? Don't they have a geteuid()\nfunction? Feel free to hack up pg_id to always return 42 in that case.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 22 Jan 2000 15:24:05 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] initdb problems on Solaris"
}
] |
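The portable way around the id/whoami mess discussed above is to ask the C library directly, as Peter hints with geteuid(). A minimal sketch of a pg_id-style helper -- illustrative only, not necessarily what was actually committed:

#include <stdio.h>
#include <pwd.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    uid_t uid = geteuid();              /* effective uid; POSIX, no 'id' or 'whoami' needed */
    struct passwd *pw = getpwuid(uid);  /* may be NULL for an unlisted uid */

    /* Print "name uid"; fall back to the numeric id alone. */
    if (pw)
        printf("%s %ld\n", pw->pw_name, (long) uid);
    else
        printf("%ld\n", (long) uid);
    return 0;
}

A helper like this sidesteps every platform quirk in the thread at once: no whoami vs "who am i", no Solaris id output format, and a sane result on Cygwin as long as geteuid() exists there.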
[
{
"msg_contents": "OOps,\n\nJust noticed we no longer have trusty \"pg_id\".....\n\n\nFrom: Keith Parks <[email protected]>\n\n>Hi All,\n>\n>I know that the whole \"id\" thing in initdb is a can of worms.\n>We have pg_id but we don't want to use it, every system has \n>a different version/variety of whoami/\"who am i\"/id.\n>\n>We don't need/want to use the unix ID of the user, but we do\n>when we can. The whole thing is a mess!!\n>\n>One thing we need to do is change the default, \"if all else\n>fails\" id from 0 to something else, as 0 is specifically \n>disallowed and causes an \"Abort\" in postgres when bootstrapping.\n>\n>Here's a patch to set it to 1. ( failing any better suggestion)\n>\n>Keith.\n\n",
"msg_date": "Wed, 19 Jan 2000 23:55:41 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] initdb problems on Solaris"
},
{
"msg_contents": "On Wed, 19 Jan 2000, Keith Parks wrote:\n\n> Just noticed we no longer have trusty \"pg_id\".....\n\nWasn't needed any longer. However, with the persisting problems on\nSolaris, I am tempted to take the 'id' command from FreeBSD (which seems\nto be POSIX compliant) and reinstitute it as new pg_id. That would also\nsolve the whoami problem (the reverse if what pg_id did).\n\nAny protests? Otherwise I'll get this fixed.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 20 Jan 2000 12:24:19 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] initdb problems on Solaris"
}
] |
[
{
"msg_contents": "Why do I get different date/time after explicitly setting timezone?\nThis is RH Linux 5.2.\n\ntest=> select '1998-09-23 12:05:10 HST'::datetime;\n ?column? \n------------------------------\n Thu Sep 24 07:05:10 1998 JST\n(1 row)\n\ntest=> show timezone;\nNOTICE: Time zone is unknown\nSHOW VARIABLE\ntest=> set timezone to 'JST';\nSET VARIABLE\ntest=> select '1998-09-23 12:05:10 HST'::datetime;\n ?column? \n------------------------------\n Wed Sep 23 22:05:10 1998 JST\n(1 row)\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 20 Jan 2000 12:49:50 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "timezone problem?"
},
{
"msg_contents": "> Why do I get different date/time after explicitly setting timezone?\n> This is RH Linux 5.2.\n\nBecause...\n\n> test=> select '1998-09-23 12:05:10 HST'::datetime;\n> ------------------------------\n> Thu Sep 24 07:05:10 1998 JST\n> test=> show timezone;\n> NOTICE: Time zone is unknown\n> SHOW VARIABLE\n> test=> set timezone to 'JST';\n> SET VARIABLE\n> test=> select '1998-09-23 12:05:10 HST'::datetime;\n> ------------------------------\n> Wed Sep 23 22:05:10 1998 JST\n\nOn my RH-5.2 box, \"JST\" is not in /usr/share/zoneinfo. A non-existant\nTZ evaluates to be GMT, but the system reports the string you gave\nit!! I don't recall ever running across this before. But the moral of\nthe story is: don't do that! ;)\n\nI'm not sure how one would check to verify that the timezone you set\nis actually a valid timezone. I'd hate to restrict it to the list of\ntimezones Postgres knows about when parsing input (since that is a\nsubset of the possibilities), though that is one solution...\n\n - Thomas\n\n[root@golem zoneinfo]# setenv TZ HST\n[root@golem zoneinfo]# date\nThu Jan 20 05:55:02 HST 2000\n[root@golem zoneinfo]# setenv TZ JST\n[root@golem zoneinfo]# date\nThu Jan 20 15:54:37 JST 2000\n[root@golem zoneinfo]# setenv TZ GMT\n[root@golem zoneinfo]# date\nThu Jan 20 15:54:45 GMT 2000\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 20 Jan 2000 16:01:03 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] timezone problem?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> On my RH-5.2 box, \"JST\" is not in /usr/share/zoneinfo. A non-existant\n> TZ evaluates to be GMT, but the system reports the string you gave\n> it!! I don't recall ever running across this before.\n\nUgh. RedHat's not the only one: on my HPUX 10 box,\n\n$ date\nThu Jan 20 11:13:26 EST 2000\n$ TZ=GMT date\nThu Jan 20 16:13:30 GMT 2000\n$ TZ=ZZZ date\nThu Jan 20 16:13:35 ZZZ 2000\n$ TZ=foo date\nThu Jan 20 16:13:53 foo 2000\n\nThis may be a fairly widespread bug^H^H^Hbizarre behavior.\n\n> I'm not sure how one would check to verify that the timezone you set\n> is actually a valid timezone. I'd hate to restrict it to the list of\n> timezones Postgres knows about when parsing input (since that is a\n> subset of the possibilities), though that is one solution...\n\nWell, we could solve a smaller problem: keep a list of the timezone\nnames we think are equivalent to GMT. Then, if we see a zero TZ offset\nfor any name not in the list, emit some sort of warning notice. Bit of\na kluge though.\n\nI am not sure that this relates to Tatsuo's complaint, though.\nHis issue was:\n\n> test=> select '1998-09-23 12:05:10 HST'::datetime;\n> ------------------------------\n> Thu Sep 24 07:05:10 1998 JST\n> test=> show timezone;\n> NOTICE: Time zone is unknown\n\nIf Postgres doesn't know the timezone, why is it displaying \"JST\" in\ndecoded datetimes?\n\nAnother odd thing is that I'd have expected the displayed time to be\nGMT if the system doesn't know the timezone --- but the time being\nshown here is 9 hours ahead of JST, not 9 hours behind... perhaps\nsomething somewhere *does* know the local zone, but is applying the\ncorrection backwards?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jan 2000 11:29:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] timezone problem? "
},
{
"msg_contents": "On Thu, 20 Jan 2000, Tom Lane wrote:\n\n> Thomas Lockhart <[email protected]> writes:\n> > On my RH-5.2 box, \"JST\" is not in /usr/share/zoneinfo. A non-existant\n> > TZ evaluates to be GMT, but the system reports the string you gave\n> > it!! I don't recall ever running across this before.\n> \n> Ugh. RedHat's not the only one: on my HPUX 10 box,\n> \n> $ date\n> Thu Jan 20 11:13:26 EST 2000\n> $ TZ=GMT date\n> Thu Jan 20 16:13:30 GMT 2000\n> $ TZ=ZZZ date\n> Thu Jan 20 16:13:35 ZZZ 2000\n> $ TZ=foo date\n> Thu Jan 20 16:13:53 foo 2000\n> \n> This may be a fairly widespread bug^H^H^Hbizarre behavior.\n\nOdd. Here's how FreeBSD acts:\n\n$ TZ=GMT date\nThu Jan 20 16:47:29 GMT 2000\n$ TZ=foo date\nThu Jan 20 16:47:36 GMT 2000\n$ TZ=EDT date\nThu Jan 20 16:47:47 GMT 2000\n$ TZ=EST date\nThu Jan 20 11:47:54 EST 2000\n$ TZ=PST date\nThu Jan 20 16:48:03 GMT 2000\n$ TZ=ZZZ date\nThu Jan 20 16:48:09 GMT 2000\n$ TZ=JST date\nThu Jan 20 16:49:00 GMT 2000\n$ TZ=MST date\nThu Jan 20 09:50:05 MST 2000\n$ TZ=CST date\nThu Jan 20 16:49:32 GMT 2000\n\nStrange, it does MST and EST but not CST and PST. \n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 20 Jan 2000 11:51:29 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] timezone problem? "
},
{
"msg_contents": "> Well, we could solve a smaller problem: keep a list of the timezone\n> names we think are equivalent to GMT. Then, if we see a zero TZ offset\n> for any name not in the list, emit some sort of warning notice. Bit of\n> a kluge though.\n\nUh, yes it is :)\n\n> I am not sure that this relates to Tatsuo's complaint, though.\n> His issue was:\n> > test=> select '1998-09-23 12:05:10 HST'::datetime;\n> > ------------------------------\n> > Thu Sep 24 07:05:10 1998 JST\n> > test=> show timezone;\n> > NOTICE: Time zone is unknown\n> If Postgres doesn't know the timezone, why is it displaying \"JST\" in\n> decoded datetimes?\n\n\"Time zone is unknown\" is the usual state if there is not an explicit\nSET TIME ZONE by a client. Doesn't mean anything more, and doesn't\nimply that the backend can't do timezone stuff. Postgres relies on\nsystem-supplied routines if the year is between 1903 and 2038 (mas o\nmenos; I didn't look it up).\n\n> Another odd thing is that I'd have expected the displayed time to be\n> GMT if the system doesn't know the timezone --- but the time being\n> shown here is 9 hours ahead of JST, not 9 hours behind... perhaps\n> something somewhere *does* know the local zone, but is applying the\n> correction backwards?\n\nHST is interpreted by Postgres as Hawaii Standard Time, which is on\nthe other side of the date line from Japan. Planning a vacation\nTatsuo?? :))\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 20 Jan 2000 16:52:47 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] timezone problem?"
},
{
"msg_contents": "> Odd. Here's how FreeBSD acts:\n> Strange, it does MST and EST but not CST and PST.\n\nTry PST8PDT for the Pacific TZ and CST6CDT for Central time. Not sure\nwhy the zinc databases have entries for EST and MST as well as for\nEST5EDT and MST7MDT (at least on my RH-5.2 linux box).\n\nI like the behavior that it prints GMT when given an invalid time\nzone; that is actually the behavior I recall when testing this a year\nor two ago. Something changed/improved/broke in the meantime with some\nof these boxes...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 20 Jan 2000 17:24:40 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] timezone problem?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> HST is interpreted by Postgres as Hawaii Standard Time, which is on\n> the other side of the date line from Japan. Planning a vacation\n> Tatsuo?? :))\n\nThen there's still something wrong:\n\n> test=> select '1998-09-23 12:05:10 HST'::datetime;\n> ------------------------------\n> Wed Sep 23 22:05:10 1998 JST\n\n10 hours behind JST (= GMT+9, IIRC) is in the wrong ocean to be\nHawaii...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jan 2000 17:42:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] timezone problem? "
},
{
"msg_contents": "> Thomas Lockhart <[email protected]> writes:\n> > HST is interpreted by Postgres as Hawaii Standard Time, which is on\n> > the other side of the date line from Japan. Planning a vacation\n> > Tatsuo?? :))\n\nI wish I could do so:-) I hate the cold winter in Japan...\n\n> Then there's still something wrong:\n> \n> > test=> select '1998-09-23 12:05:10 HST'::datetime;\n> > ------------------------------\n> > Wed Sep 23 22:05:10 1998 JST\n> \n> 10 hours behind JST (= GMT+9, IIRC) is in the wrong ocean to be\n> Hawaii...\n\nRight. HST is GMT-10, and JST - HST = 19 hours. So '1998-09-23\n12:05:10 HST' shoud be 'Thu Sep 24 07:05:10 1998 JST', rather than 'Wed\nSep 23 22:05:10 1998 JST'...\n\nLooking into the zoneinfo files under /usr/share/zoneinfo, I found 'Japan'\nas a valid zone name (I could not find 'JST' too on my RH box).\n\ntest=> select '1998-09-23 12:05:10 HST'::datetime;\n ?column? \n------------------------------\n Thu Sep 24 07:05:10 1998 JST\t-- correct\n(1 row)\n\ntest=> set timezone to 'JST';\nSET VARIABLE\ntest=> select '1998-09-23 12:05:10 HST'::datetime;\n ?column? \n------------------------------\n Wed Sep 23 22:05:10 1998 JST\t-- wrong. seems interpreted as GMT (UTC)\n(1 row)\n\ntest=> set timezone to 'Japan';\nSET VARIABLE\ntest=> select '1998-09-23 12:05:10 HST'::datetime;\n ?column? \n------------------------------\n Thu Sep 24 07:05:10 1998 JST\t-- correct. but why showed as JST?\n(1 row)\n\ntest=> reset timezone;\nRESET VARIABLE\ntest=> select '1998-09-23 12:05:10 HST'::datetime;\n ?column? \n------------------------------\n Thu Sep 24 07:05:10 1998 JST\t-- again, correct\n(1 row)\n\nSeems something wrong with my RH 5.2. Note that FreeBSD does have the\nproblem.\n--\nTatsuo Ishii\n\n",
"msg_date": "Fri, 21 Jan 2000 10:56:51 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] timezone problem? "
},
{
"msg_contents": "> That is typical when you use the long form of the time zone name such\n> as \"Japan\". You will also find a \"US/Pacific\" on your machine:\n\nMaybe I'm going to check where the translation Japan -> JST has benn\nactually done.\n\n> > Seems something wrong with my RH 5.2. Note that FreeBSD does have the\n> > problem.\n> \n> Sorry, FreeBSD also has the problem, or does not??\n\nSorry, FreeBSD does *not* have the problem as far as I know.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 21 Jan 2000 14:25:41 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] timezone problem?"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> > Thomas Lockhart <[email protected]> writes:\n> > > HST is interpreted by Postgres as Hawaii Standard Time, which is on\n> > > the other side of the date line from Japan. Planning a vacation\n> > > Tatsuo?? :))\n> \n> I wish I could do so:-) I hate the cold winter in Japan...\n> \n> > Then there's still something wrong:\n> >\n> > > test=> select '1998-09-23 12:05:10 HST'::datetime;\n> > > ------------------------------\n> > > Wed Sep 23 22:05:10 1998 JST\n> >\n> > 10 hours behind JST (= GMT+9, IIRC) is in the wrong ocean to be\n> > Hawaii...\n> \n> Right. HST is GMT-10, and JST - HST = 19 hours. So '1998-09-23\n> 12:05:10 HST' shoud be 'Thu Sep 24 07:05:10 1998 JST', rather than 'Wed\n> Sep 23 22:05:10 1998 JST'...\n> \n> Looking into the zoneinfo files under /usr/share/zoneinfo, I found 'Japan'\n> as a valid zone name (I could not find 'JST' too on my RH box).\n> \n> test=> select '1998-09-23 12:05:10 HST'::datetime;\n> ?column?\n> ------------------------------\n> Thu Sep 24 07:05:10 1998 JST -- correct\n> (1 row)\n> \n> test=> set timezone to 'JST';\n> SET VARIABLE\n> test=> select '1998-09-23 12:05:10 HST'::datetime;\n> ?column?\n> ------------------------------\n> Wed Sep 23 22:05:10 1998 JST -- wrong. seems interpreted as GMT (UTC)\n> (1 row)\n> \n> test=> set timezone to 'Japan';\n> SET VARIABLE\n> test=> select '1998-09-23 12:05:10 HST'::datetime;\n> ?column?\n> ------------------------------\n> Thu Sep 24 07:05:10 1998 JST -- correct. but why showed as JST?\n> (1 row)\n\nThat is typical when you use the long form of the time zone name such\nas \"Japan\". You will also find a \"US/Pacific\" on your machine:\n\n[root@golem zoneinfo]# setenv TZ US/Pacific\n[root@golem zoneinfo]# date\nThu Jan 20 21:24:24 PST 2000\n\nwhich is the same as PST8PDT.\n\nIn /usr/share/zoneinfo/US, the mysteries of the various states'\nconventions are revealed:\n\n[root@golem zoneinfo]# ls -1 US\nAlaska\nAleutian\nArizona\nCentral\nEast-Indiana\nEastern\nHawaii\nIndiana-Starke\nMichigan\nMountain\nPacific\nSamoa\n\nwhere, as Vince pointed out, Indiana, Michigan, and Arizona seem to be\nspecial cases within the usual three timezones.\n\n> Seems something wrong with my RH 5.2. Note that FreeBSD does have the\n> problem.\n\nSorry, FreeBSD also has the problem, or does not??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 21 Jan 2000 05:28:44 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] timezone problem?"
},
{
"msg_contents": "> Maybe I'm going to check where the translation Japan -> JST has benn\n> actually done.\n\nYou will find it in the timezone file itself. Use \"zdump\" to look at\nthe file of interest:\n\n[root@golem zoneinfo]# zdump -v /usr/share/zoneinfo/Japan\nJapan Fri Dec 13 20:45:52 1901 GMT = Sat Dec 14 05:45:52 1901 JST\nisdst=0\nJapan Sat Dec 14 20:45:52 1901 GMT = Sun Dec 15 05:45:52 1901 JST\nisdst=0\nJapan Mon Jan 18 03:14:07 2038 GMT = Mon Jan 18 12:14:07 2038 JST\nisdst=0\nJapan Tue Jan 19 03:14:07 2038 GMT = Tue Jan 19 12:14:07 2038 JST\nisdst=0\n\nWow, that is a short set of rules! The PST8PDT file is 374 lines ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 21 Jan 2000 05:47:49 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] timezone problem?"
},
{
"msg_contents": "Has anyone noticed the following timezoning problem..\n\nIf a datetime variable is read out, and then inserted back in again\n(verbatim) I get a change in the time value. I suspect that it because\nout lime zona Australia/Adelaide is CST, which I belive is also an\nAmerican timezone. Trimming the timezone info (CST) off, fixes this\nproblem. Can anyone shed any light?\n\nHow does one get the +1030 timezone format?\n\nPaulS\n \n> > Maybe I'm going to check where the translation Japan -> JST has benn\n> > actually done.\n> \n> You will find it in the timezone file itself. Use \"zdump\" to look at\n> the file of interest:\n> \n> [root@golem zoneinfo]# zdump -v /usr/share/zoneinfo/Japan\n> Japan Fri Dec 13 20:45:52 1901 GMT = Sat Dec 14 05:45:52 1901 JST\n> isdst=0\n> Japan Sat Dec 14 20:45:52 1901 GMT = Sun Dec 15 05:45:52 1901 JST\n> isdst=0\n> Japan Mon Jan 18 03:14:07 2038 GMT = Mon Jan 18 12:14:07 2038 JST\n> isdst=0\n> Japan Tue Jan 19 03:14:07 2038 GMT = Tue Jan 19 12:14:07 2038 JST\n> isdst=0\n> \n> Wow, that is a short set of rules! The PST8PDT file is 374 lines ;)\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n> ************\n> \n\n",
"msg_date": "Fri, 21 Jan 2000 16:28:43 +1030 (CST)",
"msg_from": "Paul Schulz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] timezone problem?"
},
{
"msg_contents": "> If a datetime variable is read out, and then inserted back in again\n> (verbatim) I get a change in the time value. I suspect that it because\n> out lime zona Australia/Adelaide is CST, which I belive is also an\n> American timezone. Trimming the timezone info (CST) off, fixes this\n> problem. Can anyone shed any light?\n\nYup. Fully 1/4 of our timezone lookup table is consumed by Australian\ntime zones (y'all have multiple names for *everything*!). There are\nsome name conflicts, of course :(\n\n> How does one get the +1030 timezone format?\n\nUse ACSST or CADT or SADT (at least that is what is defined in the\nPostgres lookup table for *exactly* the same time offset).\n\nOr...\n\nApply the enclosed patch, then compile the backend with:\n\n -DUSE_AUSTRALIAN_RULES=1\n\n(Or move to another country. Recompiling the backend is probably\neasier... ;)\n\nThis is covered in the docs in the appendix on \"Date/Time Support\",\nbut CST was not included and it looks to me that EAST had sign\ntrouble. Both are fixed in the enclosed patch.\n\nbtw, the patch also tries to fix the \"GMT+hhmm\" timezone format\nreported recently as being available on FreeBSD; perhaps someone could\ntest that at the same time.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California",
"msg_date": "Fri, 21 Jan 2000 07:16:55 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] timezone problem?"
},
{
"msg_contents": ">> If a datetime variable is read out, and then inserted back in again\n>> (verbatim) I get a change in the time value. I suspect that it because\n>> out lime zona Australia/Adelaide is CST, which I belive is also an\n>> American timezone. Trimming the timezone info (CST) off, fixes this\n>> problem. Can anyone shed any light?\n\nYes, and even worse, CST also is \"China Standard Time\" in some operating\nsystems. I won't go into how broken every operating system is vis-a-vis\nChinese timezones (but, believe me, it's a mess).\n\n>From here on out, I'm strictly in \"+0800\".\n\n>Yup. Fully 1/4 of our timezone lookup table is consumed by Australian\n>time zones (y'all have multiple names for *everything*!). There are\n>some name conflicts, of course :(\n\nI've become convinced that any project that thinks it is going to keep \ncomprehensive, accurate, non-conflicting, non-obsolete timezone information\nin an application-specific table is woefully misguided.\n\n>btw, the patch also tries to fix the \"GMT+hhmm\" timezone format\n>reported recently as being available on FreeBSD; perhaps someone could\n>test that at the same time.\n\nDoes this patch apply cleanly against 6.5.3?\n\n\t-Michael Robinson\n\n\n",
"msg_date": "Fri, 21 Jan 2000 16:39:00 +0800 (+0800)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] timezone problem?"
},
{
"msg_contents": "> Yes, and even worse, CST also is \"China Standard Time\" in some operating\n> systems. I won't go into how broken every operating system is vis-a-vis\n> Chinese timezones (but, believe me, it's a mess).\n> >From here on out, I'm strictly in \"+0800\".\n> I've become convinced that any project that thinks it is going to keep\n> comprehensive, accurate, non-conflicting, non-obsolete timezone information\n> in an application-specific table is woefully misguided.\n\nYup. And that brings up an issue: I would like to have the *default*\nstyle for date/time output in 7.0 be ISO, rather than the current\n\"traditional Postgres\". I was waiting for a major rev to do this (but\nit probably should have happened before the y2k change of year). It's\na one-liner to update this.\n\nBruce, can you add this to the \"critical items\" for 7.0, barring fatal\nobjections from other developers?\n\n> >btw, the patch also tries to fix the \"GMT+hhmm\" timezone format\n> >reported recently as being available on FreeBSD; perhaps someone could\n> >test that at the same time.\n> Does this patch apply cleanly against 6.5.3?\n\nI'm not certain, but it should since this area of the code does not\nchange very much. If you apply with\n\ncd src/backend/utils/adt\npatch < dt.c.patch\n\nyou should get a dt.c.orig so can revert easily if necessary.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 21 Jan 2000 14:14:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] timezone problem?"
},
{
"msg_contents": "\nOn 21-Jan-00 Thomas Lockhart wrote:\n> \n> In /usr/share/zoneinfo/US, the mysteries of the various states'\n> conventions are revealed:\n> \n> [root@golem zoneinfo]# ls -1 US\n> Alaska\n> Aleutian\n> Arizona\n> Central\n> East-Indiana\n> Eastern\n> Hawaii\n> Indiana-Starke\n> Michigan\n> Mountain\n> Pacific\n> Samoa\n> \n> where, as Vince pointed out, Indiana, Michigan, and Arizona seem to be\n> special cases within the usual three timezones.\n\nMichigan isn't a special case. We're EST5EDT, I never did figure out why\nwe're listed in there. Perhaps we were among the first to fully implement\nDST? I know I remember we were doing it in a test case long before it went\non the ballot in the state (which was controversial in itself).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Fri, 21 Jan 2000 11:23:40 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] timezone problem?"
}
] |
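One crude way to detect the silently-accepted-bogus-TZ behavior seen throughout this thread -- in the spirit of Tom's "keep a list of the timezone names we think are equivalent to GMT" suggestion -- is to set TZ, call tzset(), and be suspicious of a zero offset under a name that isn't a known GMT alias. A rough sketch under those assumptions (heuristic only: a zone that really is GMT-equivalent but not in the alias list would be flagged too, and the POSIX 'timezone' global ignores daylight saving):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static int tz_looks_suspect(const char *tz)
{
    static const char *gmt_aliases[] = { "GMT", "UTC", "UCT", "Universal" };
    size_t i;

    setenv("TZ", tz, 1);
    tzset();                       /* reload zone info from TZ */

    if (timezone != 0)             /* nonzero offset: zone was recognized */
        return 0;
    for (i = 0; i < sizeof(gmt_aliases) / sizeof(gmt_aliases[0]); i++)
        if (strcmp(tz, gmt_aliases[i]) == 0)
            return 0;              /* zero offset, but a legitimate GMT name */
    return 1;                      /* zero offset under a non-GMT name */
}

int main(void)
{
    printf("JST: %s\n", tz_looks_suspect("JST") ? "suspect" : "ok");
    printf("GMT: %s\n", tz_looks_suspect("GMT") ? "suspect" : "ok");
    return 0;
}

On a box like the RH-5.2 one above, where TZ=JST falls back to GMT while echoing "JST", this would report JST as suspect; a system that actually ships a JST zone file would report it as ok.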
[
{
"msg_contents": "\nWell, I finally solved the linking problem\nthat kept me from making perl secure.\n\nAttached is uuencoded tarball to add PL/perl\nto postgresql.\n\nThings I know don't work.\n-- triggers\n-- SPI\n\n\nThe README file has a _VERY_ short tutorial.\n\nI would appreciate it if someone would test this on\na non-ELF system.\n\n\n\nbegin 644 plperl.tar.gz\nM'XL(`&V#AC@``^P\\_7/B1K+Y5?P5$]]=%AQLX\\WNI=[ZLE4LR+9N,7`(.TGM\nMVZ*$-(\"R0M+J`X?D]G]_W3TS0@+ACV<Y55<7U]8:1O/1W=/?W7+HA3SR3KYZ\nMSI]6ZU7K^]>OX7?K]/O7K?QO]?-5Z_O6J]/7K=/7W\\&\\T]:K[U]^Q5X_*U3R\nM)XT3*V+LJ^5B<>>\\^Y[_A_Z$XO[%KV/[6<Z`^VS]_=6K\\OM_V0+>.-W<_TN8\nM=_KR]+M77['6LT\"S]?-??O\\GAY7\\U-@A4SS$CAA^8%;,+!9&@<V=-+(\\YEG^\nM/+7FG,V\"B`V#.)E'W/Q7#Y;B:J.K]\\?&N=%ICXU!7PQJC(T7;LSB8);<6A%G\nM\\-D.PG7DSA<)=]ATS:ZLZ!.[##PO6`8^[L/8-$UP8KRPEMSC,2R)W.D4IL^B\nM8`E0)C8\"\"6O_:?GL1^[:GXYSQW%FI<D\"(%SPB,.D>63Y28P8+=TX=@.?L21@\nM+(UYDQ$P36T9..YLW10[.&Z<P'EI@L\\MW]$\\U^9^S%E2P`2>,!?V=0([77(_\nML1*7X(<=D#R6OV9A&H4!'@,T7+D.()`LK(3Q7^$$UY]O*,'\\((%#@-P1%UM$\nM/+%<GSN:ZVN6YQ&@+DR@8VD;C<`1\"Y%:KF][*9PAUJ]X-`6(EC!,H&1(`9#Q\nM,>L'[#9RDX3#4[A$C@@TF414T@&08%&PMKQD#2AQKL$A$?^<NA'>A$0QF`$P\nMBN+N;_`$Z!H?BQVND*RN392)D>A%\"BZM-9MR5F`(O%7<T8W$%O(J!=H^QQF`\nM,I(8&.:6IDJPD:`)W#'<\"(^1810ID`VVKP\"7^[email protected]*_+@=-=W$&8\\\nM2ETH3I^Y49RP$-D?D.:6O8`AC[-;W!]GK)D5AMYZPXI&G_4'3+\\!L6#F9;O7\nM8^-+G;6OQY>#$8-_7<,<CXQWU^/!R&3O=-8SVN]Z.AL/6+O_L]ABV!Z-?V;L\nM'&;#])'>&3<UHZ\\^F4.]8[1[P*9P5H?D#[YIL+=8W1GT3?U?URB6[1Y,ZK:O\nMVA>ZR5A[9)A&_T(;7(\\9&YPS`HQ=F[KZ9IAB!W-P/OZQ/=*;S!B;K#OH7%_A\nM(2C@340!`&5=?63<P-`-[`S[C/3!>9.PEE10VTO$V67[1@=T]3YK=V\\,4^_B\nMF3AC.#!-XYW1,P!E&#*O.Y=B\"P%W3L;SV[7[7:U`249D03W4Z_VL(94[O;9Q\nMI65$!81&;:\");C:)<+WK+A`#/L,&6G\\`).D95\\88`(.[:-)QFG$U[!EZ=WL#\nM0:\\K?=2YA(&V`+ZIG1OCOFZ:\\N+:XAZ-SG6O+6]F>#T\";/4F`-4%)ND?&?WS\nM$0\"A(WF/Q0UDQ&?P>3@:W,`%`ZTD6=M]=M`VX=$!>]<V#4`%M]HP&.U<9#%!\nM>3A.[#!XUS,NZ\"H9(JK)(^#+5=OHC_5^N]\\!\".$>AL/!:`R$N!YVVV.@FEBO\nM]R]Q!D)L$C-<#;J9^C?E;57R<U*KU4X.6;R.$[YD<9+.9@P&_R)5'OM'G#AN\nM<+QX6QSRW.G.F!7-BV.I#YK1*8[-;#_QMI=&H&\"VQGCRRS+$,80NS.QA\"80'\nM_%=NITD0G<2A>[PXR#VQ@^42M%M\\`B?,YSPJ/@6%[<4GW`OF9>/3U/7`DL3%\nM9[/E?&L7RP;C$I\\LN!5:2WR4>Y:`^CW!_T!#)F6'`-EMT'9\\\"VPKL0\"JDW`^\nM00]A[\\-D'=)*(A$Z%1EQ3LAKT%>NQSH!K&I[/$JD+S$-D@525!(4M3\\M3J(U\nMVA$G8\"]0)POC\"5;%G[_`931/*'H&%PNF/N;+U&>NL%*A9R5@N):PGL?^BP0L\nMQ0R,+*YT$[*4BL>`4@Y<!3E%\\6>/C!J)S^2Z#]P],?6KZSX]1:CD4W/[81+,\nM.9P;-85YL\"V8\"-9N&:+=X%$41&0IF3%CMUQ8-S=I\"E#Q5`$%V*=IL`)#XX(?\nMH+!S$UII!K@RX?B$Z(/ǦUDK-02Z<^8#I#H1_$>COCG.P@+,<H^L_C?51\nMO\\C\\Y\"DBZ]<J=$#1>7-]O\"+R&1`38CVD`/B$7F\"A!5<>*8^K4S#(ID@CD//4\nM3J0G3)P]07>B]GM-LQ=6!&J/'<*H#Y[I64T[!TDS`%X-8$F]9.+ZDUGJV_!D\nMX#J:EAL&/W8)PZZ?%(8][F>C/NBF.+\\I?)\\`VK3EA_/K?F=RU?YITAY=F!^S\nM$]0<W']WCMA8S8'#[IKBQI.(>[LSS)M#@GD&?HYO`]I?X.LV@<[^4$8((QY:\nMZ(:BP,?`ZP[*MO_L_/`YY=&ZR!#:9^2%#R];2*I5`)=\"+`+0[%PLW1@^A>^X\nM?_ZRY3#@#)==-AGN%X>[5I(NU?C*\\E+:A<Z1@\\12N2O:P%SQ'5UXP12\"0@=T\nM?75TCS&.LADB).$GO]O&&.@'=GI6,@&?`6^O.,YHE<T`>4NLB`0.&!9L\"H24\nM^:E#F&3X$`L`7\\'_>'VT+K9F'!;A`YC?O^[ULB67-]DLDH&%%2^R.:!PV4A'\nM]Q--AC$>],$O>P??U>JQ[4TN8<78FH(E.\"S<5'$GH8JKO+;S((+HRP';9WM6\nM)&*SRN\\/!4$)C>N[R03NJ(Z#C;.]4W+45E-KQ.XL?]$+$'EP%.J9X*`Z1E71\nMK>P_#)#4D&/4/A:X)#$7CLT(W[J>?AQA*(POXH>O?M#T+VF/U!=ZK=T5=S\nM)DD:>GP\":RAG4+\\$CVR,0XP>-!E]Z8*PX@`*[5T$6Z`.G(\";&V<$`]9#M74G\nM]VT.E;M)SW,_[CF,X(+4,O1*ZQW/!42`CA:S414TB;F%-`%!Z#>2CC%:\"IC;\nM38::4VBP#Q\\;9R5;?TZ#A#]X;VWOSB5;FT-C@M[XPW=_'.BXO[1/#S]\">_01\nMB$+XB`/N)5&>O2\"RD9PJ3$R]9-=LG\\A:HPF4\\B>)!:O]H,D>Q-]*R56IY;9U\nM3T/3CI@!WUS+<W\\#/\\+SGD7IU4J5'G@+->U)V-4T0*H;,%>A(/RBP/?6\\)_-\nM\\?F3<-#`9M6_W
K:YC1IZ?DD:^>@\\5($\"F.(H6%.,D^5!4?,S=V.`JT&FQ(A_\nM+>PJ8`7NFT:/'2Y<O)+IP)IRTBSB?.^$_;Z\"]J42FIW#Z2(H!$^#D7^0H.-0\nM*9DV7DR12(O5),7H<6<>(6^N1OIYIS\\&,MKU.EB[!BN=M]=5JIY$,C(@9ZI2\nM8IT<%@B6\\]8*%/M2TS!A50%2_>\"61=R.N)6`PJ)\\=?62LL\\#0\\M0YH*#UYQI\nMA\"_/I[,+H+`CUA%4P\"LF&J#+7B#$LROS'?>4E+HP@GPYY8X#JNS#1R#1[^S@\nMH,D.CCC^_TZ_,/HPA*F?+AC*'N8PHC-526$F;'OV!6>V#MB7'-&+\"H6&X`8\"\nM&V^FH*SS\"@E$C;PR?30:C&#/O21]P^P@]1PJIM\"V2-UPBZ@'@@G(]X8HX2X]\nM*<3;BN(R/=G<Y[8VV7=-EA&O*:1([1:E?OEA-:T2\\<IY`YERQ7S\"MN:([Q*R\nMA[`=2-F.5H0K!7F^O!%R5@4RP+>>1YBHC#(5Z<!79'$:AD%$&<Y,@+:4\"#.#\nM)7>L]?&3-0HJA6?0\"OG@#WTY*BO#/T2(O*\"5&[L836,4ITJQY(_*$N6PE\\?Y\nMF%),A1JVF*V6,R\"A!4$1L80,D-18-MD6)`<PLF5(<Y'L)_]LELTEX=JD+H\\K\nM+(Z<'+)/G(<@S/Z15%\\P3+%MK81\\CXQMU;-\"B(OY+Y&+0FL`44)5;)S)9.ZR\nM8H:$I?HKXE&IWU-P=7>BAFJPZ@2^S^T$*Q<HCHJ+*D$#8T%;[`^&$CP1'!B\\\nMGW0&_;[>&>^Q\"*#^+1]UOUT*&BK^\"O!^CTR91);]2>9M1=E=./T@EA!5'L&I\nM1_C[^/B8+O=.;?LPLNSD!;_]MII[[\"(S+K%D`I3'[3$#G9-]$'VE*4#2:`EI\nM\"`MK:V'DHHV-TZD4PVKNOY-&$?>3L3B7,@$_9.ZHE$WT'TI27$KX69;%RO)7\nMP#/@K_Q^!_.\\2,BED2Z#PCMF:YX<$_N0O[P!H$[:HK$OY;1)-&GH/N-J8,9/\nM7!DT+)E%8)3#<,WH^<[&+1E,[-S^T1'>OO!9F5)6S^.Z\"E]]`G<L$@[$T,*[\nMV'*M0.!PG:;9&\\?6`OV]7@9IC$P2@;P@I]TN`FP]XK\\FHLF'9IHWQV+UB+\"*\nMY2!*,[8-42L,C(QN1-,-4-`+8K`Z%1H=86=J<.AA;1=['(X;R$\"..43R8R(4\nML>*S+/2K:7I_K(^PPM2^T<=70Q,^#J_-RZOVZ#WHM<P1Y'!CDWA5!P:]F)B=\nM=J\\]8O^&C_I-NT<?WNOZ$%@4%YC#]D7;Z./NI!U7X]&U7K]8F3=U'D7S5:-!\nM4&G#P1`+)W#>^%V[\\QX_GH]T!876TP&DLVWV)P2%6<\\9?,M%/?\"&_2T&+]Y<\nM#6_RQS69;S4:69Q+.H$*O7`QQ-\\6<U!+8@^4\\E0$JY(GAG5IX'XPQ\\%,/`2Z\nMHSK%SKR$+\\&MLR+AM(%09`0&U]*\\`8HAE@U%BZ\\A6!^\\KXM)B@SWH$\\RS+*K\nM`PX,T@@`C$.(!]RIZ[D)]G\")RK9++(I690G>IN71TI/M+($\\?X>X.TSTACFN\nM@WIFSB%&`3(X2)K9@:+F!O8<Z`IR*>[B,!+WYPI4<TEXDGG9P?`C5R8VC85X\nM;Z(_0`SB^L2C7@;\"3IC&08A(XNIEX*3@R]9A+35)8I1X'\"X;Q^\"G-[$:ZKG^\nM)SD?U4J`]Q)S;T4U<?)9'2'Y6PG#*5^XLIGB1<RFEFB=!`>O0G<4=!4'VE-\"\nM&9S&9)+#?%*O=T`5V\"L4BIV)$J'\"I/RLVC[*JX!<QN/4F/<#FTS.C9X^F0`_\nM@$3\\9-8/-I\"\\>8,G0DQKA0?-;3\";U-K7V*P3@)6L$0^R^<_\"9V3*,O.=MRR6\nML\"N9+DH68#;F\"ZG\\Q<7&28#Y,&DYP,YBUEW6M=$D@!J*8W#Z8LDN(=;7+6R+\nM1*<%ME\\_@]$XK)7B5LQ&$J\"'U,[9+$8FZ*W0=>>LB[+L5.EPY6\\[2/V$4M>[\nMMB9G;&+*86#H5G<IQ\\5<]@\\Z^.BMJ-XS]]MOA<I$52J>Y)HFW(_BH5\"8&KNS\nMZ@=\"F*`+('64)5<(I4FV79-.$\"&&Z0*AUN\\L)8HPK5ZG\"@L5EDT/M,]A`XB*\nM*!R]Q0(10MHX>@N4:CYJ09+8XL`N-=>&P%2DPK6?D(AQ70))8U]8YCU*>4R6\nMF,.J'V)GVF1F.4Y4_R9'Q*S)!<]JR#JNM@6%JN]JK+B0.E_V/\\:F%Y?J7!M@\nMX]7DI:!VG2QEN*H#A$W6:C0$4B$E_&%,X$-)W8VY(:92.3EB8#\"UXM2L2^9^\nM5V7'5Q';?OW#:<$TL[QM9OM]$\\$:;,MH2A9SL:>-FN>5JLBLZ&.\\I$?#0CUN\nMQ8,?Y\"AESOVV(U/+@Y*'A`!A)8Y^[7ET<B&0(HU\\*;Y0`BCB\\]0#QL^0%I%M\nMU3GJ0H;G\"=T+>S,\\HG\\)E:GH=:(`QK=(/U/;T^O3EUD3F1R3/6U9MQS6^USG\nMK+;3,D;GB[XDT6.&C^':5)<3+-XH]=B=_[(,0?/--.SW4BT]U<3U[U\"?DF>K\nM$!398$1'<\"^]`\"`O,V8#UWEZ\\\"[H`CP._X.B4A'PT=N9/Q'-'#C%#M?U;:J#\nMA$TFP]Z&F@=JLI643!8'431%:BWW?7-C``>,PX>=#2K*@?6\"X%,:2A]CA\\C2\nM.T$K4F%I\\VNV6$VHRASOE\"^;;)=2.7HT9/GSB8AK67<O=N2B7IZ!EG=DNW$!\nM97+K+8!G_1L7\"XM,IPQ]3-Z]T'+8&\"AZ(-')VZQR_:,3X*FCS>HM]V_J!1A%\nMPDH1;XFEHL:80<3@M&A-\"@U;B7&*F#=><&K)Q'[<K'9\":6T1BY05&<3ZI]VI\nMEK63(.O:\\.FL,(CDD(/G0;2<R-YT`L\\DAS?_B(B'_VT>8=J\"9D\\RYG`P?-2D\nM_BL^0MWYX57K?_[^,9LAVX/M20S:!'MEJV&AMJK0B1O:D-N1WAB%=GBG55!9\nM,<D/9>YX`^(%*D/&[F\\\\F.W,$!Z4W.+HK10IV$LMVZ-EV+?LE)9*I;>]PZZ\\\nM-JJB;DXU*8XA5Q<#;U+UE9`4V17(8'(KLA?F.NY@/S/Q;7TX&G0&1C=S8=$8\nM#Z:_<#LQG`N>D#7<L0^%V:TF_2,\"DN;+A,*(P=J[3EU\"0.XU>>:BIV6+RL+]\nMS3\\2(WNRP-B1[0GJB5<S)?E$3NQ`><_:1@\"1J?*RV6`7^M@$][,SSD\"LZEZ!\nM=#*?)E]:S'>4DU:C8-<._!6/8IG7@P5BM?0CJ0&N$N4EM=,>'AC_/-0+/%#&\nM`)*(=%T`'^[X6#Z04#P?'P198$L*=IL5-@HWSPJ$28X5%)3$\"HC(9AG
$HFLP\nM*8`+5KL,?X5H#3#]4QE\".?ACX@DDH,ALRE(Z,-.:)PHK^(\\B6^2O;4B)QYKL\nMFPU$Q7=%MO1E\\8T1)!&BMH,_/MRW4'AS6PO(*?^#Q`H\\CZ)<B75`2,O+E<`W\nM'DV%1DOF:S`ZS^E+]09&F0%O8<O.B_]MO<#GVRF@W4TH$:18[7$B?9]$J]=\"\nM,%^1D^HML7Z`7`O@[A*$$DEXH&QG2:P=Z2;Q_O_(]R,$/'?/A>R;>#>$D(CY\nMW?-:XLR]`BNXMR\"Q.SDJD23:?BPS4?<+[>Y2D:7:+[9?*I+=7+612HKJ]?V2\nMRJ.P>>CF.P%3=0R+PC4G7Z\",@^RM0]L2F8ZLOJ$BDDU6.DN0Y>*)ITN^\\K>1\nM@(`6AM/?;$M7'$E=JVI:HIJ&K_5BR,.F'/1F''IN(FV8Y8CZEB0$W(45(S$P\nMBA/Z;;FD=<DM+H7MEL?L,@\"IX*#WUH%/?Y:AW3>-K!\"V$UJH%%>XFE$\"]^\"O\nM;][\\*@9%9^`!RX8!\"^P^2\";8[E1_\\0:66Z#R7S3.#[email protected]@_B`Z&,&)!F3*R-\nML(I:9Y\\_X-VQW]G?8O:%?82U398CH!#(4&F'XG`%_)=KX\\S'C)[email protected](J]`@\nM9\"RW:8/(5:IWKF3C.^WN4*G3)/IO!$$**1%!%<R2BL1<2>!3G49H.YL0?C><\nM5/T#N<3,TV\\&LS*4L'AL4J9)PD`\"XZ[J=>.FH>@LC>.7FC0`%:5NSBE9<P=Y\nM=O-6SQY^FROCIGX())SQQ%[LDE`X#O?0L:52[35MR9<8;W^33ZN\"]?O1BOS-\nM5QGLY^<TLL:\"V)V+/X50+RXZI7ZTELRDJ7.*<[:.E>?DYS3R+Q\"4O\\EYJD3V\nMZ*CDG5`!PCU;M\"CUX,Z]P)^78:*H54$SH&K3'>9+MQ5TO8GL>4[)%<O&DJVR\nM;J^*\\KI=-U8]A)0\"R'41BM99S!7FW`YAE&BI>+<,7R!.E^!USW(%Z*QP@G^I\nM`<O<E#R2J<B:L-Y@7)=I3'\\[\"?^,D.IBSTK=!`E?!M&:^J+`+Z\"%JEL%^R1G\nMKN_&\"W9+K?\"HN&'WQM-;H+/&3+%_H2_SW.@;YN7>MLS\"JHUW7=OM=RM64_=&\nME\\CY=2JV20:A0IMXKW%/U\"E?URQ[C)EQ*EWF.WKDQO3@25*>:\\LMD=)-^7V?\nMF+,GRKD2\\[)FP3M?K*V^OKC=']G8JC\"J/M/G*2QFT65M#SB[;PACN3#?A'J(\nM2[!T?V_Q<+=2J%Z/UZG^<(AFC6,;AZQ'XG=P`NXI*69OFFKR35,8VY0*,'M&\nME0(\\JVN*,\"\"!S_;2*1E%9,KG`R#B@8`-AR([^^J2C51_G6$9`$&2J/AW'&`P\nM^SL.&2E@T$_A9K/E\"/#$3Y<P-3?Q\\!\"'U?)GKI*:^#=EJ)E4M!QG/(BWS%#Y\nM5M\"9K[@&1'6WLQD0+VMWSOI)_ULKP<B33RD$_R?6=Z52`*HA]N>N[V0:XP%A\nMA;(VV2Z%=U[_P*)OEF3YPVNI>4566O+</V$:..MBY357CKVC\\OIGB317(@5/\nM\":*FC5O>RORB/TNJ#[R\"/TNJ?W1)-9>_`T]'O6E*?Y)/_\"VYO/Z7>5,EB14K\nM+7Q5D.*>DO3=/=-0?VW/:X<A]QW=HS^%6[(O7A*,'3Q^76GZ[C$[9%S)#L87\nM$Y)]^\"TJ).(#>)4Q?KS%8!M^BY`(/@0A4PX\"L,&DK_]8^#[H=;&@$Q]4QB!@\nMH&;NKSGUB[1F_]?>DW8W<62;K]:OJ`B#+;\"\\!9)W[``1MB!Z&-DCRTSF`<='\nMEMIV@Z06:LG&A^&_O[M5=?6BS6JQ)-TG,UC=M=>M6W>_U^[@DK\\/>U>-OKJS\nMJ7ZWEZ6)DWJB7NQS(00IK!)VG`.*7?P&81IK\"L:^%A0G+49@(>4KH)6=O@LT\nMJ8(.7?+C3OO:Y(U+`BX`%C/1_)HJ;B7O>7+5..$V6_T\\K./;KNYVZGHYC?CR\nM%-('UQL76BU'`\"AH>IZ6$?*6(Z`XUZ\"Q45=M0AOA=U>1=\\`\\4^SF*[5,BN+/\nMH:\\8O*@)Q'_D)3:T[*KEJ]#[+Z%?PRX/`<J$)C*K>FK*7;::-=N1I+>Y+:JQ\nM:I&Y<\"(.C;:/85A&(-O%ZX_B80U2L-DA=EZ8#`YX6$8]6D*,\"[-TDU8.+ZEI\nM%L[61<DX?H:![!V<'K[\\NHHH51RCC3+!/:S%T\"+#'UU3%68R&?C&L)DV^1@G\nMMN^)W,KL_;$SP,882*2K-67%=\"LH:_O^\";HM.F4)JZ+ELB+)0SPJ<J+BD\\$%\nM4T`P<`\"[UBG00BE%;1A)ZS:U\"B>@=(70'1=B9;ZEBY&T(JPL[([X)B++D=]9\nM<)F2](>\\76%1V%>ST;9$9&<.)FI`C\\4^)PX)L%!A_F691WOYU927,41O[5WR\nM1VOSD@N8W?M>5*,8L*$>NI:)48F`;H0`D748(8B0%K68F66PM@F:YH:F[\"2$\nM,Z15_#O:(0ICQ3!*]ZTQ3*Q_YL*2I+BC,90;/;2C!CQ68*M*JNWZ9,*%2@5*\nM6$-KY\"<-DCC$R>LD<`?D03[)(U;P+QI90GO&%G+*-DUM4H*\\<=_17SCD=5RH\nM*1<E3E[9!R;22.*)FH`N#1P0-]T#*-?0`(>FR]9AFM?6RMYZK?+B1;EV^KQ2\nM*^^?/BL_/ZR5PQ!`=0N35XOFF.<6:`_(ES;>1^EYO5R;IPMJP/0P9:63ZLOJ\nMX;^K>7N9&)>,7\"?^G+Q0,,G3VN&_YYD&5!^W3-C#<;U4IS0E\\_1C&IEWR5C%\nMA,O$>C,\\PZ31%73#<E(KCE&BMW<,EW%\\W9RA/_5I@ULJ#MXFO*05+F[T#GH]\nM#BB!P\\57*!70\\8V`;9?@(PG'X#^GE>IQN39JW3]/N8+<2'Z$7.`V&&*J;J5#\nMUA1'24ZSY`%EG@!]L`+[Y8-R?10NF'8%N)'I5R`_24PY_5K-OP*<-6C.%>!&\nM9`6F/AA,)5GG(GXP+#+X]NLEC2Q^T?6$HOS@S(CHZYVCM\"!L3MNB&8B\"!\"H[\nME1C%+A\"X;<H2Y\\0DUJ3KV:N5`<B5G*#YF:,8^3:2`+8]6^:@I,FA1J)MI&@I\nM\"2-*T5!R!IG>\".!%UFTDS#!?E\\K\\+QV4:QFC20Z!19\"\"*X)W,MMC46DT6O%[\nM3M,%[EL`7W6<)I1P_8X.--5G>Q7MBX6Q&)T6L^Y$IU&DS4&_T?4;9K4Q=F-_\nMD))5I%Y\\%F22%$#]][]JC-4?(S@[X'%R*9:'3F:&)W/#4[##R:)4\"FPR41KZ\nME?AQ_]H=-\"_UBLLR-M&%A47(.]C16=]I?\"`4+-XD.R,G-^P&OH_00@@F[[9@\nMXM+1;EHQW>L!X).%6V!_%<@
8.I0`[@Q#`&-:SISVZ^DT+MRF>/?XZO`EAN8\\\nM?EDYPG\\US8VML4(*3L9W:?-+\\JE!OPF;/!JJH*'#E_F\"`9[`=E7L$J=L!)<G\nM:.9ST%*0I*;`EG:I[?`>>8AJ;U(<1W2G$T-54\\`+!PD&$\\7,Q+H^Y52^1#90\nMI/0TMI6$]>@9=@\"`,RZ^>=+*YH3NO&=9<*[Q+S;<+(34.I\\UE%0/ZY6]<EQ1\nMPRYJ\"0>#KV&F_&^+GG1$*&NLZJ[:MD6AS0\\DB[+&/[)#EWTW]6\"#8Z?%:D6V\nMX*=LA<@2J#N46Y?I#=_$J-)VLZ@VP*ANF&6\"39_\"`BD,5,GDG]MEJL\\8UQK#\nM^2DJLX']]/(O$A#+$-F+TT73JUU^S?V+7ZC8[ENY+[2E+_IM)`Y+C+C$7$P7\nM7U,KW96UR(!DQE3@3?A3X.2<#F4FLOR>YW/4=4NHOR(R_W3.WM>0SR>#]'<B\nM5W>EZ/4E!K5<13\"T3J<FCSB+XV``K\\.6F59,',X8J6,2A-Y(>LH@]^0Y&L:E\nMI;6O7'2]/N8T=88MSQQO,?%1+4!J*`1.0[^(.W+?VD@*S`C;L;*^8DA$.)R/\nMU391<>@7Y':'3IIJ:AT>P1+*#SMGZ9@C\\`8K)BS.N=U5(\\^PI_W@P3MC/*!K\nM<34ZI*?5PU*=,T.7\"R-)/HV_@ZFLW/57\\J&>BD4=73%=L]&@3PHTH/6JDHN8\nMBVNQ:=AG+!6[C^D#.R38E484';+^1;7%&@\\\\@3,;F\\;BMB3NV%XL7$-HZW@M\nM[Z*S&Y.9,H7(?LK;V\\W+<#D:T03Q$%8GQH,H!#%3J!U!3;,W(;$6T@'+X]B1\nM!G8:$\\7C/<Y1U\"Q*EIP-4XIJ8FYS:ZWQ)E<<K<2.8:$#S9R;./H!\\1&NG>Q!\nMR/787S\"\"2=A;4%:5?ZS^_+I4.RA72\\#<_%7>/SVN_%]Y#'QPN^II5`<8!:&V\nMTZ6\".Y,*PF!@>EC6N.8:(6*4#4C4Q*X)AQ2EH0(5BB;IUI192/J3-H1@B[6R\nMNER@IS7E0Z^\">D)ATVB#7`TC><,02Z/CS1=5;0_F>K=%QH98S%#QN62\"9AX:\nM*1=E*Q>3/X&6H(#15/DOD[D(41G9BBXBTW#.[GW^?*9!X%2B!G<7[98'2T04\nM9<>1.-ZM85]G@-\"\"0)*8^\"GD5IG@HAM(#8QX+RV.`X59]I2$;\";T>^9<`('0\nM(-3;]S`O2\\X*X3:XI/`S*.2T,W\"(P!-FY`XHLXC336&!?@B#I,FL37P7C6P`\nM01Z'\\HNEOH,;LD;H9U5+1/+^37?0^\"0AH(MJ!4^7V`IT_(N5O!&,8!^O#P]*\nM]<I!N3\"A;Q%DT4G;>@>]L)3$$G_I1.#\\P38,B-7]=ZE63:BI^QQ=D4K<JN;S\nM4KUTD%\"3WH^MN5]^=O(BH2:]'UNS>EBI[B>N$+S?C6H16>43W<R3[H>N=]U5\nMUB8B'Z`[R6G_%'RID_C-\":,CH\"`EM1+?+VN$39`WI31R0\">@CLZ@&\"2,!&M:\nM07=-3AU,`-6GP!5D0(T8RFFE8--(8Z-%EA7>?C?O_6VOYN'+15W?G+4;0Q70\nM7ZKM`OC`>FHY/,4)0]$1)3^BPW_F*,IQ`NA;9[66!(Q2:9$7?CI9QG7:$?)L\nM'710U&)^-GM;X9_;::H&26_'>#:=FUVC]NW94#OO-F\\88H!I,?K\\ZV\"\\?OU>\nMH^D8@4\"G\\<GM##MVB+=F@Y69%WTT>_+H/+M!ML&>1[-+@0#@7!>]D&NNX,D\"\nM]+^M!;H`&E!.OM#O;32`0/A)Q2*[T?Y@(N_)&G!:H\"'):S%$G1Q3HJ`:S0]^\nMN^%?SK\\`(JA$V+>4M_=IOBB$7B%1W!(>A@</%+]A%0+9U'W6.MJ@QENN85=Y\nMNZ+%#.8EEL=,?VFI'RG5GTDZ'*P@N7D-T,>0.:[Y5^P^[[T$(4T^=)2@)'JP\nM0GE*O@J*UTB:N314T:(5UL!&WV)/&&14E4,9T^4M$+.;84Z-W*?AY_R>R\\%=\nM.*`-\\'-O?GUH\\E^8O#\";-LHG#;/XZXO:ATMS_G\"W]<F\\<<U?;<_KV6%DNAR!\nM>->*8(-WC7X9\"W43]#6&]31#'\"+;\"%5B>-UL:3ZWE']:Y/EUG\\I/UIWCU)[*\nME?T4QTT>OT_S*447HVB;+437-X-+/($N!=9`(J+1QAB=-T@Z&'L73)_VX_*X\nMQM['NM@)Y5PX6K2(.^#UTC&$,C?^[Q,N?`*0Z:]U5YA(2V.%W5B7@<VCN,BC\nM,#!93(JY!1X\\<-63Q[H!'=IXMG$FCE3B%=OGD\\?SX,&[9.50\\LAI3[[%R+5E\nMP@MG`/C,-&4F`;P!C2UL7S\"JR=ALQ32(?AA<A;IMD5,$[X))I@9\"\\Q^FH[[3\nM8U9#\"3-G28<\"R5!?BX8XVV9</!2VAV-;N$Q*)%*B$?Q!/;I@7Y4G*%/V=2;;\nM^%X*+\"5MJ[44:'TF\"$3E0/0&G3YS-.`,\\A&<6_RNK?FX1]N:3VS,3F!A*_7_\nMD/W>B&W93-J&*-D8:?:X?%\"IU@]W(J_9)2/ZEMT4HF_9=)\\&YO>`BAZ<KP(Y\nM`N,AJT$L)-)IAQTB1PR?ZMQB^.6]>L34T11@57BI]N($'7M&+YT6;X75,DP&\nM!_J8>(OYH.;8@1L:(CJTDZJD<2_O3QK=U(.SVIQ[?'N'1V,@;M9EP];F']))\nM[?BPEN*@J+VYAU6OE:K'I;UZY;\":VDY:;<X]OL,C<0^9N')3#LXT./O0;%/D\nM!(PA*-\"@BF2)];A1Y@,E;WXHDNW:'D;JH6Z,`'MQ=]1AMWVC&#G1'>6B!2N[\nM.TBN!$]=.D\"^%.'&@IOCS.ES9B*T;^9L-,`%77CSDR(YS5S*+69PL1`:^NL3\nM0U2$BL,/\"@Y!*64IQ4K`@H:_!U%84Q+]732`\\^L)F9=Z8(&,R!J[^A7R$^T[\nMG/%=&<9?7;A7&&#^/=KU^L*VF@A<5#4(H^KVH1#[NZ(6!7C\\0DC/<JEMYXQI\nM/X.>[[2=YL!)(?2J88&!>PL8&I30!LQJ^`28'8/)B>\\AFUA8#)APE&33Q97?\nM;+XSAB:T,0EX3?HIC-[S9!(H-34;TD_I^6XP;#0LT\"AJ,P12Q<NJXGYC:I(A\nM9:$4]PTJ#G\"-D;KFWV2SL20=CMIT:]F:9<P]PPZ[9H?=T`[GC!!/',W(Q2S$\nMHP=<0L&D.3-U0AR[S9LGE`)ZKEZIGI2G*5LKUT]J52.GF`]V;.#AAK50/J'G\nM9[5RB:>C)0O?JX)8.VHFH2'9[?0D`+>:^WCL,0/R^&HZ`[FG66WP3%0&MG67\nM%&A!);C7C/*@+NZ?.DB!U98I`YMUW;B!:P'6SF<',6A##7VW>V$*844L0)^\"\nM2P8QU`>'DMP&[3412:T'2`P
N./2C;)(T*=8[M$5^.3H\\.O4@,=+;-^N+5G/H\nME9U:T[$TA:9#)\\PS<>;EMU&<(^KBJ%T?A:J[\\MP6!W>&^8=T&DG.\"#BVJ4+\\\nM+]1V+E,OI&LW,*-)F`6^6E(G*1!O81Z6@B4E>=;9?OEL+X]^:XMP)`Q1`]LH\nMKZ=C!_]2?IZPV'XQ,&'L)^B2<ZYY&^A@&]_*%`BP2,P_&WV,\")8>%`E=>!_9\nM4ONC)#^XV_X$5]\\JFH(6U$=-7'\\,)P+5N.RC2;G(B5W9I\\`:`5<P;H\"'XLP0\nMU'.[Z&-\"-8,$**.JZQ*1-L2*_C;=)W@R)E<-'!B#RIP55KM/)M<3K\\E,_?*#\nMJ%_(^B,\"UH6$#P*W29\\T.\"9]LWPA(U\\X#93]VO:CE.QB\"\\34MI^8C:I]S\"]@\nM)QSQ;_R!TU'B,V;,.`.)VO3)P=,/QQ(*NT*2M4E>9]72JW+(;^R(S=:,AY8.\nMP#*'<]D4CF7(!I$CF6_\\R%00^F4IBFF#S+;01_')X%2<<4XYVY/EQC23@Y@F\nM9N_%(=TDZ8T#NCABS^!&%FYGC#MW&-M*$<2IT[NMR<%*606NF1+R5\",JD,2E\nM2(:EP,E2TX]MSBPP/A>\")HZC-!F,54.Y=P+3JJWM_R%##&.G!/2X[W$^Y/F4\nMJ4:;:KE-:8G(..7@$H\\`3:42E'ULKQ&H&2<I\\Q);L[5SDQHTVK?$EDB;-K&)\nM0%N6W`AKOR8U$]5N);9EJZLF-1A61R4V%RB88HW9\"J2(K,12]QCULQ6=Q^H*\nMRD<;)KN8D6YZ1L01*)HHX@:WF-ZQ/K8%#2D1Y\\4GUBG68A(ZGIKB\"17+3NR4\nM)_;;0*+9P$6#8J7K.T3;FR#QRB&!CK;YC4C;C(`?16XA-YIO+%\"=*M`_\\Z2<\nM4\"[,B-JA_47V,2FR_T>C*$@6EH0[^&9R6U2=]UAJJ^VJ&F%9[=<PX^[-(-V<\nMUXK;Y6;>FQ>DQY\"4=R/%EF,$HP8;ZC@_0BY.-A&W#<KE#1P6+<^0Y>)9WL<O\nM(O`(M3&+*7E0=]$6Y3VQ(><E8>>*IVH:$W.<X],%6)K7W=:-TDG7&JAZ[`/(\nM]=.2..`5:K;HY_#]R5RS_LJ<=&PWTT'8_RB!MX[$P_;ROL7^I\";[_CM:NH\\:\nM.)W5;S%PC3=O,^A_@G5^6JJACMLEKTE2#YG0Q.F<?_?[]!+0*\")$T9A<2'!G\nMG0$^'*SX2+!^E9S%-HUI8$7S8,G9BD?;0.KX7=0H_&M<]BG(4\\1;/W&-)ZAN\nM!,\"3\\C6E9&364#:)@,IX,3`+Y%A-K]]WF@/5=KH7@Q3,A<CFBSH-W].\"8-\"?\nMET/YX'=;UV3PRVCS62\".\\CQ.%**&IM;R'!\\OYPY%.J%`F.8,SFA#2R@H93N_\nMZP::2H@$3S+/:/-+\":ATC2I$\";V\";)^6S'.04JK232LL9$C'%]BF1E'-F/U0\nM^8[K^^34JP>*0YSD(V$M<2Z=X&)QK3/+F',IQ4,<HW36%Y'F*N3O!/5SLJ5V\nM&AD=Q2JAP6L0AGL^#.DMA9GH[$<W'X(2039C#RTA)\\*78X_O\",Y`4R1)W$$2\nM?R!DQR(A=7:=[@Q*72MJ%=>#KGQ'YZ.QJ)$4@NK-H_V=4OV;BGDE*0S?L\\+P\nM/?`6`?>OWI/24`.)A1%%^?3^'93?5/?N,>3%]%?P_>>P`HNI3VY/`@NLLFRA\nMH&)*8:@O8TQN.ZX;$_BD_TN$Z220GB*:_SS6\\8O%ZQ2AG,W?\\?YC6886C%JR\nM-3%47-*R4;I@*9/#.4G#[-BFJ<#_9+`B*!B_K^'@D7$-['N)^;ADMAE>\"1&4\nMH)5-^,9@7$AS3S!+!]L%X'V(F0P2B)0T4<Q<\\:@G2H1TR)3@6B-981IT7\\VV\nMD0!&Q?FD47&[X9MX@I([A\"D^O9+%D-$_Q>5G1R=.UG`.UT4:5*`1R[)/>F:T\nM]%T8+4V)6\\9<6&K:NXJOJAENJBDOJB\\B4/I!/;-L]_=T-,0Q+_>>K0U>BVW8\nMFI(4!2GYOJ?@T0%0>=5`UKW!:0S)#&5-G>O[(,9ZI6+K->X<:&YUSE-`AV#Z\nM,S#=$4A->C`F@Y5%`*4`H5ETA\"PZ0A8=(8N.D$5'R*(C9-$1%LS*B*C.&'-Y\nMP[\"EO*]9139GRX*J?]?\\PM\\HDD,@$@@KFQ<6Q6$^@$DQ!L0_((K#=Q.WP4!9\nM%K<AB]OP=X[;$#M'!:1AG4%P\"W\"*,;06TIFQ@L@-J`KE.X37/G4[80PXD!LU\nMU`3K8&,/K)&`9B@H'`+4[GIK*C\"*I>9@Y;4EK$$2@26Q:UOQ>L,!D$'^H+\\;\nMMRPF^0JE[\\.O9Y[7ALH^2LE\"9K.8Y`H&MIL<,L$D:^2>[#?LIA8*U]#O!2/!\nM-]APZ!7;(X?,CM,D/'5`?0(1@>S7C?XV`!%E!Y4`_(R3J28:FQI\"8YWV0^>*\nM3\"M*@3&`#!GV\\&+!*MP+-F!)U@O?ZG6R=2Z1>KKE2$VKO82C3U.TKWI:GX3+\nM*L^KH3FP3>W`,7UZVL7DM4PI::>L$OJ(AI.^N9SK#3^NH]@V+9N3V%1\\EONF\nM-!G<^DLXP*<7S@!_K@HJH:S`AMB`2X-10.$;9\\]D=/*MTF<2`IZ4:=+]7A-G\nM\"G2NS3:#0*=ML/GL&2ZY'C7U?63)M!G)!A^HM1CC:(IV/:\"3AEU.ID%O^=.?\nMG0ZF>:P.0U2%NG;A=[,Q])W`%@+3\"&$=A(-FO^%?HC,#=':C6N3FX'SJH5TH\nMO,$6T311^5['(;\\'4]'O`'$&)\"*.?#T-V\"<HY).->AS8$`L29=,,+\";D6PYG\nM]>0*5EI/+&%(#=SQ44D\\ES#O;W\\ME+9S:02&9;=S2[89NHF$FH`_Y&I;4V8$\nM^NR)#LJ\\+VB.0DP4J.$3W.Y)3<OMAB3M':?;<L\\70-EBSI*6$(Q:]4?4+48F\nM:P'TBDTN(0+@)\\XI8Q!Y.09I3<A1*`RE\"Z=YCU_?SXV;P\\SD*S2H\\#$TY<RD\nM*KZRVV`*^)8$K$&&_M7I=L?K#X#Y[CK7QZ][5ZOYSWF$#@+4C/)):2H9Y9-1\nM/AGE,R?X)5(^E%\"VE8`XB/,U9LYL=RB^\\&A^R$3(4H72TJ),'@D)^),HH#/A\nMD#T@G5K.>4:P+,%%T6P,>E?GJ[KG/('MXR<$OFOH\\J1!-4R=)),LBA*I?I[0\nM-*V^U;8F=^`_72NH]\"6/GPVT<ZYIV/=3*'1\\=?3:E.PV\"G!CUTYL$9^^*[_D\nM?DI^^!QOU,JE_5?E$67F?3:W-C=_??CPI\\W-S:W?'FW:_V[2WP\\?_;3YV^;#\nMK4>;6X^VMZ#\\UB^/MGY2FPL:3^@9HGQ5J9\\ZEY=CRTWZ_H,^3W#[U:O&!^<<\nM\\/-ZKYU[TH$?N5S3
Z]U8:EBD9]ON6;^!N56!#;HFC(6.-SDR%?3\\`2HK/[8I\nM%R2BKF:C\"]@&L-I@735\\'RC,'/GI]!J#2\\1,&T._OX'Q(]L;O0NHN-'Q6L.V\nMXV\\P2*[[7BZW!W!9+ZOG)U6RQ-`$.)K#G7)*GCX0WJR$.%9>K_$1B)'2L5J9\nMW/B*.BA57YR47I35RM[*KND+#M!QO;ROCFJ'>^7]DUKIP\"K(M5=R?Y:J^P?E\nM6M)X<E!Z[_#5406_KQP=;!QA#6B_6%0:R_MN!XELYU,#_XW-$G`;D-BD>'JX\nMAN+EA\\$<\\9?\"&>;DA\"^?OME\\!P02_+OU+K<2'RQT+<80NMU?UAX6>$#`<32]\nM8=\\'[';C#6G+>K!72D=YT&M2>G905H-K%,KZ:A5N*>3`+AQ`MV?ZS\\)NTCRD\nMSJK\\&YH'UN*I+`^(YO$OW7/`57IB`PSW]CG?R'_!Z<FOL_R7W8FSC/7*JF;^\nM%=X*K\\M0S!<D$BGKB@*XB#=10-Y!)2IS[?4_H\"AUP)X7YG*ENK$U.&NTL(75\nMI!WTX(I?K9=?'0&6?[(QZ/0VL#0<0\\3X)/!5^%7E7W@#X)-^?MO-F]79FK`*\nMIE]X-0KWXR/XWT(`Z>.8\"?C_E^U??V7\\__\"W[<U?MZ'\\]J-'VQG^_QH/BL3V\nM;[J-`Z_1<H#UQM][7O?<O>\"_RY\\&)P.W[>_LE#MG:+B4NY.[HSJ`)#!&\"U&=\nM+1TB`SYU;M1RNP6_T6J%_T`0;%YZG=ZJ?\"E((SHM)<401NL5L?LY[)&=+V-L\nM:=/C=X\\5)OO]#*3;,GP6YG>%*R#P+V'9EHLB!.!%T')`K?Y1J>X5E#;D*+94\nM?OETHS$<>!NZC3Q_7L*:&'8F]IWH/?1<L<R;L3&@XMRNTZ(NN8U@H,&B[NRT\nMVJ=0D([V:KYX@,6!^-/-BSX&_B.$\\*KTLOP<;@]$\"OI8(D(0C*`_J]]_/SVN\nME^J5O=/=W)UB6@_MC.X6_H1'_[+\"<A'1X%]2S`./.#BJ5SJI'[XH5\\NU$MZ@\nM-E%QA[>\\[@\"B1/I\":?HA1#KX<!<U'1\\(C2O<^./:WGZE]EBMKV_P?WZ_F7.[\nMS?805OCM\\BI_+P3(ZZ+MG37:*/K#M/\"2-KB#B4\\[/?C>QZS+-/26V\\KM[3U6\nMRPSJGYO-+SD+(DUY'?N%\"1>L>=V`*P30_/.#THOC!X_5'Y_?-)L,YN^^0!,\\\nM/'UYP.W\"*LP6ZN+<)L!Z`S\\&]8,1-%OM\\W;CPJ>1P\"4FOE,T6VY$5NKX7P=!\nM_6(%%N*@\\NSH7[@4_%.OBU3&]M\"%`1FG]<M(55U6J+:<W7G0GY!^KN/G#O9+\nM^_M8^R#<<;OW,0<?H_-JM_2TU-NW<&P%.>#?10(C!HS]@\\-G_WN,%S)V+*N%\nMYAA%7#$#98JV&6IP^<<\"BC\"2_8/CD^?/*W\\5<KE*%<\\'?*376*Z@X%[\\$P9[\nM6OZK7BN=PE\\T1CZL/()>>^BCS-_KH^^UB]8>=W+N.1QQ5?[K\"!O7#3^P6MY9\nM]Q[##RD!G\"C+G'-W@<+<47?7O=P2?-X#!%3TH-8?\\+_?L;8L5('`.#*R`A>`\nM92[D>&PL5R9PYC,'VP$?Y!SNR\"JLMZ$W^#+PO+8*>C44;U!L0J_V`'$J4`'G\nMTHRWWJ2_S%1^E_'NM1V@).'/)OZQ`]QH1Q7/L:PL84'=7_?@?SAB_-8_5^O0\nMMJ]+&@24:SD]!?^#9=W)Y33\"&T?09$_V9$_V9$_V9$_V9$_V9$_V9$_V9$_V\nM9$_V9$_V9$_V9$_V9$_V9$_V9$_V9$_V9$_V9$_V9$_V9$_V9$_V_(V?_P>F\n':7>L`$`!````\n`\nend\n\n\n\n\n-- \n\nMark Hollomon\[email protected]\n",
"msg_date": "Wed, 19 Jan 2000 22:53:54 -0500",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "PL/Perl -- At LAST!!"
},
{
"msg_contents": "Applied.\n\n\n> \n> Well, I finally solved the linking problem\n> that kept me from making perl secure.\n> \n> Attached is uuencoded tarball to add PL/perl\n> to postgresql.\n> \n> Things I know don't work.\n> -- triggers\n> -- SPI\n> \n> \n> The README file has a _VERY_ short tutorial.\n> \n> I would appreciate it if someone would test this on\n> a non-ELF system.\n> \n> \n> \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jan 2000 00:08:48 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PL/Perl -- At LAST!!"
}
] |
[
{
"msg_contents": "Agreed,\n\nEither way, it's broken now and needs fixing.\n\nKeith.\n\n>Actually, I see no reason why the superuser's Postgres ID number\n>shouldn't default to 0. If there's code in there to reject that,\n>isn't it doing the wrong thing?\n>\n>The postmaster and backend can and should refuse to run with an\n>effective Unix userid of 0 (root), but that doesn't mean that\n>a Postgres ID of 0 is insecure, does it?\n>\n>\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 20 Jan 2000 08:39:03 +0000 (GMT)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] initdb problems on Solaris "
}
] |
[
{
"msg_contents": "\n> The postmaster and backend can and should refuse to run with an\n> effective Unix userid of 0 (root), but that doesn't mean that\n> a Postgres ID of 0 is insecure, does it?\n\nThe usual setup has the Postgres ID same as the unix id, thus\n0 would be reserved for root.\n\nI think this setup has the advatage, that we could someday issue\nsetuid() calls for \"dba and untrusted stored procedures\", which would \nimho be a very handy feature.\n\nAndreas\n",
"msg_date": "Thu, 20 Jan 2000 11:04:08 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] initdb problems on Solaris "
},
{
"msg_contents": "On Thu, 20 Jan 2000, Zeugswetter Andreas SB wrote:\n\n> \n> > The postmaster and backend can and should refuse to run with an\n> > effective Unix userid of 0 (root), but that doesn't mean that\n> > a Postgres ID of 0 is insecure, does it?\n> \n> The usual setup has the Postgres ID same as the unix id, thus\n> 0 would be reserved for root.\n> \n> I think this setup has the advatage, that we could someday issue\n> setuid() calls for \"dba and untrusted stored procedures\", which would \n> imho be a very handy feature.\n\nThat would require you to set up a Unix user for every Postgres user,\nwhich is certainly not necassary in the general case.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 20 Jan 2000 12:14:45 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] initdb problems on Solaris "
}
] |
[
{
"msg_contents": "Hi,\n\nI just found that when giving permissions on tables to different users,\nthat Update-permission is the same as Delete-permission, is this\nintended behaviour, or is it a bug?\n\nhoping for an answer,\n\nJoost Roeleveld\n\n",
"msg_date": "Thu, 20 Jan 2000 13:36:27 +0100",
"msg_from": "\"J. Roeleveld\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Permissions, update acts same as delete, feature or bug?"
}
] |
[
{
"msg_contents": "Wouldn't it be fair if a notice was generated if you attempt to create\nand/or reference a name that's longer than NAMEDATALEN. Like\n\n=> create table some_really_much_too_long_name_here ( ... );\nNOTICE: \"some_really_much_too_long_name_here\" will be truncated to\n\"some_really_much_too_long_name_\" [ <possible reference to documentation\nand/or source code to change this> ]\n\nBetter than finding out after the fact, ISTM. I could (try to) take care\nof this.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 20 Jan 2000 14:26:44 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "A notice for too long names"
},
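As an illustration of the notice Peter proposes above, here is a minimal stand-alone sketch. The helper name and the exact NOTICE wording are invented for this example; a real change would live in the backend's scanner and use elog(NOTICE, ...) rather than fprintf:

    #include <stdio.h>
    #include <string.h>

    #define NAMEDATALEN 32		/* Postgres' compile-time identifier limit */

    /*
     * Hypothetical helper, NOT the actual backend code: truncate an
     * identifier in place and tell the user about it.
     */
    static void
    truncate_identifier(char *name)
    {
    	if (strlen(name) >= NAMEDATALEN)
    	{
    		fprintf(stderr, "NOTICE:  \"%s\" will be truncated to \"%.*s\"\n",
    				name, NAMEDATALEN - 1, name);
    		name[NAMEDATALEN - 1] = '\0';
    	}
    }

    int
    main(void)
    {
    	char	ident[] = "some_really_much_too_long_name_here";

    	truncate_identifier(ident);
    	printf("using identifier: %s\n", ident);
    	return 0;
    }

Run on the 35-character name from Peter's example, this truncates it to the 31-character "some_really_much_too_long_name_", matching the behaviour described in the message.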
{
"msg_contents": "> Wouldn't it be fair if a notice was generated if you attempt to create\n> and/or reference a name that's longer than NAMEDATALEN. Like\n> \n> => create table some_really_much_too_long_name_here ( ... );\n> NOTICE: \"some_really_much_too_long_name_here\" will be truncated to\n> \"some_really_much_too_long_name_\" [ <possible reference to documentation\n> and/or source code to change this> ]\n> Better than finding out after the fact, ISTM. I could (try to) take care\n> of this.\n\nWould it be better to throw an elog(ERROR)? The only place I know of\nwhere names are silently truncated is in generating primary and unique\nindices, where the names are based on the underlying table name plus\nsome automatically generated discriminator.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 20 Jan 2000 15:28:57 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A notice for too long names"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Wouldn't it be fair if a notice was generated if you attempt to create\n>> and/or reference a name that's longer than NAMEDATALEN.\n\n> Would it be better to throw an elog(ERROR)?\n\nDefinitely NOT. Rejecting long identifiers went out with Dartmouth Basic.\n\nThe only reason to worry at all is if someone uses two identifiers that\nare the same for the first NAMEDATALEN characters --- but if so, he'll\nget an error about duplicate column or duplicate table name. I see no\nreal reason to change the current behavior...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jan 2000 10:55:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A notice for too long names "
},
{
"msg_contents": "On 2000-01-20, Tom Lane mentioned:\n\n> Thomas Lockhart <[email protected]> writes:\n> >> Wouldn't it be fair if a notice was generated if you attempt to create\n> >> and/or reference a name that's longer than NAMEDATALEN.\n> \n> > Would it be better to throw an elog(ERROR)?\n> \n> Definitely NOT. Rejecting long identifiers went out with Dartmouth Basic.\n\nBut it came back with compilers issuing warnings (hence notice) about\nthem. Silently truncating input went out with GNU, so I guess it's more of\na current trend ... :)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Thu, 20 Jan 2000 22:54:42 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A notice for too long names "
},
{
"msg_contents": "At 10:54 PM 1/20/00 +0100, Peter Eisentraut wrote:\n>On 2000-01-20, Tom Lane mentioned:\n>\n>> Thomas Lockhart <[email protected]> writes:\n>> >> Wouldn't it be fair if a notice was generated if you attempt to create\n>> >> and/or reference a name that's longer than NAMEDATALEN.\n>> \n>> > Would it be better to throw an elog(ERROR)?\n>> \n>> Definitely NOT. Rejecting long identifiers went out with Dartmouth Basic.\n>\n>But it came back with compilers issuing warnings (hence notice) about\n>them. Silently truncating input went out with GNU,\n\nGNU C was hardly the first compiler to correctly handle identifiers\nof virtually any length. I doubt if it even makes the list of the first\n100...\n\n(I get tired of GNU-worship)\n\nHow deeply embedded is the limitation on identifier length? Ideal\nwould be to remove any artificial limitation whatsoever.\n\nThe current situation isn't bad, since name clashes are rare - it's\nnot as though PostgreSQL is only keeping the first six characters\nlike Fortran 66! Still, all such limitations are fundamentally\nirksome.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 20 Jan 2000 15:01:03 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A notice for too long names "
}
] |
[
{
"msg_contents": "\n> > I think this setup has the advatage, that we could someday issue\n> > setuid() calls for \"dba and untrusted stored procedures\", \n> which would \n> > imho be a very handy feature.\n> \n> That would require you to set up a Unix user for every Postgres user,\n> which is certainly not necassary in the general case.\n\nYes, we would somehow need to distinguish between\nOS users and DB only users.\n\nAndreas\n",
"msg_date": "Thu, 20 Jan 2000 14:39:59 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: [HACKERS] initdb problems on Solaris "
}
] |
[
{
"msg_contents": " List of databases\n Database | Owner \n------------+----------\n Newnham | prlw1\n\n% psql Newnham\npsql: connection to database \"newnham\" failed - FATAL 1: Database \"newnham\" does not exist in the system catalog.\n\ntemplate1=> \\c 'Newnham'\nFATAL 1: Database \"newnham\" does not exist in the system catalog.\nPrevious connection kept\n\n\nHow can I connect to a database with a variable case name?\n\nCheers,\n\nPatrick\n",
"msg_date": "Thu, 20 Jan 2000 15:12:57 +0000",
"msg_from": "Patrick Welche <[email protected]>",
"msg_from_op": true,
"msg_subject": "Variable case database names"
},
{
"msg_contents": "There is some code in libpq which converts all database names to\nlower-case, unless it's double quoted. That seems a little ill-conceived\nto me, since you'd actually have to pass it something like\nPGconnectdb(\"dbname=\\\"Newnham\\\"\");\n\nIf anything, this would make it inconvenient it psql, because you'd have\nto write\n\\c '\"Newnham\"'\nsince\n\\c \"Newnham\"\nis interpreted differently.\n\nDoes anyone have an explanation for this? Why not leave the name as is?\n\n\n\nOn 2000-01-20, Patrick Welche mentioned:\n\n> List of databases\n> Database | Owner \n> ------------+----------\n> Newnham | prlw1\n> \n> % psql Newnham\n> psql: connection to database \"newnham\" failed - FATAL 1: Database \"newnham\" does not exist in the system catalog.\n> \n> template1=> \\c 'Newnham'\n> FATAL 1: Database \"newnham\" does not exist in the system catalog.\n> Previous connection kept\n> \n> \n> How can I connect to a database with a variable case name?\n> \n> Cheers,\n> \n> Patrick\n> \n> ************\n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Thu, 20 Jan 2000 22:55:11 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Variable case database names"
},
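A minimal libpq sketch of the quoting workaround Peter describes above. The database name and the expectation that the escaped inner quotes suppress the lower-casing come straight from the message; the program itself is illustrative and untested. Build with -lpq:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
    	/* the escaped inner quotes keep libpq from lower-casing the name */
    	PGconn	   *conn = PQconnectdb("dbname=\"Newnham\"");

    	if (PQstatus(conn) != CONNECTION_OK)
    		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
    	else
    		printf("connected\n");
    	PQfinish(conn);
    	return 0;
    }

With the libpq change Peter reports further down the thread, the plain form PQconnectdb("dbname=Newnham") would be expected to work as-is.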
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> There is some code in libpq which converts all database names to\n> lower-case, unless it's double quoted. That seems a little ill-conceived\n> to me, since you'd actually have to pass it something like\n> PGconnectdb(\"dbname=\\\"Newnham\\\"\");\n> \n> If anything, this would make it inconvenient it psql, because you'd have\n> to write\n> \\c '\"Newnham\"'\n> since\n> \\c \"Newnham\"\n> is interpreted differently.\n> \n> Does anyone have an explanation for this? Why not leave the name as is?\n\nWe do the same thing with queries, right? We force identifiers to lower\ncase unless quoted.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jan 2000 17:04:45 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Variable case database names"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> There is some code in libpq which converts all database names to\n> lower-case, unless it's double quoted. That seems a little ill-conceived\n> to me,\n\nI think you are probably right. The backend might try to lowercase the\nname when it gets it, but it seems like libpq shouldn't be doing so\n(any more than it's responsible for downcasing identifiers used in\nSQL commands).\n\nIf the backend *does* lowercase the DB name used in a connect command,\nis there any way to use a mixed-case DB name? I'm not sure there is...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jan 2000 17:51:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Variable case database names "
},
{
"msg_contents": "On Thu, Jan 20, 2000 at 05:04:45PM -0500, Bruce Momjian wrote:\n> [Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> > There is some code in libpq which converts all database names to\n> > lower-case, unless it's double quoted. That seems a little ill-conceived\n> > to me, since you'd actually have to pass it something like\n> > PGconnectdb(\"dbname=\\\"Newnham\\\"\");\n> > \n> > If anything, this would make it inconvenient it psql, because you'd have\n> > to write\n> > \\c '\"Newnham\"'\n> > since\n> > \\c \"Newnham\"\n> > is interpreted differently.\n> > \n> > Does anyone have an explanation for this? Why not leave the name as is?\n> \n> We do the same thing with queries, right? We force identifiers to lower\n> case unless quoted.\n\nThe point was: the database name was quoted. I didn't think to quote it\na second time. (single quoting for the create was sufficient, and the export\nfrom access didn't mind about the case)\n\nCheers,\n\nPatrick\n",
"msg_date": "Fri, 21 Jan 2000 19:06:59 +0000",
"msg_from": "Patrick Welche <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Variable case database names"
},
{
"msg_contents": "On 2000-01-20, Tom Lane mentioned:\n\n> Peter Eisentraut <[email protected]> writes:\n> > There is some code in libpq which converts all database names to\n> > lower-case, unless it's double quoted. That seems a little ill-conceived\n> > to me,\n> \n> I think you are probably right. The backend might try to lowercase the\n> name when it gets it, but it seems like libpq shouldn't be doing so\n> (any more than it's responsible for downcasing identifiers used in\n> SQL commands).\n> \n> If the backend *does* lowercase the DB name used in a connect command,\n> is there any way to use a mixed-case DB name? I'm not sure there is...\n\nThe backend doesn't lower case it. I removed that part in libpq and now\nit works fine.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sun, 23 Jan 2000 02:30:07 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Variable case database names "
}
] |
[
{
"msg_contents": "> \n> Hi There,\n> \n> I am not sure if you are the right person to contact to suggest an improvement.\n> \n> But he goes....\n> \n> Can I suggest an improvement to Psql by adding to '/z' which produces the\n> security attributes to the tables etc....\n> \n> If it can be possible could it also take a parameter of an object ie table? So\n> rather then hunting and using the scrolling functions of the telnet session to\n> view the list it would be nice to do '/z table1' .\n> \n> Keep up the good work.\n\nOK, sent to hackers. Look what I get in the current sources:\n\n\ttest=> \\z lkasdf\n\tERROR: attribute 'rename' not found\n\nThis makes no sense to me.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jan 2000 10:17:04 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres improvement"
},
{
"msg_contents": "> > \n> > Hi There,\n> > \n> > I am not sure if you are the right person to contact to suggest an improvement.\n> > \n> > But he goes....\n> > \n> > Can I suggest an improvement to Psql by adding to '/z' which produces the\n> > security attributes to the tables etc....\n> > \n> > If it can be possible could it also take a parameter of an object ie table? So\n> > rather then hunting and using the scrolling functions of the telnet session to\n> > view the list it would be nice to do '/z table1' .\n> > \n> > Keep up the good work.\n> \n> OK, sent to hackers. Look what I get in the current sources:\n> \n> \ttest=> \\z lkasdf\n> \tERROR: attribute 'rename' not found\n> \n> This makes no sense to me.\n\nOK, fixed and sources updated:\n\n\ttest=> \\z test\n\tAccess permissions for database \"test\"\n\t Relation | Access permissions \n\t----------+--------------------\n\t test | {\"=arwR\"}\n\t(1 row)\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jan 2000 10:29:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Postgres improvement"
}
] |
[
{
"msg_contents": "I know this is off topic but...\n\nHow do the programmers here get paid?\n\n1. You do this in your spare time and dont get paid.\n2. Your day job encourages you to spend some of your time developing the db that your company uses.\n3. There is an organization that accepts money and distributes it to the hackers.\n4. You are all independently wealthy and do this for the fun of it.\n5. You are still living at home with mom and expect one day to get a real job from this.\n\nSeriously\nMy friend and I were talking about the open software movement and how great it is and he keeps on asking how people like us get paid.\nBasically my friend and me are the type that like to stay home and not visit customers and just solve problems and write code. Probably a lot like all of you.\n\nthanks\nbob\n\n--\[email protected]\[email protected]\nhttp://people.ne.mediaone.net/rsdavis\n\n\n",
"msg_date": "Thu, 20 Jan 2000 11:22:43 -0500",
"msg_from": "Robert Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "off topic"
},
{
"msg_contents": "> How do the programmers here get paid?\n> 1. You do this in your spare time and dont get paid.\n\nBingo!\n\n> 2. Your day job encourages you to spend some of your time\n> developing the db that your company uses.\n\nI know a few developers are in this category.\n\n> 4. You are all independently wealthy and do this for the fun of it.\n\nI could see this as something to do in retirement; a bunch of cranky\n70 year olds debating the next development for Postgres isn't a pretty\nthought however...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 20 Jan 2000 16:57:32 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] off topic"
},
{
"msg_contents": "> > How do the programmers here get paid?\n> > 1. You do this in your spare time and dont get paid.\n> \n> Bingo!\n\nI will give him a double-bingo!\n\n> \n> > 2. Your day job encourages you to spend some of your time\n> > developing the db that your company uses.\n> \n> I know a few developers are in this category.\n> \n> > 4. You are all independently wealthy and do this for the fun of it.\n> \n> I could see this as something to do in retirement; a bunch of cranky\n> 70 year olds debating the next development for Postgres isn't a pretty\n> thought however...\n\nWe are cranky enough. \"You want to add feature X. We don't think that\nis a good idea.\" :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jan 2000 12:21:07 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] off topic"
},
{
"msg_contents": "On Thu, Jan 20, 2000 at 12:21:07PM -0500, Bruce Momjian wrote:\n> > > How do the programmers here get paid?\n> > > 1. You do this in your spare time and dont get paid.\n> > \n> > Bingo!\n> \n> I will give him a double-bingo!\n\nMake this a triple.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Thu, 20 Jan 2000 19:00:14 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] off topic"
},
{
"msg_contents": "On Thu, 20 Jan 2000, Robert Davis wrote:\n\n> I know this is off topic but...\n> \n> How do the programmers here get paid?\n> \n> 1. You do this in your spare time and dont get paid.\n> 2. Your day job encourages you to spend some of your time developing the db that your company uses.\n> 3. There is an organization that accepts money and distributes it to the hackers.\n\nthis one does exist (http://www.pgsql.com) ... part of all the revenues\nthat are made by \"PostgreSQL, Inc\" is allocated to be fed back into the\nproject... the kitty isn't very big ye, but it is there, and its\ngrowing...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 20 Jan 2000 14:14:09 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] off topic"
},
{
"msg_contents": "> On Thu, Jan 20, 2000 at 12:21:07PM -0500, Bruce Momjian wrote:\n> > > > How do the programmers here get paid?\n> > > > 1. You do this in your spare time and dont get paid.\n> > > \n> > > Bingo!\n> > \n> > I will give him a double-bingo!\n> \n> Make this a triple.\n\nI think we are all insane, or very sane. Not sure which.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jan 2000 13:56:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] off topic"
},
{
"msg_contents": "Michael Meskes <[email protected]> writes:\n> On Thu, Jan 20, 2000 at 12:21:07PM -0500, Bruce Momjian wrote:\n>>>>>> How do the programmers here get paid?\n>>>>>> 1. You do this in your spare time and dont get paid.\n>>>> \n>>>> Bingo!\n>> \n>> I will give him a double-bingo!\n\n> Make this a triple.\n\nQuadruple-bingo.\n\nActually I could be put into category (2) --- my company uses Postgres,\nso I have some excuse for working on Postgres during working hours ---\nbut I spend way more personal than business time on it. It's just an\ninteresting project, that's all.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jan 2000 17:46:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] off topic "
},
{
"msg_contents": "On Thu, Jan 20, 2000 at 01:56:49PM -0500, Bruce Momjian wrote:\n> I think we are all insane, or very sane. Not sure which.\n\nAsk my wife and she surely says the former. :-)\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Fri, 21 Jan 2000 08:35:07 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] off topic"
},
{
"msg_contents": "> On Thu, Jan 20, 2000 at 01:56:49PM -0500, Bruce Momjian wrote:\n> > I think we are all insane, or very sane. Not sure which.\n> \n> Ask my wife and she surely says the former. :-)\n\nThe book helped my wife see some value in this. \"Gee, I don't know\nanyone who wrote a book\", she said.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Jan 2000 10:03:07 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] off topic"
},
{
"msg_contents": "On Fri, Jan 21, 2000 at 10:03:07AM -0500, Bruce Momjian wrote:\n> The book helped my wife see some value in this. \"Gee, I don't know\n> anyone who wrote a book\", she said.\n\nGood idea, but you have to find time for writing a book too. Also I already\ndid this with my Ph.D. thesis so I'm afraid it won't count. :-)\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Sat, 22 Jan 2000 14:54:09 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] off topic"
},
{
"msg_contents": "> On Fri, Jan 21, 2000 at 10:03:07AM -0500, Bruce Momjian wrote:\n> > The book helped my wife see some value in this. \"Gee, I don't know\n> > anyone who wrote a book\", she said.\n> \n> Good idea, but you have to find time for writing a book too. Also I already\n> did this with my Ph.D. thesis so I'm afraid it won't count. :-)\n\nYes, you lose even more time with the book, which is added to the\nPostgreSQL time.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 22 Jan 2000 12:36:16 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] off topic"
}
] |
[
{
"msg_contents": "Wer Hast den Key von WinZip?\n\n\n",
"msg_date": "Thu, 20 Jan 2000 19:03:58 +0100",
"msg_from": "\"Hopf\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Code von WINZIP"
}
] |
[
{
"msg_contents": "I don't seem to be ableto use pg_dump where a table uses foreign key\nconstraints. I seem to remember there being a message about pg_dump\nproblems but I can't remember whether it was relevant to this.\n\n$ pg_dump bray\ngetTables(): relation 'area': cannot find function with oid 1655 for trigger \nRI_ConstraintTrigger_20235\n\n[area is a table with RI constaints]\n\nThis seems to be because pg_dump.c getFuncs() excludes system functions\nand thus causes getTables() to fail when it finds the RI triggers.\n\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Neither is there salvation in any other; for there is \n none other name under heaven given among men, whereby \n we must be saved.\" Acts 4:12 \n\n\n",
"msg_date": "Thu, 20 Jan 2000 18:44:53 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump and foreign keys"
}
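The report above says pg_dump's lookup fails hard when a trigger references a function that was excluded from the dumped-function list. A purely illustrative sketch of a lookup that degrades gracefully instead; the types and names here are invented for the example and are not pg_dump's actual structures:

    #include <stdio.h>

    typedef unsigned int Oid;	/* stand-in for the Postgres Oid type */

    typedef struct
    {
    	Oid			oid;
    	const char *name;
    } FuncInfo;

    /* return NULL for "not found" and let the caller decide what to do */
    static const char *
    findFuncByOid(FuncInfo *funcs, int nfuncs, Oid oid)
    {
    	int			i;

    	for (i = 0; i < nfuncs; i++)
    		if (funcs[i].oid == oid)
    			return funcs[i].name;
    	return NULL;
    }

    int
    main(void)
    {
    	FuncInfo	userFuncs[] = {{18000, "my_trigger_fn"}};
    	Oid			ri_func = 1655;	/* system function, absent from the list */

    	if (findFuncByOid(userFuncs, 1, ri_func) == NULL)
    		fprintf(stderr, "NOTICE: trigger function %u looks like a built-in;"
    				" skipping it rather than aborting the dump\n", ri_func);
    	return 0;
    }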
] |
[
{
"msg_contents": "Can anyone recommend a good web-based frontend for a postgres database? \nI want to port some databases to postgres, and I would love to make all \nmy forms html. I am primarily looking for a html UI for the database, \nand a creation/management tool second.\n\n\nthanks\n\n-- \nPray to God, But Hammer Away\n - Spanish Proverb\n\nClyde Jones\njjj.trbpvgvrf.pbz/pylqr-wbarf\[email protected]\n",
"msg_date": "Thu, 20 Jan 2000 21:29:31 GMT",
"msg_from": "clyde jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to for postgres as website backend"
}
] |
[
{
"msg_contents": "I took a stab at a new install file. It's shrunk to a fifth of its\nformer size and only has 10 steps. Please review it, if you care.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n",
"msg_date": "Thu, 20 Jan 2000 22:57:57 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Install file"
}
] |
[
{
"msg_contents": "I have taken Peter's new install.sgml and generated a new INSTALL text\nfile to match it. I used sgmltools to convert to HTML and netscape to\ndump the html as text.\n\nFile attached.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nChapter 0. Installation\n\nTable of Contents\nBefore you start\nInstallation Procedure\n\n Installation instructions for PostgreSQL 7.0.0.\n\nCommands were tested on RedHat Linux version 5.2 using the bash shell.\nExcept where noted, they will probably work on most systems. Commands like\nps and tar may vary wildly between platforms on what options you should use.\nUse common sense before typing in these commands.\n\nIf you haven't gotten the PostgreSQL distribution, get it from\nftp.postgresql.org, then unpack it:\n\n$ gunzip postgresql-7.0.0.tar.gz\n$ tar -xf postgresql-7.0.0.tar\n$ mv postgresql-7.0.0 /usr/src\n\nAgain, these commands might differ on your system.\n\nBefore you start\n\nBuilding PostgreSQL requires GNU make. It will not work with other make\nprograms. On GNU/Linux systems GNU make is the default tool, on other\nsystems you may find that GNU make is installed under the name \"gmake\". We\nwill use that name from now on to indicate GNU make, no matter what name it\nhas on your system. To test for GNU make enter\n\n$ gmake --version\n\nIf you need to get GNU make, you can find it at ftp://ftp.gnu.org.\n\nUp to date information on supported platforms is at\nhttp://www.postgresql.org/docs/admin/ports.htm. In general, most\nUnix-compatible platforms with modern libraries should be able to run\nPostgreSQL. In the doc subdirectory of the distribution are several\nplatform-specific FAQ and README documents you might wish to consult if you\nare having trouble.\n\nAlthough the minimum required memory for running PostgreSQL can be as little\nas 8MB, there are noticable speed improvements when expanding memory up to\n96MB or beyond. The rule is you can never have too much memory.\n\nCheck that you have sufficient disk space. You will need about 30 Mbytes for\nthe source tree during compilation and about 5 Mbytes for the installation\ndirectory. An empty database takes about 1 Mbyte, otherwise they take about\nfive times the amount of space that a flat text file with the same data\nwould take. If you run the regression tests you will temporarily need an\nextra 20MB.\n\nTo check for disk space, use\n\n$ df -k\n\nConsidering today's prices for hard disks, getting a large and fast hard\ndisk should probably be in your plans before putting a database into\nproduction use.\n\n---------------------------------------------------------------------------\n\nInstallation Procedure\n\nPostgreSQL Installation\n\nFor a fresh install or upgrading from previous releases of PostgreSQL:\n\n 1. Create the PostgreSQL superuser account. This is the user the server\n will run as. For production use you should create a separate,\n unprivileged account (postgres is commonly used). If you do not have\n root access or just want to play around, your own user account is\n enough.\n\n Running PostgreSQL as root, bin, or any other account with special\n access rights is a security risk and therefore won't be allowed.\n\n You need not do the building and installation itself under this account\n (although you can). You will be told when you need to login as the\n database superuser.\n\n 2. 
If you are not upgrading an existing system then skip to .\n\n You now need to back up your existing database. To dump your fairly\n recent post-6.0 database installation, type\n\n $ pg_dumpall > db.out\n\n If you wish to preserve object id's (oids), then use the -o option when\n running pg_dumpall. However, unless you have a special reason for doing\n this (such as using OIDs as keys in tables), don't do it.\n\n Make sure to use the pg_dumpall command from the version you are\n currently running. However, do not use the pg_dumpall script from 6.0\n or everything will be owned by the PostgreSQL super user. In that case\n you should grab pg_dumpall from a later 6.x.x release. 7.0's pg_dumpall\n will not work on older databases. If you are upgrading from a version\n prior to Postgres95 v1.09 then you must back up your database, install\n Postgres95 v1.09, restore your database, then back it up again.\n\n Caution\n You must make sure that your database is not updated in the middle of your\n backup. If necessary, bring down postmaster, edit the permissions in file\n /usr/local/pgsql/data/pg_hba.conf to allow only you on, then bring\n postmaster back up.\n 3. If you are upgrading an existing system then kill the database server\n now. Type\n\n $ ps ax | grep postmaster\n\n This should list the process numbers for a number of processes, similar\n to this:\n\n 263 ? SW 0:00 (postmaster)\n 777 p1 S 0:00 grep postmaster\n\n Type the following line, with pid replaced by the process id for\n process postmaster (263 in the above case). (Do not use the id for the\n process \"grep postmaster\".)\n\n $ kill pid\n\n Tip: On systems which have PostgreSQL started at boot time,\n there is probably a startup file which will accomplish the\n same thing. For example, on a Redhat Linux system one might\n find that\n\n $ /etc/rc.d/init.d/postgres.init stop\n\n works.\n\n Also move the old directories out of the way. Type the following:\n\n $ mv /usr/local/pgsql /usr/local/pgsql.old\n\n or replace your particular paths.\n\n 4. Configure the source code for your system. It is this step at which you\n can specify your actual installation path for the build process and\n make choices about what gets installed. Change into the src\n subdirectory and type:\n\n $ ./configure [ options ]\n\n For a complete list of options, type:\n\n ./configure --help\n\n Some of the more commonly used ones are:\n\n --prefix=BASEDIR\n\n Selects a different base directory for the installation of\n PostgreSQL. The default is /usr/local/pgsql.\n\n --enable-locale\n\n If you want to use locales.\n\n --enable-multibyte\n\n Allows the use of multibyte character encodings. This is primarily\n for languages like Japanese, Korean, or Chinese.\n\n --with-perl\n\n Builds the Perl interface. Please note that the Perl interface\n will be installed into the usual place for Perl modules (typically\n under /usr/lib/perl), so you must have root access to use this\n option successfully.\n\n --with-odbc\n\n Builds the ODBC driver package.\n\n --with-tcl\n\n Builds interface libraries and programs requiring Tcl/Tk,\n including libpgtcl, pgtclsh, and pgtksh.\n\n 5. Compile the program. Type\n\n $ gmake\n\n The compilation process can take anywhere from 10 minutes to an hour.\n Your milage will most certainly vary.\n\n The last line displayed will hopefully be\n\n All of PostgreSQL is successfully made. Ready to install.\n\n Remember, \"gmake\" may be called \"make\" on your system.\n\n 6. Install the program. Type\n\n $ gmake install\n\n 7. 
Tell your system how to find the new shared libraries. How to do this\n varies between platforms. What tends to work everywhere is to set the\n environment variable LD_LIBRARY_PATH:\n\n $ LD_LIBRARY_PATH=/usr/local/pgsql/lib\n $ export LD_LIBRARY_PATH\n\n You might want to put this into a shell startup file such as\n ~/.bash_profile.\n\n On some systems the following is the preferred method, but you must\n have root access. Edit file /etc/ld.so.conf to add a line\n\n /usr/local/pgsql/lib\n\n Then run command /sbin/ldconfig.\n\n If in doubt, refer to the manual pages of your system. If you later on\n get a message like\n\n ./psql: error in loading shared libraries\n libpq.so.2.1: cannot open shared object file: No such file or directory\n\n then the above was necessary. Simply do this step then.\n\n 8. Create the database installation. To do this you must log in to your\n PostgreSQL superuser account. It will not work as root.\n\n $ mkdir /usr/local/pgsql/data\n $ chown postgres /usr/local/pgsql/data\n $ su - postgres\n $ /usr/local/pgsql/initdb -D /usr/local/pgsql/data\n\n The -D option specifies the location where the data will be stored. You\n can use any path you want, it does not have to be under the\n installation directory. Just make sure that the superuser account can\n write to it (or create it) before starting initdb.\n\n 9. The previous step should have told you how to start up the database\n server. Do so now.\n\n $ /usr/local/pgsql/initdb/postmaster -D /usr/local/pgsql/data\n\n This will start the server in the foreground. To make it detach to the\n background, use the -S.\n\n 10. If you are upgrading from an existing installation, dump your data back\n in:\n\n $ /usr/local/pgsql/bin/psql < db.out\n\n You also might want to copy over the old pg_hba.conf file and any other\n files you might have had set up for authentication, such as password\n files.\n\nThis concludes the installation proper. To make your life more productive\nand enjoyable you should look at the following optional steps and\nsuggestions.\n\n * Life will be more convenient if you set up some enviroment variables.\n First of all you probably want to include /usr/local/pgsql/bin (or\n equivalent) into your PATH. To do this, add the following to your shell\n startup file, such as ~/.bash_profile (or /etc/profile, if you want it\n to affect every user):\n\n PATH=$PATH:/usr/local/pgsql/bin\n\n Furthermore, if you set PGDATA in the environment of the PostgreSQL\n superuser, you can omit the -D for postmaster and initdb.\n\n * You probably want to install the man and HTML documentation. Type\n\n $ cd /usr/src/pgsql/postgresql-7.0.0/doc\n $ gmake install\n\n This will install files under /usr/local/pgsql/doc.\n\n The documentation is also available in Postscript format. 
If you have a\n Postscript printer, or have your machine already set up to accept\n Postscript files using a print filter, then to print the User's Guide\n simply type\n\n $ cd /usr/local/pgsql/doc\n $ gunzip -c user.ps.tz | lpr\n\n Here is how you might do it if you have Ghostscript on your system and\n are writing to a laserjet printer.\n\n $ alias gshp='gs -sDEVICE=laserjet -r300 -dNOPAUSE'\n $ export GS_LIB=/usr/share/ghostscript:/usr/share/ghostscript/fonts\n $ gunzip user.ps.gz\n $ gshp -sOUTPUTFILE=user.hp user.ps\n $ gzip user.ps\n $ lpr -l -s -r manpage.hp\n\n If in doubt, confer your manuals or your local expert.\n\n The Adminstrator's Guide should probably be your first reading if you\n are completely new to PostgreSQL, as it contains information about how\n to set up database users and authentication.\n\n * Usually, you will want to modify your computer so that it will\n automatically start the database server whenever it boots. This is not\n required; the PostgreSQL server can be run successfully from\n non-privileged accounts without root intervention.\n\n Different systems have different conventions for starting up daemons at\n boot time, so you are advised to familiarize yourself with them. Most\n systems have a file /etc/rc.local or /etc/rc.d/rc.local which is almost\n certainly no bad place to put such a command. Whatever you do,\n postmaster must be run by the PostgreSQL superuser (postgres) and not\n by root or any other user. Therefore you probably always want to form\n your command lines along the lines of su -c '...' postgres.\n\n It might be advisable to keep a log of the server output. To start the\n server that way try:\n\n nohup su -c 'postmaster -D /usr/local/pgsql/data > server.log 2>&1' postgres &\n\n Here are a few more operating system specific suggestions.\n\n o Edit file rc.local on NetBSD or file rc2.d on SPARC Solaris 2.5.1\n to contain the following single line:\n\n su postgres -c \"/usr/local/pgsql/bin/postmaster -S -D /usr/local/pgsql/data\"\n\n o In FreeBSD 2.2-RELEASE edit /usr/local/etc/rc.d/pgsql.sh to\n contain the following lines and make it chmod 755 and chown\n root:bin.\n\n #!/bin/sh\n [ -x /usr/local/pgsql/bin/postmaster ] && {\n su -l pgsql -c 'exec /usr/local/pgsql/bin/postmaster\n -D/usr/local/pgsql/data\n -S -o -F > /usr/local/pgsql/errlog' &\n echo -n ' pgsql'\n }\n\n You may put the line breaks as shown above. The shell is smart\n enough to keep parsing beyond end-of-line if there is an\n expression unfinished. The exec saves one layer of shell under the\n postmaster process so the parent is init.\n\n o In RedHat Linux add a file /etc/rc.d/init.d/postgres.init which is\n based on the example in contrib/linux/. Then make a softlink to\n this file from /etc/rc.d/rc5.d/S98postgres.init.\n\n * Run the regression tests. The regression tests are a test suite to\n verify that PostgreSQL runs on your machine in the way the developers\n expected it to. You should definitely do this before putting a server\n into production use. The file\n /usr/src/pgsql/postgresql-7.0.0/src/test/regress/README has detailed\n instructions for running and interpreting the regression tests.",
"msg_date": "Thu, 20 Jan 2000 17:19:53 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "New INSTALL text file"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> I have taken Peter's new install.sgml and generated a new INSTALL text\n> file to match it. I used sgmltools to convert to HTML and netscape to\n> dump the html as text.\n> \n> File attached.\n> \n\nThere are only two things I would want to see different. The\nfirst is the example of running configure. Even though it is\nbeyond silly to think that people will interpret step 4\nliterally, I guarantee you some will, and will try to enter: \n\n\"./configure [ options ]\"\n\nYou see it occasionally when users complain that they can't\ncreate 'C' functions per the documentation:\n\nCREATE FUNCTION add_one(int4) RETURNS int4\nAS 'PGROOT/tutorial/funcs.so' LANGUAGE 'c';\n\nPeople actually enter \"PGROOT\". On the vpnd list just yesterday\nsomeone was complaining that the IP address of 324.240.123.122\n(or something like that) was illegal and they couldn't get their\nvpnd software working - despite the fact the documentation\nexplicitly notes that the IP address is an example and is\nintentionally incorrect to prevent people for messing with\nothers' networks. People \"key what they see\" I'm afraid. So an\nexample configure command would be nice. \n\nThe only other thing is if somewhere there is a mention of the -o\n-F options for the backend, suggesting its possible use. Since\nfsync() is on by default, many people who don't dig into the docs\nand are just trying PostgreSQL to see if its a plausible solution\nmay dismiss it out-of-hand for performance reasons. Even though I\nknow robustness is the #1 criteria for a RDBMS, I personally\nbelieve fsync() should be *off* by default, but I know I'm in the\nminority.\n\nJust some thoughts,\n\nMike Mascari\n",
"msg_date": "Thu, 20 Jan 2000 17:57:51 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New INSTALL text file"
},
{
"msg_contents": "On 2000-01-20, Mike Mascari mentioned:\n\n> There are only two things I would want to see different. The\n> first is the example of running configure. Even though it is\n> beyond silly to think that people will interpret step 4\n> literally, I guarantee you some will, and will try to enter: \n> \n> \"./configure [ options ]\"\n\nGood point.\n\n> The only other thing is if somewhere there is a mention of the -o\n> -F options for the backend, suggesting its possible use. Since\n> fsync() is on by default, many people who don't dig into the docs\n> and are just trying PostgreSQL to see if its a plausible solution\n> may dismiss it out-of-hand for performance reasons. Even though I\n> know robustness is the #1 criteria for a RDBMS, I personally\n> believe fsync() should be *off* by default, but I know I'm in the\n> minority.\n\nThis sounds like solicitation to bait-and-switch. As I understand it, the\nofficial (at least as official as it gets around here) recommendation is\nto leave fsynch on. Otherwise this would have to be discussed. I\nfurthermore believe that read only commands (SELECT) will no longer do an\nfsynch in 7.0., so the incentive to turn it off is not so big anymore. ???\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sun, 23 Jan 2000 02:30:00 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New INSTALL text file"
}
] |
[
{
"msg_contents": "\n 2. If you are not upgrading an existing system then skip to .\n\nskip to 4.\n\n\n 6. Install the program. Type\n\n $ gmake install\n\nThe installer needs to have write access to the install directory.\n\n\n 8. Create the database installation. To do this you must log in to your\n PostgreSQL superuser account. It will not work as root.\n\n $ mkdir /usr/local/pgsql/data\n $ chown postgres /usr/local/pgsql/data\n\nI thought the data directory was created either in the gmake install step\nor initdb. Either way the chown might be better as:\n\n# chown -R postgres:postgres /usr/local/pgsql\n\nthat should be the same on most systems with perhaps the exception of the\ncolon. Anyway it'll make sure that all the files have the correct owners.\n\n\n9. The previous step should have told you how to start up the database\n server. Do so now.\n\n $ /usr/local/pgsql/initdb/postmaster -D /usr/local/pgsql/data\n\nShouldn't that be /usr/local/pgsql/bin/postmaster ??\n ^^^\n\n\nOutside of that, it looks great!!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Thu, 20 Jan 2000 18:16:41 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "New install doc"
},
{
"msg_contents": "On 2000-01-20, Vince Vielhaber mentioned:\n\n> \n> 2. If you are not upgrading an existing system then skip to .\n> \n> skip to 4.\n> \n> \n> 6. Install the program. Type\n> \n> $ gmake install\n> \n> The installer needs to have write access to the install directory.\n\nPresumably the installer would only try to install to a directory that he\nhas access to. With the new installation instructions you will end up\ninstalling the program files as root, which is a) the normal thing to do,\nb) less confusing, and c) more secure, since an astray trigger function\ncan't fry your installation proper. You can of course install it under\nwhatever user you want, but then you ought to be experienced enough to\nfigure it out yourself.\n\nI remember my first installation and the juggling with su in and su out\nand, darn, now I installed this as the wrong user, chown -R, etc. only to\nfind out that this was completely unnecessary. Consider Apache (my\nrolemodel :), you don't install that as 'nobody' either.\n\n\n> 8. Create the database installation. To do this you must log in to your\n> PostgreSQL superuser account. It will not work as root.\n> \n> $ mkdir /usr/local/pgsql/data\n> $ chown postgres /usr/local/pgsql/data\n> \n> I thought the data directory was created either in the gmake install step\n> or initdb. Either way the chown might be better as:\n\nNo, it never was. Not sure if initdb used to create the data directory\nitself, at least now it does try to do so if it doesn't exist. But if you\nare going to put your data into a root-owned dir (such as\n/usr/local/pgsql/data) you must create it first and change the ownership.\n\n> # chown -R postgres:postgres /usr/local/pgsql\n> \n> that should be the same on most systems with perhaps the exception of the\n> colon. Anyway it'll make sure that all the files have the correct owners.\n\nNo. See above.\n\n> \n> \n> 9. The previous step should have told you how to start up the database\n> server. Do so now.\n> \n> $ /usr/local/pgsql/initdb/postmaster -D /usr/local/pgsql/data\n> \n> Shouldn't that be /usr/local/pgsql/bin/postmaster ??\n\nOops.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sun, 23 Jan 2000 02:29:54 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New install doc"
},
{
"msg_contents": "\nOn 23-Jan-00 Peter Eisentraut wrote:\n> On 2000-01-20, Vince Vielhaber mentioned:\n>> I thought the data directory was created either in the gmake install step\n>> or initdb. Either way the chown might be better as:\n> \n> No, it never was. Not sure if initdb used to create the data directory\n> itself, at least now it does try to do so if it doesn't exist. But if you\n> are going to put your data into a root-owned dir (such as\n> /usr/local/pgsql/data) you must create it first and change the ownership.\n> \n>> # chown -R postgres:postgres /usr/local/pgsql\n>> \n>> that should be the same on most systems with perhaps the exception of the\n>> colon. Anyway it'll make sure that all the files have the correct owners.\n> \n> No. See above.\n\nThe reason I mentioned it is on one install where I used:\n\ninstall -u postgres -g postgres\n\nall of the directories were created ug root:wheel and the files in them\nwere ug postgres:postgres so I had to go back one and do the above chown.\nPostgreSQL couldn't create databases.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Sat, 22 Jan 2000 20:36:52 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New install doc"
},
{
"msg_contents": "On 2000-01-22, Vince Vielhaber mentioned:\n\n> >> # chown -R postgres:postgres /usr/local/pgsql\n> >> \n> >> that should be the same on most systems with perhaps the exception of the\n> >> colon. Anyway it'll make sure that all the files have the correct owners.\n\n> The reason I mentioned it is on one install where I used:\n> \n> install -u postgres -g postgres\n> \n> all of the directories were created ug root:wheel and the files in them\n> were ug postgres:postgres so I had to go back one and do the above chown.\n> PostgreSQL couldn't create databases.\n\nHa! If you try to subvert the make install logic then you're on your\nown. The directories are created by a script called mkinstalldirs (which\nis used by the rest of the world as well) and that doesn't know about\npermissions. If you would like this to work, then how about patching up\nmkinstalldirs? You'll find it in the src dir, it's a really simple thingy.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Tue, 25 Jan 2000 00:50:02 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New install doc"
}
] |
[
{
"msg_contents": "I have been spending some time measuring actual runtimes for various\nsequential-scan and index-scan query plans, and have learned that the\ncurrent Postgres optimizer's cost estimation equations are not very\nclose to reality at all.\n\nPresently we estimate the cost of a sequential scan as\n\n\tNblocks + CPU_PAGE_WEIGHT * Ntuples\n\n--- that is, the unit of cost is the time to read one disk page,\nand we have a \"fudge factor\" that relates CPU time per tuple to\ndisk time per page. (The default CPU_PAGE_WEIGHT is 0.033, which\nis probably too high for modern hardware --- 0.01 seems like it\nmight be a better default, at least for simple queries.) OK,\nit's a simplistic model, but not too unreasonable so far.\n\nThe cost of an index scan is measured in these same terms as\n\n\tNblocks + CPU_PAGE_WEIGHT * Ntuples +\n\t CPU_INDEX_PAGE_WEIGHT * Nindextuples\n\nHere Ntuples is the number of tuples selected by the index qual\ncondition (typically, it's less than the total table size used in\nsequential-scan estimation). CPU_INDEX_PAGE_WEIGHT essentially\nestimates the cost of scanning an index tuple; by default it's 0.017 or\nhalf CPU_PAGE_WEIGHT. Nblocks is estimated as the index size plus an\nappropriate fraction of the main table size.\n\nThere are two big problems with this:\n\n1. Since main-table tuples are visited in index order, we'll be hopping\naround from page to page in the table. The current cost estimation\nmethod essentially assumes that the buffer cache plus OS disk cache will\nbe 100% efficient --- we will never have to read the same page of the\nmain table twice in a scan, due to having discarded it between\nreferences. This of course is unreasonably optimistic. Worst case\nis that we'd fetch a main-table page for each selected tuple, but in\nmost cases that'd be unreasonably pessimistic.\n\n2. The cost of a disk page fetch is estimated at 1.0 unit for both\nsequential and index scans. In reality, sequential access is *much*\ncheaper than the quasi-random accesses performed by an index scan.\nThis is partly a matter of physical disk seeks, and partly a matter\nof benefitting (or not) from any read-ahead logic the OS may employ.\n\nAs best I can measure on my hardware, the cost of a nonsequential\ndisk read should be estimated at 4 to 5 times the cost of a sequential\none --- I'm getting numbers like 2.2 msec per disk page for sequential\nscans, and as much as 11 msec per page for index scans. I don't\nknow, however, if this ratio is similar enough on other platforms\nto be useful for cost estimating. We could make it a parameter like\nwe do for CPU_PAGE_WEIGHT ... but you know and I know that no one\never bothers to adjust those numbers in the field ...\n\nThe other effect that needs to be modeled, and currently is not, is the\n\"hit rate\" of buffer cache. Presumably, this is 100% for tables smaller\nthan the cache and drops off as the table size increases --- but I have\nno particular thoughts on the form of the dependency. Does anyone have\nideas here? The problem is complicated by the fact that we don't really\nknow how big the cache is; we know the number of buffers Postgres has,\nbut we have no idea how big a disk cache the kernel is keeping. As near\nas I can tell, finding a hit in the kernel disk cache is not a lot more\nexpensive than having the page sitting in Postgres' own buffers ---\ncertainly it's much much cheaper than a disk read.\n\nBTW, if you want to do some measurements of your own, try turning on\nPGOPTIONS=\"-d 2 -te\". 
This will dump a lot of interesting numbers\ninto the postmaster log, if your platform supports getrusage().\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jan 2000 19:31:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Some notes on optimizer cost estimates"
},
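For concreteness, here are the two cost equations from Tom's message transcribed into a small stand-alone program. This is illustrative only: the constants are the defaults quoted above, and the example inputs are made up.

    #include <stdio.h>

    #define CPU_PAGE_WEIGHT        0.033	/* default fudge factors */
    #define CPU_INDEX_PAGE_WEIGHT  0.017

    /* sequential scan: Nblocks + CPU_PAGE_WEIGHT * Ntuples */
    static double
    cost_seqscan(double nblocks, double ntuples)
    {
    	return nblocks + CPU_PAGE_WEIGHT * ntuples;
    }

    /* index scan adds CPU_INDEX_PAGE_WEIGHT * Nindextuples */
    static double
    cost_indexscan(double nblocks, double ntuples, double nindextuples)
    {
    	return nblocks + CPU_PAGE_WEIGHT * ntuples
    		+ CPU_INDEX_PAGE_WEIGHT * nindextuples;
    }

    int
    main(void)
    {
    	/*
    	 * Hypothetical 10000-page table of 1000000 tuples; an index qual
    	 * selecting 1% of it, assumed to touch 2000 heap pages plus 50
    	 * index pages.
    	 */
    	printf("seqscan cost:   %.0f\n", cost_seqscan(10000, 1000000));
    	printf("indexscan cost: %.0f\n", cost_indexscan(2050, 10000, 10000));
    	return 0;
    }

With these made-up inputs the index scan comes out at roughly 2550 units against 43000 for the sequential scan, which illustrates the over-optimism described in the message: the model charges the same price for random and sequential page fetches and assumes no page is ever read twice.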
{
"msg_contents": "Tom Lane wrote:\n\n> As best I can measure on my hardware, the cost of a nonsequential\n> disk read should be estimated at 4 to 5 times the cost of a sequential\n> one --- I'm getting numbers like 2.2 msec per disk page for sequential\n> scans, and as much as 11 msec per page for index scans. I don't\n> know, however, if this ratio is similar enough on other platforms\n> to be useful for cost estimating. We could make it a parameter like\n> we do for CPU_PAGE_WEIGHT ... but you know and I know that no one\n> ever bothers to adjust those numbers in the field ...\n\nWould it be possible to place those parameters as run-time\nsettings and then write a utility that can ship with the\ndistribution to determine those values? Kind of a self-tuning\nutility? \n\nMike Mascari\n",
"msg_date": "Thu, 20 Jan 2000 19:49:55 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates"
},
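A rough sketch of the kind of probe such a self-tuning utility might run: time sequential versus random 8K page reads on a large file and report the ratio. The file path and sizes are placeholders, and the file must be much larger than the OS disk cache for the numbers to mean anything.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/time.h>

    #define PAGESZ 8192
    #define NPAGES 10000

    static double
    now(void)
    {
    	struct timeval tv;

    	gettimeofday(&tv, NULL);
    	return tv.tv_sec + tv.tv_usec / 1e6;	/* wall-clock seconds */
    }

    /* read NPAGES pages either in order or at random offsets */
    static double
    time_reads(int fd, int sequential)
    {
    	char		buf[PAGESZ];
    	double		start = now();
    	int			i;

    	for (i = 0; i < NPAGES; i++)
    	{
    		off_t		page = sequential ? i : (rand() % NPAGES);

    		lseek(fd, page * (off_t) PAGESZ, SEEK_SET);
    		read(fd, buf, PAGESZ);
    	}
    	return now() - start;
    }

    int
    main(void)
    {
    	int			fd = open("/tmp/bigfile", O_RDONLY);	/* placeholder */
    	double		seq, rnd;

    	if (fd < 0)
    		return 1;
    	seq = time_reads(fd, 1);
    	rnd = time_reads(fd, 0);
    	printf("sequential %.2fs, random %.2fs, ratio %.1f\n",
    		   seq, rnd, rnd / seq);
    	close(fd);
    	return 0;
    }

In practice the utility would also have to defeat caching between the two passes (or read disjoint regions), which is part of what makes writing a trustworthy probe tricky, per Tom's point about needing tables larger than the kernel's disk cache.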
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n>> to be useful for cost estimating. We could make it a parameter like\n>> we do for CPU_PAGE_WEIGHT ... but you know and I know that no one\n>> ever bothers to adjust those numbers in the field ...\n\n> Would it be possible to place those parameters as run-time\n> settings and then write a utility that can ship with the\n> distribution to determine those values? Kind of a self-tuning\n> utility? \n\nMaybe. I'm not sure the average user would want to run it ---\nto get believable numbers, you have to be using a table considerably\nbigger than the kernel's disk cache, which means it takes a while.\n(I've been testing with a gigabyte-sized table ... one of the index\nscan runs took thirty hours :-( ... fortunately I have this machine\nto myself, or there would have been some howls about the load.)\n\nBut it'd be nice to have comparable numbers for different platforms.\nWhat I was really hoping was that someone on the list would be aware\nof existing research I could borrow from.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jan 2000 20:11:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> I have been spending some time measuring actual runtimes for various\n> sequential-scan and index-scan query plans, and have learned that the\n> current Postgres optimizer's cost estimation equations are not very\n> close to reality at all.\n> \n\nThanks for your good analysis.\n\nI also have said current cost estimation for index-scan is too low.\nBut I have had no concrete numerical values.\n\nI've wondered why we cound't analyze database without vacuum.\nWe couldn't run vacuum light-heartedly because it acquires an\nexclusive lock for the target table. \nIn addition,vacuum error occurs with analyze option in most\ncases AFAIK. \n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 21 Jan 2000 10:44:20 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Some notes on optimizer cost estimates"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I've wondered why we cound't analyze database without vacuum.\n> We couldn't run vacuum light-heartedly because it acquires an\n> exclusive lock for the target table. \n\nThere is probably no real good reason, except backwards compatibility,\nwhy the ANALYZE function (obtaining pg_statistic data) is part of\nVACUUM at all --- it could just as easily be a separate command that\nwould only use read access on the database. Bruce is thinking about\nrestructuring VACUUM, so maybe now is a good time to think about\nsplitting out the ANALYZE code too.\n\n> In addition,vacuum error occurs with analyze option in most\n> cases AFAIK. \n\nStill, with current sources? What's the error message? I fixed\na problem with pg_statistic tuples getting too big...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jan 2000 21:30:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates "
},
{
"msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I've wondered why we cound't analyze database without vacuum.\n> > We couldn't run vacuum light-heartedly because it acquires an\n> > exclusive lock for the target table. \n> \n> There is probably no real good reason, except backwards compatibility,\n> why the ANALYZE function (obtaining pg_statistic data) is part of\n> VACUUM at all --- it could just as easily be a separate command that\n> would only use read access on the database. Bruce is thinking about\n> restructuring VACUUM, so maybe now is a good time to think about\n> splitting out the ANALYZE code too.\n\nI put it in vacuum because at the time I didn't know how to do such\nthings and vacuum already scanned the table. I just linked on the the\nscan. Seemed like a good idea at the time.\n\nIt is nice that ANALYZE is done during vacuum. I can't imagine why you\nwould want to do an analyze without adding a vacuum to it. I guess\nthat's why I made them the same command.\n\nIf I made them separate commands, both would have to scan the table,\nthough the analyze could do it without the exclusive lock, which would\nbe good.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jan 2000 21:48:57 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> > In addition,vacuum error occurs with analyze option in most\n> > cases AFAIK. \n> \n> Still, with current sources? What's the error message? I fixed\n> a problem with pg_statistic tuples getting too big...\n>\n\nSorry,my English is poor.\nWhen I saw vacuum bug reports,there were 'analyze' option mostly.\n'Analyze' is harmful for safety of vacuum. \n\nI'm thinking that 'analyze' is not preferable especially for index\nrecreation in vacuum. \n\nRegards.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Fri, 21 Jan 2000 11:59:20 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Some notes on optimizer cost estimates "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> It is nice that ANALYZE is done during vacuum. I can't imagine why you\n> would want to do an analyze without adding a vacuum to it. I guess\n> that's why I made them the same command.\n\nWell, the main bad thing about ANALYZE being part of VACUUM is that\nit adds to the length of time that VACUUM is holding an exclusive\nlock on the table. I think it'd make more sense for it to be a\nseparate command.\n\nI have also been thinking about how to make ANALYZE produce a more\nreliable estimate of the most common value. The three-element list\nthat it keeps now is a good low-cost hack, but it really doesn't\nproduce a trustworthy answer unless the MCV is pretty darn C (since\nit will never pick up on the MCV at all until there are at least\ntwo occurrences in three adjacent tuples). The only idea I've come\nup with is to use a larger list, which would be slower and take\nmore memory. I think that'd be OK in a separate command, but I\nhesitate to do it inside VACUUM --- VACUUM has its own considerable\nmemory requirements, and there's still the issue of not holding down\nan exclusive lock longer than you have to.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jan 2000 22:10:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates "
},
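One way to grow the three-element list without keeping per-value counts for the whole table is a k-slot frequent-items scheme (Misra-Gries style). The sketch below is only an illustration of the idea, not backend code, and a second pass over a sample would still be needed to turn the surviving candidates into trustworthy frequencies:

#define MCV_SLOTS 16            /* three today; larger, but still O(1) memory */

typedef struct
{
    int         value;          /* an int4 attribute, for illustration */
    int         count;
} McvSlot;

/*
 * Feed one value into the candidate list.  Any value occurring in more
 * than 1/(MCV_SLOTS+1) of the inputs is guaranteed to hold a slot at
 * the end, regardless of how its duplicates are ordered on disk.
 */
static void
mcv_add(McvSlot *slots, int value)
{
    int         i;

    for (i = 0; i < MCV_SLOTS; i++)
        if (slots[i].count > 0 && slots[i].value == value)
        {
            slots[i].count++;
            return;
        }
    for (i = 0; i < MCV_SLOTS; i++)
        if (slots[i].count == 0)
        {
            slots[i].value = value;
            slots[i].count = 1;
            return;
        }
    for (i = 0; i < MCV_SLOTS; i++)
        slots[i].count--;       /* no free slot: charge everyone a bit */
}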
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> \n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > I've wondered why we cound't analyze database without vacuum.\n> > > We couldn't run vacuum light-heartedly because it acquires an\n> > > exclusive lock for the target table. \n> > \n> > There is probably no real good reason, except backwards compatibility,\n> > why the ANALYZE function (obtaining pg_statistic data) is part of\n> > VACUUM at all --- it could just as easily be a separate command that\n> > would only use read access on the database. Bruce is thinking about\n> > restructuring VACUUM, so maybe now is a good time to think about\n> > splitting out the ANALYZE code too.\n> \n> I put it in vacuum because at the time I didn't know how to do such\n> things and vacuum already scanned the table. I just linked on the the\n> scan. Seemed like a good idea at the time.\n> \n> It is nice that ANALYZE is done during vacuum. I can't imagine why you\n> would want to do an analyze without adding a vacuum to it. I guess\n> that's why I made them the same command.\n> \n> If I made them separate commands, both would have to scan the table,\n> though the analyze could do it without the exclusive lock, which would\n> be good.\n>\n\nThe functionality of VACUUM and ANALYZE is quite different.\nI don't prefer to charge VACUUM more than now about analyzing\ndatabase. Probably looong lock,more aborts .... \nVarious kind of analysis would be possible by splitting out ANALYZE.\n \nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 21 Jan 2000 12:14:10 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Some notes on optimizer cost estimates"
},
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> As best I can measure on my hardware, the cost of a nonsequential\n> disk read should be estimated at 4 to 5 times the cost of a sequential\n> one --- I'm getting numbers like 2.2 msec per disk page for sequential\n> scans, and as much as 11 msec per page for index scans. I don't\n> know, however, if this ratio is similar enough on other platforms\n> to be useful for cost estimating. We could make it a parameter like\n> we do for CPU_PAGE_WEIGHT ... but you know and I know that no one\n> ever bothers to adjust those numbers in the field ...\n\nHere's a thought: there are tools (bonnie, ioscan) whose job is\ndetermining details of disk performance. Do we want to look at\ncreating a small tool/script of our own that would (optionally)\ndetermine the correct parameters for the system it is installed on and\nupdate the appropriate parameters?\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "21 Jan 2000 08:57:35 -0500",
"msg_from": "Brian E Gallew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates"
},
{
"msg_contents": "On Thu, 20 Jan 2000, Bruce Momjian wrote:\n\n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > I've wondered why we cound't analyze database without vacuum.\n> > > We couldn't run vacuum light-heartedly because it acquires an\n> > > exclusive lock for the target table. \n> > \n> > There is probably no real good reason, except backwards compatibility,\n> > why the ANALYZE function (obtaining pg_statistic data) is part of\n> > VACUUM at all --- it could just as easily be a separate command that\n> > would only use read access on the database. Bruce is thinking about\n> > restructuring VACUUM, so maybe now is a good time to think about\n> > splitting out the ANALYZE code too.\n> \n> I put it in vacuum because at the time I didn't know how to do such\n> things and vacuum already scanned the table. I just linked on the the\n> scan. Seemed like a good idea at the time.\n> \n> It is nice that ANALYZE is done during vacuum. I can't imagine why you\n> would want to do an analyze without adding a vacuum to it. I guess\n> that's why I made them the same command.\n\nHrmmm...how about making ANALYZE a seperate function, while a VACUUM does\nan ANALYZE implicitly?\n\nThen again, whatever happened with the work that was being done to make\nVACUUM either non-locking *or*, at least, lock only the table being\nvacuum'd?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 21 Jan 2000 10:12:26 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates"
},
{
"msg_contents": ">to be useful for cost estimating. We could make it a parameter like\n>we do for CPU_PAGE_WEIGHT ... but you know and I know that no one\n>ever bothers to adjust those numbers in the field ...\n\nAppologies if this is addressed later in this thread.\n\nCouldn't we test some of these parameters inside configure and set them there?\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n",
"msg_date": "Mon, 24 Jan 2000 08:59:57 -0800",
"msg_from": "\"Henry B. Hotz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates"
},
{
"msg_contents": "> >to be useful for cost estimating. We could make it a parameter like\n> >we do for CPU_PAGE_WEIGHT ... but you know and I know that no one\n> >ever bothers to adjust those numbers in the field ...\n> \n> Appologies if this is addressed later in this thread.\n> \n> Couldn't we test some of these parameters inside configure and set them there?\n\nThat would be cool.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 24 Jan 2000 12:17:51 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates"
},
{
"msg_contents": "\"Henry B. Hotz\" <[email protected]> writes:\n>> to be useful for cost estimating. We could make it a parameter like\n>> we do for CPU_PAGE_WEIGHT ... but you know and I know that no one\n>> ever bothers to adjust those numbers in the field ...\n\n> Couldn't we test some of these parameters inside configure and set\n> them there?\n\nIf we could figure out a reasonably cheap way of estimating these\nnumbers, it'd be worth setting up custom values at installation time.\n\nI don't know how to do that --- AFAICS, getting trustworthy numbers by\nmeasurement would require hundreds of meg of temporary disk space and\nprobably hours of runtime. (A smaller test would be completely\ncorrupted by kernel disk caching effects.)\n\nBut perhaps someone has an idea how to do it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 24 Jan 2000 12:25:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates "
},
{
"msg_contents": "At 9:25 AM -0800 1/24/00, Tom Lane wrote:\n>\"Henry B. Hotz\" <[email protected]> writes:\n>>> to be useful for cost estimating. We could make it a parameter like\n>>> we do for CPU_PAGE_WEIGHT ... but you know and I know that no one\n>>> ever bothers to adjust those numbers in the field ...\n>\n>> Couldn't we test some of these parameters inside configure and set\n>> them there?\n>\n>If we could figure out a reasonably cheap way of estimating these\n>numbers, it'd be worth setting up custom values at installation time.\n>\n>I don't know how to do that --- AFAICS, getting trustworthy numbers by\n>measurement would require hundreds of meg of temporary disk space and\n>probably hours of runtime. (A smaller test would be completely\n>corrupted by kernel disk caching effects.)\n\nGetting a rough estimate of CPU speed is trivial. Getting a rough estimate\nof sequential disk access shouldn't be too hard, though you would need to\nmake sure it didn't give the wrong answer if you ran configure twice in a\nrow or something. Getting a rough estimate of disk access for a single\nnon-sequential disk page also shouldn't be too hard with the same caviats.\nMeasuring sequential vs. random reads probably takes a large dataset as you\nsay.\n\nI suspect that there is a range of important parameters and we can only\neasily measure some of them. If we do so and use canned values (ratios,\nwhere possible) for the others then we're probably still ahead.\n\nI'll leave the details for the people who actually have time to do some of\nthis stuff.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n",
"msg_date": "Mon, 24 Jan 2000 09:46:40 -0800",
"msg_from": "\"Henry B. Hotz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates"
},
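To make the measurement idea concrete, here is a rough sketch of the kind of probe that could run at initdb time. The data file name and sizes are placeholders, and, per the caveat quoted above, the file must be much larger than the kernel's disk cache or the random-read figure is meaningless:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

#define BLCKSZ 8192
#define NPAGES 131072                   /* 1 GB worth of 8K pages */

static double
now_sec(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int
main(void)
{
    char        buf[BLCKSZ];
    int         fd = open("./pg_disktest.dat", O_RDONLY);  /* pre-built file */
    double      t0, seq, rnd;
    long        i;

    if (fd < 0)
        return 1;

    t0 = now_sec();
    for (i = 0; i < NPAGES; i++)        /* sequential pass */
        if (read(fd, buf, BLCKSZ) != BLCKSZ)
            return 1;
    seq = (now_sec() - t0) / NPAGES;

    t0 = now_sec();
    for (i = 0; i < NPAGES; i++)        /* random pass */
    {
        off_t       pg = (off_t) (random() % NPAGES);

        lseek(fd, pg * BLCKSZ, SEEK_SET);
        if (read(fd, buf, BLCKSZ) != BLCKSZ)
            return 1;
    }
    rnd = (now_sec() - t0) / NPAGES;

    printf("sequential %.3f ms/page, random %.3f ms/page, ratio %.1f\n",
           seq * 1e3, rnd * 1e3, rnd / seq);
    close(fd);
    return 0;
}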
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> >I don't know how to do that --- AFAICS, getting trustworthy numbers by\n> >measurement would require hundreds of meg of temporary disk space and\n> >probably hours of runtime. (A smaller test would be completely\n> >corrupted by kernel disk caching effects.)\n> \n> Getting a rough estimate of CPU speed is trivial. Getting a rough estimate\n> of sequential disk access shouldn't be too hard, though you would need to\n> make sure it didn't give the wrong answer if you ran configure twice in a\n> row or something. Getting a rough estimate of disk access for a single\n> non-sequential disk page also shouldn't be too hard with the same caviats.\n> Measuring sequential vs. random reads probably takes a large dataset as you\n> say.\n\nA point for consideration: this need not be a configure test. Any\ncommercial database usage carries with it the expectation of a\nnon-trivial effort at tuning. This being the case, it might make\nsense to bring in some foresight here. As PostgreSQL matures, people\nare going to be using it on non-homogeneous systems (e.g mixture of\n3600, 7200, and 10k rpm disks). Our cost estimates should therefore\nvary somewhat as tables start living on different disks (yet another\nreason why symlinks are not the answer).\n\nRight now, I would kill to have a tool that I could run over a couple\nhours and many gigabytes of disk space that would give me indications\nof how to tune my Oracle database. We may want to think about, in the\nfuture, adding in the ability to tune specific tables by keeping query\nstatistics and analyzing them. Even post-processing debug output\nwould help, although turning debugging on adds some non-trivial\noverhead. \n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "24 Jan 2000 13:11:10 -0500",
"msg_from": "Brian E Gallew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates"
},
{
"msg_contents": "\"Henry B. Hotz\" <[email protected]> writes:\n>> I don't know how to do that --- AFAICS, getting trustworthy numbers by\n>> measurement would require hundreds of meg of temporary disk space and\n>> probably hours of runtime. (A smaller test would be completely\n>> corrupted by kernel disk caching effects.)\n\n> Getting a rough estimate of CPU speed is trivial. Getting a rough estimate\n> of sequential disk access shouldn't be too hard, though you would need to\n> make sure it didn't give the wrong answer if you ran configure twice in a\n> row or something. Getting a rough estimate of disk access for a single\n> non-sequential disk page also shouldn't be too hard with the same caviats.\n\nIn practice this would be happening at initdb time, not configure time,\nsince it'd be a lot easier to do it in C code than in a shell script.\nBut that's a detail. I'm still not clear on how you can wave away the\nissue of kernel disk caching --- if you don't use a test file that's\nlarger than the disk cache, ISTM you risk getting a number that's\nentirely devoid of any physical I/O at all.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 24 Jan 2000 13:13:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates "
},
{
"msg_contents": "At 01:13 PM 1/24/00 -0500, Tom Lane wrote:\n\n>In practice this would be happening at initdb time, not configure time,\n>since it'd be a lot easier to do it in C code than in a shell script.\n>But that's a detail. I'm still not clear on how you can wave away the\n>issue of kernel disk caching --- if you don't use a test file that's\n>larger than the disk cache, ISTM you risk getting a number that's\n>entirely devoid of any physical I/O at all.\n\nAnd even the $100 6.4 GB Ultra DMA drive I bought last week has\n2MB of cache. hdparm shows me getting 19 mB/second transfers\neven though it adjusts for the file system cache. It's only a\n5400 RPM disk and I'm certain the on-disk cache is impacting\nthis number.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 24 Jan 2000 11:55:31 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates "
},
{
"msg_contents": "Don Baccus wrote:\n> \n> At 01:13 PM 1/24/00 -0500, Tom Lane wrote:\n> \n> >In practice this would be happening at initdb time, not configure time,\n\nor perhaps it can be collected when running regression tests.\n\n> >since it'd be a lot easier to do it in C code than in a shell script.\n> >But that's a detail. I'm still not clear on how you can wave away the\n> >issue of kernel disk caching --- if you don't use a test file that's\n> >larger than the disk cache, ISTM you risk getting a number that's\n> >entirely devoid of any physical I/O at all.\n> \n> And even the $100 6.4 GB Ultra DMA drive I bought last week has\n> 2MB of cache. hdparm shows me getting 19 mB/second transfers\n> even though it adjusts for the file system cache. It's only a\n> 5400 RPM disk and I'm certain the on-disk cache is impacting\n> this number.\n\nBut they also claim internal bitrates of more than 250 Mbits/s, which stays \nthe same for both 5400 and 7200 RPM disks (the latter even have the same \nseek time) so that may actually be true for sequential reads if they have \nall their read paths at optimal efficiency and readahead and cache do \ntheir job.\n\n-------------\nHannu\n",
"msg_date": "Tue, 25 Jan 2000 00:56:16 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates"
},
{
"msg_contents": "On 2000-01-24, Henry B. Hotz mentioned:\n\n> >to be useful for cost estimating. We could make it a parameter like\n> >we do for CPU_PAGE_WEIGHT ... but you know and I know that no one\n> >ever bothers to adjust those numbers in the field ...\n\n> Couldn't we test some of these parameters inside configure and set them there?\n\nNope. configure is for testing the build environment, not the runtime\nenvironment. But the same thing could be achieved with some\nruntime-analyzing program that writes its results into some sort of\nconfiguration file.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 26 Jan 2000 00:16:15 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates"
},
{
"msg_contents": "At 3:16 PM -0800 1/25/00, Peter Eisentraut wrote:\n>On 2000-01-24, Henry B. Hotz mentioned:\n>> Couldn't we test some of these parameters inside configure and set them\n>>there?\n>\n>Nope. configure is for testing the build environment, not the runtime\n>environment. But the same thing could be achieved with some\n>runtime-analyzing program that writes its results into some sort of\n>configuration file.\n\nAnd then we could make configure run the program for us, and we could\nautomatically put the results into some header file. . . In other words\nI'm not that much of a purist.\n\nAs Tom Lane said this is not an important point. What's important is how\nmuch can we determine how easily. It may actually make more sense to do\nthis tuning after installation. It only affects optimizer choices in\nsituations that aren't obvious.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n",
"msg_date": "Tue, 25 Jan 2000 16:18:17 -0800",
"msg_from": "\"Henry B. Hotz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some notes on optimizer cost estimates"
}
] |
[
{
"msg_contents": "Patrick Welche <[email protected]> writes:\n> This is with source from yesterday, but I seem to remember the same\n> happening to me on 6th January. It used to work before then. I don't\n> know where to look!\n\n> % psql template1\n> template1=> create database test;\n> CREATE DATABASE\n> template1=> \\c test \n> You are now connected to database test.\n> test=> create table atable (x int);\n> CREATE\n> test=> insert into atable values (1);\n> INSERT 558537 1\n> test=> vacuum analyze atable;\n> NOTICE: Vacuum: table not found\n> VACUUM\n> test=> select version();\n> version \n> -------------------------------------------------------------------------\n> PostgreSQL 7.0.0 on i386-unknown-netbsd1.4p, compiled by gcc egcs-1.1.2\n> (1 row)\n\nWow. It works fine for me. Platform-specific bug maybe? Can anyone\nelse reproduce this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jan 2000 20:58:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum failure in current sources"
},
{
"msg_contents": "> Patrick Welche <[email protected]> writes:\n> > This is with source from yesterday, but I seem to remember the same\n> > happening to me on 6th January. It used to work before then. I don't\n> > know where to look!\n> \n> > % psql template1\n> > template1=> create database test;\n> > CREATE DATABASE\n> > template1=> \\c test \n> > You are now connected to database test.\n> > test=> create table atable (x int);\n> > CREATE\n> > test=> insert into atable values (1);\n> > INSERT 558537 1\n> > test=> vacuum analyze atable;\n> > NOTICE: Vacuum: table not found\n> > VACUUM\n> > test=> select version();\n> > version \n> > -------------------------------------------------------------------------\n> > PostgreSQL 7.0.0 on i386-unknown-netbsd1.4p, compiled by gcc egcs-1.1.2\n> > (1 row)\n> \n> Wow. It works fine for me. Platform-specific bug maybe? Can anyone\n> else reproduce this?\n\nI do not see the problem neither. This is RH 5.2.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 21 Jan 2000 12:53:39 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum failure in current sources"
}
] |
[
{
"msg_contents": "I'm very glad you bring up this cost estimate issue.\nRecent work in database research have argued a more\ndetailed disk access cost model should be used for\nlarge queries especially joins.\nTraditional cost estimate only considers the number of\ndisk pages accessed. However a more detailed model\nwould consider three parameters: avg. seek, avg. latency\nand avg. page transfer. For old disk, typical values are\nSEEK=9.5 milliseconds, LATENCY=8.3 ms, TRANSFER=2.6ms.\nA sequential continuous reading of a table (assuming\n1000 continuous pages) would cost\n(SEEK+LATENCY+1000*TRANFER=2617.8ms); while quasi-randomly\nreading 200 times with 2 continuous pages/time would\ncost (SEEK+200*LATENCY+400*TRANSFER=2700ms).\nSomeone from IBM lab re-studied the traditional\nad hoc join algorithms (nested, sort-merge, hash) using the detailed cost model\nand found some interesting results.\n\n>I have been spending some time measuring actual runtimes for various\n>sequential-scan and index-scan query plans, and have learned that the\n>current Postgres optimizer's cost estimation equations are not very\n>close to reality at all.\n\nOne interesting question I'd like to ask is if this non-closeness\nreally affects the optimal choice of postgresql's query optimizer.\nAnd to what degree the effects might be? My point is that\nif the optimizer estimated the cost for sequential-scan is 10 and\nthe cost for index-scan is 20 while the actual costs are 10 vs. 40,\nit should be ok because the optimizer would still choose sequential-scan\nas it should.\n\n>1. Since main-table tuples are visited in index order, we'll be hopping\n>around from page to page in the table.\n\nI'm not sure about the implementation in postgresql. One thing you might\nbe able to do is to first collect all must-read page addresses from \nthe index scan and then order them before the actual ordered page fetching.\nIt would at least avoid the same page being read twice (not entirely\ntrue depending on the context (like in join) and algo.)\n\n>The current cost estimation\n>method essentially assumes that the buffer cache plus OS disk cache will\n>be 100% efficient --- we will never have to read the same page of the\n>main table twice in a scan, due to having discarded it between\n>references. This of course is unreasonably optimistic. Worst case\n>is that we'd fetch a main-table page for each selected tuple, but in\n>most cases that'd be unreasonably pessimistic.\n\nThis is actually the motivation that I asked before if postgresql\nhas a raw disk facility. That way we have much control on this cache\nissue. Of course only if we can provide some algo. better than OS\ncache algo. (depending on the context, like large joins), a raw disk\nfacility will be worthwhile (besides the recoverability).\n\nActually I have another question for you guys which is somehow related\nto this cost estimation issue. You know the difference between OLTP\nand OLAP. My question is how you target postgresql on both kinds\nof applications or just OLTP. From what I know OLTP and OLAP would\nhave a big difference in query characteristics and thus \noptimization difference. If postgresql is only targeted on\nOLTP, the above cost estimation issue might not be that\nimportant. However for OLAP, large tables and large queries are\ncommon and optimization would be difficult.\n\nxun\n\n",
"msg_date": "Thu, 20 Jan 2000 18:19:40 -0800",
"msg_from": "Xun Cheng <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re. [HACKERS] Some notes on optimizer cost estimates"
},
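The arithmetic behind the two figures quoted above is easy to reproduce; a tiny sketch using the same constants (note the exact value of the second expression is 2709.5 ms):

#include <stdio.h>

#define SEEK_MS     9.5
#define LATENCY_MS  8.3
#define TRANSFER_MS 2.6

int
main(void)
{
    /* One long sequential run of 1000 contiguous pages. */
    double      seq = SEEK_MS + LATENCY_MS + 1000 * TRANSFER_MS;

    /* 200 quasi-random runs of 2 contiguous pages each, costed as a
     * single seek plus one rotational delay per run, as above. */
    double      rnd = SEEK_MS + 200 * LATENCY_MS + 400 * TRANSFER_MS;

    printf("sequential: %.1f ms, quasi-random: %.1f ms\n", seq, rnd);
    return 0;
}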
{
"msg_contents": "At 06:19 PM 1/20/00 -0800, Xun Cheng wrote:\n>I'm very glad you bring up this cost estimate issue.\n>Recent work in database research have argued a more\n>detailed disk access cost model should be used for\n>large queries especially joins.\n>Traditional cost estimate only considers the number of\n>disk pages accessed. However a more detailed model\n>would consider three parameters: avg. seek, avg. latency\n>and avg. page transfer. For old disk, typical values are\n>SEEK=9.5 milliseconds, LATENCY=8.3 ms, TRANSFER=2.6ms.\n>A sequential continuous reading of a table (assuming\n>1000 continuous pages) would cost\n>(SEEK+LATENCY+1000*TRANFER=2617.8ms); while quasi-randomly\n>reading 200 times with 2 continuous pages/time would\n>cost (SEEK+200*LATENCY+400*TRANSFER=2700ms).\n>Someone from IBM lab re-studied the traditional\n>ad hoc join algorithms (nested, sort-merge, hash) using the detailed cost\nmodel\n>and found some interesting results.\n\nOne complication when doing an index scan is that you are\naccessing two separate files (table and index), which can frequently\nbe expected to cause an considerable increase in average seek time.\n\nOracle and other commercial databases recommend spreading indices and\ntables over several spindles if at all possible in order to minimize\nthis effect.\n\nI suspect it also helps their optimizer make decisions that are\nmore consistently good for customers with the largest and most\ncomplex databases and queries, by making cost estimates more predictably\nreasonable.\n\nStill...this doesn't help with the question about the effect of the\nfilesystem system cache. I wandered around the web for a little bit\nlast night, and found one summary of a paper by Osterhout on the\neffect of the Solaris cache on a fileserver serving diskless workstations.\nThere was reference to the hierarchy involved (i.e. the local workstation\ncache is faster than the fileserver's cache which has to be read via\nthe network which in turn is faster than reading from the fileserver's\ndisk). It appears the rule-of-thumb for the cache-hit ratio on reads,\npresumably based on measuring some internal Sun systems, used in their\ncalculations was 80%.\n\nJust a datapoint to think about.\n\nThere's also considerable operating system theory on paging systems\nthat might be useful for thinking about trying to estimate the\nPostgres cache/hit ratio. Then again, maybe Postgres could just\nkeep count of how many pages of a given table are in the cache at\nany given time? Or simply keep track of the current ratio of hits\nand misses?\n\n>>I have been spending some time measuring actual runtimes for various\n>>sequential-scan and index-scan query plans, and have learned that the\n>>current Postgres optimizer's cost estimation equations are not very\n>>close to reality at all.\n\n>One interesting question I'd like to ask is if this non-closeness\n>really affects the optimal choice of postgresql's query optimizer.\n>And to what degree the effects might be? My point is that\n>if the optimizer estimated the cost for sequential-scan is 10 and\n>the cost for index-scan is 20 while the actual costs are 10 vs. 40,\n>it should be ok because the optimizer would still choose sequential-scan\n>as it should.\n\nThis is crucial, of course - if there are only two types of scans \navailable, what ever heuristic is used only has to be accurate enough\nto pick the right one. 
Once the choice is made, it doesn't really\nmatter (from the optimizer's POV) just how long it will actually take,\nthe time will be spent and presumably it will be shorter than the\nalternative.\n\nHow frequently will the optimizer choose wrongly if:\n\n1. All of the tables and indices were in PG buffer cache or filesystem\n cache? (i.e. fixed access times for both types of scans)\n\nor\n\n2. The table's so big that only a small fraction can reside in RAM\n during the scan and join, which means that the non-sequential\n disk access pattern of the indexed scan is much more expensive.\n\nAlso, if you pick sequential scans more frequently based on a presumption\nthat index scans are expensive due to increased average seek time, how\noften will this penalize the heavy-duty user that invests in extra\ndrives and lots of RAM?\n\n...\n\n>>The current cost estimation\n>>method essentially assumes that the buffer cache plus OS disk cache will\n>>be 100% efficient --- we will never have to read the same page of the\n>>main table twice in a scan, due to having discarded it between\n>>references. This of course is unreasonably optimistic. Worst case\n>>is that we'd fetch a main-table page for each selected tuple, but in\n>>most cases that'd be unreasonably pessimistic.\n>\n>This is actually the motivation that I asked before if postgresql\n>has a raw disk facility. That way we have much control on this cache\n>issue. Of course only if we can provide some algo. better than OS\n>cache algo. (depending on the context, like large joins), a raw disk\n>facility will be worthwhile (besides the recoverability).\n\nPostgres does have control over its buffer cache. The one thing that\nraw disk I/O would give you is control over where blocks are placed,\nmeaning you could more accurately model the cost of retrieving them.\nSo presumably the cache could be tuned to the allocation algorithm\nused to place various structures on the disk.\n\nI still wonder just how much gain you get by this approach. Compared,\nto, say simply spending $2,000 on a gigabyte of RAM. Heck, PCs even\nsupport a couple gigs of RAM now.\n\n>Actually I have another question for you guys which is somehow related\n>to this cost estimation issue. You know the difference between OLTP\n>and OLAP. My question is how you target postgresql on both kinds\n>of applications or just OLTP. From what I know OLTP and OLAP would\n>have a big difference in query characteristics and thus \n>optimization difference. If postgresql is only targeted on\n>OLTP, the above cost estimation issue might not be that\n>important. However for OLAP, large tables and large queries are\n>common and optimization would be difficult.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 21 Jan 2000 08:10:44 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re. [HACKERS] Some notes on optimizer cost estimates"
}
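If the backend did keep the running hit/miss counts Don suggests at the end, folding them into the optimizer's page-fetch charge would be a one-liner; a sketch (the 0.80 in the comment is just the Sun rule of thumb cited above, not a Postgres constant):

/*
 * Expected cost of one page fetch given an observed (or assumed) cache
 * hit ratio.  Costs are in the optimizer's abstract page-fetch units.
 */
static double
effective_page_cost(double hit_ratio, double hit_cost, double miss_cost)
{
    return hit_ratio * hit_cost + (1.0 - hit_ratio) * miss_cost;
}

/* e.g. effective_page_cost(0.80, 0.05, 4.5) for a nonsequential fetch */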
] |
[
{
"msg_contents": "> I can see why they'd have one for EST since Indiana doesn't observe, but\n> why MST? Perhaps there's a state I'm not thinking of. Found it, it\n> appears Arizona doesn't observe.\n\nAh, good call. I knew about Arizona, but not Indiana.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 21 Jan 2000 04:25:17 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] timezone problem?"
}
] |
[
{
"msg_contents": "I loaded 10,000,000 rows into CREATE TABLE test (x INTEGER); Table is\n400MB and index is 160MB.\n\nWith index on the single in4 column, I got:\n\t 78 seconds for a vacuum\n\t121 seconds for vacuum after deleting a single row\n\t662 seconds for vacuum after deleting the entire table\n\nWith no index, I got:\n\t 43 seconds for a vacuum\n\t 43 seconds for vacuum after deleting a single row\n\t 43 seconds for vacuum after deleting the entire table\n\nI find this quite interesting.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Jan 2000 00:43:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum timings"
},
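For anyone wanting to repeat timings like these, a small libpq harness is enough; a sketch (the connection string, table, and statement are placeholders to adjust):

#include <stdio.h>
#include <time.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=test");
    PGresult   *res;
    time_t      t0, t1;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }
    t0 = time(NULL);
    res = PQexec(conn, "VACUUM test");      /* statement under test */
    t1 = time(NULL);
    printf("vacuum took %ld seconds (%s)\n",
           (long) (t1 - t0), PQresStatus(PQresultStatus(res)));
    PQclear(res);
    PQfinish(conn);
    return 0;
}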
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I loaded 10,000,000 rows into CREATE TABLE test (x INTEGER); Table is\n> 400MB and index is 160MB.\n\n> With index on the single in4 column, I got:\n> \t 78 seconds for a vacuum\n> \t121 seconds for vacuum after deleting a single row\n> \t662 seconds for vacuum after deleting the entire table\n\n> With no index, I got:\n> \t 43 seconds for a vacuum\n> \t 43 seconds for vacuum after deleting a single row\n> \t 43 seconds for vacuum after deleting the entire table\n\n> I find this quite interesting.\n\nHow long does it take to create the index on your setup --- ie,\nif vacuum did a drop/create index, would it be competitive?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jan 2000 00:51:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum timings "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> I loaded 10,000,000 rows into CREATE TABLE test (x INTEGER); Table is\n> 400MB and index is 160MB.\n> \n> With index on the single in4 column, I got:\n> 78 seconds for a vacuum\n> 121 seconds for vacuum after deleting a single row\n> 662 seconds for vacuum after deleting the entire table\n> \n> With no index, I got:\n> 43 seconds for a vacuum\n> 43 seconds for vacuum after deleting a single row\n> 43 seconds for vacuum after deleting the entire table\n\nWi/wo -F ?\n\nVadim\n",
"msg_date": "Fri, 21 Jan 2000 13:26:33 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum timings"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n> \n> I loaded 10,000,000 rows into CREATE TABLE test (x INTEGER); Table is\n> 400MB and index is 160MB.\n> \n> With index on the single in4 column, I got:\n> \t 78 seconds for a vacuum\n\t\tvc_vaconeind() is called once\n\n> \t121 seconds for vacuum after deleting a single row\n\t\tvc_vaconeind() is called twice\n\nHmmm,vc_vaconeind() takes pretty long time even if it does little. \n\n> \t662 seconds for vacuum after deleting the entire table\n>\n\nHow about half of the rows deleted case ?\nIt would take longer time.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 21 Jan 2000 15:46:15 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] vacuum timings"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > I loaded 10,000,000 rows into CREATE TABLE test (x INTEGER); Table is\n> > 400MB and index is 160MB.\n> > \n> > With index on the single in4 column, I got:\n> > 78 seconds for a vacuum\n> > 121 seconds for vacuum after deleting a single row\n> > 662 seconds for vacuum after deleting the entire table\n> > \n> > With no index, I got:\n> > 43 seconds for a vacuum\n> > 43 seconds for vacuum after deleting a single row\n> > 43 seconds for vacuum after deleting the entire table\n> \n> Wi/wo -F ?\n\nWith no -F.\n\nI can get you -F times tomorrow.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Jan 2000 01:49:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] vacuum timings"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> >\n> > With index on the single in4 column, I got:\n> > 78 seconds for a vacuum\n> vc_vaconeind() is called once\n ^^^^^^\n not called ?\n> \n> > 121 seconds for vacuum after deleting a single row\n> vc_vaconeind() is called twice\n> \n> Hmmm,vc_vaconeind() takes pretty long time even if it does little.\n\nIt reads all index leaf pages in any case...\n\nVadim\n",
"msg_date": "Fri, 21 Jan 2000 13:50:42 +0700",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum timings"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Vadim\n> Mikheev\n> \n> Hiroshi Inoue wrote:\n> > \n> > >\n> > > With index on the single in4 column, I got:\n> > > 78 seconds for a vacuum\n> > vc_vaconeind() is called once\n> ^^^^^^\n> not called ?\n\nOops,you are right.\nvc_scanoneind() is called once.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 21 Jan 2000 16:03:35 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] vacuum timings"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I loaded 10,000,000 rows into CREATE TABLE test (x INTEGER); Table is\n> > 400MB and index is 160MB.\n> \n> > With index on the single in4 column, I got:\n> > \t 78 seconds for a vacuum\n> > \t121 seconds for vacuum after deleting a single row\n> > \t662 seconds for vacuum after deleting the entire table\n> \n> > With no index, I got:\n> > \t 43 seconds for a vacuum\n> > \t 43 seconds for vacuum after deleting a single row\n> > \t 43 seconds for vacuum after deleting the entire table\n> \n> > I find this quite interesting.\n> \n> How long does it take to create the index on your setup --- ie,\n> if vacuum did a drop/create index, would it be competitive?\n\nOK, new timings with -F enabled:\n\n\tindex\tno index\n\t519\tsame\tload\t\n\t247\t\"\tfirst vacuum\n\t40\t\"\tother vacuums\n\t\n\t1222\tX\tindex creation\n\t90\tX\tfirst vacuum\n\t80\tX\tother vacuums\n\t\n\t<1\t90\tdelete one row\n\t121\t38\tvacuum after delete 1 row\n\t\n\t346\t344\tdelete all rows\n\t440\t44\tfirst vacuum\n\t20\t<1\tother vacuums(index is still same size)\n\nConclusions:\n\n\to indexes never get smaller\n\to drop/recreate index is slower than vacuum of indexes\n\nWhat other conclusions can be made?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Jan 2000 12:51:53 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuum timings"
},
{
"msg_contents": "On Fri, 21 Jan 2000, Bruce Momjian wrote:\n\n> OK, new timings with -F enabled:\n> \n> \tindex\tno index\n> \t519\tsame\tload\t\n> \t247\t\"\tfirst vacuum\n> \t40\t\"\tother vacuums\n> \t\n> \t1222\tX\tindex creation\n> \t90\tX\tfirst vacuum\n> \t80\tX\tother vacuums\n> \t\n> \t<1\t90\tdelete one row\n> \t121\t38\tvacuum after delete 1 row\n> \t\n> \t346\t344\tdelete all rows\n> \t440\t44\tfirst vacuum\n> \t20\t<1\tother vacuums(index is still same size)\n> \n> Conclusions:\n> \n> \to indexes never get smaller\n\nthis one, I thought, was a known? if I remember right, Vadim changed it\nso that space was reused, but index never shrunk in size ... no?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 21 Jan 2000 14:45:34 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: vacuum timings"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Conclusions:\n> \to indexes never get smaller\n\nWhich we knew...\n\n> \to drop/recreate index is slower than vacuum of indexes\n\nQuite a few people have reported finding the opposite in practice.\nYou should probably try vacuuming after deleting or updating some\nfraction of the rows, rather than just the all or none cases.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jan 2000 14:06:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum timings "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> > Conclusions:\n> > o indexes never get smaller\n> \n> Which we knew...\n> \n> > o drop/recreate index is slower than vacuum of indexes\n> \n> Quite a few people have reported finding the opposite in practice.\n\nI'm one of them. On 1,5 GB table with three indices it about twice\nslowly.\nProbably becouse vacuuming indices brakes system cache policy.\n(FreeBSD 3.3)\n\n\n\n-- \nDmitry Samersoff, DM\\S\[email protected] http://devnull.wplus.net\n* there will come soft rains\n",
"msg_date": "Fri, 21 Jan 2000 22:48:50 +0300",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: vacuum timings"
},
{
"msg_contents": "[Charset koi8-r unsupported, filtering to ASCII...]\n> Tom Lane wrote:\n> > \n> > Bruce Momjian <[email protected]> writes:\n> > > Conclusions:\n> > > o indexes never get smaller\n> > \n> > Which we knew...\n> > \n> > > o drop/recreate index is slower than vacuum of indexes\n> > \n> > Quite a few people have reported finding the opposite in practice.\n> \n> I'm one of them. On 1,5 GB table with three indices it about twice\n> slowly.\n> Probably becouse vacuuming indices brakes system cache policy.\n> (FreeBSD 3.3)\n\nOK, we are researching what things can be done to improve this. We are\ntoying with:\n\n\tlock table for less duration, or read lock\n\tcreating another copy of heap/indexes, and rename() over old files\n\timproving heap vacuum speed\n\timproving index vacuum speed\n\tmoving analyze out of vacuum\n\t\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Jan 2000 14:54:21 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: vacuum timings"
},
{
"msg_contents": "On Fri, 21 Jan 2000, Bruce Momjian wrote:\n\n> [Charset koi8-r unsupported, filtering to ASCII...]\n> > Tom Lane wrote:\n> > > \n> > > Bruce Momjian <[email protected]> writes:\n> > > > Conclusions:\n> > > > o indexes never get smaller\n> > > \n> > > Which we knew...\n> > > \n> > > > o drop/recreate index is slower than vacuum of indexes\n> > > \n> > > Quite a few people have reported finding the opposite in practice.\n> > \n> > I'm one of them. On 1,5 GB table with three indices it about twice\n> > slowly.\n> > Probably becouse vacuuming indices brakes system cache policy.\n> > (FreeBSD 3.3)\n> \n> OK, we are researching what things can be done to improve this. We are\n> toying with:\n> \n> \tlock table for less duration, or read lock\n\nif there is some way that we can work around the bug that I believe Tom\nfound with removing the lock altogether (ie. makig use of MVCC), I think\nthat would be the best option ... if not possible, at least get things\ndown to a table lock vs the whole database?\n\na good example is the udmsearch that we are using on the site ... it uses\nmultiple tables to store the dictionary, each representing words of X size\n... if I'm searching on a 4 letter word, and the whole database is locked\nwhile it is working on the dictionary with 8 letter words, I'm sitting\nthere idle ... at least if we only locked the 8 letter table, everyone not\ndoing 8 letter searches can go on their merry way ...\n\nSlightly longer vacuum's, IMHO, are acceptable if, to the end users, its\nas transparent as possible ... locking per table would be slightly slower,\nI think, because once a table is finished, the next table would need to\nhave an exclusive lock put on it before starting, so you'd have to\npossibly wait for that...?\n\n> \tcreating another copy of heap/indexes, and rename() over old files\n\nsounds to me like introducing a large potential for error here ...\n\n> \tmoving analyze out of vacuum\n\nI think that should be done anyway ... if we ever get to the point that\nwe're able to re-use rows in tables, then that would eliminate the\nimmediate requirement for vacuum, but still retain a requirement for a\nperiodic analyze ... no?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 21 Jan 2000 16:12:25 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: vacuum timings"
},
{
"msg_contents": "Dmitry Samersoff wrote:\n> \n> Tom Lane wrote:\n> >\n> > Bruce Momjian <[email protected]> writes:\n> > > Conclusions:\n> > > o indexes never get smaller\n> >\n> > Which we knew...\n> >\n> > > o drop/recreate index is slower than vacuum of indexes\n> >\n> > Quite a few people have reported finding the opposite in practice.\n> \n> I'm one of them. On 1,5 GB table with three indices it about twice\n> slowly.\n> Probably becouse vacuuming indices brakes system cache policy.\n\nI'm another. Do the times increase linearly with each index\nadded? Do the times increase linearly for each index for each\nfield in a composite index? Does the field type being indexed\nhave any affect (varchar vs int)? \n\nMike Mascari\n",
"msg_date": "Fri, 21 Jan 2000 15:29:34 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: vacuum timings"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> lock table for less duration, or read lock\n\n> if there is some way that we can work around the bug that I believe Tom\n> found with removing the lock altogether (ie. makig use of MVCC), I think\n> that would be the best option ... if not possible, at least get things\n> down to a table lock vs the whole database?\n\nHuh? VACUUM only requires an exclusive lock on the table it is\ncurrently vacuuming; there's no database-wide lock.\n\nEven a single-table exclusive lock is bad, of course, if it's a large\ntable that's critical to a 24x7 application. Bruce was talking about\nthe possibility of having VACUUM get just a write lock on the table;\nother backends could still read it, but not write it, during the vacuum\nprocess. That'd be a considerable step forward for 24x7 applications,\nI think.\n\nIt looks like that could be done if we rewrote the table as a new file\n(instead of compacting-in-place), but there's a problem when it comes\ntime to rename the new files into place. At that point you'd need to\nget an exclusive lock to ensure all the readers are out of the table too\n--- and upgrading from a plain lock to an exclusive lock is a well-known\nrecipe for deadlocks. Not sure if this can be solved.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jan 2000 17:02:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: vacuum timings "
},
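In outline, the rewrite-and-rename scheme Tom describes would look like the sketch below. LockRel(), UpgradeLock(), and copy_live_tuples() are hypothetical placeholders rather than backend functions, and the deadlock hazard he points out lives entirely in the upgrade step:

#include <stdio.h>

/* Hypothetical placeholders, not real backend calls. */
extern void LockRel(const char *rel, int mode);
extern int  UpgradeLock(const char *rel, int mode);
extern void copy_live_tuples(const char *rel, const char *tmpfile);

#define ROW_EXCLUSIVE    1      /* blocks writers, allows readers */
#define ACCESS_EXCLUSIVE 2      /* blocks everyone */

static int
vacuum_rewrite(const char *rel)
{
    LockRel(rel, ROW_EXCLUSIVE);            /* readers keep running */
    copy_live_tuples(rel, "pg_vactmp");     /* write the compacted copy */

    /*
     * The hard part: trading a shared-compatible lock for an exclusive
     * one.  Two backends attempting this concurrently can deadlock, so
     * a failed upgrade must be survivable.
     */
    if (!UpgradeLock(rel, ACCESS_EXCLUSIVE))
        return -1;                          /* give up; retry later */

    rename("pg_vactmp", rel);               /* swap the files into place */
    return 0;
}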
{
"msg_contents": "On Fri, 21 Jan 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> >> lock table for less duration, or read lock\n> \n> > if there is some way that we can work around the bug that I believe Tom\n> > found with removing the lock altogether (ie. makig use of MVCC), I think\n> > that would be the best option ... if not possible, at least get things\n> > down to a table lock vs the whole database?\n> \n> Huh? VACUUM only requires an exclusive lock on the table it is\n> currently vacuuming; there's no database-wide lock.\n> \n> Even a single-table exclusive lock is bad, of course, if it's a large\n> table that's critical to a 24x7 application. Bruce was talking about\n> the possibility of having VACUUM get just a write lock on the table;\n> other backends could still read it, but not write it, during the vacuum\n> process. That'd be a considerable step forward for 24x7 applications,\n> I think.\n> \n> It looks like that could be done if we rewrote the table as a new file\n> (instead of compacting-in-place), but there's a problem when it comes\n> time to rename the new files into place. At that point you'd need to\n> get an exclusive lock to ensure all the readers are out of the table too\n> --- and upgrading from a plain lock to an exclusive lock is a well-known\n> recipe for deadlocks. Not sure if this can be solved.\n\nWhat would it take to re-use space vs compacting/truncating the file? \n\nRight now, ppl vacuum the database to clear out old, deleted records, and\ntruncate the tables ... if we were to change things so that an\ninsert/update were to find the next largest contiguous free block in the\ntable and re-used it, then, theoretically, you would eventually hit a\nfixed table size assuming no new inserts, and only updates/deletes, right?\n\nEventually, you'd have \"holes\" in the table, where an inserted record was\nsmaller then the \"next largest contiguous free block\", but what's left\nover is too small for any further additions ... but I would think that\nthat would greatly reduce how often you'd have to do a vacuum, and, if we\nsplit out ANALYZE, you could use that to update statistics ...\n\nTo speed up the search for the \"next largest contiguous free block\", a\nspecial table.FAT could be used similar to an index?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 21 Jan 2000 20:11:27 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: vacuum timings "
},
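A minimal version of Marc's 'table.FAT' idea is just a per-relation map from page number to free bytes, consulted on insert; an illustrative sketch (this one is best-fit, i.e. the smallest hole that still fits, which is one of several reasonable policies):

typedef struct
{
    unsigned    blkno;          /* page number within the relation */
    unsigned    freebytes;      /* unused space on that page */
} FsmEntry;

/*
 * Pick a page for a new tuple of size 'needed'.  Returns -1 when no
 * page qualifies and the relation must be extended instead.
 */
static int
fsm_find(const FsmEntry *fsm, int nentries, unsigned needed)
{
    int         i;
    int         best = -1;
    unsigned    bestfree = ~0u;

    for (i = 0; i < nentries; i++)
        if (fsm[i].freebytes >= needed && fsm[i].freebytes < bestfree)
        {
            bestfree = fsm[i].freebytes;
            best = (int) fsm[i].blkno;
        }
    return best;
}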
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Conclusions:\n> \to drop/recreate index is slower than vacuum of indexes\n\nBTW, I did some profiling of CREATE INDEX this evening (quite\nunintentionally actually; I was interested in COPY IN, but the pg_dump\nscript I used as driver happened to create some indexes too). I was\nstartled to discover that 60% of the runtime of CREATE INDEX is spent in\n_bt_invokestrat (which is called from tuplesort.c's comparetup_index,\nand exists only to figure out which specific comparison routine to call).\nOf this, a whopping 4% was spent in the useful subroutine, int4gt. All\nthe rest went into lookup and validation checks that by rights should be\ndone once per index creation, not once per comparison.\n\nIn short: a fairly straightforward bit of optimization will eliminate\ncirca 50% of the CPU time consumed by CREATE INDEX. All we need is to\nfigure out where to cache the lookup results. The optimization would\nimprove insertions and lookups in indexes, as well, if we can cache\nthe lookup results in those scenarios.\n\nThis was for a table small enough that tuplesort.c could do the sort\nentirely in memory, so I'm sure the gains would be smaller for a large\ntable that requires a disk-based sort. Still, it seems worth looking\ninto...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jan 2000 23:50:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum timings "
},
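The shape of the fix is the classic hoisting trick: resolve the comparison function once when the sort starts, and have the per-tuple comparator call through a cached pointer. Schematically (the names here are invented for illustration; this is not the actual tuplesort code):

typedef int (*CmpFunc) (const void *a, const void *b);

typedef struct
{
    CmpFunc     cmp;            /* result of the catalog lookup, cached */
} SortSupport;

/* At sort initialization: pay the lookup and validation exactly once. */
static void
sort_setup(SortSupport *ssup, CmpFunc resolved)
{
    ssup->cmp = resolved;       /* e.g. a comparator wrapping int4gt */
}

/* Per comparison: a direct call, no strategy lookup, no validation. */
static int
comparetup(const SortSupport *ssup, const void *a, const void *b)
{
    return ssup->cmp(a, b);
}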
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Conclusions:\n> > \to drop/recreate index is slower than vacuum of indexes\n> \n> BTW, I did some profiling of CREATE INDEX this evening (quite\n> unintentionally actually; I was interested in COPY IN, but the pg_dump\n> script I used as driver happened to create some indexes too). I was\n> startled to discover that 60% of the runtime of CREATE INDEX is spent in\n> _bt_invokestrat (which is called from tuplesort.c's comparetup_index,\n> and exists only to figure out which specific comparison routine to call).\n> Of this, a whopping 4% was spent in the useful subroutine, int4gt. All\n> the rest went into lookup and validation checks that by rights should be\n> done once per index creation, not once per comparison.\n\nGood job, Tom. Clearly a huge win.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 22 Jan 2000 00:17:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuum timings"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> Bruce Momjian <[email protected]> writes:\n> > Conclusions:\n> > \to indexes never get smaller\n> \n> Which we knew...\n> \n> > \to drop/recreate index is slower than vacuum of indexes\n> \n> Quite a few people have reported finding the opposite in practice.\n> You should probably try vacuuming after deleting or updating some\n> fraction of the rows, rather than just the all or none cases.\n>\n\nVacuum after delelting all rows isn't a worst case.\nThere's no moving in that case and vacuum doesn't need to call\nindex_insert() corresponding to the moving of heap tuples.\n\nVacuum after deleting half of rows may be one of the worst case.\nIn this case,index_delete() is called as many times as 'delete all'\ncase and expensive index_insert() is called for moved_in tuples.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Sat, 22 Jan 2000 17:15:37 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: vacuum timings "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Vacuum after deleting half of rows may be one of the worst case.\n\nOr equivalently, vacuum after updating all the rows.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 22 Jan 2000 11:11:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: vacuum timings "
},
{
"msg_contents": "> > Quite a few people have reported finding the opposite in practice.\n> > You should probably try vacuuming after deleting or updating some\n> > fraction of the rows, rather than just the all or none cases.\n> >\n> \n> Vacuum after delelting all rows isn't a worst case.\n> There's no moving in that case and vacuum doesn't need to call\n> index_insert() corresponding to the moving of heap tuples.\n> \n> Vacuum after deleting half of rows may be one of the worst case.\n> In this case,index_delete() is called as many times as 'delete all'\n> case and expensive index_insert() is called for moved_in tuples.\n\nI will test that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 22 Jan 2000 12:33:29 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: vacuum timings"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > > Quite a few people have reported finding the opposite in practice.\n> > > You should probably try vacuuming after deleting or updating some\n> > > fraction of the rows, rather than just the all or none cases.\n> > >\n> >\n> > Vacuum after delelting all rows isn't a worst case.\n> > There's no moving in that case and vacuum doesn't need to call\n> > index_insert() corresponding to the moving of heap tuples.\n> >\n> > Vacuum after deleting half of rows may be one of the worst case.\n> > In this case,index_delete() is called as many times as 'delete all'\n> > case and expensive index_insert() is called for moved_in tuples.\n>\n> I will test that.\n>\n\nI tried my test case in less scale than Bruce.\n\nCREATE TABLE t (id int4, dt int4);\nfor (i=0; i < 2500000; i++)\n insert into t values ( i, (i * 1009) % 2500000);\ndelete from t where id < 1250000;\n\n1) vacuum after create index on t(id) 405sec\n2) vacuum after create index on t(dt) > 3600sec\n I gave up to continue execution.\n3) vacuum and create index on t(id) and t(dt)\n 90sec + 114sec + 143sec = 347sec.\n\nSeems random index insert is painful for vacuum.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Tue, 25 Jan 2000 18:53:57 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: vacuum timings"
}
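The insert loop in Hiroshi's message is pseudocode; below is a sketch of an equivalent test driver using libpq. Error handling is trimmed, and the single wrapping transaction is an assumption to keep the load time tolerable.

```c
/* Sketch of an equivalent libpq test driver for Hiroshi's case. */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=test");	/* placeholder params */
	char		query[128];
	long long	i;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		return 1;
	}

	PQclear(PQexec(conn, "CREATE TABLE t (id int4, dt int4)"));

	/* one big transaction: per-row transactions would take far longer */
	PQclear(PQexec(conn, "BEGIN"));
	for (i = 0; i < 2500000; i++)
	{
		sprintf(query, "INSERT INTO t VALUES (%lld, %lld)",
				i, (i * 1009) % 2500000);
		PQclear(PQexec(conn, query));
	}
	PQclear(PQexec(conn, "COMMIT"));

	PQclear(PQexec(conn, "DELETE FROM t WHERE id < 1250000"));

	PQfinish(conn);
	return 0;
}
```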
] |
[
{
"msg_contents": "Please help!\n\nThe PSQLODBC aborts a transaction with a strange error while execute a\nlegal query.\nThe message is:\n\n\"Could not begin a transaction; unexpected protocol character from\nbackend (sen_query) (#1)\"\n\n\nI tried the same query with psql client and it works with no problems.\n\nDoes anyone know what this message means ?\n\nMy configuration:\n\nData base server: PostgreSQL v6.5.2\nOS server: Linux 2.0.37 (Debian)\nWin Client: M$_Access95\nPsqlODBC v6.40.0006\nlog file attached.\n\nAny help would be very apreciated.\n\nJos�\n\n\n",
"msg_date": "Fri, 21 Jan 2000 09:56:21 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "ODBC drive strange behavior"
},
{
"msg_contents": "Sorry I forgot to send the attachement :)\n\n\nJose Soares wrote:\n\n> Please help!\n>\n> The PSQLODBC aborts a transaction with a strange error while execute a\n> legal query.\n> The message is:\n>\n> \"Could not begin a transaction; unexpected protocol character from\n> backend (sen_query) (#1)\"\n>\n> I tried the same query with psql client and it works with no problems.\n>\n> Does anyone know what this message means ?\n>\n> My configuration:\n>\n> Data base server: PostgreSQL v6.5.2\n> OS server: Linux 2.0.37 (Debian)\n> Win Client: M$_Access95\n> PsqlODBC v6.40.0006\n> log file attached.\n>\n> Any help would be very apreciated.\n>\n> Jos�",
"msg_date": "Fri, 21 Jan 2000 09:58:43 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ODBC drive strange behavior"
},
{
"msg_contents": "\n\nJose Soares wrote:\n> \n> Sorry I forgot to send the attachement :)\n> \n> Jose Soares wrote:\n> \n> > Please help!\n> >\n> > The PSQLODBC aborts a transaction with a strange error while execute a\n> > legal query.\n> > The message is:\n> >\n> > \"Could not begin a transaction; unexpected protocol character from\n> > backend (sen_query) (#1)\"\n> >\n> > I tried the same query with psql client and it works with no problems.\n> >\n> > Does anyone know what this message means ?\n> >\n> > My configuration:\n> >\n> > Data base server: PostgreSQL v6.5.2\n> > OS server: Linux 2.0.37 (Debian)\n> > Win Client: M$_Access95\n> > PsqlODBC v6.40.0006\n> > log file attached.\n> >\n> > Any help would be very apreciated.\n> >\n> > Jos�\n> \n> ------------------------------------------------------------------------\n> Name: LOG_ERROR.log\n> LOG_ERROR.log Type: Text Document (application/x-unknown-content-type-txtfile)\n> Encoding: base64\n\n\nThe error means the driver didn't receive the expected response\ncharacter from the backend. For queries, the expected response would be\nsomething like:\n\n'T': results are coming (this one is the most likely expected)\n'C': no tuples produced\n'Z': ready for new query (in >= postgres 6.4 only).\n'I': empty query produces this response\n'N': notice\n'E': error\n\n\nIn your case, the query begins: (SELECT \"figure\".\"azienda\" \n\nIt might be the extra parenthesis around the query ? Try removing\nthem. If that's not it, try making the query really short, just as an\nexperiment. Also, using the wrong protocol with the backend can make\nthis happen.\n\nByron\n",
"msg_date": "Sat, 22 Jan 2000 11:40:46 -0500",
"msg_from": "Byron Nikolaidis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: ODBC drive strange behavior"
},
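Byron's list boils down to reading one tag byte from the backend socket and branching on it. A simplified sketch of that dispatch follows; this is hypothetical code, not the actual psqlodbc source.

```c
/*
 * Simplified sketch of the response dispatch Byron describes.
 * Any byte outside the known set yields the "unexpected protocol
 * character from backend" error the driver reports.
 */
#include <sys/types.h>
#include <sys/socket.h>

int
handle_backend_response(int sock)
{
	char		tag;

	if (recv(sock, &tag, 1, 0) != 1)
		return -1;				/* connection trouble */

	switch (tag)
	{
		case 'T':				/* row description: results are coming */
		case 'C':				/* command complete, no tuples produced */
		case 'Z':				/* ready for query (protocol >= 6.4) */
		case 'I':				/* empty query */
		case 'N':				/* notice */
		case 'E':				/* error */
			return tag;			/* caller handles each case */
		default:
			return -1;			/* "unexpected protocol character" */
	}
}
```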
{
"msg_contents": "Le ven, 21 jan 2000, vous avez �crit :\n> >%_Sorry I forgot to send the attachement :)\n> \n> \n> Jose Soares wrote:\n\n> > Data base server: PostgreSQL v6.5.2\n> > OS server: Linux 2.0.37 (Debian)\n> > Win Client: M$_Access95\n> > PsqlODBC v6.40.0006\n> > log file attached.\n> >\n> > Any help would be very apreciated.\n> >\n> > Jos�\n> \nIn my case using M$_Access97 without problem.\na)Try to upgrade to PsqlODBC v6.40.0007\n\nb)Try to tune your odbc DSN with the help of PsqlODBC FAQ\n\nc)Try to upgrade MDAC (odbc32 ==> About) in control panel\nRegards.\nEmmanuel\n",
"msg_date": "Mon, 24 Jan 2000 15:01:30 +0100",
"msg_from": "Compte utilisateur Sultan-advl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: ODBC drive strange behavior"
},
{
"msg_contents": "Byron Nikolaidis wrote:\n\n> Jose Soares wrote:\n> >\n> > Sorry I forgot to send the attachement :)\n> >\n> > Jose Soares wrote:\n> >\n> > > Please help!\n> > >\n> > > The PSQLODBC aborts a transaction with a strange error while execute a\n> > > legal query.\n> > > The message is:\n> > >\n> > > \"Could not begin a transaction; unexpected protocol character from\n> > > backend (sen_query) (#1)\"\n> > >\n> > > I tried the same query with psql client and it works with no problems.\n> > >\n> > > Does anyone know what this message means ?\n> > >\n> > > My configuration:\n> > >\n> > > Data base server: PostgreSQL v6.5.2\n> > > OS server: Linux 2.0.37 (Debian)\n> > > Win Client: M$_Access95\n> > > PsqlODBC v6.40.0006\n> > > log file attached.\n> > >\n> > > Any help would be very apreciated.\n> > >\n> > > Jos�\n> >\n> > ------------------------------------------------------------------------\n> > Name: LOG_ERROR.log\n> > LOG_ERROR.log Type: Text Document (application/x-unknown-content-type-txtfile)\n> > Encoding: base64\n>\n> The error means the driver didn't receive the expected response\n> character from the backend. For queries, the expected response would be\n> something like:\n>\n> 'T': results are coming (this one is the most likely expected)\n> 'C': no tuples produced\n> 'Z': ready for new query (in >= postgres 6.4 only).\n> 'I': empty query produces this response\n> 'N': notice\n> 'E': error\n>\n> In your case, the query begins: (SELECT \"figure\".\"azienda\"\n>\n> It might be the extra parenthesis around the query ? Try removing\n> them. If that's not it, try making the query really short, just as an\n> experiment. Also, using the wrong protocol with the backend can make\n> this happen.\n>\n> Byron\n\n* About parenthesis around select this is the way M$-access translates\nqueries with UNIONs and I can't do nothing to change this behavior.\nThanks to developers, PostgreSQL now recognizes this syntax, in fact if I execute this\nquery on psql\nit works.\n\n* Just for an experiment I did the following change:\n\n* The original query in M$-access was:\n(\nSELECT figure.azienda ,figure.codice_figura ,utenti.ragione_sociale ,utenti.istat\n,utenti.cap,\n utenti.indirizzo ,utenti.civico ,figure.tipo ,utenti.codice_fiscale\n,utenti.partita_iva ,\n figure.fine_attivita ,figure.data_esportazione ,figure.data_aggiornamento\n,utenti.distretto,\n figure.gruppo ,figure.data_esportazione ,utenti.data_esportazione\nFROM figure INNER JOIN utenti ON (figure.codice_figura = utenti.azienda)\nWHERE (figure.tipo IN ('D' ,'DB' ,'DO' ,'DS' ) )\n )\nUNION ALL\n(\nSELECT figure.azienda ,figure.codice_figura ,utenti.ragione_sociale ,utenti.istat ,\n utenti.cap ,utenti.indirizzo,utenti.civico,figure.tipo,utenti.codice_fiscale,\n utenti.partita_iva ,figure.fine_attivita ,figure.data_esportazione,\n figure.data_aggiornamento,utenti.distretto,figure.gruppo,figure.data_esportazione,\n utenti.data_esportazione\nFROM figure INNER JOIN utenti ON (figure.codice_figura = utenti.azienda)\nWHERE (figure.tipo IN ('P' ,'PB' ,'PO' ,'PS' ,'A' ) )\n)\n\nand it was translated to PostgreSQL as:\n\n(SELECT \"figure\".\"azienda\" ,\"figure\".\"codice_figura\" ,\"utenti\".\"ragione_sociale\"\n,\"utenti\".\"istat\" ,\"utenti\".\"cap\" ,\"utenti\".\"indirizzo\" ,\"utenti\".\"civico\"\n,\"figure\".\"tipo\" ,\"utenti\".\"codice_fiscale\" ,\"utenti\".\"partita_iva\"\n,\"figure\".\"fine_attivita\" ,\"figure\".\"data_esportazione\" ,\"figure\".\"data_aggiornamento\"\n,\"utenti\".\"distretto\" ,\"figure\".\"gruppo\" 
,\"figure\".\"data_esportazione\"\n,\"utenti\".\"data_esportazione\"\nFROM \"figure\",\"utenti\"\nWHERE ((\"figure\".\"tipo\" IN ('D' ,'DB' ,'DO' ,'DS' ) )\nAND (\"figure\".\"codice_figura\" = \"utenti\".\"azienda\" ) )\n)\nUNION ALL\n(\nSELECT \"figure\".\"azienda\" ,\"figure\".\"codice_figura\" ,\"utenti\".\"ragione_sociale\"\n,\"utenti\".\"istat\" ,\"utenti\".\"cap\" ,\"utenti\".\"indirizzo\" ,\"utenti\".\"civico\"\n,\"figure\".\"tipo\" ,\"utenti\".\"codice_fiscale\" ,\"utenti\".\"partita_iva\"\n,\"figure\".\"fine_attivita\" ,\"figure\".\"data_esportazione\" ,\"figure\".\"data_aggiornamento\"\n,\"utenti\".\"distretto\" ,\"figure\".\"gruppo\" ,\"figure\".\"data_esportazione\"\n,\"utenti\".\"data_esportazione\"\nFROM \"figure\",\"utenti\"\nWHERE ((\"figure\".\"tipo\" IN ('P' ,'PB' ,'PO' ,'PS' ,'A' ) )\nAND (\"figure\".\"codice_figura\" = \"utenti\".\"azienda\" ) )\n)\n\nI replaced the keyword INNER with LEFT and now it works but I can't realize why.\n\nAny ideas Byron ?\n\nThanks,\nJos�\n\n\n\nByron Nikolaidis wrote:\nJose Soares wrote:\n>\n> Sorry I forgot to send the attachement :)\n>\n> Jose Soares wrote:\n>\n> > Please help!\n> >\n> > The PSQLODBC aborts a transaction with a strange error while execute\na\n> > legal query.\n> > The message is:\n> >\n> > \"Could not begin a transaction; unexpected protocol character from\n> > backend (sen_query) (#1)\"\n> >\n> > I tried the same query with psql client and it works with no problems.\n> >\n> > Does anyone know what this message means ?\n> >\n> > My configuration:\n> >\n> > Data base server: PostgreSQL v6.5.2\n> > OS server: Linux 2.0.37 (Debian)\n> > Win Client: M$_Access95\n> > PsqlODBC v6.40.0006\n> > log file attached.\n> >\n> > Any help would be very apreciated.\n> >\n> > José\n>\n> ------------------------------------------------------------------------\n> \nName: LOG_ERROR.log\n> LOG_ERROR.log Type: Text Document\n(application/x-unknown-content-type-txtfile)\n> \nEncoding: base64\nThe error means the driver didn't receive the expected response\ncharacter from the backend. For queries, the expected response\nwould be\nsomething like:\n'T': results are coming (this one is the most likely expected)\n'C': no tuples produced\n'Z': ready for new query (in >= postgres 6.4 only).\n'I': empty query produces this response\n'N': notice\n'E': error\nIn your case, the query begins: (SELECT \"figure\".\"azienda\"\nIt might be the extra parenthesis around the query ? Try removing\nthem. If that's not it, try making the query really short, just\nas an\nexperiment. 
Also, using the wrong protocol with the backend can\nmake\nthis happen.\nByron\n* About parenthesis around select this is the way M$-access translates\nqueries with UNIONs and I can't do nothing to change this behavior.\nThanks to developers, PostgreSQL now recognizes this syntax, in\nfact if I execute this query on psql\nit works.\n* Just for an experiment I did the following change:\n* The original query in M$-access was:\n(\nSELECT figure.azienda ,figure.codice_figura ,utenti.ragione_sociale\n,utenti.istat ,utenti.cap,\n utenti.indirizzo ,utenti.civico ,figure.tipo ,utenti.codice_fiscale\n,utenti.partita_iva ,\n figure.fine_attivita ,figure.data_esportazione ,figure.data_aggiornamento\n,utenti.distretto,\n figure.gruppo ,figure.data_esportazione ,utenti.data_esportazione\nFROM figure INNER JOIN utenti ON (figure.codice_figura = utenti.azienda)\nWHERE (figure.tipo IN ('D' ,'DB' ,'DO' ,'DS' ) )\n )\nUNION ALL\n(\nSELECT figure.azienda ,figure.codice_figura ,utenti.ragione_sociale\n,utenti.istat ,\n utenti.cap ,utenti.indirizzo,utenti.civico,figure.tipo,utenti.codice_fiscale,\n utenti.partita_iva ,figure.fine_attivita ,figure.data_esportazione,\n figure.data_aggiornamento,utenti.distretto,figure.gruppo,figure.data_esportazione,\n utenti.data_esportazione\nFROM figure INNER JOIN utenti ON (figure.codice_figura = utenti.azienda)\nWHERE (figure.tipo IN ('P' ,'PB' ,'PO' ,'PS' ,'A' ) )\n)\nand it was translated to PostgreSQL as:\n(SELECT \"figure\".\"azienda\" ,\"figure\".\"codice_figura\" ,\"utenti\".\"ragione_sociale\"\n,\"utenti\".\"istat\" ,\"utenti\".\"cap\" ,\"utenti\".\"indirizzo\" ,\"utenti\".\"civico\"\n,\"figure\".\"tipo\" ,\"utenti\".\"codice_fiscale\" ,\"utenti\".\"partita_iva\" ,\"figure\".\"fine_attivita\"\n,\"figure\".\"data_esportazione\" ,\"figure\".\"data_aggiornamento\" ,\"utenti\".\"distretto\"\n,\"figure\".\"gruppo\" ,\"figure\".\"data_esportazione\" ,\"utenti\".\"data_esportazione\"\nFROM \"figure\",\"utenti\"\nWHERE ((\"figure\".\"tipo\" IN ('D' ,'DB' ,'DO' ,'DS' ) )\nAND (\"figure\".\"codice_figura\" = \"utenti\".\"azienda\" ) )\n)\nUNION ALL\n(\nSELECT \"figure\".\"azienda\" ,\"figure\".\"codice_figura\" ,\"utenti\".\"ragione_sociale\"\n,\"utenti\".\"istat\" ,\"utenti\".\"cap\" ,\"utenti\".\"indirizzo\" ,\"utenti\".\"civico\"\n,\"figure\".\"tipo\" ,\"utenti\".\"codice_fiscale\" ,\"utenti\".\"partita_iva\" ,\"figure\".\"fine_attivita\"\n,\"figure\".\"data_esportazione\" ,\"figure\".\"data_aggiornamento\" ,\"utenti\".\"distretto\"\n,\"figure\".\"gruppo\" ,\"figure\".\"data_esportazione\" ,\"utenti\".\"data_esportazione\"\nFROM \"figure\",\"utenti\"\nWHERE ((\"figure\".\"tipo\" IN ('P' ,'PB' ,'PO' ,'PS' ,'A' ) )\nAND (\"figure\".\"codice_figura\" = \"utenti\".\"azienda\" ) )\n)\nI replaced the keyword INNER with LEFT and now it works but I can't\nrealize why.\nAny ideas Byron ?\nThanks,\nJosé",
"msg_date": "Mon, 24 Jan 2000 16:26:44 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: ODBC drive strange behavior"
},
{
"msg_contents": "\n\nCompte utilisateur Sultan-advl wrote:\n\n> Le ven, 21 jan 2000, vous avez �crit :\n> > >%_Sorry I forgot to send the attachement :)\n> >\n> >\n> > Jose Soares wrote:\n>\n> > > Data base server: PostgreSQL v6.5.2\n> > > OS server: Linux 2.0.37 (Debian)\n> > > Win Client: M$_Access95\n> > > PsqlODBC v6.40.0006\n> > > log file attached.\n> > >\n> > > Any help would be very apreciated.\n> > >\n> > > Jos�\n> >\n> In my case using M$_Access97 without problem.\n> a)Try to upgrade to PsqlODBC v6.40.0007\n\nIs there a new release? I can't find it at:\nhttp://www.insightdist.com/psqlodbc/\nIs there another site for psqlODBC ?\n\n>\n>\n> b)Try to tune your odbc DSN with the help of PsqlODBC FAQ\n>\n> c)Try to upgrade MDAC (odbc32 ==> About) in control panel\n> Regards.\n> Emmanuel\n\nJos�\n\n\n",
"msg_date": "Tue, 25 Jan 2000 16:18:25 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: ODBC drive strange behavior"
},
{
"msg_contents": "At 16:18 01/25/2000 +0100, Jose Soares wrote:\n>\n>\n>Compte utilisateur Sultan-advl wrote:\n>\n>> Le ven, 21 jan 2000, vous avez �crit :\n>> > >%_Sorry I forgot to send the attachement :)\n>> >\n>> >\n>> > Jose Soares wrote:\n>>\n>> > > Data base server: PostgreSQL v6.5.2\n>> > > OS server: Linux 2.0.37 (Debian)\n>> > > Win Client: M$_Access95\n>> > > PsqlODBC v6.40.0006\n>> > > log file attached.\n>> > >\n>> > > Any help would be very apreciated.\n>> > >\n>> > > Jos�\n>> >\n>> In my case using M$_Access97 without problem.\n>> a)Try to upgrade to PsqlODBC v6.40.0007\n>\n>Is there a new release? I can't find it at:\n>http://www.insightdist.com/psqlodbc/\n>Is there another site for psqlODBC ?\n\n\nAn old message from Byron:\n\nHi,\n\nI've posted a new odbc driver at the following locations:\n\nftp://ftp.postgresql.org/pub/odbc/\n\nAND\n\nhttp://members.home.net/byron.nikolaidis (my personal homepage). Note\nI have disabled downloading the full install version because of\nbandwidth limitations on my ISP, but the source and dll and everything\nelse is there.\n\nI'm not sure if there is a link from the postgres web site to the driver\nyet.\n\nIf there are any problems, with the driver or the downloading or html\nstuff, please let me know.\n\nByron\nversion.txt date: 09/02/99 version: 06.40.0007\n----------------------------------------------------------------------\nThis file describes changes and or fixes since the last update.\n\nNEW FEATURES:\n\nnone\n\nBUG_FIXES:\n\n1. Fix all info functions to check whether the table is a view.\n\n2. Fix for Access 2000 (unconfirmed)\n\n3. Change most statement errors to General Error S1000 instead of 08S01\n (communication link failure).\n\n",
"msg_date": "Tue, 25 Jan 2000 07:55:05 -0800",
"msg_from": "\"Ken J. Wright\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: ODBC drive strange behavior"
},
{
"msg_contents": "\n\nJose Soares wrote:\n> Is there a new release? I can't find it at:\n> http://www.insightdist.com/psqlodbc/\n> Is there another site for psqlODBC ?\n> \n\n\nYes, I have version 6.40.0007 on my own website. I'm not sure if it is\navailable elsewhere. The address is\nhttp://members.home.net/byron.nikolaidis\n\nUnfortunately, it does not include some recent patches that other people\nhave applied. I know of at least one with large objects.\n\nI could put these in I suppose. Anybody know where these patches are? \nPerhaps the author?\n\nByron\n",
"msg_date": "Tue, 25 Jan 2000 15:14:49 -0500",
"msg_from": "Byron Nikolaidis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: ODBC drive strange behavior"
},
{
"msg_contents": "Byron Nikolaidis wrote:\n> Jose Soares wrote:\n> > Is there a new release? I can't find it at:\n> > http://www.insightdist.com/psqlodbc/\n> > Is there another site for psqlODBC ?\n> \n> Yes, I have version 6.40.0007 on my own website. I'm not sure if it is\n> available elsewhere. The address is\n> http://members.home.net/byron.nikolaidis\n> \n> Unfortunately, it does not include some recent patches that other people\n> have applied. I know of at least one with large objects.\n> \n> I could put these in I suppose. Anybody know where these patches are? \n> Perhaps the author?\n\nI posted an large object patch.\nI will send this patch to you.\n\n=====\nHiroki Kataoka\n\n",
"msg_date": "Thu, 27 Jan 2000 01:06:02 +0900",
"msg_from": "\"Hiroki Kataoka\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ODBC drive strange behavior"
}
] |
[
{
"msg_contents": "If anyone cares, here is what I had to do to build the postgresql\ndocumentation on a Debian system.\n\n1. install jade, docbook, and unzip\n\tapt-get install jade\n\tapt-get install docbook\n\tapt-get install unzip\n\n2. grab the DSSSL stuff from http://www.nwalsh.com/docbook/dsssl\n\t and unzip it somewhere (I chose /usr/share).\n\n3. edit Makefile.custom to add appropriate HSTYLE and PSTYLE\n\tdefinitions\n\tHSTYLE= /usr/share/docbook/html\n\tPSTYLE= /usr/share/docbook/print\n\nAnd thats it.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Fri, 21 Jan 2000 08:31:30 -0500",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Building Documentation under Debian"
},
{
"msg_contents": "\"Mark Hollomon\" wrote:\n >If anyone cares, here is what I had to do to build the postgresql\n >documentation on a Debian system.\n \n...\n\nThomas, don't you do some hand-tweaking after the standard build?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"For we wrestle not against flesh and blood, but \n against principalities, against powers, against the \n rulers of the darkness of this world, against \n spiritual wickedness in high places. Wherefore take \n unto you the whole armour of God, that ye may be able \n to withstand in the evil day, and having done all, to \n stand.\" Ephesians 6:12,13 \n\n\n",
"msg_date": "Fri, 21 Jan 2000 13:51:24 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Building Documentation under Debian "
},
{
"msg_contents": "Mark Hollomon wrote:\n> \n> If anyone cares, here is what I had to do to build the postgresql\n> documentation on a Debian system.\n\nThanks. I've added a section on Debian to the \"Doc Guide\" appendix.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 21 Jan 2000 15:06:40 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Building Documentation under Debian"
},
{
"msg_contents": "> >If anyone cares, here is what I had to do to build the postgresql\n> >documentation on a Debian system.\n> Thomas, don't you do some hand-tweaking after the standard build?\n\nFor hardcopy, yes, to spiff up the page breaks and get the page\nnumbering right. But nothing gets tweaked for html...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 21 Jan 2000 15:16:39 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Building Documentation under Debian"
},
{
"msg_contents": "> > > 2. grab the DSSSL stuff from http://www.nwalsh.com/docbook/dsssl\n> > > and unzip it somewhere (I chose /usr/share).\n> > FWIW, this stuff is available in a package as as docbook-stylesheets.\n> Ah - that's what they named it. I looked for a package called dsssl.\n> Where does the package install the files /usr/share, /usr/lib,\n> something else?\n> Thomas, change step 2 to:\n> 2. install the docbook-stylesheets package\n> apt-get install docbook-stylesheets\n> drop the unzip package from step 1.\n\nActually, unless there is an objection I'll add the\ndocbook-stylesheets package but will retain mention of the Norm Walsh\nzip files since these are usually a bit more up to date than those in\na packaged distro. I use them on my Linux box even though there are\nrpms available...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 22 Jan 2000 02:49:20 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Building Documentation under Debian"
},
{
"msg_contents": "On 2000-01-22, Thomas Lockhart mentioned:\n\n> > > > 2. grab the DSSSL stuff from http://www.nwalsh.com/docbook/dsssl\n> > > > and unzip it somewhere (I chose /usr/share).\n> > > FWIW, this stuff is available in a package as as docbook-stylesheets.\n> > Ah - that's what they named it. I looked for a package called dsssl.\n> > Where does the package install the files /usr/share, /usr/lib,\n> > something else?\n> > Thomas, change step 2 to:\n> > 2. install the docbook-stylesheets package\n> > apt-get install docbook-stylesheets\n> > drop the unzip package from step 1.\n> \n> Actually, unless there is an objection I'll add the\n> docbook-stylesheets package but will retain mention of the Norm Walsh\n> zip files since these are usually a bit more up to date than those in\n> a packaged distro. I use them on my Linux box even though there are\n> rpms available...\n\nI think a lot of people found sgmltools to work for them (and perhaps a\nlot more would, if they knew about it). That way you don't have to get\nfive different things and tune them to each other.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sun, 23 Jan 2000 02:29:16 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Re: [HACKERS] Building Documentation under Debian"
},
{
"msg_contents": "> I think a lot of people found sgmltools to work for them (and perhaps a\n> lot more would, if they knew about it). That way you don't have to get\n> five different things and tune them to each other.\n\nCurrently, sgmltools is in an essentially unsupported state. Cees De\nGroot has tried to enlist additional developers, to no avail, and has\nresigned. \n\nBut if someone has a procedure to install sgmltools and can give the\nmods for the makefiles we already have I'll be happy to put it in the\ndocs.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 24 Jan 2000 15:15:06 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Re: [HACKERS] Building Documentation under Debian"
}
] |
[
{
"msg_contents": "\nJust curious, but shouldn't something like this be possible? Or is my\nsyntax just off?\n\nupdate dict d, dictionary m \n set d.word = m.word_val \n where d.word = m.word;\n\nI want to go through one table and replace all occurances of every word in\nthe second table with a numeric value ...\n\nI've tried it as:\n\nupdate dict, dictionary\n set dict.word = dictionary.word_val\n where dict.word = dictionary.word;\n\nand that doesn't work either ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 21 Jan 2000 09:46:03 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Playing with UPDATEs ..."
},
{
"msg_contents": "update dict\n set word = m.word_val\n from dictionary m\n where dict.word = m.word;\n\n> I want to go through one table and replace all occurances of every word in\n> the second table with a numeric value ...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 21 Jan 2000 15:12:48 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Playing with UPDATEs ..."
}
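For completeness, the same UPDATE ... FROM statement can be issued from C with libpq; PQcmdTuples() then reports how many rows were rewritten. A sketch (the connection string is a placeholder):

```c
/* Sketch: run the UPDATE ... FROM via libpq and report rows touched. */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=test");	/* placeholder */
	PGresult   *res;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connect: %s", PQerrorMessage(conn));
		return 1;
	}

	res = PQexec(conn,
				 "UPDATE dict SET word = m.word_val "
				 "FROM dictionary m WHERE dict.word = m.word");

	if (PQresultStatus(res) == PGRES_COMMAND_OK)
		printf("updated %s rows\n", PQcmdTuples(res));
	else
		fprintf(stderr, "update failed: %s", PQerrorMessage(conn));

	PQclear(res);
	PQfinish(conn);
	return 0;
}
```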
] |
[
{
"msg_contents": "libpq gives back the internal typenumbers of the attributes. How do I know\nwhich number means which type? I need to find out if the type is an array.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Fri, 21 Jan 2000 15:11:32 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Types"
},
{
"msg_contents": "Michael Meskes wrote:\n >libpq gives back the internal typenumbers of the attributes. How do I know\n >which number means which type? I need to find out if the type is an array.\n\nIf the type is 1007, then:\n\ntemplate1=> select typname from pg_type where oid = 1007;\ntypname\n-------\n_int4 \n(1 row)\n\nIf the typename begins with an underscore, it is an array type.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"He that giveth unto the poor shall not lack...\" \n Proverbs 28:27 \n\n\n",
"msg_date": "Sat, 22 Jan 2000 10:56:19 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Types "
},
{
"msg_contents": "On Sat, Jan 22, 2000 at 10:56:19AM +0000, Oliver Elphick wrote:\n> If the type is 1007, then:\n> \n> template1=> select typname from pg_type where oid = 1007;\n> typname\n> -------\n> _int4 \n> (1 row)\n> \n> If the typename begins with an underscore, it is an array type.\n\nThanks.\n\nMichael\n\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Sat, 22 Jan 2000 14:53:09 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Types"
},
{
"msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> Michael Meskes wrote:\n>> libpq gives back the internal typenumbers of the attributes. How do I know\n>> which number means which type? I need to find out if the type is an array.\n\n> If the type is 1007, then:\n\n> template1=> select typname from pg_type where oid = 1007;\n> typname\n> -------\n> _int4 \n> (1 row)\n\nRight...\n\n> If the typename begins with an underscore, it is an array type.\n\nIf you are going to the trouble of looking in pg_type, then you\nshouldn't rely on the convention that array type names begin with\nunderscores. What you *should* do is look at the typelem field.\nIf that's zero, it's not an array; if nonzero, it is an array type\n(and typelem gives the OID of the array elements' type).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 22 Jan 2000 11:09:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Types "
}
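Putting Tom's advice into practice from a libpq client: take the column's type OID from PQftype() and look up pg_type.typelem. A sketch (error handling trimmed; the helper name is illustrative):

```c
/*
 * Sketch: decide whether result column `col` is an array type by
 * checking pg_type.typelem. Returns 1 for an array type, 0 for a
 * scalar, -1 on lookup failure.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int
column_is_array(PGconn *conn, PGresult *res, int col)
{
	char		query[128];
	PGresult   *typres;
	int			is_array;

	sprintf(query, "SELECT typelem FROM pg_type WHERE oid = %u",
			(unsigned int) PQftype(res, col));

	typres = PQexec(conn, query);
	if (PQresultStatus(typres) != PGRES_TUPLES_OK || PQntuples(typres) != 1)
	{
		PQclear(typres);
		return -1;
	}

	/* nonzero typelem => array; typelem is the element type's OID */
	is_array = (strtoul(PQgetvalue(typres, 0, 0), NULL, 10) != 0);

	PQclear(typres);
	return is_array;
}
```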
] |
[
{
"msg_contents": "Thanks for all the responses.\nI really have the heart of doing this all the time but I dont have the will.\nI have 3 kids and a wife and I enjoy boating, hiking etc. I just cant seem to find the extra time. I like the idea of convincing work to let me work on projects like this.\n\nI was surprised that nobody fit into category 5.\nI assumed you were all young kids w/o wife and kids and could spend all your free time on this. ;-)\nActually I think about doing something like this when I make my first million and retire.\n\nthanks again\nbob\n\nPeter Eisentraut wrote:\n\n> It's a learning experience. This project touches all the interesting\n> aspects of computer science at once.\n>\n> Also, I have no life, of course.\n>\n> On 2000-01-20, Robert Davis mentioned:\n>\n> > I know this is off topic but...\n> >\n> > How do the programmers here get paid?\n> >\n> > 1. You do this in your spare time and dont get paid.\n> > 2. Your day job encourages you to spend some of your time developing the db that your company uses.\n> > 3. There is an organization that accepts money and distributes it to the hackers.\n> > 4. You are all independently wealthy and do this for the fun of it.\n> > 5. You are still living at home with mom and expect one day to get a real job from this.\n> >\n> > Seriously\n> > My friend and I were talking about the open software movement and how great it is and he keeps on asking how people like us get paid.\n> > Basically my friend and me are the type that like to stay home and not visit customers and just solve problems and write code. Probably a lot like all of you.\n> >\n> > thanks\n> > bob\n> >\n> > --\n> > [email protected]\n> > [email protected]\n> > http://people.ne.mediaone.net/rsdavis\n> >\n> >\n> >\n> > ************\n> >\n> >\n>\n> --\n> Peter Eisentraut Sernanders v�g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n\n--\[email protected]\[email protected]\nhttp://people.ne.mediaone.net/rsdavis\n\n\n",
"msg_date": "Fri, 21 Jan 2000 09:52:39 -0500",
"msg_from": "Robert Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] off topic"
},
{
"msg_contents": "On Fri, Jan 21, 2000 at 09:52:39AM -0500, Robert Davis wrote:\n> I really have the heart of doing this all the time but I dont have the\n> will. I have 3 kids and a wife and I enjoy boating, hiking etc. I just\n\nMe too. I have wife plus three sons between 2 and 9 years old.\n\nMichael\n\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Sat, 22 Jan 2000 14:52:55 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] off topic"
}
] |
[
{
"msg_contents": "> I am getting an error: FATAL 1: Memory exausted in allocSetAlloc().\n(taken on-list)\n\n> I get this when I do the attached command. The table definition is the \n> second attachment. When I did the select statement, I only had 200 or so \n> lines in the table. I was doing the select command in both psql and libpq.\n\nThis is known and expected behavior, though certainly undesirable for\nyour query. I don't know much about it, but I'll guess that you are\nexhausting memory just trying to *plan* the query, or that the\nbazillion intermediate results from the huge number of \"or\" clauses is\nchewing things up.\n\nI would try two things (again, I'm not recalling all I should here):\n1) Do an \"explain\" on your query. If it fails, set Postgres to use the\ngenetic optimizer (SET GEQO ON;)\n2) Ask on the hackers mailing list. Perhaps there is a way to clear\nmemory from intermediate results (or a better workaround). I'm\nguessing that there is for multiple statements within a transaction,\nbut maybe not within a single query. The hackers mailing list archives\nmight have some details on this.\n\nGood luck.\n\n - Thomas\n\n> ----------------------------------------------------------------------\n> select seqid, barcode, run from sequence where (seqid=28904 and phredsum=170) or (seqid=28907 and phredsum=48) or (seqid=28912 and phredsum=212) or (seqid=28923 and phredsum=124) or (seqid=28924 and phredsum=224) or (seqid=28928 and phredsum=52) or (seqid=28929 and phredsum=176) or (seqid=28930 and phredsum=197) or (seqid=28931 and phredsum=184) or (seqid=28932 and phredsum=169) or (seqid=28936 and phredsum=274) or (seqid=28937 and phredsum=165) or (seqid=28938 and phredsum=297) or (seqid=28939 and phredsum=172) or (seqid=28942 and phredsum=162) or (seqid=28943 and phredsum=211) or (seqid=28944 and phredsum=246) or (seqid=28945 and phredsum=259) or (seqid=28946 and phredsum=357) or (seqid=28947 and phredsum=295) or (seqid=28955 and phredsum=239) or (seqid=28956 and phredsum=129) or (seqid=28958 and phredsum=13) or (seqid=28959 and phredsum=263) or (seqid=28960 and phredsum=171) or (seqid=28962 and phredsum=46) or (seqid=28963 and phredsum=297) or (seqid=28964 and phredsum=17!\n7) or (seqid=28965 and phredsum=97) or (seqid=28967 and phredsum=143) or (seqid=28968 and phredsum=109) or (seqid=28969 and phredsum=233) or (seqid=28976 and phredsum=76);\n> \n> ----------------------------------------------------------------------\n> +----------------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +----------------------------------+----------------------------------+-------+\n> | seqid | int4 not null | 4 |\n> | barcode | int4 not null | 4 |\n> | run | int2 not null | 2 |\n> | sequence | text | var |\n> | quality | text | var |\n> | length | int2 | 2 |\n> | seqtime | datetime | 8 |\n> | geltype | text | var |\n> | phredsum | int2 | 2 |\n> | identifier | int4 not null default nextval ( | 4 |\n> +----------------------------------+----------------------------------+-------+\n> Indices: phredsum_ind\n> seqid_ind\n> sequence_pkey\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 21 Jan 2000 16:19:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak????"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> ... I don't know much about it, but I'll guess that you are\n> exhausting memory just trying to *plan* the query, or that the\n> bazillion intermediate results from the huge number of \"or\" clauses is\n> chewing things up.\n\nSET KSQO = 'ON' might help. I've reduced cnfify's tendence to eat\nmemory when presented with a big OR-of-ANDs clause, but that fix isn't\nin 6.5.* (it still needs more work).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jan 2000 11:30:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: memory leak???? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Thomas Lockhart <[email protected]> writes:\n> > ... I don't know much about it, but I'll guess that you are\n> > exhausting memory just trying to *plan* the query, or that the\n> > bazillion intermediate results from the huge number of \"or\" clauses is\n> > chewing things up.\n\n> The problem can also be solved by doing SELECT UNION SELCT\n\nfor each \"or\" clause. This is only a cute fix, it does not solve the\nproblemof multiple \"or\" \"and\" statements.\n\nGary\n\n",
"msg_date": "Fri, 21 Jan 2000 08:49:55 -0800",
"msg_from": "\"gary.wolfe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: memory leak????"
}
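A sketch of the client-side rewrite Gary suggests: emit one SELECT per (seqid, phredsum) pair and glue them together with UNION ALL, instead of one big OR-of-ANDs clause. This is hypothetical helper code; buffer sizing is left to the caller.

```c
/*
 * Hypothetical helper implementing Gary's rewrite. The caller
 * guarantees buf is large enough for npairs clauses.
 */
#include <stdio.h>
#include <string.h>

typedef struct
{
	int			seqid;
	int			phredsum;
} Pair;

void
build_union_query(char *buf, const Pair *pairs, int npairs)
{
	int			i;

	buf[0] = '\0';
	for (i = 0; i < npairs; i++)
	{
		char		clause[160];

		sprintf(clause,
				"%sSELECT seqid, barcode, run FROM sequence "
				"WHERE seqid = %d AND phredsum = %d",
				i ? " UNION ALL " : "",
				pairs[i].seqid, pairs[i].phredsum);
		strcat(buf, clause);
	}
}
```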
] |