[
{
"msg_contents": "Hi,\n\nI have just installed Perl 5.6.0 and PostgreSQL 7.0.2. After successful installation of both these\nprograms I tried to build PL/Perl support. After running the commands from the Postgres manual I\nreceived the following errors:\n\n\n[root@eaccess plperl]# perl Makefile.PL\nWriting Makefile for plperl\n[root@eaccess plperl]# make\ncc -c -I../../../src/include -I../../../src/backend -fno-strict-aliasing -D_LAR\nGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -O2 -DVERSION=\\\"0.10\\\" -DXS_VERSION=\\\"0\n.10\\\" -fpic -I/usr/local/lib/perl5/5.6.0/i686-linux/CORE plperl.c\nIn file included from plperl.c:76:\n/usr/local/lib/perl5/5.6.0/i686-linux/CORE/perl.h:467: warning: `USE_LOCALE' red\nefined\n../../../src/include/config.h:213: warning: this is the location of the previous\n definition\nIn file included from plperl.c:76:\n/usr/local/lib/perl5/5.6.0/i686-linux/CORE/perl.h:2027: warning: `DEBUG' redefin\ned\n../../../src/include/utils/elog.h:22: warning: this is the location of the previ\nous definition\nplperl.c: In function `plperl_create_sub':\nplperl.c:328: `errgv' undeclared (first use in this function)\nplperl.c:328: (Each undeclared identifier is reported only once\nplperl.c:328: for each function it appears in.)\nplperl.c:334: `na' undeclared (first use in this function)\nplperl.c: In function `plperl_call_perl_func':\nplperl.c:444: `errgv' undeclared (first use in this function)\nplperl.c:450: `na' undeclared (first use in this function)\nplperl.c: In function `plperl_func_handler':\nplperl.c:654: `na' undeclared (first use in this function)\nplperl.c: In function `plperl_build_tuple_argument':\nplperl.c:2192: `na' undeclared (first use in this function)\nmake: *** [plperl.o] Error 1\n[root@eaccess plperl]#\n\nWhat am I doing wrong?\n\nRegards,\nAlex\n\n\n",
"msg_date": "Sat, 2 Sep 2000 12:47:58 +0400",
"msg_from": "Alex Guryanow <[email protected]>",
"msg_from_op": true,
"msg_subject": "PL/Perl compilation error"
},
{
"msg_contents": "Alex Guryanow <[email protected]> writes:\n> [root@eaccess plperl]# perl Makefile.PL\n\nFor recent Perl versions you need to do\n\t\tperl Makefile.PL POLLUTE=1\ninstead. The src/pl Makefile would've done it that way for you,\nbut it looks like that code patch didn't make it to the docs...\n\nSomeone needs to update our Perl code so that it will compile cleanly\nagainst both newer and not-so-new Perls. There are notes in our mail\narchives about how to do this (basically \"use Devel::PPPort\" is the\nlong-term answer) but it hasn't gotten to the top of anyone's to-do\nlist.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Sep 2000 12:48:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Perl compilation error "
},
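Tom's long-term fix ("use Devel::PPPort") amounts to generating a compatibility header (ppport.h) that maps the renamed interpreter globals across Perl versions. A minimal self-contained sketch of that pattern, assuming a mock `errgv` as a stand-in for the global that perl.h would really provide:

```c
/* Sketch of the ppport.h compatibility-shim pattern.  The mock below
 * stands in for Perl's real global so this compiles on its own; actual
 * code would #include "EXTERN.h" and "perl.h" instead. */

/* Pretend this is a pre-5.005 Perl that only exports the bare name: */
static const char errgv_mock[] = "Undefined subroutine &main::foo called.";
#define errgv errgv_mock

/* The shim: if the PL_-prefixed spelling is missing, define it in
 * terms of the old bare name, so plperl.c can use the modern names
 * (PL_errgv, PL_na) unconditionally. */
#ifndef PL_errgv
#define PL_errgv errgv
#endif

/* Code written against the modern API now builds on either vintage: */
const char *plperl_last_error(void)
{
    return PL_errgv;
}
```

With a shim like this at the top of plperl.c, the source uses the PL_-prefixed names everywhere and still builds against not-so-new Perls, which is exactly what Devel::PPPort automates.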
{
"msg_contents": "Jan Wieck <[email protected]> writes:\n> [ why hasn't plperl been fixed yet? ]\n\nIMHO, the portability problems with plperl will need a Perl guru to fix.\nSpecifically somebody who knows the ins and outs of embedding Perl into\nother applications, which is not such a commonly done thing. pltcl was\na simpler project because Tcl has always been designed to be embedded as\na library into other applications. Perl is still in process of being\nredesigned from a standalone program into an embeddable library, and\nmost everyday Perl programmers don't know much about the pitfalls that\nstill remain in using it that way.\n\nJust to give you one example of the ways in which Perl is not designed\nto be embeddable: last I checked, libperl was not built as PIC code by\ndefault. On machines where that makes a difference (like HPUX) that\nmeans that plperl cannot work with a default Perl installation. Period.\nNot one damn thing you can do about it except reconfigure/rebuild/\nreinstall Perl, which is a tad outside the charter of our build process.\n\nThe cross-version compatibility issues could be fixed more easily, but\nprobably not with just an hour or two's work (has anyone here actually\ndone anything with Devel::PPPort? how hard is it?). When working around\nthem just takes \"add POLLUTE=1 to Makefile build\", I can see why people\naren't eager to invest the work for a cleaner solution.\n\nPerl is getting better over time (indeed 5.6.0 may do the right thing\nalready on the PIC front; I haven't installed it yet) but I think in\nthe near term it's going to be difficult to have a really robust\nportability solution for plperl.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Sep 2000 19:33:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Perl compilation error "
},
{
"msg_contents": "Tom Lane wrote:\n> Alex Guryanow <[email protected]> writes:\n> > [root@eaccess plperl]# perl Makefile.PL\n>\n> For recent Perl versions you need to do\n> perl Makefile.PL POLLUTE=1\n> instead. The src/pl Makefile would've done it that way for you,\n> but it looks like that code patch didn't make it to the docs...\n>\n> Someone needs to update our Perl code so that it will compile cleanly\n> against both newer and not-so-new Perls. There are notes in our mail\n> archives about how to do this (basically \"use Devel::PPPort\" is the\n> long-term answer) but it hasn't gotten to the top of anyone's to-do\n> list.\n\n Can someone perhaps enlighten me a little?\n\n We've had problems like platform/version-dependent\n compilation errors with PL/Tcl in the past too, but they got\n fixed pretty quickly, and a reasonable number of people\n worked on them together.\n\n We get frequent compilation error reports for PL/perl, but\n nobody seems to be able or willing to do anything about it.\n\n PL/perl was once a highly requested feature. Now there is a\n code base that less experienced programmers could continue\n working on, but nobody does.\n\n What is the problem with perl? Are there only a lot of users\n but no hackers? The frequent failure reports suggest that\n there are folks who want to have that thing running. I can't\n believe that a piece of open source software that is so\n popular is implemented in such an ugly way that nobody has a\n clue how to fix that damned thing.\n\n So please tell me why people spend their time writing error\n reports again and again instead of simply fixing it and\n submitting a patch.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Sat, 2 Sep 2000 18:53:48 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Perl compilation error"
},
{
"msg_contents": "Hi,\n\nI have taken a look at the PL/Perl source code; it seems that two variables\nare referenced by outdated names: errgv and na.\n\nIf you replace them with their 5.6.0 names, PL_errgv and PL_na, the lib plperl.so\ncompiles successfully.\n\nThe Perl documentation also gives the answer for backward compatibility:\n\n> The API function perl_get_sv(\"@\",FALSE) should be used instead of directly accessing\n> perl globals as GvSV(errgv). The API call is backward compatible with existing perls and\n> provides source compatibility when threading is enabled.\n\nIt seems easily repaired. I have no time yet, but I will take a look as soon as possible.\n\nRegards\nGilles\n\nAlex Guryanow wrote:\n\n> Hi,\n>\n> I have just installed Perl 5.6.0 and PostgreSQL 7.0.2. After successful installation of both these\n> programs I tried to build PL/Perl support. After running the commands from the Postgres manual I\n> received the following errors:\n>\n\n",
"msg_date": "Mon, 04 Sep 2000 14:29:33 +0200",
"msg_from": "Gilles DAROLD <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Perl compilation error"
},
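The documentation excerpt Gilles quotes suggests a third option: go through the API instead of touching interpreter globals at all. A sketch of that accessor pattern; `SV` and `perl_get_sv` here are simplified mocks of Perl's real types and API so the example stands alone (real code would get them from perl.h):

```c
#include <stddef.h>
#include <string.h>

/* Mock stand-ins for Perl's SV type and perl_get_sv() API, ONLY so
 * this sketch is self-contained. */
typedef struct SV { const char *pv; } SV;
static SV mock_errsv = { "syntax error in function body" };

static SV *perl_get_sv(const char *name, int create)   /* mock */
{
    (void) create;
    return strcmp(name, "@") == 0 ? &mock_errsv : NULL;
}

/* The backward-compatible accessor: instead of reading GvSV(errgv)
 * (old Perls) or GvSV(PL_errgv) (new Perls) directly, ask the API for
 * $@ by name.  Per the quoted docs, this works across Perl versions
 * and when threading is enabled. */
const char *plperl_error_text(void)
{
    SV *err = perl_get_sv("@", 0 /* FALSE: don't create */);
    return err ? err->pv : NULL;
}
```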
{
"msg_contents": "\n This week, I had the opportunity to compare the performance of PostgreSQL\non an Alpha and an Intel server, and the results kind of surprised me. I'd\nlove to hear if this has been the case for others as well...\n\n-------------\nIntel Machine\n\nSuperMicro 8050 quad Xeon server\n512 MB RAM\n4 x PII Xeon 400 MHz (secondary cache disabled)\nRAID array w/ 5 9-gig drives\n\nApproximate cost: $6000\n--------------\nAlpha Machine\nAlphaServer DS20E\n2 x CPU (500 MHz or 667 MHz)\n2 GB RAM\n9-gig SCSI drive\n\nApproximate cost: $20,000 - $25,000\n-----------------------\n\nGeneral System notes\n\n I'm not sure which chips the Alpha uses, the 500 MHz or the 667 MHz.\nAlso, because the SuperMicro board is meant for the newer Xeons, the\nsecondary cache had to be completely disabled on the PII 400 Xeons, so that\nmachine was definitely not running up to potential.\n\n-------------------------\nTest method\n\n This wasn't exactly the ANSI tests, but it accurately reflected what we\nneed out of a machine. A while back we logged 87,000 individual queries on\nour production machine, and I selected one thousand distinct queries from\nthat.\n\n On each machine I spawned 20 parallel processes, each performing the\n1,000 queries, and timed how long it took for all processes to finish.\n\n To try to keep the disk subsystem from being a factor, this used only\nselects, no updates or deletes. Also, the database is small enough that the\nentire thing was easily in the disk cache at all times.\n--------------------------\nTest results\n\n The Alpha finished in just over 60 minutes, the Xeon finished in just over\n90.\n\n-----------------------------\nTest interpretation\n\n Once I started looking at the numbers, I was surprised. On a\nprocessor-for-processor basis, the Alpha was three times as fast as the\nIntels. However, the Intels that it was pitted against were only 400 MHz\nchips, only PII (not the PIII), *and* had the external cache completely\ndisabled.\n\n So, the Alpha provided three times the performance for four times the\ncost - but if the megabyte of cache had been enabled on the Xeons, I think\nthat the results would have been significantly different. Also, if the\nchips had been even relatively recent chips (say, some 700 or 800 MHz Xeons)\nwith the cache enabled, it's possible that it could have come close to the\nperformance of the Alpha, at a much lower cost.\n\n Overall, I was expecting the Alpha to give the Intel a better trouncing,\nespecially considering the difference in cost, but I guess it's hard to beat\nIntel for transactions/dollar. If sheer server capacity is the only\nrelevant factor, forget Intel (you won't find Intels with 64 processors, and\nI don't think you'll see them even with the Itaniums). If your needs are\nmore down-to-earth, they're the best you can get for the money.\n\nsteve\n\n\n",
"msg_date": "Tue, 5 Sep 2000 11:14:27 -0600",
"msg_from": "\"Steve Wolfe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Report of performance on Alpha vs. Intel"
},
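Steve's method above — 20 parallel clients each replaying the same query list, timed until the last one exits — can be sketched with fork/wait. `run_queries()` is a placeholder busy loop here; the real harness issued 1,000 logged SELECTs per client against PostgreSQL (e.g., via libpq):

```c
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

/* Placeholder for "replay 1,000 logged SELECTs"; a real client would
 * open a libpq connection and execute each query in turn. */
static void run_queries(void)
{
    volatile long sink = 0;
    for (long i = 0; i < 1000000; i++)
        sink += i;
}

/* Fork nworkers children, let each replay the query list, and return
 * the wall-clock seconds until the slowest one has finished. */
double timed_parallel_run(int nworkers)
{
    time_t start = time(NULL);

    for (int i = 0; i < nworkers; i++)
        if (fork() == 0)
        {
            run_queries();
            _exit(0);
        }

    while (wait(NULL) > 0)      /* reap every child */
        ;

    return difftime(time(NULL), start);
}
```

Timing to the last exit (rather than averaging per-client times) is what makes this a throughput measurement of the whole box under concurrent load.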
{
"msg_contents": "I'm curious, what OS did you perform these tests under?\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Steve Wolfe\" <[email protected]>\nTo: <[email protected]>\nSent: Tuesday, September 05, 2000 10:14 AM\nSubject: [GENERAL] Report of performance on Alpha vs. Intel\n\n\n>\n> This week, I had the opportunity to compare the performance of\nPostgreSQL\n> on an Alpha and an Intel server, and the results kind of surprised me.\nI'd\n> love to hear if this has been the case for others as well...\n>\n> -------------\n> Intel Machine\n>\n> SuperMicro 8050 quad Xeon server\n> 512 MB RAM\n> 4 x PII Xeon 400 MHz (secondary cache disabled)\n> RAID array w/ 5 9-gig drives\n>\n> Approximate cost: $6000\n> --------------\n> Alpha Machine\n> AlphaServer DS20E\n> 2 x CPU (500 MHz or 667 MHz)\n> 2 GB RAM\n> 9-gig SCSI drive\n>\n> Approximate cost: $20,000 - $25,000\n> -----------------------\n>\n> General System notes\n>\n> I'm not sure which chips the Alpha uses, the 500 MHz or the 667 MHz.\n> Also, because the SuperMicro board is meant for the newer Xeons, the\n> secondary cache had to be completely disabled on the PII 400 Xeons, so\nthat\n> machine was definitely not running up to potential.\n>\n> -------------------------\n> Test method\n>\n> This wasn't exactly the ANSI tests, but it accurately reflected what we\n> need out of a machine. A while back we logged 87,000 individual queries\non\n> our production machine, and I selected one thousand distinct queries from\n> that.\n>\n> On each machine I spawned 20 parallel processes, each performing the\n> 1,000 queries, and timed how long it took for all processes to finish.\n>\n> To try and keep the disk subsystem from being a factor, this used only\n> selects, no updates or deletes. Also, the database is small enough that\nthe\n> entire thing was easily in the disk cache at all times.\n> --------------------------\n> Test results\n>\n> The Alpha finished in just over 60 minutes, the Xeon finished in just\nover\n> 90.\n>\n> -----------------------------\n> Test interpretation\n>\n> Once I started looking at the numbers, I was suprised. On a\n> processor-for-processor basis, the Alpha was three times as fast as the\n> Intels. However, the Intels that it was pitted against were only 400 MHz\n> chips, only PII (not the PIII), *and* had the external cache completely\n> disabled.\n>\n> So, the Alpha provided three times the performance for four times the\n> cost - but if the megabyte of cache had been enabled on the Xeons, I think\n> that the results would have been significantly different. Also, if the\n> chips had been even relatively recent chips (say, some 700 or 800 MHz\nXeons)\n> with the cache enabled, it's possible that it could have come close to the\n> performance of the Alpha, at a much lower cost.\n>\n> Overall, I was expecting the Alpha to give the Intel a better trouncing,\n> especially considering the difference in cost, but I guess it's hard to\nbeat\n> Intel for transactions/dollar. If sheer server capacity is the only\n> relevant factor, forget Intel (You won't find Intels with 64 processors,\nand\n> I don't think you'll see them even with the Itaniums). If your needs are\n> more down-to-Earth, they're the best you can get for the money.\n>\n> steve\n>\n>\n>\n\n",
"msg_date": "Tue, 5 Sep 2000 10:35:57 -0700",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Report of performance on Alpha vs. Intel"
},
{
"msg_contents": "\n> I'm curious, what OS did you perform these test under?\n\n Doh! Silly me.\n\n The Xeon ran a Linux 2.2.16 kernel, and the Alpha ran \"Tru64\".\n\nSteve\n\n",
"msg_date": "Tue, 5 Sep 2000 11:42:09 -0600",
"msg_from": "\"Steve Wolfe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Report of performance on Alpha vs. Intel"
},
{
"msg_contents": "Memory and cache are the most important parameters for a db server, and PCs\nlack both.\n\nAt 19:14 5.9.2000, Steve Wolfe wrote:\n>\n> This week, I had the opportunity to compare the performance of PostgreSQL\n>on an Alpha and an Intel server, and the results kind of surprised me. I'd\n>love to hear if this has been the case for others as well...\n>\n>-------------\n>Intel Machine\n>\n>SuperMicro 8050 quad Xeon server\n>512 MB RAM\n>4 x PII Xeon 400 MHz (secondary cache disabled)\n>RAID array w/ 5 9-gig drives\n>\n>Approximate cost: $6000\n>--------------\n>Alpha Machine\n>AlphaServer DS20E\n>2 x CPU (500 MHz or 667 MHz)\n>2 GB RAM\n>9-gig SCSI drive\n>\n>Approximate cost: $20,000 - $25,000\n>-----------------------\n>\n>General System notes\n>\n> I'm not sure which chips the Alpha uses, the 500 MHz or the 667 MHz.\n>Also, because the SuperMicro board is meant for the newer Xeons, the\n>secondary cache had to be completely disabled on the PII 400 Xeons, so that\n>machine was definitely not running up to potential.\n>\n>-------------------------\n>Test method\n>\n> This wasn't exactly the ANSI tests, but it accurately reflected what we\n>need out of a machine. A while back we logged 87,000 individual queries on\n>our production machine, and I selected one thousand distinct queries from\n>that.\n>\n> On each machine I spawned 20 parallel processes, each performing the\n>1,000 queries, and timed how long it took for all processes to finish.\n>\n> To try and keep the disk subsystem from being a factor, this used only\n>selects, no updates or deletes. Also, the database is small enough that the\n>entire thing was easily in the disk cache at all times.\n>--------------------------\n>Test results\n>\n> The Alpha finished in just over 60 minutes, the Xeon finished in just over\n>90.\n>\n>-----------------------------\n>Test interpretation\n>\n> Once I started looking at the numbers, I was suprised. On a\n>processor-for-processor basis, the Alpha was three times as fast as the\n>Intels. However, the Intels that it was pitted against were only 400 MHz\n>chips, only PII (not the PIII), *and* had the external cache completely\n>disabled.\n>\n> So, the Alpha provided three times the performance for four times the\n>cost - but if the megabyte of cache had been enabled on the Xeons, I think\n>that the results would have been significantly different. Also, if the\n>chips had been even relatively recent chips (say, some 700 or 800 MHz Xeons)\n>with the cache enabled, it's possible that it could have come close to the\n>performance of the Alpha, at a much lower cost.\n>\n> Overall, I was expecting the Alpha to give the Intel a better trouncing,\n>especially considering the difference in cost, but I guess it's hard to beat\n>Intel for transactions/dollar. If sheer server capacity is the only\n>relevant factor, forget Intel (You won't find Intels with 64 processors, and\n>I don't think you'll see them even with the Itaniums). If your needs are\n>more down-to-Earth, they're the best you can get for the money.\n>\n>steve\n>\n>\nZeljko Trogrlic\n____________________________________________________________\n\nAeris d.o.o.\nSv. Petka 60 b, HR-31000 Osijek, Croatia\nTel: +385 (31) 53 00 15\nEmail: mailto:[email protected]\n",
"msg_date": "Tue, 05 Sep 2000 21:00:46 +0200",
"msg_from": "Zeljko Trogrlic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Report of performance on Alpha vs. Intel"
},
{
"msg_contents": "Can you send me a patch?\n\n\n> Hi,\n> \n> I have take a look to the source code concerning PL/Perl, it seems that 2 variables\n> have a bad call : errgv and na.\n> \n> If you replace them by their normal call (in 5.6.0) PL_errgv and PL_na you will get\n> success to compile the lib plperl.so.\n> \n> Also in Perl documentation you will find the answer for backward compatibility :\n> \n> > The API function perl_get_sv(\"@\",FALSE) should be used instead of directly accessing\n> > perl globals as GvSV(errgv). The API call is backward compatible with existing perls and\n> > provides source compatibility with threading is enabled.\n> \n> It seems to be easily repared. I have no time yet but I will take a look as soon as possible.\n> \n> Regards\n> Gilles\n> \n> Alex Guryanow wrote:\n> \n> > Hi,\n> >\n> > I have just installed Perl 5.6.0 and PostgreSQL 7.0.2. After successfull installation of both these\n> > programs I tried to make PL/Perl support. After running the commands from Postgres manual I have\n> > received the following errors\n> >\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 12:50:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Perl compilation error"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> Can you send me a patch?\n>\n> > Hi,\n> >\n> > I have take a look to the source code concerning PL/Perl, it seems that 2 variables\n> > have a bad call : errgv and na.\n> >\n> > If you replace them by their normal call (in 5.6.0) PL_errgv and PL_na you will get\n> > success to compile the lib plperl.so.\n> >\n\nThis patch (simple diff) applies to postgresql-7.0.2.\nSee attachment...\n\nRegards\n\nGilles DAROLD",
"msg_date": "Mon, 16 Oct 2000 19:34:41 +0200",
"msg_from": "Gilles DAROLD <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Perl compilation error"
},
{
"msg_contents": "I can not apply this. Seems it has changed in the current tree. Here\nis the current plperl.c file. \n\n> Bruce Momjian wrote:\n> \n> > Can you send me a patch?\n> >\n> > > Hi,\n> > >\n> > > I have take a look to the source code concerning PL/Perl, it seems that 2 variables\n> > > have a bad call : errgv and na.\n> > >\n> > > If you replace them by their normal call (in 5.6.0) PL_errgv and PL_na you will get\n> > > success to compile the lib plperl.so.\n> > >\n> \n> This patch (simple diff) applies to postgresql-7.0.2.\n> See attachment...\n> \n> Regards\n> \n> Gilles DAROLD\n> \n> \n> \n\n> 328c328\n> < \tif (SvTRUE(GvSV(PL_errgv)))\n> ---\n> > \tif (SvTRUE(GvSV(errgv)))\n> 334c334\n> < \t\telog(ERROR, \"creation of function failed : %s\", SvPV(GvSV(PL_errgv), PL_na));\n> ---\n> > \t\telog(ERROR, \"creation of function failed : %s\", SvPV(GvSV(errgv), na));\n> 444c444\n> < \tif (SvTRUE(GvSV(PL_errgv)))\n> ---\n> > \tif (SvTRUE(GvSV(errgv)))\n> 450c450\n> < \t\telog(ERROR, \"plperl : error from function : %s\", SvPV(GvSV(PL_errgv), PL_na));\n> ---\n> > \t\telog(ERROR, \"plperl : error from function : %s\", SvPV(GvSV(errgv), na));\n> 654c654\n> < \t\t(SvPV(perlret, PL_na),\n> ---\n> > \t\t(SvPV(perlret, na),\n> 2192c2192\n> < \toutput = perl_eval_pv(SvPV(output, PL_na), TRUE);\n> ---\n> > \toutput = perl_eval_pv(SvPV(output, na), TRUE);\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n/**********************************************************************\n * plperl.c - perl as a procedural language for PostgreSQL\n *\n * IDENTIFICATION\n *\n *\t This software is copyrighted by Mark Hollomon\n *\t but is shameless cribbed from pltcl.c by Jan Weick.\n *\n *\t The author hereby grants permission to use, copy,\tmodify,\n *\t distribute, and\tlicense this software and its documentation\n *\t for any purpose, provided that existing copyright notices are\n *\t retained\tin\tall copies and that\tthis notice is included\n *\t verbatim in any distributions. No written agreement, license,\n *\t or royalty fee\tis required for any of the authorized uses.\n *\t Modifications to this software may be copyrighted by their\n *\t author and need not follow the licensing terms described\n *\t here, provided that the new terms are clearly indicated on\n *\t the first page of each file where they apply.\n *\n *\t IN NO EVENT SHALL THE AUTHOR OR DISTRIBUTORS BE LIABLE TO ANY\n *\t PARTY FOR DIRECT,\tINDIRECT,\tSPECIAL, INCIDENTAL,\t OR\n *\t CONSEQUENTIAL DAMAGES ARISING\tOUT OF THE USE OF THIS\n *\t SOFTWARE, ITS DOCUMENTATION, OR ANY DERIVATIVES THEREOF, EVEN\n *\t IF THE AUTHOR HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH\n *\t DAMAGE.\n *\n *\t THE AUTHOR AND\tDISTRIBUTORS SPECIFICALLY\t DISCLAIM\tANY\n *\t WARRANTIES, INCLUDING, BUT\tNOT LIMITED TO, THE\tIMPLIED\n *\t WARRANTIES OF MERCHANTABILITY,\tFITNESS FOR A PARTICULAR\n *\t PURPOSE,\tAND NON-INFRINGEMENT. 
THIS SOFTWARE IS PROVIDED ON\n *\t AN \"AS IS\" BASIS, AND THE AUTHOR\tAND DISTRIBUTORS HAVE NO\n *\t OBLIGATION TO\tPROVIDE MAINTENANCE,\t SUPPORT, UPDATES,\n *\t ENHANCEMENTS, OR MODIFICATIONS.\n *\n * IDENTIFICATION\n *\t $Header: /home/projects/pgsql/cvsroot/pgsql/src/pl/plperl/plperl.c,v 1.13 2000/09/12 04:28:30 momjian Exp $\n *\n **********************************************************************/\n\n\n/* system stuff */\n#include <stdio.h>\n#include <stdlib.h>\n#include <stdarg.h>\n#include <unistd.h>\n#include <fcntl.h>\n#include <string.h>\n#include <setjmp.h>\n\n/* postgreSQL stuff */\n#include \"executor/spi.h\"\n#include \"commands/trigger.h\"\n#include \"utils/elog.h\"\n#include \"fmgr.h\"\n#include \"access/heapam.h\"\n\n#include \"tcop/tcopprot.h\"\n#include \"utils/syscache.h\"\n#include \"catalog/pg_proc.h\"\n#include \"catalog/pg_type.h\"\n\n/* perl stuff */\n/*\n * Evil Code Alert\n *\n * both posgreSQL and perl try to do 'the right thing'\n * and provide union semun if the platform doesn't define\n * it in a system header.\n * psql uses HAVE_UNION_SEMUN\n * perl uses HAS_UNION_SEMUN\n * together, they cause compile errors.\n * If we need it, the psql headers above will provide it.\n * So we tell perl that we have it.\n */\n#ifndef HAS_UNION_SEMUN\n#define HAS_UNION_SEMUN\n#endif\n#include \"EXTERN.h\"\n#include \"perl.h\"\n\n\n/**********************************************************************\n * The information we cache about loaded procedures\n **********************************************************************/\ntypedef struct plperl_proc_desc\n{\n\tchar\t *proname;\n\tFmgrInfo\tresult_in_func;\n\tOid\t\t\tresult_in_elem;\n\tint\t\t\tresult_in_len;\n\tint\t\t\tnargs;\n\tFmgrInfo\targ_out_func[FUNC_MAX_ARGS];\n\tOid\t\t\targ_out_elem[FUNC_MAX_ARGS];\n\tint\t\t\targ_out_len[FUNC_MAX_ARGS];\n\tint\t\t\targ_is_rel[FUNC_MAX_ARGS];\n\tSV\t\t 
*reference;\n}\t\t\tplperl_proc_desc;\n\n\n/**********************************************************************\n * The information we cache about prepared and saved plans\n **********************************************************************/\ntypedef struct plperl_query_desc\n{\n\tchar\t\tqname[20];\n\tvoid\t *plan;\n\tint\t\t\tnargs;\n\tOid\t\t *argtypes;\n\tFmgrInfo *arginfuncs;\n\tOid\t\t *argtypelems;\n\tDatum\t *argvalues;\n\tint\t\t *arglen;\n}\t\t\tplperl_query_desc;\n\n\n/**********************************************************************\n * Global data\n **********************************************************************/\nstatic int\tplperl_firstcall = 1;\nstatic int\tplperl_call_level = 0;\nstatic int\tplperl_restart_in_progress = 0;\nstatic PerlInterpreter *plperl_safe_interp = NULL;\nstatic HV *plperl_proc_hash = NULL;\n\n#if REALLYHAVEITONTHEBALL\nstatic Tcl_HashTable *plperl_query_hash = NULL;\n\n#endif\n\n/**********************************************************************\n * Forward declarations\n **********************************************************************/\nstatic void plperl_init_all(void);\nstatic void plperl_init_safe_interp(void);\n\nDatum plperl_call_handler(PG_FUNCTION_ARGS);\n\nstatic Datum plperl_func_handler(PG_FUNCTION_ARGS);\n\nstatic SV *plperl_build_tuple_argument(HeapTuple tuple, TupleDesc tupdesc);\nstatic void plperl_init_shared_libs(void);\n\n#ifdef REALLYHAVEITONTHEBALL\nstatic HeapTuple plperl_trigger_handler(PG_FUNCTION_ARGS);\n\nstatic int plperl_elog(ClientData cdata, Tcl_Interp *interp,\n\t\t\tint argc, char *argv[]);\nstatic int plperl_quote(ClientData cdata, Tcl_Interp *interp,\n\t\t\t int argc, char *argv[]);\n\nstatic int plperl_SPI_exec(ClientData cdata, Tcl_Interp *interp,\n\t\t\t\tint argc, char *argv[]);\nstatic int plperl_SPI_prepare(ClientData cdata, Tcl_Interp *interp,\n\t\t\t\t int argc, char *argv[]);\nstatic int plperl_SPI_execp(ClientData cdata, Tcl_Interp *interp,\n\t\t\t\t int 
argc, char *argv[]);

static void plperl_set_tuple_values(Tcl_Interp *interp, char *arrayname,
                        int tupno, HeapTuple tuple, TupleDesc tupdesc);

#endif


/**********************************************************************
 * plperl_init_all()        - Initialize all
 **********************************************************************/
static void
plperl_init_all(void)
{
    /************************************************************
     * Do initialization only once
     ************************************************************/
    if (!plperl_firstcall)
        return;

    /************************************************************
     * Destroy the existing safe interpreter
     ************************************************************/
    if (plperl_safe_interp != NULL)
    {
        perl_destruct(plperl_safe_interp);
        perl_free(plperl_safe_interp);
        plperl_safe_interp = NULL;
    }

    /************************************************************
     * Free the proc hash table
     ************************************************************/
    if (plperl_proc_hash != NULL)
    {
        hv_undef(plperl_proc_hash);
        SvREFCNT_dec((SV *) plperl_proc_hash);
        plperl_proc_hash = NULL;
    }

    /************************************************************
     * Free the prepared query hash table
     ************************************************************/

    /*
     * if (plperl_query_hash != NULL) { }
     */

    /************************************************************
     * Now recreate a new safe interpreter
     ************************************************************/
    plperl_init_safe_interp();

    plperl_firstcall = 0;
    return;
}


/**********************************************************************
 * plperl_init_safe_interp() - Create the safe Perl interpreter
 **********************************************************************/
static void
plperl_init_safe_interp(void)
{
    char       *embedding[3] = {
        "", "-e",
        /* no commas between the next 4 string literals please;
         * they are supposed to be concatenated into one string */
        "require Safe; SPI::bootstrap();"
        "sub ::mksafefunc { my $x = new Safe; $x->permit_only(':default');"
        "$x->share(qw[&elog &DEBUG &NOTICE &NOIND &ERROR]);"
        " return $x->reval(qq[sub { $_[0] }]); }"
    };

    plperl_safe_interp = perl_alloc();
    if (!plperl_safe_interp)
        elog(ERROR, "plperl_init_safe_interp(): could not allocate perl interpreter");

    perl_construct(plperl_safe_interp);
    perl_parse(plperl_safe_interp, plperl_init_shared_libs, 3, embedding, NULL);
    perl_run(plperl_safe_interp);

    /************************************************************
     * Initialize the proc and query hash tables
     ************************************************************/
    plperl_proc_hash = newHV();
}


/**********************************************************************
 * plperl_call_handler      - This is the only visible function
 *                of the PL interpreter.  The PostgreSQL
 *                function manager and trigger manager
 *                call this function for execution of
 *                perl procedures.
 **********************************************************************/

/* keep non-static */
Datum
plperl_call_handler(PG_FUNCTION_ARGS)
{
    Datum       retval;

    /************************************************************
     * Initialize interpreters on first call
     ************************************************************/
    if (plperl_firstcall)
        plperl_init_all();

    /************************************************************
     * Connect to SPI manager
     ************************************************************/
    if (SPI_connect() != SPI_OK_CONNECT)
        elog(ERROR, "plperl: cannot connect to SPI manager");

    /************************************************************
     * Keep track of the nesting of Perl-SPI-Perl-... calls
     ************************************************************/
    plperl_call_level++;

    /************************************************************
     * Determine if called as function or trigger and
     * call appropriate subhandler
     ************************************************************/
    if (CALLED_AS_TRIGGER(fcinfo))
    {
        elog(ERROR, "plperl: can't use perl in triggers yet.");

        /*
         * retval = PointerGetDatum(plperl_trigger_handler(fcinfo));
         */
        /* make the compiler happy */
        retval = (Datum) 0;
    }
    else
        retval = plperl_func_handler(fcinfo);

    plperl_call_level--;

    return retval;
}


/**********************************************************************
 * plperl_create_sub()      - calls the perl interpreter to
 *      create the anonymous subroutine whose text is in the SV.
 *      Returns the SV containing the RV to the closure.
 **********************************************************************/
static SV *
plperl_create_sub(char *s)
{
    dSP;

    SV         *subref = NULL;
    int         count;

    ENTER;
    SAVETMPS;
    PUSHMARK(SP);
    XPUSHs(sv_2mortal(newSVpv(s, 0)));
    PUTBACK;
    count = perl_call_pv("mksafefunc", G_SCALAR | G_EVAL | G_KEEPERR);
    SPAGAIN;

    if (SvTRUE(ERRSV))
    {
        POPs;
        PUTBACK;
        FREETMPS;
        LEAVE;
        elog(ERROR, "creation of function failed : %s", SvPV_nolen(ERRSV));
    }

    if (count != 1)
        elog(ERROR, "creation of function failed - no return from mksafefunc");

    /*
     * need to make a deep copy of the return. it comes off the stack as a
     * temporary.
     */
    subref = newSVsv(POPs);

    if (!SvROK(subref))
    {
        PUTBACK;
        FREETMPS;
        LEAVE;

        /*
         * subref is our responsibility because it is not mortal
         */
        SvREFCNT_dec(subref);
        elog(ERROR, "plperl_create_sub: didn't get a code ref");
    }

    PUTBACK;
    FREETMPS;
    LEAVE;
    return subref;
}

/**********************************************************************
 * plperl_init_shared_libs()        -
 *
 * We cannot use the DynaLoader directly to get at the Opcode
 * module (used by Safe.pm). So, we link Opcode into ourselves
 * and do the initialization behind perl's back.
 *
 **********************************************************************/

extern void boot_Opcode _((CV *cv));
extern void boot_SPI _((CV *cv));

static void
plperl_init_shared_libs(void)
{
    char       *file = __FILE__;

    newXS("Opcode::bootstrap", boot_Opcode, file);
    newXS("SPI::bootstrap", boot_SPI, file);
}

/**********************************************************************
 * plperl_call_perl_func()      - calls a perl function through the RV
 *          stored in the prodesc structure. massages the input
 *          parms properly.
 **********************************************************************/
static SV *
plperl_call_perl_func(plperl_proc_desc *desc, FunctionCallInfo fcinfo)
{
    dSP;

    SV         *retval;
    int         i;
    int         count;

    ENTER;
    SAVETMPS;

    PUSHMARK(sp);
    for (i = 0; i < desc->nargs; i++)
    {
        if (desc->arg_is_rel[i])
        {
            TupleTableSlot *slot = (TupleTableSlot *) fcinfo->arg[i];
            SV         *hashref;

            Assert(slot != NULL && !fcinfo->argnull[i]);

            /*
             * plperl_build_tuple_argument better return a mortal SV.
             */
            hashref = plperl_build_tuple_argument(slot->val,
                                                  slot->ttc_tupleDescriptor);
            XPUSHs(hashref);
        }
        else
        {
            if (fcinfo->argnull[i])
                XPUSHs(&PL_sv_undef);
            else
            {
                char       *tmp;

                tmp = DatumGetCString(FunctionCall3(&(desc->arg_out_func[i]),
                                      fcinfo->arg[i],
                                      ObjectIdGetDatum(desc->arg_out_elem[i]),
                                      Int32GetDatum(desc->arg_out_len[i])));
                XPUSHs(sv_2mortal(newSVpv(tmp, 0)));
                pfree(tmp);
            }
        }
    }
    PUTBACK;
    count = perl_call_sv(desc->reference, G_SCALAR | G_EVAL | G_KEEPERR);

    SPAGAIN;

    if (count != 1)
    {
        PUTBACK;
        FREETMPS;
        LEAVE;
        elog(ERROR, "plperl : didn't get a return item from function");
    }

    if (SvTRUE(ERRSV))
    {
        POPs;
        PUTBACK;
        FREETMPS;
        LEAVE;
        elog(ERROR, "plperl : error from function : %s", SvPV_nolen(ERRSV));
    }

    retval = newSVsv(POPs);

    PUTBACK;
    FREETMPS;
    LEAVE;

    return retval;
}

/**********************************************************************
 * plperl_func_handler()        - Handler for regular function calls
 **********************************************************************/
static
Datum
plperl_func_handler(PG_FUNCTION_ARGS)
{
    int         i;
    char        internal_proname[512];
    int         proname_len;
    plperl_proc_desc *prodesc;
    SV         *perlret;
    Datum       retval;
    sigjmp_buf  save_restart;

    /************************************************************
     * Build our internal proc name from the function's Oid
     ************************************************************/
    sprintf(internal_proname, "__PLPerl_proc_%u", fcinfo->flinfo->fn_oid);
    proname_len = strlen(internal_proname);

    /************************************************************
     * Lookup the internal proc name in the hashtable
     ************************************************************/
    if (!hv_exists(plperl_proc_hash, internal_proname, proname_len))
    {
        /************************************************************
         * If we haven't found it in the hashtable, we analyze
         * the function's arguments and return type and store
         * the in-/out-functions in the prodesc block and create
         * a new hashtable entry for it.
         *
         * Then we load the procedure into the safe interpreter.
         ************************************************************/
        HeapTuple   procTup;
        HeapTuple   typeTup;
        Form_pg_proc procStruct;
        Form_pg_type typeStruct;
        char       *proc_source;

        /************************************************************
         * Allocate a new procedure description block
         ************************************************************/
        prodesc = (plperl_proc_desc *) malloc(sizeof(plperl_proc_desc));
        prodesc->proname = malloc(strlen(internal_proname) + 1);
        strcpy(prodesc->proname, internal_proname);

        /************************************************************
         * Lookup the pg_proc tuple by Oid
         ************************************************************/
        procTup = SearchSysCacheTuple(PROCOID,
                                      ObjectIdGetDatum(fcinfo->flinfo->fn_oid),
                                      0, 0, 0);
        if (!HeapTupleIsValid(procTup))
        {
            free(prodesc->proname);
            free(prodesc);
            elog(ERROR, "plperl: cache lookup for proc %u failed",
                 fcinfo->flinfo->fn_oid);
        }
        procStruct = (Form_pg_proc) GETSTRUCT(procTup);

        /************************************************************
         * Get the required information for input conversion of the
         * return value.
         ************************************************************/
        typeTup = SearchSysCacheTuple(TYPEOID,
                                      ObjectIdGetDatum(procStruct->prorettype),
                                      0, 0, 0);
        if (!HeapTupleIsValid(typeTup))
        {
            free(prodesc->proname);
            free(prodesc);
            elog(ERROR, "plperl: cache lookup for return type %u failed",
                 procStruct->prorettype);
        }
        typeStruct = (Form_pg_type) GETSTRUCT(typeTup);

        if (typeStruct->typrelid != InvalidOid)
        {
            free(prodesc->proname);
            free(prodesc);
            elog(ERROR, "plperl: return types of tuples not supported yet");
        }

        fmgr_info(typeStruct->typinput, &(prodesc->result_in_func));
        prodesc->result_in_elem = (Oid) (typeStruct->typelem);
        prodesc->result_in_len = typeStruct->typlen;

        /************************************************************
         * Get the required information for output conversion
         * of all procedure arguments
         ************************************************************/
        prodesc->nargs = procStruct->pronargs;
        for (i = 0; i < prodesc->nargs; i++)
        {
            typeTup = SearchSysCacheTuple(TYPEOID,
                                ObjectIdGetDatum(procStruct->proargtypes[i]),
                                          0, 0, 0);
            if (!HeapTupleIsValid(typeTup))
            {
                free(prodesc->proname);
                free(prodesc);
                elog(ERROR, "plperl: cache lookup for argument type %u failed",
                     procStruct->proargtypes[i]);
            }
            typeStruct = (Form_pg_type) GETSTRUCT(typeTup);

            if (typeStruct->typrelid != InvalidOid)
                prodesc->arg_is_rel[i] = 1;
            else
                prodesc->arg_is_rel[i] = 0;

            fmgr_info(typeStruct->typoutput, &(prodesc->arg_out_func[i]));
            prodesc->arg_out_elem[i] = (Oid) (typeStruct->typelem);
            prodesc->arg_out_len[i] = typeStruct->typlen;
        }

        /************************************************************
         * Create the text of the anonymous subroutine.
         * We do not use a named subroutine so that we can call directly
         * through the reference.
         ************************************************************/
        proc_source = DatumGetCString(DirectFunctionCall1(textout,
                                    PointerGetDatum(&procStruct->prosrc)));

        /************************************************************
         * Create the procedure in the interpreter
         ************************************************************/
        prodesc->reference = plperl_create_sub(proc_source);
        pfree(proc_source);
        if (!prodesc->reference)
        {
            free(prodesc->proname);
            free(prodesc);
            elog(ERROR, "plperl: cannot create internal procedure %s",
                 internal_proname);
        }

        /************************************************************
         * Add the proc description block to the hashtable
         ************************************************************/
        hv_store(plperl_proc_hash, internal_proname, proname_len,
                 newSViv((IV) prodesc), 0);
    }
    else
    {
        /************************************************************
         * Found the proc description block in the hashtable
         ************************************************************/
        prodesc = (plperl_proc_desc *) SvIV(*hv_fetch(plperl_proc_hash,
                                     internal_proname, proname_len, 0));
    }

    memcpy(&save_restart, &Warn_restart, sizeof(save_restart));

    if (sigsetjmp(Warn_restart, 1) != 0)
    {
        memcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));
        plperl_restart_in_progress = 1;
        if (--plperl_call_level == 0)
            plperl_restart_in_progress = 0;
        siglongjmp(Warn_restart, 1);
    }

    /************************************************************
     * Call the Perl function
     ************************************************************/
    perlret = plperl_call_perl_func(prodesc, fcinfo);

    /************************************************************
     * Disconnect from SPI manager and then create the return
     * value's datum (if the input function does a palloc for it
     * this must not be allocated in the SPI memory context
     * because SPI_finish would free it).
     ************************************************************/
    if (SPI_finish() != SPI_OK_FINISH)
        elog(ERROR, "plperl: SPI_finish() failed");

    /* XXX is this the approved way to check for an undef result?
     */
    if (perlret == &PL_sv_undef)
    {
        retval = (Datum) 0;
        fcinfo->isnull = true;
    }
    else
    {
        retval = FunctionCall3(&prodesc->result_in_func,
                               PointerGetDatum(SvPV_nolen(perlret)),
                               ObjectIdGetDatum(prodesc->result_in_elem),
                               Int32GetDatum(prodesc->result_in_len));
    }

    SvREFCNT_dec(perlret);

    memcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));
    if (plperl_restart_in_progress)
    {
        if (--plperl_call_level == 0)
            plperl_restart_in_progress = 0;
        siglongjmp(Warn_restart, 1);
    }

    return retval;
}


#ifdef REALLYHAVEITONTHEBALL
/**********************************************************************
 * plperl_trigger_handler() - Handler for trigger calls
 **********************************************************************/
static HeapTuple
plperl_trigger_handler(PG_FUNCTION_ARGS)
{
    TriggerData *trigdata = (TriggerData *) fcinfo->context;
    char        internal_proname[512];
    char       *stroid;
    Tcl_HashEntry *hashent;
    int         hashnew;
    plperl_proc_desc *prodesc;
    TupleDesc   tupdesc;
    HeapTuple   rettup;
    Tcl_DString tcl_cmd;
    Tcl_DString tcl_trigtup;
    Tcl_DString tcl_newtup;
    int         tcl_rc;
    int         i;

    int        *modattrs;
    Datum      *modvalues;
    char       *modnulls;

    int         ret_numvals;
    char      **ret_values;

    sigjmp_buf  save_restart;

    /************************************************************
     * Build our internal proc name from the function's Oid
     ************************************************************/
    sprintf(internal_proname, "__PLPerl_proc_%u", fcinfo->flinfo->fn_oid);

    /************************************************************
     * Lookup the internal proc name in the hashtable
     ************************************************************/
    hashent = Tcl_FindHashEntry(plperl_proc_hash, internal_proname);
    if (hashent == NULL)
    {
        /************************************************************
         * If we haven't found it in the hashtable,
         * we load the procedure into the safe interpreter.
         ************************************************************/
        Tcl_DString proc_internal_def;
        Tcl_DString proc_internal_body;
        HeapTuple   procTup;
        Form_pg_proc procStruct;
        char       *proc_source;

        /************************************************************
         * Allocate a new procedure description block
         ************************************************************/
        prodesc = (plperl_proc_desc *) malloc(sizeof(plperl_proc_desc));
        memset(prodesc, 0, sizeof(plperl_proc_desc));
        prodesc->proname = malloc(strlen(internal_proname) + 1);
        strcpy(prodesc->proname, internal_proname);

        /************************************************************
         * Lookup the pg_proc tuple by Oid
         ************************************************************/
        procTup = SearchSysCacheTuple(PROCOID,
                                      ObjectIdGetDatum(fcinfo->flinfo->fn_oid),
                                      0, 0, 0);
        if (!HeapTupleIsValid(procTup))
        {
            free(prodesc->proname);
            free(prodesc);
            elog(ERROR, "plperl: cache lookup for proc %u failed",
                 fcinfo->flinfo->fn_oid);
        }
        procStruct = (Form_pg_proc) GETSTRUCT(procTup);

        /************************************************************
         * Create the tcl command to define the internal
         * procedure
         ************************************************************/
        Tcl_DStringInit(&proc_internal_def);
        Tcl_DStringInit(&proc_internal_body);
        Tcl_DStringAppendElement(&proc_internal_def, "proc");
        Tcl_DStringAppendElement(&proc_internal_def, internal_proname);
        Tcl_DStringAppendElement(&proc_internal_def,
                                 "TG_name TG_relid TG_relatts TG_when TG_level TG_op __PLTcl_Tup_NEW __PLTcl_Tup_OLD args");

        /************************************************************
         * prefix procedure body with
         * upvar #0 <internal_procname> GD
         * and with appropriate setting of NEW, OLD,
         * and the arguments as numerical variables.
         ************************************************************/
        Tcl_DStringAppend(&proc_internal_body, "upvar #0 ", -1);
        Tcl_DStringAppend(&proc_internal_body, internal_proname, -1);
        Tcl_DStringAppend(&proc_internal_body, " GD\n", -1);

        Tcl_DStringAppend(&proc_internal_body,
                          "array set NEW $__PLTcl_Tup_NEW\n", -1);
        Tcl_DStringAppend(&proc_internal_body,
                          "array set OLD $__PLTcl_Tup_OLD\n", -1);

        Tcl_DStringAppend(&proc_internal_body,
                          "set i 0\n"
                          "set v 0\n"
                          "foreach v $args {\n"
                          "    incr i\n"
                          "    set $i $v\n"
                          "}\n"
                          "unset i v\n\n", -1);

        proc_source = DatumGetCString(DirectFunctionCall1(textout,
                                    PointerGetDatum(&procStruct->prosrc)));
        Tcl_DStringAppend(&proc_internal_body, proc_source, -1);
        pfree(proc_source);
        Tcl_DStringAppendElement(&proc_internal_def,
                                 Tcl_DStringValue(&proc_internal_body));
        Tcl_DStringFree(&proc_internal_body);

        /************************************************************
         * Create the procedure in the safe interpreter
         ************************************************************/
        tcl_rc = Tcl_GlobalEval(plperl_safe_interp,
                                Tcl_DStringValue(&proc_internal_def));
        Tcl_DStringFree(&proc_internal_def);
        if (tcl_rc != TCL_OK)
        {
            free(prodesc->proname);
            free(prodesc);
            elog(ERROR, "plperl: cannot create internal procedure %s - %s",
                 internal_proname, plperl_safe_interp->result);
        }

        /************************************************************
         * Add the proc description block to the hashtable
         ************************************************************/
        hashent = Tcl_CreateHashEntry(plperl_proc_hash,
                                      prodesc->proname, &hashnew);
        Tcl_SetHashValue(hashent, (ClientData) prodesc);
    }
    else
    {
        /************************************************************
         * Found the proc description block in the hashtable
         ************************************************************/
        prodesc = (plperl_proc_desc *) Tcl_GetHashValue(hashent);
    }

    tupdesc = trigdata->tg_relation->rd_att;

    /************************************************************
     * Create the tcl command to call the internal
     * proc in the safe interpreter
     ************************************************************/
    Tcl_DStringInit(&tcl_cmd);
    Tcl_DStringInit(&tcl_trigtup);
    Tcl_DStringInit(&tcl_newtup);

    /************************************************************
     * We call external functions below - care for elog(ERROR)
     ************************************************************/
    memcpy(&save_restart, &Warn_restart, sizeof(save_restart));
    if (sigsetjmp(Warn_restart, 1) != 0)
    {
        memcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));
        Tcl_DStringFree(&tcl_cmd);
        Tcl_DStringFree(&tcl_trigtup);
        Tcl_DStringFree(&tcl_newtup);
        plperl_restart_in_progress = 1;
        if (--plperl_call_level == 0)
            plperl_restart_in_progress = 0;
        siglongjmp(Warn_restart, 1);
    }

    /* The procedure name */
    Tcl_DStringAppendElement(&tcl_cmd, internal_proname);

    /* The trigger name for argument TG_name */
    Tcl_DStringAppendElement(&tcl_cmd, trigdata->tg_trigger->tgname);

    /* The oid of the trigger relation for argument TG_relid */
    stroid = DatumGetCString(DirectFunctionCall1(oidout,
                           ObjectIdGetDatum(trigdata->tg_relation->rd_id)));
    Tcl_DStringAppendElement(&tcl_cmd, stroid);
    pfree(stroid);

    /* A list of attribute names for argument TG_relatts */
    Tcl_DStringAppendElement(&tcl_trigtup, "");
    for (i = 0; i < tupdesc->natts; i++)
        Tcl_DStringAppendElement(&tcl_trigtup, tupdesc->attrs[i]->attname.data);
    Tcl_DStringAppendElement(&tcl_cmd, Tcl_DStringValue(&tcl_trigtup));
    Tcl_DStringFree(&tcl_trigtup);
    Tcl_DStringInit(&tcl_trigtup);

    /* The when part of the event for TG_when */
    if (TRIGGER_FIRED_BEFORE(trigdata->tg_event))
        Tcl_DStringAppendElement(&tcl_cmd, "BEFORE");
    else if (TRIGGER_FIRED_AFTER(trigdata->tg_event))
        Tcl_DStringAppendElement(&tcl_cmd, "AFTER");
    else
        Tcl_DStringAppendElement(&tcl_cmd, "UNKNOWN");

    /* The level part of the event for TG_level */
    if (TRIGGER_FIRED_FOR_ROW(trigdata->tg_event))
        Tcl_DStringAppendElement(&tcl_cmd, "ROW");
    else if (TRIGGER_FIRED_FOR_STATEMENT(trigdata->tg_event))
        Tcl_DStringAppendElement(&tcl_cmd, "STATEMENT");
    else
        Tcl_DStringAppendElement(&tcl_cmd, "UNKNOWN");

    /* Build the data list for the trigtuple */
    plperl_build_tuple_argument(trigdata->tg_trigtuple,
                                tupdesc, &tcl_trigtup);

    /*
     * Now the command part of the event for TG_op and data for NEW and
     * OLD
     */
    if (TRIGGER_FIRED_BY_INSERT(trigdata->tg_event))
    {
        Tcl_DStringAppendElement(&tcl_cmd, "INSERT");

        Tcl_DStringAppendElement(&tcl_cmd, Tcl_DStringValue(&tcl_trigtup));
        Tcl_DStringAppendElement(&tcl_cmd, "");

        rettup = trigdata->tg_trigtuple;
    }
    else if (TRIGGER_FIRED_BY_DELETE(trigdata->tg_event))
    {
        Tcl_DStringAppendElement(&tcl_cmd, "DELETE");

        Tcl_DStringAppendElement(&tcl_cmd, "");
        Tcl_DStringAppendElement(&tcl_cmd, Tcl_DStringValue(&tcl_trigtup));

        rettup = trigdata->tg_trigtuple;
    }
    else if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
    {
        Tcl_DStringAppendElement(&tcl_cmd, "UPDATE");

        plperl_build_tuple_argument(trigdata->tg_newtuple,
                                    tupdesc, &tcl_newtup);

        Tcl_DStringAppendElement(&tcl_cmd, Tcl_DStringValue(&tcl_newtup));
        Tcl_DStringAppendElement(&tcl_cmd, Tcl_DStringValue(&tcl_trigtup));

        rettup = trigdata->tg_newtuple;
    }
    else
    {
        Tcl_DStringAppendElement(&tcl_cmd, "UNKNOWN");

        Tcl_DStringAppendElement(&tcl_cmd, Tcl_DStringValue(&tcl_trigtup));
        Tcl_DStringAppendElement(&tcl_cmd, Tcl_DStringValue(&tcl_trigtup));

        rettup = trigdata->tg_trigtuple;
    }

    memcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));
    Tcl_DStringFree(&tcl_trigtup);
    Tcl_DStringFree(&tcl_newtup);

    /************************************************************
     * Finally append the arguments from CREATE TRIGGER
     ************************************************************/
    for (i = 0; i < trigdata->tg_trigger->tgnargs; i++)
        Tcl_DStringAppendElement(&tcl_cmd, trigdata->tg_trigger->tgargs[i]);

    /************************************************************
     * Call the Tcl function
     ************************************************************/
    tcl_rc = Tcl_GlobalEval(plperl_safe_interp, Tcl_DStringValue(&tcl_cmd));
    Tcl_DStringFree(&tcl_cmd);

    /************************************************************
     * Check the return code from Tcl and handle
     * our special restart mechanism to get rid
     * of all nested call levels on transaction
     * abort.
     ************************************************************/
    if (tcl_rc == TCL_ERROR || plperl_restart_in_progress)
    {
        if (!plperl_restart_in_progress)
        {
            plperl_restart_in_progress = 1;
            if (--plperl_call_level == 0)
                plperl_restart_in_progress = 0;
            elog(ERROR, "plperl: %s", plperl_safe_interp->result);
        }
        if (--plperl_call_level == 0)
            plperl_restart_in_progress = 0;
        siglongjmp(Warn_restart, 1);
    }

    switch (tcl_rc)
    {
        case TCL_OK:
            break;

        default:
            elog(ERROR, "plperl: unsupported TCL return code %d", tcl_rc);
    }

    /************************************************************
     * The return value from the procedure might be one of
     * the magic strings OK or SKIP or a list from array get
     ************************************************************/
    if (SPI_finish() != SPI_OK_FINISH)
        elog(ERROR, "plperl: SPI_finish() failed");

    if (strcmp(plperl_safe_interp->result, "OK") == 0)
        return rettup;
    if (strcmp(plperl_safe_interp->result, "SKIP") == 0)
        return (HeapTuple) NULL;

    /************************************************************
     * Convert the result value from the safe interpreter
     * and setup structures for SPI_modifytuple();
     ************************************************************/
    if (Tcl_SplitList(plperl_safe_interp, plperl_safe_interp->result,
                      &ret_numvals, &ret_values) != TCL_OK)
    {
        elog(NOTICE, "plperl: cannot split return value from trigger");
        elog(ERROR, "plperl: %s", plperl_safe_interp->result);
    }

    if (ret_numvals % 2 != 0)
    {
        ckfree(ret_values);
        elog(ERROR, "plperl: invalid return list from trigger - must have even # of elements");
    }

    modattrs = (int *) palloc(tupdesc->natts * sizeof(int));
    modvalues = (Datum *) palloc(tupdesc->natts * sizeof(Datum));
    for (i = 0; i < tupdesc->natts; i++)
    {
        modattrs[i] = i + 1;
        modvalues[i] = (Datum) NULL;
    }

    modnulls = palloc(tupdesc->natts + 1);
    memset(modnulls, 'n', tupdesc->natts);
    modnulls[tupdesc->natts] = '\0';

    /************************************************************
     * Care for possible elog(ERROR)'s below
     ************************************************************/
    if (sigsetjmp(Warn_restart, 1) != 0)
    {
        memcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));
        ckfree(ret_values);
        plperl_restart_in_progress = 1;
        if (--plperl_call_level == 0)
            plperl_restart_in_progress = 0;
        siglongjmp(Warn_restart, 1);
    }

    i = 0;
    while (i < ret_numvals)
    {
        int         attnum;
        HeapTuple   typeTup;
        Oid         typinput;
        Oid         typelem;
        FmgrInfo    finfo;

        /************************************************************
         * Ignore pseudo elements with a dot name
         ************************************************************/
        if (*(ret_values[i]) == '.')
        {
            i += 2;
            continue;
        }

        /************************************************************
         * Get the attribute number
         ************************************************************/
        attnum = SPI_fnumber(tupdesc, ret_values[i++]);
        if (attnum == SPI_ERROR_NOATTRIBUTE)
            elog(ERROR, "plperl: invalid attribute '%s'", ret_values[--i]);

        /************************************************************
         * Lookup the attribute type in the syscache
         * for the input function
         ************************************************************/
        typeTup = SearchSysCacheTuple(TYPEOID,
                  ObjectIdGetDatum(tupdesc->attrs[attnum - 1]->atttypid),
                                      0, 0, 0);
        if (!HeapTupleIsValid(typeTup))
        {
            elog(ERROR, "plperl: Cache lookup for attribute '%s' type %u failed",
                 ret_values[--i],
                 tupdesc->attrs[attnum - 1]->atttypid);
        }
        typinput = (Oid) (((Form_pg_type) GETSTRUCT(typeTup))->typinput);
        typelem = (Oid) (((Form_pg_type) GETSTRUCT(typeTup))->typelem);

        /************************************************************
         * Set the attribute to NOT NULL and convert the contents
         ************************************************************/
        modnulls[attnum - 1] = ' ';
        fmgr_info(typinput, &finfo);
        modvalues[attnum - 1] =
            FunctionCall3(&finfo,
                          CStringGetDatum(ret_values[i++]),
                          ObjectIdGetDatum(typelem),
                          Int32GetDatum(tupdesc->attrs[attnum - 1]->atttypmod));
    }

    rettup = SPI_modifytuple(trigdata->tg_relation, rettup, tupdesc->natts,
                             modattrs, modvalues, modnulls);

    pfree(modattrs);
    pfree(modvalues);
    pfree(modnulls);

    if (rettup == NULL)
        elog(ERROR, "plperl: SPI_modifytuple() failed - RC = %d\n", SPI_result);

    ckfree(ret_values);
    memcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));

    return rettup;
}


/**********************************************************************
 * plperl_elog()        - elog() support for PL/Perl
 **********************************************************************/
static int
plperl_elog(ClientData cdata, Tcl_Interp *interp,
            int argc, char *argv[])
{
    int         level;
    sigjmp_buf  save_restart;

    /************************************************************
     * Suppress messages during the restart process
     ************************************************************/
    if (plperl_restart_in_progress)
        return TCL_ERROR;

    /************************************************************
     * Catch the restart longjmp and begin a controlled
     * return through all interpreter levels if it happens
     ************************************************************/
    memcpy(&save_restart, &Warn_restart, sizeof(save_restart));
    if (sigsetjmp(Warn_restart, 1) != 0)
    {
        memcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));
        plperl_restart_in_progress = 1;
        return TCL_ERROR;
    }

    if (argc != 3)
    {
        Tcl_SetResult(interp, "syntax error - 'elog level msg'",
                      TCL_VOLATILE);
        return TCL_ERROR;
    }

    if (strcmp(argv[1], "NOTICE") == 0)
        level = NOTICE;
    else if (strcmp(argv[1], "WARN") == 0)
        level = ERROR;
    else if (strcmp(argv[1], "ERROR") == 0)
        level = ERROR;
    else if (strcmp(argv[1], "FATAL") == 0)
        level = FATAL;
    else if (strcmp(argv[1], "DEBUG") == 0)
        level = DEBUG;
    else if (strcmp(argv[1], "NOIND") == 0)
        level = NOIND;
    else
    {
        Tcl_AppendResult(interp, "Unknown elog level '", argv[1],
                         "'", NULL);
        memcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));
        return TCL_ERROR;
    }

    /************************************************************
     * Call elog(), restore the original restart address
     * and return to the caller (if not caught)
     ************************************************************/
    elog(level, argv[2]);
    memcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));
    return TCL_OK;
}


/**********************************************************************
 * plperl_quote()   - quote literal strings that are to
 *            be used in SPI_exec query strings
 **********************************************************************/
static int
plperl_quote(ClientData cdata, Tcl_Interp *interp,
             int argc, char *argv[])
{
    char       *tmp;
    char       *cp1;
    char       *cp2;

    /************************************************************
     * Check call syntax
     ************************************************************/
    if (argc != 2)
    {
        Tcl_SetResult(interp, "syntax error - 'quote string'", TCL_VOLATILE);
        return TCL_ERROR;
    }

    /************************************************************
     * Allocate space for the maximum the string can
     * grow to and initialize pointers
     ************************************************************/
    tmp = palloc(strlen(argv[1]) * 2 + 1);
    cp1 = argv[1];
    cp2 = tmp;

    /************************************************************
     * Walk through string and double every quote and backslash
     ************************************************************/
    while (*cp1)
    {
        if (*cp1 == '\'')
            *cp2++ = '\'';
        else
        {
            if (*cp1 == '\\')
                *cp2++ = '\\';
        }
        *cp2++ = *cp1++;
    }

    /************************************************************
     * Terminate the string and set it as result
     ************************************************************/
    *cp2 = '\0';
    Tcl_SetResult(interp, tmp, TCL_VOLATILE);
    pfree(tmp);
    return TCL_OK;
}


/**********************************************************************
 * plperl_SPI_exec()        - The builtin SPI_exec command
 *                for the safe interpreter
 **********************************************************************/
static int
plperl_SPI_exec(ClientData cdata, Tcl_Interp *interp,
                int argc, char *argv[])
{
    int         spi_rc;
    char        buf[64];
    int         count = 0;
    char       *arrayname = NULL;
    int         query_idx;
    int         i;
    int         loop_rc;
    int         ntuples;
    HeapTuple  *tuples;
    TupleDesc   tupdesc = NULL;
    sigjmp_buf  save_restart;

    char       *usage = "syntax error - 'SPI_exec "
    "?-count n? "
    "?-array name? query ?loop body?'";

    /************************************************************
     * Don't do anything if we are already in restart mode
     ************************************************************/
    if (plperl_restart_in_progress)
        return TCL_ERROR;

    /************************************************************
     * Check the call syntax and get the count option
     ************************************************************/
    if (argc < 2)
    {
        Tcl_SetResult(interp, usage, TCL_VOLATILE);
        return TCL_ERROR;
    }

    i = 1;
    while (i < argc)
    {
        if (strcmp(argv[i], "-array") == 0)
        {
            if (++i >= argc)
            {
                Tcl_SetResult(interp, usage, TCL_VOLATILE);
                return TCL_ERROR;
            }
            arrayname = argv[i++];
            continue;
        }

        if (strcmp(argv[i], "-count") == 0)
        {
            if (++i >= argc)
            {
                Tcl_SetResult(interp, usage, TCL_VOLATILE);
                return TCL_ERROR;
            }
            if (Tcl_GetInt(interp, argv[i++], &count) != TCL_OK)
                return TCL_ERROR;
            continue;
        }

        break;
    }

    query_idx = i;
    if (query_idx >= argc)
    {
        Tcl_SetResult(interp, usage, TCL_VOLATILE);
        return TCL_ERROR;
    }

    /************************************************************
     * Prepare to start a controlled return through all
     * interpreter levels on transaction abort
     ************************************************************/
    memcpy(&save_restart, &Warn_restart, sizeof(save_restart));
    if (sigsetjmp(Warn_restart, 1) != 0)
    {
        memcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));
        plperl_restart_in_progress = 1;
        Tcl_SetResult(interp, "Transaction abort", TCL_VOLATILE);
        return TCL_ERROR;
    }

    /************************************************************
     * Execute the query and handle return codes
     ************************************************************/
    spi_rc = SPI_exec(argv[query_idx], count);
    memcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));

    switch (spi_rc)
    {
        case SPI_OK_UTILITY:
            Tcl_SetResult(interp, "0", TCL_VOLATILE);
            return TCL_OK;

        case SPI_OK_SELINTO:
        case SPI_OK_INSERT:
        case SPI_OK_DELETE:
        case SPI_OK_UPDATE:
            sprintf(buf, "%d", SPI_processed);
            Tcl_SetResult(interp, buf, TCL_VOLATILE);
            return TCL_OK;

        case SPI_OK_SELECT:
            break;

        case SPI_ERROR_ARGUMENT:
            Tcl_SetResult(interp,
                          "plperl: SPI_exec() failed - SPI_ERROR_ARGUMENT",
                          TCL_VOLATILE);
            return TCL_ERROR;

        case SPI_ERROR_UNCONNECTED:
            Tcl_SetResult(interp,
                          "plperl: SPI_exec() failed - SPI_ERROR_UNCONNECTED",
                          TCL_VOLATILE);
            return TCL_ERROR;

        case SPI_ERROR_COPY:
            Tcl_SetResult(interp,
                          "plperl: SPI_exec() failed - SPI_ERROR_COPY",
                          TCL_VOLATILE);
            return
TCL_ERROR;\n\n\t\tcase SPI_ERROR_CURSOR:\n\t\t\tTcl_SetResult(interp,\n\t\t\t\t\t\t \"plperl: SPI_exec() failed - SPI_ERROR_CURSOR\",\n\t\t\t\t\t\t TCL_VOLATILE);\n\t\t\treturn TCL_ERROR;\n\n\t\tcase SPI_ERROR_TRANSACTION:\n\t\t\tTcl_SetResult(interp,\n\t\t\t\t\t \"plperl: SPI_exec() failed - SPI_ERROR_TRANSACTION\",\n\t\t\t\t\t\t TCL_VOLATILE);\n\t\t\treturn TCL_ERROR;\n\n\t\tcase SPI_ERROR_OPUNKNOWN:\n\t\t\tTcl_SetResult(interp,\n\t\t\t\t\t \"plperl: SPI_exec() failed - SPI_ERROR_OPUNKNOWN\",\n\t\t\t\t\t\t TCL_VOLATILE);\n\t\t\treturn TCL_ERROR;\n\n\t\tdefault:\n\t\t\tsprintf(buf, \"%d\", spi_rc);\n\t\t\tTcl_AppendResult(interp, \"plperl: SPI_exec() failed - \",\n\t\t\t\t\t\t\t \"unknown RC \", buf, NULL);\n\t\t\treturn TCL_ERROR;\n\t}\n\n\t/************************************************************\n\t * Only SELECT queries fall through to here - remember the\n\t * tuples we got\n\t ************************************************************/\n\n\tntuples = SPI_processed;\n\tif (ntuples > 0)\n\t{\n\t\ttuples = SPI_tuptable->vals;\n\t\ttupdesc = SPI_tuptable->tupdesc;\n\t}\n\n\t/************************************************************\n\t * Again prepare for elog(ERROR)\n\t ************************************************************/\n\tif (sigsetjmp(Warn_restart, 1) != 0)\n\t{\n\t\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\t\tplperl_restart_in_progress = 1;\n\t\tTcl_SetResult(interp, \"Transaction abort\", TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\t/************************************************************\n\t * If there is no loop body given, just set the variables\n\t * from the first tuple (if any) and return the number of\n\t * tuples selected\n\t ************************************************************/\n\tif (argc == query_idx + 1)\n\t{\n\t\tif (ntuples > 0)\n\t\t\tplperl_set_tuple_values(interp, arrayname, 0, tuples[0], tupdesc);\n\t\tsprintf(buf, \"%d\", ntuples);\n\t\tTcl_SetResult(interp, buf, 
TCL_VOLATILE);\n\t\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\t\treturn TCL_OK;\n\t}\n\n\t/************************************************************\n\t * There is a loop body - process all tuples and evaluate\n\t * the body on each\n\t ************************************************************/\n\tquery_idx++;\n\tfor (i = 0; i < ntuples; i++)\n\t{\n\t\tplperl_set_tuple_values(interp, arrayname, i, tuples[i], tupdesc);\n\n\t\tloop_rc = Tcl_Eval(interp, argv[query_idx]);\n\n\t\tif (loop_rc == TCL_OK)\n\t\t\tcontinue;\n\t\tif (loop_rc == TCL_CONTINUE)\n\t\t\tcontinue;\n\t\tif (loop_rc == TCL_RETURN)\n\t\t{\n\t\t\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\t\t\treturn TCL_RETURN;\n\t\t}\n\t\tif (loop_rc == TCL_BREAK)\n\t\t\tbreak;\n\t\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\t\treturn TCL_ERROR;\n\t}\n\n\t/************************************************************\n\t * Finally return the number of tuples\n\t ************************************************************/\n\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\tsprintf(buf, \"%d\", ntuples);\n\tTcl_SetResult(interp, buf, TCL_VOLATILE);\n\treturn TCL_OK;\n}\n\n\n/**********************************************************************\n * plperl_SPI_prepare()\t\t- Builtin support for prepared plans\n *\t\t\t\t The Tcl command SPI_prepare\n *\t\t\t\t allways saves the plan using\n *\t\t\t\t SPI_saveplan and returns a key for\n *\t\t\t\t access. 
There is no chance to prepare\n *\t\t\t\t and not save the plan currently.\n **********************************************************************/\nstatic int\nplperl_SPI_prepare(ClientData cdata, Tcl_Interp *interp,\n\t\t\t\t int argc, char *argv[])\n{\n\tint\t\t\tnargs;\n\tchar\t **args;\n\tplperl_query_desc *qdesc;\n\tvoid\t *plan;\n\tint\t\t\ti;\n\tHeapTuple\ttypeTup;\n\tTcl_HashEntry *hashent;\n\tint\t\t\thashnew;\n\tsigjmp_buf\tsave_restart;\n\n\t/************************************************************\n\t * Don't do anything if we are already in restart mode\n\t ************************************************************/\n\tif (plperl_restart_in_progress)\n\t\treturn TCL_ERROR;\n\n\t/************************************************************\n\t * Check the call syntax\n\t ************************************************************/\n\tif (argc != 3)\n\t{\n\t\tTcl_SetResult(interp, \"syntax error - 'SPI_prepare query argtypes'\",\n\t\t\t\t\t TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\t/************************************************************\n\t * Split the argument type list\n\t ************************************************************/\n\tif (Tcl_SplitList(interp, argv[2], &nargs, &args) != TCL_OK)\n\t\treturn TCL_ERROR;\n\n\t/************************************************************\n\t * Allocate the new querydesc structure\n\t ************************************************************/\n\tqdesc = (plperl_query_desc *) malloc(sizeof(plperl_query_desc));\n\tsprintf(qdesc->qname, \"%lx\", (long) qdesc);\n\tqdesc->nargs = nargs;\n\tqdesc->argtypes = (Oid *) malloc(nargs * sizeof(Oid));\n\tqdesc->arginfuncs = (FmgrInfo *) malloc(nargs * sizeof(FmgrInfo));\n\tqdesc->argtypelems = (Oid *) malloc(nargs * sizeof(Oid));\n\tqdesc->argvalues = (Datum *) malloc(nargs * sizeof(Datum));\n\tqdesc->arglen = (int *) malloc(nargs * sizeof(int));\n\n\t/************************************************************\n\t * Prepare to start a 
controlled return through all\n\t * interpreter levels on transaction abort\n\t ************************************************************/\n\tmemcpy(&save_restart, &Warn_restart, sizeof(save_restart));\n\tif (sigsetjmp(Warn_restart, 1) != 0)\n\t{\n\t\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\t\tplperl_restart_in_progress = 1;\n\t\tfree(qdesc->argtypes);\n\t\tfree(qdesc->arginfuncs);\n\t\tfree(qdesc->argtypelems);\n\t\tfree(qdesc->argvalues);\n\t\tfree(qdesc->arglen);\n\t\tfree(qdesc);\n\t\tckfree(args);\n\t\treturn TCL_ERROR;\n\t}\n\n\t/************************************************************\n\t * Lookup the argument types by name in the system cache\n\t * and remember the required information for input conversion\n\t ************************************************************/\n\tfor (i = 0; i < nargs; i++)\n\t{\n\t\ttypeTup = SearchSysCacheTuple(TYPNAME,\n\t\t\t\t\t\t\t\t\t PointerGetDatum(args[i]),\n\t\t\t\t\t\t\t\t\t 0, 0, 0);\n\t\tif (!HeapTupleIsValid(typeTup))\n\t\t\telog(ERROR, \"plperl: Cache lookup of type %s failed\", args[i]);\n\t\tqdesc->argtypes[i] = typeTup->t_data->t_oid;\n\t\tfmgr_info(((Form_pg_type) GETSTRUCT(typeTup))->typinput,\n\t\t\t\t &(qdesc->arginfuncs[i]));\n\t\tqdesc->argtypelems[i] = ((Form_pg_type) GETSTRUCT(typeTup))->typelem;\n\t\tqdesc->argvalues[i] = (Datum) NULL;\n\t\tqdesc->arglen[i] = (int) (((Form_pg_type) GETSTRUCT(typeTup))->typlen);\n\t}\n\n\t/************************************************************\n\t * Prepare the plan and check for errors\n\t ************************************************************/\n\tplan = SPI_prepare(argv[1], nargs, qdesc->argtypes);\n\n\tif (plan == NULL)\n\t{\n\t\tchar\t\tbuf[128];\n\t\tchar\t *reason;\n\n\t\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\n\t\tswitch (SPI_result)\n\t\t{\n\t\t\tcase SPI_ERROR_ARGUMENT:\n\t\t\t\treason = \"SPI_ERROR_ARGUMENT\";\n\t\t\t\tbreak;\n\n\t\t\tcase SPI_ERROR_UNCONNECTED:\n\t\t\t\treason = 
\"SPI_ERROR_UNCONNECTED\";\n\t\t\t\tbreak;\n\n\t\t\tcase SPI_ERROR_COPY:\n\t\t\t\treason = \"SPI_ERROR_COPY\";\n\t\t\t\tbreak;\n\n\t\t\tcase SPI_ERROR_CURSOR:\n\t\t\t\treason = \"SPI_ERROR_CURSOR\";\n\t\t\t\tbreak;\n\n\t\t\tcase SPI_ERROR_TRANSACTION:\n\t\t\t\treason = \"SPI_ERROR_TRANSACTION\";\n\t\t\t\tbreak;\n\n\t\t\tcase SPI_ERROR_OPUNKNOWN:\n\t\t\t\treason = \"SPI_ERROR_OPUNKNOWN\";\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\tsprintf(buf, \"unknown RC %d\", SPI_result);\n\t\t\t\treason = buf;\n\t\t\t\tbreak;\n\n\t\t}\n\n\t\telog(ERROR, \"plperl: SPI_prepare() failed - %s\", reason);\n\t}\n\n\t/************************************************************\n\t * Save the plan\n\t ************************************************************/\n\tqdesc->plan = SPI_saveplan(plan);\n\tif (qdesc->plan == NULL)\n\t{\n\t\tchar\t\tbuf[128];\n\t\tchar\t *reason;\n\n\t\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\n\t\tswitch (SPI_result)\n\t\t{\n\t\t\tcase SPI_ERROR_ARGUMENT:\n\t\t\t\treason = \"SPI_ERROR_ARGUMENT\";\n\t\t\t\tbreak;\n\n\t\t\tcase SPI_ERROR_UNCONNECTED:\n\t\t\t\treason = \"SPI_ERROR_UNCONNECTED\";\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\tsprintf(buf, \"unknown RC %d\", SPI_result);\n\t\t\t\treason = buf;\n\t\t\t\tbreak;\n\n\t\t}\n\n\t\telog(ERROR, \"plperl: SPI_saveplan() failed - %s\", reason);\n\t}\n\n\t/************************************************************\n\t * Insert a hashtable entry for the plan and return\n\t * the key to the caller\n\t ************************************************************/\n\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\thashent = Tcl_CreateHashEntry(plperl_query_hash, qdesc->qname, &hashnew);\n\tTcl_SetHashValue(hashent, (ClientData) qdesc);\n\n\tTcl_SetResult(interp, qdesc->qname, TCL_VOLATILE);\n\treturn TCL_OK;\n}\n\n\n/**********************************************************************\n * plperl_SPI_execp()\t\t- Execute a prepared plan\n 
**********************************************************************/\nstatic int\nplperl_SPI_execp(ClientData cdata, Tcl_Interp *interp,\n\t\t\t\t int argc, char *argv[])\n{\n\tint\t\t\tspi_rc;\n\tchar\t\tbuf[64];\n\tint\t\t\ti,\n\t\t\t\tj;\n\tint\t\t\tloop_body;\n\tTcl_HashEntry *hashent;\n\tplperl_query_desc *qdesc;\n\tchar\t *nulls = NULL;\n\tchar\t *arrayname = NULL;\n\tint\t\t\tcount = 0;\n\tint\t\t\tcallnargs;\n\tstatic char **callargs = NULL;\n\tint\t\t\tloop_rc;\n\tint\t\t\tntuples;\n\tHeapTuple *tuples = NULL;\n\tTupleDesc\ttupdesc = NULL;\n\tsigjmp_buf\tsave_restart;\n\n\tchar\t *usage = \"syntax error - 'SPI_execp \"\n\t\"?-nulls string? ?-count n? \"\n\t\"?-array name? query ?args? ?loop body?\";\n\n\t/************************************************************\n\t * Tidy up from an earlier abort\n\t ************************************************************/\n\tif (callargs != NULL)\n\t{\n\t\tckfree(callargs);\n\t\tcallargs = NULL;\n\t}\n\n\t/************************************************************\n\t * Don't do anything if we are already in restart mode\n\t ************************************************************/\n\tif (plperl_restart_in_progress)\n\t\treturn TCL_ERROR;\n\n\t/************************************************************\n\t * Get the options and check syntax\n\t ************************************************************/\n\ti = 1;\n\twhile (i < argc)\n\t{\n\t\tif (strcmp(argv[i], \"-array\") == 0)\n\t\t{\n\t\t\tif (++i >= argc)\n\t\t\t{\n\t\t\t\tTcl_SetResult(interp, usage, TCL_VOLATILE);\n\t\t\t\treturn TCL_ERROR;\n\t\t\t}\n\t\t\tarrayname = argv[i++];\n\t\t\tcontinue;\n\t\t}\n\t\tif (strcmp(argv[i], \"-nulls\") == 0)\n\t\t{\n\t\t\tif (++i >= argc)\n\t\t\t{\n\t\t\t\tTcl_SetResult(interp, usage, TCL_VOLATILE);\n\t\t\t\treturn TCL_ERROR;\n\t\t\t}\n\t\t\tnulls = argv[i++];\n\t\t\tcontinue;\n\t\t}\n\t\tif (strcmp(argv[i], \"-count\") == 0)\n\t\t{\n\t\t\tif (++i >= argc)\n\t\t\t{\n\t\t\t\tTcl_SetResult(interp, usage, 
TCL_VOLATILE);\n\t\t\t\treturn TCL_ERROR;\n\t\t\t}\n\t\t\tif (Tcl_GetInt(interp, argv[i++], &count) != TCL_OK)\n\t\t\t\treturn TCL_ERROR;\n\t\t\tcontinue;\n\t\t}\n\n\t\tbreak;\n\t}\n\n\t/************************************************************\n\t * Check minimum call arguments\n\t ************************************************************/\n\tif (i >= argc)\n\t{\n\t\tTcl_SetResult(interp, usage, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\t/************************************************************\n\t * Get the prepared plan descriptor by it's key\n\t ************************************************************/\n\thashent = Tcl_FindHashEntry(plperl_query_hash, argv[i++]);\n\tif (hashent == NULL)\n\t{\n\t\tTcl_AppendResult(interp, \"invalid queryid '\", argv[--i], \"'\", NULL);\n\t\treturn TCL_ERROR;\n\t}\n\tqdesc = (plperl_query_desc *) Tcl_GetHashValue(hashent);\n\n\t/************************************************************\n\t * If a nulls string is given, check for correct length\n\t ************************************************************/\n\tif (nulls != NULL)\n\t{\n\t\tif (strlen(nulls) != qdesc->nargs)\n\t\t{\n\t\t\tTcl_SetResult(interp,\n\t\t\t\t \"length of nulls string doesn't match # of arguments\",\n\t\t\t\t\t\t TCL_VOLATILE);\n\t\t\treturn TCL_ERROR;\n\t\t}\n\t}\n\n\t/************************************************************\n\t * If there was a argtype list on preparation, we need\n\t * an argument value list now\n\t ************************************************************/\n\tif (qdesc->nargs > 0)\n\t{\n\t\tif (i >= argc)\n\t\t{\n\t\t\tTcl_SetResult(interp, \"missing argument list\", TCL_VOLATILE);\n\t\t\treturn TCL_ERROR;\n\t\t}\n\n\t\t/************************************************************\n\t\t * Split the argument values\n\t\t ************************************************************/\n\t\tif (Tcl_SplitList(interp, argv[i++], &callnargs, &callargs) != TCL_OK)\n\t\t\treturn 
TCL_ERROR;\n\n\t\t/************************************************************\n\t\t * Check that the # of arguments matches\n\t\t ************************************************************/\n\t\tif (callnargs != qdesc->nargs)\n\t\t{\n\t\t\tTcl_SetResult(interp,\n\t\t\t\"argument list length doesn't match # of arguments for query\",\n\t\t\t\t\t\t TCL_VOLATILE);\n\t\t\tif (callargs != NULL)\n\t\t\t{\n\t\t\t\tckfree(callargs);\n\t\t\t\tcallargs = NULL;\n\t\t\t}\n\t\t\treturn TCL_ERROR;\n\t\t}\n\n\t\t/************************************************************\n\t\t * Prepare to start a controlled return through all\n\t\t * interpreter levels on transaction abort during the\n\t\t * parse of the arguments\n\t\t ************************************************************/\n\t\tmemcpy(&save_restart, &Warn_restart, sizeof(save_restart));\n\t\tif (sigsetjmp(Warn_restart, 1) != 0)\n\t\t{\n\t\t\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\t\t\tfor (j = 0; j < callnargs; j++)\n\t\t\t{\n\t\t\t\tif (qdesc->arglen[j] < 0 &&\n\t\t\t\t\tqdesc->argvalues[j] != (Datum) NULL)\n\t\t\t\t{\n\t\t\t\t\tpfree((char *) (qdesc->argvalues[j]));\n\t\t\t\t\tqdesc->argvalues[j] = (Datum) NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t\tckfree(callargs);\n\t\t\tcallargs = NULL;\n\t\t\tplperl_restart_in_progress = 1;\n\t\t\tTcl_SetResult(interp, \"Transaction abort\", TCL_VOLATILE);\n\t\t\treturn TCL_ERROR;\n\t\t}\n\n\t\t/************************************************************\n\t\t * Setup the value array for the SPI_execp() using\n\t\t * the type specific input functions\n\t\t ************************************************************/\n\t\tfor (j = 0; j < callnargs; j++)\n\t\t{\n\t\t\tqdesc->argvalues[j] =\n\t\t\t\tFunctionCall3(&qdesc->arginfuncs[j],\n\t\t\t\t\t\t\t CStringGetDatum(callargs[j]),\n\t\t\t\t\t\t\t ObjectIdGetDatum(qdesc->argtypelems[j]),\n\t\t\t\t\t\t\t 
Int32GetDatum(qdesc->arglen[j]));\n\t\t}\n\n\t\t/************************************************************\n\t\t * Free the splitted argument value list\n\t\t ************************************************************/\n\t\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\t\tckfree(callargs);\n\t\tcallargs = NULL;\n\t}\n\telse\n\t\tcallnargs = 0;\n\n\t/************************************************************\n\t * Remember the index of the last processed call\n\t * argument - a loop body for SELECT might follow\n\t ************************************************************/\n\tloop_body = i;\n\n\t/************************************************************\n\t * Prepare to start a controlled return through all\n\t * interpreter levels on transaction abort\n\t ************************************************************/\n\tmemcpy(&save_restart, &Warn_restart, sizeof(save_restart));\n\tif (sigsetjmp(Warn_restart, 1) != 0)\n\t{\n\t\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\t\tfor (j = 0; j < callnargs; j++)\n\t\t{\n\t\t\tif (qdesc->arglen[j] < 0 && qdesc->argvalues[j] != (Datum) NULL)\n\t\t\t{\n\t\t\t\tpfree((char *) (qdesc->argvalues[j]));\n\t\t\t\tqdesc->argvalues[j] = (Datum) NULL;\n\t\t\t}\n\t\t}\n\t\tplperl_restart_in_progress = 1;\n\t\tTcl_SetResult(interp, \"Transaction abort\", TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\t/************************************************************\n\t * Execute the plan\n\t ************************************************************/\n\tspi_rc = SPI_execp(qdesc->plan, qdesc->argvalues, nulls, count);\n\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\n\t/************************************************************\n\t * For varlena data types, free the argument values\n\t ************************************************************/\n\tfor (j = 0; j < callnargs; j++)\n\t{\n\t\tif (qdesc->arglen[j] < 0 && qdesc->argvalues[j] != (Datum) 
NULL)\n\t\t{\n\t\t\tpfree((char *) (qdesc->argvalues[j]));\n\t\t\tqdesc->argvalues[j] = (Datum) NULL;\n\t\t}\n\t}\n\n\t/************************************************************\n\t * Check the return code from SPI_execp()\n\t ************************************************************/\n\tswitch (spi_rc)\n\t{\n\t\tcase SPI_OK_UTILITY:\n\t\t\tTcl_SetResult(interp, \"0\", TCL_VOLATILE);\n\t\t\treturn TCL_OK;\n\n\t\tcase SPI_OK_SELINTO:\n\t\tcase SPI_OK_INSERT:\n\t\tcase SPI_OK_DELETE:\n\t\tcase SPI_OK_UPDATE:\n\t\t\tsprintf(buf, \"%d\", SPI_processed);\n\t\t\tTcl_SetResult(interp, buf, TCL_VOLATILE);\n\t\t\treturn TCL_OK;\n\n\t\tcase SPI_OK_SELECT:\n\t\t\tbreak;\n\n\t\tcase SPI_ERROR_ARGUMENT:\n\t\t\tTcl_SetResult(interp,\n\t\t\t\t\t\t\"plperl: SPI_exec() failed - SPI_ERROR_ARGUMENT\",\n\t\t\t\t\t\t TCL_VOLATILE);\n\t\t\treturn TCL_ERROR;\n\n\t\tcase SPI_ERROR_UNCONNECTED:\n\t\t\tTcl_SetResult(interp,\n\t\t\t\t\t \"plperl: SPI_exec() failed - SPI_ERROR_UNCONNECTED\",\n\t\t\t\t\t\t TCL_VOLATILE);\n\t\t\treturn TCL_ERROR;\n\n\t\tcase SPI_ERROR_COPY:\n\t\t\tTcl_SetResult(interp,\n\t\t\t\t\t\t \"plperl: SPI_exec() failed - SPI_ERROR_COPY\",\n\t\t\t\t\t\t TCL_VOLATILE);\n\t\t\treturn TCL_ERROR;\n\n\t\tcase SPI_ERROR_CURSOR:\n\t\t\tTcl_SetResult(interp,\n\t\t\t\t\t\t \"plperl: SPI_exec() failed - SPI_ERROR_CURSOR\",\n\t\t\t\t\t\t TCL_VOLATILE);\n\t\t\treturn TCL_ERROR;\n\n\t\tcase SPI_ERROR_TRANSACTION:\n\t\t\tTcl_SetResult(interp,\n\t\t\t\t\t \"plperl: SPI_exec() failed - SPI_ERROR_TRANSACTION\",\n\t\t\t\t\t\t TCL_VOLATILE);\n\t\t\treturn TCL_ERROR;\n\n\t\tcase SPI_ERROR_OPUNKNOWN:\n\t\t\tTcl_SetResult(interp,\n\t\t\t\t\t \"plperl: SPI_exec() failed - SPI_ERROR_OPUNKNOWN\",\n\t\t\t\t\t\t TCL_VOLATILE);\n\t\t\treturn TCL_ERROR;\n\n\t\tdefault:\n\t\t\tsprintf(buf, \"%d\", spi_rc);\n\t\t\tTcl_AppendResult(interp, \"plperl: SPI_exec() failed - \",\n\t\t\t\t\t\t\t \"unknown RC \", buf, NULL);\n\t\t\treturn 
TCL_ERROR;\n\t}\n\n\t/************************************************************\n\t * Only SELECT queries fall through to here - remember the\n\t * tuples we got\n\t ************************************************************/\n\n\tntuples = SPI_processed;\n\tif (ntuples > 0)\n\t{\n\t\ttuples = SPI_tuptable->vals;\n\t\ttupdesc = SPI_tuptable->tupdesc;\n\t}\n\n\t/************************************************************\n\t * Prepare to start a controlled return through all\n\t * interpreter levels on transaction abort during\n\t * the ouput conversions of the results\n\t ************************************************************/\n\tmemcpy(&save_restart, &Warn_restart, sizeof(save_restart));\n\tif (sigsetjmp(Warn_restart, 1) != 0)\n\t{\n\t\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\t\tplperl_restart_in_progress = 1;\n\t\tTcl_SetResult(interp, \"Transaction abort\", TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\t/************************************************************\n\t * If there is no loop body given, just set the variables\n\t * from the first tuple (if any) and return the number of\n\t * tuples selected\n\t ************************************************************/\n\tif (loop_body >= argc)\n\t{\n\t\tif (ntuples > 0)\n\t\t\tplperl_set_tuple_values(interp, arrayname, 0, tuples[0], tupdesc);\n\t\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\t\tsprintf(buf, \"%d\", ntuples);\n\t\tTcl_SetResult(interp, buf, TCL_VOLATILE);\n\t\treturn TCL_OK;\n\t}\n\n\t/************************************************************\n\t * There is a loop body - process all tuples and evaluate\n\t * the body on each\n\t ************************************************************/\n\tfor (i = 0; i < ntuples; i++)\n\t{\n\t\tplperl_set_tuple_values(interp, arrayname, i, tuples[i], tupdesc);\n\n\t\tloop_rc = Tcl_Eval(interp, argv[loop_body]);\n\n\t\tif (loop_rc == TCL_OK)\n\t\t\tcontinue;\n\t\tif (loop_rc == 
TCL_CONTINUE)\n\t\t\tcontinue;\n\t\tif (loop_rc == TCL_RETURN)\n\t\t{\n\t\t\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\t\t\treturn TCL_RETURN;\n\t\t}\n\t\tif (loop_rc == TCL_BREAK)\n\t\t\tbreak;\n\t\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\t\treturn TCL_ERROR;\n\t}\n\n\t/************************************************************\n\t * Finally return the number of tuples\n\t ************************************************************/\n\tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n\tsprintf(buf, \"%d\", ntuples);\n\tTcl_SetResult(interp, buf, TCL_VOLATILE);\n\treturn TCL_OK;\n}\n\n\n/**********************************************************************\n * plperl_set_tuple_values() - Set variables for all attributes\n *\t\t\t\t of a given tuple\n **********************************************************************/\nstatic void\nplperl_set_tuple_values(Tcl_Interp *interp, char *arrayname,\n\t\t\t\t\t\tint tupno, HeapTuple tuple, TupleDesc tupdesc)\n{\n\tint\t\t\ti;\n\tchar\t *outputstr;\n\tchar\t\tbuf[64];\n\tDatum\t\tattr;\n\tbool\t\tisnull;\n\n\tchar\t *attname;\n\tHeapTuple\ttypeTup;\n\tOid\t\t\ttypoutput;\n\tOid\t\t\ttypelem;\n\n\tchar\t **arrptr;\n\tchar\t **nameptr;\n\tchar\t *nullname = NULL;\n\n\t/************************************************************\n\t * Prepare pointers for Tcl_SetVar2() below and in array\n\t * mode set the .tupno element\n\t ************************************************************/\n\tif (arrayname == NULL)\n\t{\n\t\tarrptr = &attname;\n\t\tnameptr = &nullname;\n\t}\n\telse\n\t{\n\t\tarrptr = &arrayname;\n\t\tnameptr = &attname;\n\t\tsprintf(buf, \"%d\", tupno);\n\t\tTcl_SetVar2(interp, arrayname, \".tupno\", buf, 0);\n\t}\n\n\tfor (i = 0; i < tupdesc->natts; i++)\n\t{\n\t\t/************************************************************\n\t\t * Get the attribute name\n\t\t ************************************************************/\n\t\tattname = 
tupdesc->attrs[i]->attname.data;\n\n\t\t/************************************************************\n\t\t * Get the attributes value\n\t\t ************************************************************/\n\t\tattr = heap_getattr(tuple, i + 1, tupdesc, &isnull);\n\n\t\t/************************************************************\n\t\t * Lookup the attribute type in the syscache\n\t\t * for the output function\n\t\t ************************************************************/\n\t\ttypeTup = SearchSysCacheTuple(TYPEOID,\n\t\t\t\t\t\t ObjectIdGetDatum(tupdesc->attrs[i]->atttypid),\n\t\t\t\t\t\t\t\t\t 0, 0, 0);\n\t\tif (!HeapTupleIsValid(typeTup))\n\t\t{\n\t\t\telog(ERROR, \"plperl: Cache lookup for attribute '%s' type %u failed\",\n\t\t\t\t attname, tupdesc->attrs[i]->atttypid);\n\t\t}\n\n\t\ttypoutput = (Oid) (((Form_pg_type) GETSTRUCT(typeTup))->typoutput);\n\t\ttypelem = (Oid) (((Form_pg_type) GETSTRUCT(typeTup))->typelem);\n\n\t\t/************************************************************\n\t\t * If there is a value, set the variable\n\t\t * If not, unset it\n\t\t *\n\t\t * Hmmm - Null attributes will cause functions to\n\t\t *\t\t crash if they don't expect them - need something\n\t\t *\t\t smarter here.\n\t\t ************************************************************/\n\t\tif (!isnull && OidIsValid(typoutput))\n\t\t{\n\t\t\toutputstr = DatumGetCString(OidFunctionCall3(typoutput,\n\t\t\t\t\t\t\t\t\t\tattr,\n\t\t\t\t\t\t\t\t\t\tObjectIdGetDatum(typelem),\n\t\t\t\t\t\t\t\t\t\tInt32GetDatum(tupdesc->attrs[i]->attlen)));\n\t\t\tTcl_SetVar2(interp, *arrptr, *nameptr, outputstr, 0);\n\t\t\tpfree(outputstr);\n\t\t}\n\t\telse\n\t\t\tTcl_UnsetVar2(interp, *arrptr, *nameptr, 0);\n\t}\n}\n\n\n#endif\n/**********************************************************************\n * plperl_build_tuple_argument() - Build a string for a ref to a hash\n *\t\t\t\t from all attributes of a given tuple\n **********************************************************************/\nstatic 
SV *\nplperl_build_tuple_argument(HeapTuple tuple, TupleDesc tupdesc)\n{\n\tint\t\t\ti;\n\tSV\t\t *output;\n\tDatum\t\tattr;\n\tbool\t\tisnull;\n\n\tchar\t *attname;\n\tchar\t *outputstr;\n\tHeapTuple\ttypeTup;\n\tOid\t\t\ttypoutput;\n\tOid\t\t\ttypelem;\n\n\toutput = sv_2mortal(newSVpv(\"{\", 0));\n\n\tfor (i = 0; i < tupdesc->natts; i++)\n\t{\n\t\t/************************************************************\n\t\t * Get the attribute name\n\t\t ************************************************************/\n\t\tattname = tupdesc->attrs[i]->attname.data;\n\n\t\t/************************************************************\n\t\t * Get the attributes value\n\t\t ************************************************************/\n\t\tattr = heap_getattr(tuple, i + 1, tupdesc, &isnull);\n\n\t\t/************************************************************\n\t\t * Lookup the attribute type in the syscache\n\t\t * for the output function\n\t\t ************************************************************/\n\t\ttypeTup = SearchSysCacheTuple(TYPEOID,\n\t\t\t\t\t\t ObjectIdGetDatum(tupdesc->attrs[i]->atttypid),\n\t\t\t\t\t\t\t\t\t 0, 0, 0);\n\t\tif (!HeapTupleIsValid(typeTup))\n\t\t{\n\t\t\telog(ERROR, \"plperl: Cache lookup for attribute '%s' type %u failed\",\n\t\t\t\t attname, tupdesc->attrs[i]->atttypid);\n\t\t}\n\n\t\ttypoutput = (Oid) (((Form_pg_type) GETSTRUCT(typeTup))->typoutput);\n\t\ttypelem = (Oid) (((Form_pg_type) GETSTRUCT(typeTup))->typelem);\n\n\t\t/************************************************************\n\t\t * If there is a value, append the attribute name and the\n\t\t * value to the list.\n\t\t *\tIf it is null it will be set to undef.\n\t\t ************************************************************/\n\t\tif (!isnull && OidIsValid(typoutput))\n\t\t{\n\t\t\toutputstr = 
DatumGetCString(OidFunctionCall3(typoutput,\n\t\t\t\t\t\t\t\t\t\tattr,\n\t\t\t\t\t\t\t\t\t\tObjectIdGetDatum(typelem),\n\t\t\t\t\t\t\t\t\t\tInt32GetDatum(tupdesc->attrs[i]->attlen)));\n\t\t\tsv_catpvf(output, \"'%s' => '%s',\", attname, outputstr);\n\t\t\tpfree(outputstr);\n\t\t}\n\t\telse\n\t\t\tsv_catpvf(output, \"'%s' => undef,\", attname);\n\t}\n\tsv_catpv(output, \"}\");\n\toutput = perl_eval_pv(SvPV_nolen(output), TRUE);\n\treturn output;\n}",
"msg_date": "Mon, 16 Oct 2000 14:36:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] PL/Perl compilation error"
},
{
"msg_contents": "Gilles DAROLD <[email protected]> writes:\n>>>> I have take a look to the source code concerning PL/Perl, it seems that 2 variables\n>>>> have a bad call : errgv and na.\n>>>> \n>>>> If you replace them by their normal call (in 5.6.0) PL_errgv and PL_na you will get\n>>>> success to compile the lib plperl.so.\n>>>> \n\n> This patch (simple diff) applies to postgresql-7.0.2.\n\n\nThe problem is this will break on older copies of Perl.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 16:07:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Perl compilation error "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Gilles DAROLD <[email protected]> writes:\n> >>>> I have take a look to the source code concerning PL/Perl, it seems that 2 variables\n> >>>> have a bad call : errgv and na.\n> >>>>\n> >>>> If you replace them by their normal call (in 5.6.0) PL_errgv and PL_na you will get\n> >>>> success to compile the lib plperl.so.\n> >>>>\n>\n> > This patch (simple diff) applies to postgresql-7.0.2.\n>\n> The problem is this will break on older copies of Perl.\n>\n> regards, tom lane\n\nThis problem is solved by perl itself !\n\nI know it work under perl > 5.005_3 and certainly all versions after perl 5.004.\nGive me a reason to keep buggy perl versions compatibility ! People still\nrunning version prior of 5.005_3 does not really want perl running well so\nwhy plperl :-)\n\nIf you are not agree with my last comment, just take a look to the change log\nof the perl version history and you will understand what I mean (security, memory,\netc.) ...\n\nRegards\n\nGilles DAROLD\n\n",
"msg_date": "Tue, 17 Oct 2000 12:52:58 +0200",
"msg_from": "Gilles DAROLD <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Perl compilation error"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> I can not apply this. Seems it has changed in the current tree. Here\n> is the current plperl.c file.\n>\n\nIt seems that the current file has been fixed. There's no more call to the\nbuggy variables in it. I don't know what you want me to do ?\nDo you still have problem to compiling this code ? If so send me an url\nwhere i can find the complete PG distribution you want to see working.\nI will check if it works for me and try to fix if there is problems.\nNot sure of what I can do...\n\nRegards\n\nGilles DAROLD\n\n",
"msg_date": "Tue, 17 Oct 2000 13:12:51 +0200",
"msg_from": "Gilles DAROLD <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] PL/Perl compilation error"
},
{
"msg_contents": "hello.\n\ni have the following trigger:\n\nCREATE TRIGGER trig_person_accessorclass BEFORE INSERT ON Person FOR EACH\nROW EXECUTE PROCEDURE sp_person_accessorclass();\n\nthe corresponding function inserts a row into the accessor_class table.\n\nthe issue is that when i insert a row into person and immediately query the\naccessor_class table, i don't find anything. does it take some amount of\ntime for the trigger/sp to run? is it just placed in a queue or something?\ncan i speed this up or is it best to not count on the performance of the\nfunction?\n\nthanks\n\nchris\n\n\n",
"msg_date": "Tue, 17 Oct 2000 07:37:33 -0400",
"msg_from": "\"chris markiewicz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "do triggers/procedures run instantly?"
},
{
"msg_contents": "Gilles DAROLD <[email protected]> writes:\n>> The problem is this will break on older copies of Perl.\n\n> This problem is solved by perl itself !\n\nYeah, it is: there is a module called Devel::PPPort that isolates\nuser C code from the incompatibilities of different Perl APIs. Until\nsomeone gets around to submitting a proper fix using PPPort, we'll stick\nwith the POLLUTE=1 solution we have now. I see no reason to install an\nincomplete solution that will fail on older Perls --- we are not in the\nbusiness of forcing people to update their Perls.\n\nI was going to point you to the pgsql-bugs archive for 3/25/00, but\nthere seems to be a gap in the archive in March, so attached are the\nrelevant messages.\n\n\t\t\tregards, tom lane\n\n\n------- Forwarded Messages\n\nDate: Sat, 25 Mar 2000 13:15:28 +0100\nFrom: Marc Lehmann <[email protected]>\nTo: [email protected]\nSubject: [BUGS] perl5 interface won't compile\n\n============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name\t\t:\tMarc Lehmann\nYour email address\t:\[email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t: \n\n Operating System (example: Linux 2.0.26 ELF) \t:\n\n PostgreSQL version (example: PostgreSQL-6.5.1): PostgreSQL-7.0beta3\n\n Compiler used (example: gcc 2.8.0)\t\t:\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\nthe perl interface does not compile with newer perl versions (5.006 and\nprobably 5.005 without options).\n\nPlease describe a way to repeat the problem. 
Please try to provide a\n\n(sorry, just found out that plperl also won't compile, so I have \"re-added\"\nanother, a second diff against plperl ;)\n\nconcise reproducible example, if at all possible: \n----------------------------------------------------------------------\n\n\"make\"\n\n\n\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\n\nA diff against Pg.xs is attached, however, it will not compile with older\nperl versions (it is the prefered long-term solution).\n\nSo, for the forseeable future, it might be a better to create the Makefile\nusing\n\n perl Makefile.PL POLLUTE=1\n\nwhich will enable some kind of compatibility mode.\n\nA preferable but better solution would be to use the Devel::PPPort module\n(on CPAN) to get rid of versiondependonitis (in which case you will need\nto apply both diffs and additionally include ppport.h, preferably after\nrenaming it to something else.\n\n===PATCH 1===================================================================\n\ndiff -r -u perl5o/Pg.c perl5/Pg.c\n--- perl5o/Pg.c\tSat Mar 25 13:09:05 2000\n+++ perl5/Pg.c\tSat Mar 25 13:10:38 2000\n@@ -1407,7 +1407,7 @@\n \t\tps.caption = caption;\n \t\tNewz(0, ps.fieldName, items + 1 - 11, char*);\n \t\tfor (i = 11; i < items; i++) {\n-\t\t\tps.fieldName[i - 11] = (char *)SvPV(ST(i), na);\n+\t\t\tps.fieldName[i - 11] = (char *)SvPV_nolen(ST(i));\n \t\t}\n \t\tPQprint(fout, res, &ps);\n \t\tSafefree(ps.fieldName);\n@@ -3182,7 +3182,7 @@\n \t\t\t\tEXTEND(sp, cols);\n \t\t\t\twhile (col < cols) {\n \t\t\t\t\tif (PQgetisnull(res->result, res->row, col)) {\n-\t\t\t\t\t\tPUSHs(&sv_undef);\n+\t\t\t\t\t\tPUSHs(&PL_sv_undef);\n \t\t\t\t\t} else {\n \t\t\t\t\t\tchar *val = PQgetvalue(res->result, res->row, col);\n \t\t\t\t\t\tPUSHs(sv_2mortal((SV*)newSVpv(val, 0)));\n@@ -3238,7 +3238,7 @@\n \t\tps.caption = caption;\n \t\tNewz(0, ps.fieldName, items + 1 - 11, char*);\n \t\tfor (i = 11; i < 
items; i++) {\n-\t\t\tps.fieldName[i - 11] = (char *)SvPV(ST(i), na);\n+\t\t\tps.fieldName[i - 11] = (char *)SvPV_nolen(ST(i));\n \t\t}\n \t\tPQprint(fout, res->result, &ps);\n \t\tSafefree(ps.fieldName);\ndiff -r -u perl5o/Pg.xs perl5/Pg.xs\n--- perl5o/Pg.xs\tSat Mar 11 04:08:37 2000\n+++ perl5/Pg.xs\tSat Mar 25 13:10:36 2000\n@@ -581,7 +581,7 @@\n \t\tps.caption = caption;\n \t\tNewz(0, ps.fieldName, items + 1 - 11, char*);\n \t\tfor (i = 11; i < items; i++) {\n-\t\t\tps.fieldName[i - 11] = (char *)SvPV(ST(i), na);\n+\t\t\tps.fieldName[i - 11] = (char *)SvPV_nolen(ST(i));\n \t\t}\n \t\tPQprint(fout, res, &ps);\n \t\tSafefree(ps.fieldName);\n@@ -1252,7 +1252,7 @@\n \t\t\t\tEXTEND(sp, cols);\n \t\t\t\twhile (col < cols) {\n \t\t\t\t\tif (PQgetisnull(res->result, res->row, col)) {\n-\t\t\t\t\t\tPUSHs(&sv_undef);\n+\t\t\t\t\t\tPUSHs(&PL_sv_undef);\n \t\t\t\t\t} else {\n \t\t\t\t\t\tchar *val = PQgetvalue(res->result, res->row, col);\n \t\t\t\t\t\tPUSHs(sv_2mortal((SV*)newSVpv(val, 0)));\n@@ -1292,7 +1292,7 @@\n \t\tps.caption = caption;\n \t\tNewz(0, ps.fieldName, items + 1 - 11, char*);\n \t\tfor (i = 11; i < items; i++) {\n-\t\t\tps.fieldName[i - 11] = (char *)SvPV(ST(i), na);\n+\t\t\tps.fieldName[i - 11] = (char *)SvPV_nolen(ST(i));\n \t\t}\n \t\tPQprint(fout, res->result, &ps);\n \t\tSafefree(ps.fieldName);\n\n===PATCH 2===================================================================\n\ndiff -u -r plperlo/plperl.c plperl/plperl.c\n--- plperlo/plperl.c\tSat Mar 25 13:17:31 2000\n+++ plperl/plperl.c\tSat Mar 25 13:18:32 2000\n@@ -309,12 +309,12 @@\n \tperl_eval_sv(s, G_SCALAR | G_EVAL | G_KEEPERR);\n \tSPAGAIN;\n \n-\tif (SvTRUE(GvSV(errgv))) {\n+\tif (SvTRUE(GvSV(PL_errgv))) {\n \t\tPOPs;\n \t\tPUTBACK;\n \t\tFREETMPS;\n \t\tLEAVE;\n-\t\telog(ERROR, \"creation of function failed : %s\", SvPV(GvSV(errgv), na));\n+\t\telog(ERROR, \"creation of function failed : %s\", SvPV_nolen(GvSV(PL_errgv)));\n \t}\n \n \t/*\n@@ -413,12 +413,12 @@\n \t\telog(ERROR, \"plperl : 
didn't get a return item from function\");\n \t}\n \n-\tif (SvTRUE(GvSV(errgv))) {\n+\tif (SvTRUE(GvSV(PL_errgv))) {\n \t\tPOPs;\n \t\tPUTBACK ;\n \t\tFREETMPS ;\n \t\tLEAVE;\n-\t\telog(ERROR, \"plperl : error from function : %s\", SvPV(GvSV(errgv), na));\n+\t\telog(ERROR, \"plperl : error from function : %s\", SvPV_nolen(GvSV(PL_errgv)));\n \t}\n \n \tretval = newSVsv(POPs);\n@@ -632,7 +632,7 @@\n \t\telog(ERROR, \"plperl: SPI_finish() failed\");\n \n \tretval = (Datum) (*fmgr_faddr(&prodesc->result_in_func))\n-\t\t\t(SvPV(perlret, na),\n+\t\t\t(SvPV_nolen(perlret),\n \t\t\t prodesc->result_in_elem,\n \t\t prodesc->result_in_len);\n \n@@ -2168,6 +2168,6 @@\n \t\t}\n \t}\n \tsv_catpv(output, \"}\");\n-\toutput = perl_eval_pv(SvPV(output, na), TRUE);\n+\toutput = perl_eval_pv(SvPV_nolen(output), TRUE);\n \treturn output;\n }\n\n=============================================================================\n\n-- \n -----==- |\n ----==-- _ |\n ---==---(_)__ __ ____ __ Marc Lehmann +--\n --==---/ / _ \\/ // /\\ \\/ / [email protected] |e|\n -=====/_/_//_/\\_,_/ /_/\\_\\ XX11-RIPE --+\n The choice of a GNU generation |\n |\n\n------- Message 2\n\nDate: Sat, 25 Mar 2000 11:49:09 -0500\nFrom: Tom Lane <[email protected]>\nTo: Marc Lehmann <[email protected]>\ncc: [email protected], [email protected]\nSubject: Re: [BUGS] perl5 interface won't compile \n\nMarc Lehmann <[email protected]> writes:\n> the perl interface does not compile with newer perl versions (5.006 and\n> probably 5.005 without options).\n\nWe've seen this reported a few times, but in fact the perl code *does*\ncompile against 5.005_03 --- without options --- and AFAICT that is\nstill considered the current stable release of Perl. 
I'm pretty\nhesitant to break backwards compatibility with pre-5.005 versions\njust yet.\n\nHowever, you are the first complainant who has suggested approaches\nother than a non-backward-compatible source patch, so I'm all ears.\n\n> So, for the forseeable future, it might be a better to create the Makefile\n> using\n> perl Makefile.PL POLLUTE=1\n> which will enable some kind of compatibility mode.\n\nInteresting. I could not find anything about POLLUTE at www.perl.com.\nWhat does it do, and will it cause problems on pre-5.005 perls?\n\n> A preferable but better solution would be to use the Devel::PPPort module\n> (on CPAN) to get rid of versiondependonitis (in which case you will need\n> to apply both diffs and additionally include ppport.h, preferably after\n> renaming it to something else.\n\nThis looks like it could be the Right Thing To Do. Anyone have time to\nmake it happen (and perhaps even access to a few different perl versions\nto test it)?\n\n\t\t\tregards, tom lane\n\n------- Message 3\n\nDate: Sat, 25 Mar 2000 15:27:17 -0500\nFrom: Tom Lane <[email protected]>\nTo: Bruce Momjian <[email protected]>, Marc Lehmann <[email protected]>,\n\t [email protected]\nSubject: Re: [BUGS] perl5 interface won't compile \n\nI said\n> Bruce Momjian <[email protected]> writes:\n>> I have added your POLLUTE=1 solution to interfaces/perl5 and\n>> plperl. Please try tomorrow's snapshot to see if this works for you.\n\n> I think the more interesting question is whether that breaks older\n> Perls...\n\nI have now tried it with perl 5.004_04 (which was current about two\nyears ago), and I get\n\n$ make plperl/Makefile\ncd plperl && perl Makefile.PL POLLUTE=1\n'POLLUTE' is not a known MakeMaker parameter name.\nWriting Makefile for plperl\n\nafter which it seems to be happy. 
Assuming this fixes the problem\nfor bleeding-edge perls, this looks like a good stopgap answer until\nsomeone feels like doing something with Devel::PPPort.\n\n\t\t\tregards, tom lane\n\n------- End of Forwarded Messages\n",
"msg_date": "Tue, 17 Oct 2000 10:32:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Perl compilation error "
},
{
"msg_contents": "On Tue, 17 Oct 2000, Tom Lane wrote:\n\n> Gilles DAROLD <[email protected]> writes:\n> >> The problem is this will break on older copies of Perl.\n> \n> > This problem is solved by perl itself !\n> \n> Yeah, it is: there is a module called Devel::PPPort that isolates\n> user C code from the incompatibilities of different Perl APIs. Until\n> someone gets around to submitting a proper fix using PPPort, we'll stick\n> with the POLLUTE=1 solution we have now. I see no reason to install an\n> incomplete solution that will fail on older Perls --- we are not in the\n> business of forcing people to update their Perls.\nI believe that POLLUTE should be a default. People who are using perl5.004\nare definitely a minority now. 5.004 is 3 years old now...\n\n-alex\n\n",
"msg_date": "Tue, 17 Oct 2000 11:11:32 -0400 (EDT)",
"msg_from": "Alex Pilosov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Perl compilation error "
},
{
"msg_contents": "Alex Pilosov <[email protected]> writes:\n> I believe that POLLUTE should be a default.\n\nIt is --- the src/pl and src/interfaces Makefiles will create the\nsub-makefiles with POLLUTE=1. Unfortunately it's easy to miss that\nfine point when you're building the Perl modules by hand. Not sure\nif there's a good way to remind people about it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Oct 2000 11:28:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Perl compilation error "
},
{
"msg_contents": "Hi,\n\nI have done a little work concerning the famous PL/Perl compilation Error and\nalso into Interfaces/Perl5.\n\nThe confusing POLLUTE option is no more used to see these parts compiled.\nI thinks it's now fully compatible with all Perl versions, yes Tom I use PPPort :-)\n\nThe way to put it into the distribution package is very simple.\n\n1) Replace the current GNUmakefile in these directories src/interface/perl5 and src/pl/plperl\n\nby those given in the attachment.\n2) Copy the lastest version of the ppport.h file into the same directories (latest can be\nfound\non CPAN) I provide one in the attachment (the latest at this day Version 1.0007)\n\nThat done, just compile postgresql exactly as before (with ./configure --with-perl at least).\n\nWhat I have done is very simple :\n\n - cp Devel-PPPort-1.0007/ppport.h postgresql-snapshotsrc/pl/plperl/\n - cp Devel-PPPort-1.0007/ppport.h postgresql-snapshot/src/interfaces/perl5/\n\nAnd in the 2 GNUmakefile in the \"Makefile: Makefile.PL\" section:\n\n - I have remove the call to the POLLUTE option\n - Added the following lines at the begining of the section:\n $(PERL) -x ppport.h *.c *.h *.xs > ppport.patch\n patch < ppport.patch\n rm ppport.patch\n\nThanks to Kenneth Albanowski for his PPPort.pm usefull package and to Tom Lane\nfor his ligth.\n\nNote: the attachment is a tar of all modified and added files in the source tree.\n\nRegards,\n\nGilles DAROLD",
"msg_date": "Tue, 24 Oct 2000 12:51:24 +0200",
"msg_from": "Gilles DAROLD <[email protected]>",
"msg_from_op": false,
"msg_subject": "PL/Perl compilation error"
},
{
"msg_contents": "Gilles DAROLD <[email protected]> writes:\n> The confusing POLLUTE option is no more used to see these parts compiled.\n> I thinks it's now fully compatible with all Perl versions,\n> yes Tom I use PPPort :-)\n\nExcellent! I'll check it over and put it in the tree. Thank you.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Oct 2000 08:39:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/Perl compilation error "
},
{
"msg_contents": "Broke my build on UnixWare 7.1.1... May be perl version confusion...\n\nSee my post to -hackers.\n\nLarry\n* Tom Lane <[email protected]> [001024 18:38]:\n> Gilles DAROLD <[email protected]> writes:\n> > The confusing POLLUTE option is no more used to see these parts compiled.\n> > I thinks it's now fully compatible with all Perl versions,\n> > yes Tom I use PPPort :-)\n> \n> Excellent! I'll check it over and put it in the tree. Thank you.\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Tue, 24 Oct 2000 20:17:22 -0500",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: PL/Perl compilation error"
},
{
"msg_contents": "Hi,\n\nDo you use the file GNUmakefile and ppport.h I recently send to the list ?\nWhat is your version of Perl ?\nCould you send me output of your build ?\n\nRegards,\n\nGilles DAROLD\n\nLarry Rosenman wrote:\n\n> Broke my build on UnixWare 7.1.1... May be perl version confusion...\n>\n> See my post to -hackers.\n>\n> Larry\n> * Tom Lane <[email protected]> [001024 18:38]:\n> > Gilles DAROLD <[email protected]> writes:\n> > > The confusing POLLUTE option is no more used to see these parts compiled.\n> > > I thinks it's now fully compatible with all Perl versions,\n> > > yes Tom I use PPPort :-)\n> >\n> > Excellent! I'll check it over and put it in the tree. Thank you.\n> >\n> > regards, tom lane\n> --\n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "Wed, 25 Oct 2000 10:30:42 +0200",
"msg_from": "Gilles DAROLD <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: PL/Perl compilation error"
}
] |
[
{
"msg_contents": "The Makefiles in doc/src and doc/src/sgml are slightly broken. They\nrefer back to Makefile.global, but do not have SRCDIR defined so that\nMakefile.custom (and perhaps other stuff) is not found. Also, there is a\nvariable ZIP defined at the top of doc/src/Makefile, but it is referred\nto as GZIP farther down the file.\n\nI've patched up the source tree in my tree on hub.org:CURRENT/pgsql so\nthat the nightly doc builds will work. But I'm not sure I got the\nrelationships between top_builddir, subdir, and SRCDIR correct for the\nnew Makefile scheme. Would you have a chance to look at those files and\nspiff them up? Also, I made a small change allow something other than\ngzip to be used by defining ZIPSUFFIX in one of the Makefiles. Not sure\nif it is worth the effort to keep it in, but...\n\nI can send patches if that would be easier.\n\n - Thomas\n",
"msg_date": "Sat, 02 Sep 2000 16:31:11 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "(Slightly) broken makefiles"
}
] |
[
{
"msg_contents": "There's two problems here that kept me up all night hacking\nin order to keep my system from crashing an burning so bear with\nme if you can.\n\nIf you define a table and then create a select query rule over it\nthen drop the rule the table will be gone.\n\nAnother related problem is that let's say you have done this and\nthe table you've \"hidden\" with a view is rather large and has\nindexes then postgresql will seriously choke on trying to \nvacuum and/or vacuum analyze the table which is really a view!\n\nthanks,\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Sat, 2 Sep 2000 09:42:57 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Really bad/weird stuff with views over tables in 7.0.2"
},
{
"msg_contents": "\nOn Sat, 2 Sep 2000, Alfred Perlstein wrote:\n\n> If you define a table and then create a select query rule over it\n> then drop the rule the table will be gone.\n\nWhat were you doing precisely? When I made a simple table and \nthen turned it into a view with a rule, dropping the rule\ndidn't seem to drop the table for me, I could still select\nfrom it, etc after the rule dropped. [I think I probably\nmisunderstood what you were doing, but...]\n\ncreate table aa1 (a int);\ncreate rule \"_RETaa1\" as on select to aa1 do instead \n select anum as a from a;\nselect * from aa1;\ndrop rule \"_RETaa1\";\nselect * from aa1; \n\nseems to work. The first select gives me whatever was in a\nand the second gives me anything i inserted into aa1 before\nmaking the rule.\n\n",
"msg_date": "Sat, 2 Sep 2000 10:51:03 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Really bad/weird stuff with views over tables in 7.0.2"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> If you define a table and then create a select query rule over it\n> then drop the rule the table will be gone.\n\n> Another related problem is that let's say you have done this and\n> the table you've \"hidden\" with a view is rather large and has\n> indexes then postgresql will seriously choke on trying to \n> vacuum and/or vacuum analyze the table which is really a view!\n\nregression=# create table foo(f1 int primary key);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'foo_pkey' for table 'foo'\nCREATE\nregression=# insert into foo values(1);\nINSERT 272365 1\nregression=# insert into foo values(2);\nINSERT 272366 1\nregression=# insert into foo values(3);\nINSERT 272367 1\nregression=# select * from foo;\n f1\n----\n 1\n 2\n 3\n(3 rows)\n\nregression=# create rule \"_RETfoo\" as on select to foo do instead\nregression-# select f1+10 as f1 from int4_tbl;\nCREATE\nregression=# select * from foo;\n f1\n-------------\n 10\n 123466\n -123446\n -2147483639\n -2147483637\n(5 rows)\n\nregression=# drop rule \"_RETfoo\" ;\nDROP\nregression=# select * from foo;\n f1\n----\n 1\n 2\n 3\n(3 rows)\n\nregression=# vacuum foo;\nVACUUM\nregression=# vacuum verbose analyze foo;\nNOTICE: --Relation foo--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 3: Vac 0, Keep/VTL 0/\n0, Crash 0, UnUsed 0, MinLen 36, MaxLen 36; Re-using: Free/Avail. Space 0/0; End\nEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec.\nNOTICE: Index foo_pkey: Pages 2; Tuples 3. CPU 0.00s/0.00u sec.\nNOTICE: Analyzing...\nVACUUM\nregression=#\n\nLooks OK from here ... how about a reproducible example?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Sep 2000 14:06:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Really bad/weird stuff with views over tables in 7.0.2 "
},
{
"msg_contents": "* Tom Lane <[email protected]> [000902 11:06] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > If you define a table and then create a select query rule over it\n> > then drop the rule the table will be gone.\n> \n> > Another related problem is that let's say you have done this and\n> > the table you've \"hidden\" with a view is rather large and has\n> > indexes then postgresql will seriously choke on trying to \n> > vacuum and/or vacuum analyze the table which is really a view!\n> \n> Looks OK from here ... how about a reproducible example?\n\nOk, typo on my part, if you type \"DROP VIEW foo;\" that nukes the rule and\nthe table behind it. Is that the expected behavior? I'll try to\nfigure out a way to demonstrate the problem I thought I was having\nwith data in both tables later right now I desperately need sleep. :)\n\nthanks,\n-Alfred\n",
"msg_date": "Sat, 2 Sep 2000 11:21:59 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Really bad/weird stuff with views over tables in 7.0.2"
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes:\n> Ok, typo on my part, if you type \"DROP VIEW foo;\" that nukes the rule and\n> the table behind it. Is that the expected behavior?\n\nWell, yeah: a view *is* a table + ON SELECT rule, at least in current\nreleases. Mark Hollomon just submitted a patch that would create a\ndistinction, but it's not even been applied to CVS yet...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Sep 2000 15:11:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Really bad/weird stuff with views over tables in 7.0.2 "
}
] |
[
{
"msg_contents": "I had another thought about fixing our problems with deriving index\nbounds for LIKE patterns in non-ASCII locales. (If you don't remember\nthe gory details here, please re-read thread\n\tSigh, LIKE indexing is *still* broken in foreign locales\nfrom pgsql-hackers archives of 7 to 10 June, 2000; there are also many\nprevious go-rounds about this long-standing issue.)\n\nThe problems that I've been told about seem to center around one- and\ntwo-character patterns that have special sort rules in some locales.\nCould we work around these problems by dropping one or perhaps two\ncharacters from the end of the given LIKE prefix? For example, given\n\tWHERE name LIKE 'foobar%'\ndrop the last fixed character ('r') and generate index bounds from what\nremains, using the same algorithm as in 7.0. So the index bounds would\nbecome\n\tWHERE name >= 'fooba' AND name < 'foobb'\n(at least in ASCII locale --- to make the upper bound, we'd search for\na string considered greater than 'fooba' by the local strcmp()).\n\nThe truncation would need to be multibyte-aware, of course.\n\nThis would, for example, fix the example given by Erich Stamberger:\n\n> Another interresting feature of Czech collation is:\n> \n> H < \"CH\" < I\n> \n> and:\n> \n> B < C < C + CARON < D .. < H < \"CH\" < I\n> \n> So what happens with \"WHERE name like 'Czec%`\" ?\n\nOur existing code fails because it generates WHERE name >= 'Czec' AND\nname < 'Czed'; it will therefore not find names beginning 'Czech'\nbecause those are in another part of the index, between 'Czeh' and\n'Czei'. But WHERE name >= 'Cze' AND name < 'Czf' would work.\n\nAre there examples where this still doesn't work? (Funny sort rules\nfor trigraphs would break it, I'm sure, unless we drop two characters\ninstead of just one.)\n\nObviously we could still keep the last character in ASCII locale.\nThat would be a good thing since it'd reduce the number of tuples\nscanned. 
Is there a portable way to determine whether it's safe to\ndo so in other locales? (Some inquiry function about whether the sort\nordering has any digraph or two-to-one rules might help, but I don't\nknow if there is one.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Sep 2000 13:39:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Yet another LIKE-indexing scheme"
},
{
"msg_contents": "Hello,\n\nOn Sat, 2 Sep 2000, Tom Lane wrote:\n\n> This would, for example, fix the example given by Erich Stamberger:\n> \n> > Another interresting feature of Czech collation is:\n> > \n> > H < \"CH\" < I\n> > \n> > and:\n> > \n> > B < C < C + CARON < D .. < H < \"CH\" < I\n> > \n> > So what happens with \"WHERE name like 'Czec%`\" ?\n> \n> Our existing code fails because it generates WHERE name >= 'Czec' AND\n> name < 'Czed'; it will therefore not find names beginning 'Czech'\n> because those are in another part of the index, between 'Czeh' and\n> 'Czei'. But WHERE name >= 'Cze' AND name < 'Czf' would work.\n\nThe Problem is: What tells us, that it is 'f' which sorts\nafter 'e' in that locale? In the \"C\" locale you can\nsimply add One to the character's code to get the next one,\nsince the numerical ordering of the encoding is identical to the\ncollation: 'e' + 1 = 'f' and 'e' < 'f'. This is *not* true\nfor *every* real-world language-encoding-pair in the world -\nat least not for characters with codes above 127 and characters\nwith codes below 65.\n\nIn the example above we are in luck, although there are\nadditional characters between 'e' and 'f' in Czech sorting:\n\nCollation => .. < 'e' = 'e + acute' = 'e + caron' < 'f'\nEncoding => 101 < 233 < 236 > 102\n \nTo my knowledge the ISO C API doesn't provide an interface\nto collation information (IAPITA!). There are no succ()\nand pred() functions like in PASCAL for example.\n\nAnd even if these functions could be emulated, I'm not\nsure about possible \"side effects\". 
French for example,\nhas even more funny rules (\"funny\" from a programmer's point\nof view): Accented characters which appear later in a string\nare more important than accented characters which appear earlier.\n\nIMHO, using the OS's locale support in databases asks\nfor trouble anyway:\n\nWho guaratees that the strcolls/localedefs floating\naround behave the same way?\n\nWhat, if some kind soul of system admistrator updates the\nOS and fixes a buggy locale definition file (maybe without\nknowing)? The next UPDATE or INSERT coming along will\ndamage the indices of all databases using the affected locale.\nEven a simple SELECT may yield strange results.\n\n>\n> Are there examples where this still doesn't work? (Funny sort rules\n> for trigraphs would break it, I'm sure, unless we drop two characters\n> instead of just one.)\n>\n\nI don't know if there are any locales, where removing/appending\n\"something\" from/to a string can result in a higher/lower\ncollation weight: \"xyzab < xyz\" or \"xyz > xyzab\". \n\n> Obviously we could still keep the last character in ASCII locale.\n> That would be a good thing since it'd reduce the number of tuples\n> scanned. Is there a portable way to determine whether it's safe to\n> do so in other locales? (Some inquiry function about whether the sort\n> ordering has any digraph or two-to-one rules might help, but I don't\n> know if there is one.)\n>\n \nEven ASCII (7-bit) encoded *locales* may be in big trouble here:\n\ngewi:~$ cat en.txt\n1\n2\n?\n?2\n?A\n?a\n-A\n-a\n+\n-\n/\na\nb\nA\nB\n\ngewi:~$ export LANG=\"C\"\ngewi:~$ sort en.txt \n+\n-\n-A\n-a\n/\n1\n2\n?\n?2\n?A\n?a\nA\nB\na\nb\n\ngewi:~$ export LANG=\"en_US\"\ngewi:~$ sort en.txt\n-\n?\n/\n+\n1\n?2\n2\n-A\n?A\nA\n-a\n?a\na\nB\nb\n\n\n.. at least strings with punctuation characters will fail\nin certain cases.\n\n\n--\nErich (still thinking)\n\n",
"msg_date": "Sun, 3 Sep 2000 19:35:05 +0200 (CEST)",
"msg_from": "Erich Stamberger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another LIKE-indexing scheme"
},
{
"msg_contents": "Erich Stamberger <[email protected]> writes:\n>> Our existing code fails because it generates WHERE name >= 'Czec' AND\n>> name < 'Czed'; it will therefore not find names beginning 'Czech'\n>> because those are in another part of the index, between 'Czeh' and\n>> 'Czei'. But WHERE name >= 'Cze' AND name < 'Czf' would work.\n\n> The Problem is: What tells us, that it is 'f' which sorts\n> after 'e' in that locale?\n\nWe keep trying until we find a character that *does* sort after 'e'.\nI did say I was assuming that people had read the previous discussion\nand knew what the existing approach was ;-)\n\nHowever I've since thought of a different counterexample: if the LIKE\npattern is 'Czech%' and we strip off the 'h', we lose since we'll be\nlooking between 'Czec' and 'Czed' but the desired strings are in the\nindex between 'Czeh' and 'Czei'. Back to the drawing board...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Sep 2000 18:48:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Yet another LIKE-indexing scheme "
},
{
"msg_contents": "On Sat, Sep 02, 2000 at 01:39:47PM -0400, Tom Lane wrote:\n> > So what happens with \"WHERE name like 'Czec%`\" ?\n> \n> Our existing code fails because it generates WHERE name >= 'Czec' AND\n> name < 'Czed'; it will therefore not find names beginning 'Czech'\n> because those are in another part of the index, between 'Czeh' and\n> 'Czei'. But WHERE name >= 'Cze' AND name < 'Czf' would work.\n\n(OK, I haven't read the previous discussion. Guilty, m'lud)\n\nWhy should it? If 'ch' is one letter, then surely 'czech' isn't LIKE\n'czec%'. Because 'czec%' has a second c, wheres, 'czech' only has one\n'c' and one 'ch'?\n\nJules\n",
"msg_date": "Wed, 6 Sep 2000 09:49:14 +0100",
"msg_from": "Jules Bean <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another LIKE-indexing scheme"
},
{
"msg_contents": "Any status on this?\n\n\n> Erich Stamberger <[email protected]> writes:\n> >> Our existing code fails because it generates WHERE name >= 'Czec' AND\n> >> name < 'Czed'; it will therefore not find names beginning 'Czech'\n> >> because those are in another part of the index, between 'Czeh' and\n> >> 'Czei'. But WHERE name >= 'Cze' AND name < 'Czf' would work.\n> \n> > The Problem is: What tells us, that it is 'f' which sorts\n> > after 'e' in that locale?\n> \n> We keep trying until we find a character that *does* sort after 'e'.\n> I did say I was assuming that people had read the previous discussion\n> and knew what the existing approach was ;-)\n> \n> However I've since thought of a different counterexample: if the LIKE\n> pattern is 'Czech%' and we strip off the 'h', we lose since we'll be\n> looking between 'Czec' and 'Czed' but the desired strings are in the\n> index between 'Czeh' and 'Czei'. Back to the drawing board...\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 12:53:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another LIKE-indexing scheme"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Any status on this?\n\nStill broken, no known fix short of disabling LIKE optimization in\nnon-ASCII locales ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 12:55:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Yet another LIKE-indexing scheme "
},
{
"msg_contents": "Can you give me a TODO item?\n\n\n> Bruce Momjian <[email protected]> writes:\n> > Any status on this?\n> \n> Still broken, no known fix short of disabling LIKE optimization in\n> non-ASCII locales ...\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 13:03:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another LIKE-indexing scheme"
},
{
"msg_contents": "> Can you give me a TODO item?\n\n* Fix LIKE indexing optimization for non-ASCII locales\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 13:10:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Yet another LIKE-indexing scheme "
}
] |
[
{
"msg_contents": "I was bemused to notice this afternoon that the backend does not build\nif you have not defined HAVE_TEST_AND_SET; furthermore, this has been\ntrue at least since 6.4. (slock() is compiled anyway, and it calls\nTAS(), which will be an undefined symbol.) From the lack of\ncomplaints we can deduce that no one has run Postgres on a\nnon-TEST_AND_SET platform in quite a while.\n\nKinda makes me wonder what other bit-rot has set in in the non-TAS\ncode, and whether we ought not just rip it out rather than try to\n\"maintain\" exceedingly delicate code that's gone untested for years.\nbufmgr.c, in particular, has behavior that's nontrivially different\nwhen HAVE_TEST_AND_SET isn't defined --- who wants to promise that\nthat still works?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Sep 2000 15:06:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Isn't non-TEST_AND_SET code long dead?"
},
{
"msg_contents": "\nYank her ... \n\nOn Sat, 2 Sep 2000, Tom Lane wrote:\n\n> I was bemused to notice this afternoon that the backend does not build\n> if you have not defined HAVE_TEST_AND_SET; furthermore, this has been\n> true at least since 6.4. (slock() is compiled anyway, and it calls\n> TAS(), which will be an undefined symbol.) From the lack of\n> complaints we can deduce that no one has run Postgres on a\n> non-TEST_AND_SET platform in quite a while.\n> \n> Kinda makes me wonder what other bit-rot has set in in the non-TAS\n> code, and whether we ought not just rip it out rather than try to\n> \"maintain\" exceedingly delicate code that's gone untested for years.\n> bufmgr.c, in particular, has behavior that's nontrivially different\n> when HAVE_TEST_AND_SET isn't defined --- who wants to promise that\n> that still works?\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 2 Sep 2000 16:21:50 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Isn't non-TEST_AND_SET code long dead?"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> Yank her ...\n> \n> On Sat, 2 Sep 2000, Tom Lane wrote:\n> > Kinda makes me wonder what other bit-rot has set in in the non-TAS\n> > code, and whether we ought not just rip it out rather than try to\n> > \"maintain\" exceedingly delicate code that's gone untested for years.\n> > bufmgr.c, in particular, has behavior that's nontrivially different\n> > when HAVE_TEST_AND_SET isn't defined --- who wants to promise that\n> > that still works?\n> >\n> > regards, tom lane\n> >\n\nOn a somewhat related note, what about the NO_SECURITY defines\nstrewn throughout the backend? Does anyone run the server with\nNO_SECURITY defined? And if so, what benefit is that over just\nrunning with everything owned by the same user?\n\nJust curious, \n\nMike Mascari\n",
"msg_date": "Sat, 02 Sep 2000 17:35:17 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Isn't non-TEST_AND_SET code long dead?"
},
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n> On a somewhat related note, what about the NO_SECURITY defines\n> strewn throughout the backend? Does anyone run the server with\n> NO_SECURITY defined? And if so, what benefit is that over just\n> running with everything owned by the same user?\n\nI suppose the idea was to avoid expending *any* cycles on security\nchecks if you didn't need them in your particular situation. But\noffhand I've never heard of anyone actually using the feature. I'm\ndubious whether the amount of time saved would be worth the trouble.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Sep 2000 01:25:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Isn't non-TEST_AND_SET code long dead? "
},
{
"msg_contents": "> Mike Mascari <[email protected]> writes:\n> > On a somewhat related note, what about the NO_SECURITY defines\n> > strewn throughout the backend? Does anyone run the server with\n> > NO_SECURITY defined? And if so, what benefit is that over just\n> > running with everything owned by the same user?\n> \n> I suppose the idea was to avoid expending *any* cycles on security\n> checks if you didn't need them in your particular situation. But\n> offhand I've never heard of anyone actually using the feature. I'm\n> dubious whether the amount of time saved would be worth the trouble.\n\nNO_SECURITY define removed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 13:08:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Isn't non-TEST_AND_SET code long dead?"
}
] |
[
{
"msg_contents": "Helge Bahmann ([email protected]) reports a bug with a severity of 4\nThe lower the number the more severe it is.\n\nShort Description\nunique/references not honored when inheriting tables\n\nLong Description\nIf a table inherits fields carrying the \"references\" or \"unique\" constraint, they are not honoured but silently dropped. It is necessary to manually create the triggers/indices.\n\nIt would be nice if it were possible to create an index across a table and all sub-tables.\n\n\nSample Code\nCREATE TABLE foo(id int unique)\nCREATE TABLE bar() INHERITS (foo)\nINSERT INTO bar VALUES(1)\nINSERT INTO bar VALUES(1)\n\n\nNo file was uploaded with this report\n\n",
"msg_date": "Sat, 2 Sep 2000 16:00:01 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "unique/references not honored when inheriting tables"
},
{
"msg_contents": "Is this still true in 7.1?\n\n\n> Helge Bahmann ([email protected]) reports a bug with a severity of 4\n> The lower the number the more severe it is.\n> \n> Short Description\n> unique/references not honored when inheriting tables\n> \n> Long Description\n> If a table inherits fields carrying the \"references\" or \"unique\" constraint, they are not honoured but silently dropped. It is necessary to manually create the triggers/indices.\n> \n> It would be nice if it were possible to create an index across a table and all sub-tables.\n> \n> \n> Sample Code\n> CREATE TABLE foo(id int unique)\n> CREATE TABLE bar() INHERITS (foo)\n> INSERT INTO bar VALUES(1)\n> INSERT INTO bar VALUES(1)\n> \n> \n> No file was uploaded with this report\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 12:59:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unique/references not honored when inheriting tables"
},
{
"msg_contents": "\nAFAIK, yes. Inheriting unique constraints would be a \nspecial case of inheriting indexes theoretically, since I'd\nassume that inheriting a unique column should mean unique\nthrough entire inheritance tree. References is similar, \nwe'd need to do something to inherit the triggers, and\nreferencing to an inheritance tree needs the unique \nconstraint on that entire tree.\n\nI'm not sure what it would take to do a trigger over an\ninheritance tree. I guess the other option is to create\nappropriate triggers when you inherit and having alter\ntable add/drop constraint do the same thing).\n\nOn Mon, 16 Oct 2000, Bruce Momjian wrote:\n\n> Is this still true in 7.1?\n> \n> \n> > Helge Bahmann ([email protected]) reports a bug with a severity of 4\n> > The lower the number the more severe it is.\n> > \n> > Short Description\n> > unique/references not honored when inheriting tables\n> > \n> > Long Description\n> > If a table inherits fields carrying the \"references\" or \"unique\" constraint, they are not honoured but silently dropped. It is necessary to manually create the triggers/indices.\n> > \n> > It would be nice if it were possible to create an index across a table and all sub-tables.\n> > \n> > \n> > Sample Code\n> > CREATE TABLE foo(id int unique)\n> > CREATE TABLE bar() INHERITS (foo)\n> > INSERT INTO bar VALUES(1)\n> > INSERT INTO bar VALUES(1)\n> > \n> > \n> > No file was uploaded with this report\n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n\n",
"msg_date": "Mon, 16 Oct 2000 14:36:37 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unique/references not honored when inheriting tables"
}
] |
[
{
"msg_contents": "Alexandru Popa ([email protected]) reports a bug with a severity of 1\nThe lower the number the more severe it is.\n\nShort Description\npsql can crash the backend on login\n\nLong Description\nOn a FreeBSD 4.1-STABLE system, launching /usr/local/pgsql/bin/psql and giving control-d as a password will crash the backend.\nNote I only tried this on a localhost connection (Unix sockets)\n\nSample Code\nmachine% /usr/local/pgsql/bin/psql -U validuser\nPassword: (hit control-d here)\nPassword:\nPassword:\nPassword:\nPassword:\nPassword:\nPassword:\nPassword:\nPassword:\n(more of those)\nPassword:\npsql: pqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nmachine% /usr/local/pgsql/bin/psql -U validuser\npsql: connectDBStart() -- connect() failed: No such file or directory\n Is the postmaster running at 'localhost'\n and accepting connections on Unix socket '5432'?\nmachine% ps auxw|grep 'post[m]aster'\nmachine% \n\n\n\nNo file was uploaded with this report\n\n",
"msg_date": "Sun, 3 Sep 2000 12:40:58 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "psql can crash the backend on login"
},
{
"msg_contents": "[email protected] writes:\n> machine% /usr/local/pgsql/bin/psql -U validuser\n> Password: (hit control-d here)\n> Password:\n> Password:\n> Password:\n> Password:\n> Password:\n> Password:\n> Password:\n> Password:\n> (more of those)\n> Password:\n> psql: pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> machine% /usr/local/pgsql/bin/psql -U validuser\n> psql: connectDBStart() -- connect() failed: No such file or directory\n> Is the postmaster running at 'localhost'\n> and accepting connections on Unix socket '5432'?\n> machine% ps auxw|grep 'post[m]aster'\n> machine% \n\nInteresting. What seems to be happening is that the postmaster is\nquitting because it runs out of open files. The quit is already fixed\nin current sources, I believe; when I try this I have to hit ^D about\n180 times, but eventually I get\n\nPassword:\nPassword:\nPassword:\npsql: Missing or erroneous pg_hba.conf file, see postmaster log for details\n$ \n\nand in the postmaster log\n\nfind_hba_entry: Unable to open authentication config file \"/home/postgres/testversion/data/pg_hba.conf\": Too many open files\nMissing or erroneous pg_hba.conf file, see postmaster log for details\n\nSo *why* is it running out of open files? Seems to be psql's fault:\npsql is opening a new connection for each Password: cycle, and not\nclosing the old one, which is still awaiting a response to the\npostmaster's demand for a password. psql would fail for lack of open\nfiles too, except the postmaster has a few more open to begin with and\nso fails first. Haven't yet dug into why exactly (maybe the bug is in\nlibpq not psql?)\n\nIf you run it across TCP instead of Unix socket, there's a different\nbad behavior. 
Not sure why the difference, since psql really shouldn't\ncare, but it seems to be stuck inside psql's password prompting code\nin both cases.\n\nThis is clearly a client-side bug, but it does point up the fact that\nclients can cause a denial-of-service problem if they open up enough\nconnection requests and leave the requests hanging in an incomplete\nauthentication handshake. Perhaps we should make the postmaster\ntime-out such connection requests after some not very large number\nof seconds. People who aren't quick about typing their passwords\nmight get annoyed though...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Sep 2000 18:16:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql can crash the backend on login "
}
] |
[
{
"msg_contents": "I'm trying to figure out how to implement the behaviour of returning all\nsubclass fields, i.e. the \"SELECT **\" behaviour talked about in the\npast.\n\nWhat I'm thinking is that transformTargetList will do the same thing as\nper \"*\" except that it will set another flag \"and_the_rest\".\n\nplan_inherit_queries will then notice the flag and then expand the\ntarget list as per each sub-class.\n\nThis seems to be the way to do it as far as I can see. It doesn't seem\nideal in so far as \"*\" and \"**\" would not be handled in the same place,\nbut since currently \"*\" and the breaking up of inheritance queries into\nan append plan happen in different places, this seems inevitable unless\none were to totally rearrange the order things are done in the backend.\n\nAny comments?\n",
"msg_date": "Mon, 04 Sep 2000 16:51:35 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "OO inheritance implementation"
},
{
"msg_contents": "Chris <[email protected]> writes:\n> plan_inherit_queries will then notice the flag and then expand the\n> target list as per each sub-class.\n\nKeep in mind that the existing implementation of inheritance expansion\nis not only pretty slow (is it *really* desirable to re-plan the whole\nquery for each child class?) but also broken for OUTER JOIN. In an\nouter join, concatenating the final query results is not isomorphic to\nconcatenating the child class contents and then doing the query, which\nis how I'd expect\n\tSELECT * FROM classA OUTER JOIN classB*\nto behave.\n\nIn this situation I think we'll need to push down the Append node to\nbe executed just on classB*, before the join occurs.\n\nBTW, the notion of ** isn't even well-defined in this example: what set\nof classB child attributes would you propose to attach to the unmatched\nrows from classA?\n\nThe planner's inheritance code and UNION code are both unholy messes,\nand I have hopes of scrapping and rewriting essentially all of\nprepunion.c when we redo querytree structures for 7.2. So I'd advise\nnot hanging a new-feature implementation on the existing code structure\nthere.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Sep 2000 12:29:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OO inheritance implementation "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Chris <[email protected]> writes:\n> > plan_inherit_queries will then notice the flag and then expand the\n> > target list as per each sub-class.\n> \n> Keep in mind that the existing implementation of inheritance expansion\n> is not only pretty slow (is it *really* desirable to re-plan the whole\n> query for each child class?)\n\nHmm. Sometimes it wouldn't be, at least when indexes that span\ninheritance trees are implemented. I would imagine that it could be\ndesirable for different plans on occasion though.\n\n> In this situation I think we'll need to push down the Append node to\n> be executed just on classB*, before the join occurs.\n\nSo outer join is broken for inheritance queries?\n\n> BTW, the notion of ** isn't even well-defined in this example: what set\n> of classB child attributes would you propose to attach to the unmatched\n> rows from classA?\n\nWell ** is not enormously useful for doing joins in the first place\nsince its prime purpose is for constructing real objects on the client\nside. I'm guessing that OQL doesn't support outer joins. But to nominate\na behaviour I would say the minimum set of fields as per * behaviour.\n\n> The planner's inheritance code and UNION code are both unholy messes,\n> and I have hopes of scrapping and rewriting essentially all of\n> prepunion.c when we redo querytree structures for 7.2. So I'd advise\n> not hanging a new-feature implementation on the existing code structure\n> there.\n\nWell if I leave it alone until you've done your querytree redesign, can\nyou keep this in mind for your design? \n\nIt's not clear in my mind what all the issues are, but moving around\ndiffering tuples and knowing when a new set of tuples has started are\nthe issues in my mind.\n",
"msg_date": "Tue, 05 Sep 2000 09:51:34 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OO inheritance implementation"
}
] |
[
{
"msg_contents": "\n\n Hi,\n\n I have a question... why RULE call nextval() and data in RULE statement are\ndifferent than data in original stmt.\n\n An example:\n\ncreate sequence a;\ncreate table aa (id int DEFAULT nextval('a'), data text);\n\ninsert into aa (data) values ('xxxx');\ninsert into aa (data) values ('yyyy');\n\nselect * from aa;\nid|data\n--+----\n 1|xxxx\n 2|yyyy\n(2 rows)\n\n... all is right.\n\ncreate table log (aid int, act text);\n\ncreate rule a_ins as \n\ton insert to aa \n\tdo insert into log (aid, act) values (NEW.id, 'INSERT');\n\ninsert into aa (data) values ('zzzz');\ninsert into aa (data) values ('qqqq');\n\ntest=> select * from aa;\nid|data\n--+----\n 1|xxxx\n 2|yyyy\n 4|zzzz <----------\n 6|qqqq\n(4 rows)\n\nselect * from log;\naid|act\n---+------\n 3|INSERT <----------\n 5|INSERT\n(2 rows)\n\n But I expect in 'log' table as 'aid' same number as numbers for 'zzzz' and\n'qqqq'...\n \n It's interesting feature (6.5, 7.0, 7.1...). How is it possible in RULE\nobtain same data as in 'aa' table for a default data from the sequence.\n\n\t\t\t\t\tKarel\n\n",
"msg_date": "Mon, 4 Sep 2000 12:09:40 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": true,
"msg_subject": "RULE vs. SEQUENCE"
},
{
"msg_contents": "Karel Zak wrote:\n> \n> On Mon, 4 Sep 2000, Jan Wieck wrote:\n> \n> > > I have a question... why RULE call nexval() and data in RULE statement are\n> > > differend than data in original stmt.\n> >\n...\n> \n> But executor can knows that somethig was already executed, we can mark\n> some already executed expressions in rewriter and not execute it again in\n> final executor... like:\n...\n> \n> IMHO this is a good point for 7.2 ...\n\nBut if instead of nextval() you had random(), would you still want to execute\nit \nonly once ? And how should postgres know ?\n\n----------\nHannu\n",
"msg_date": "Mon, 04 Sep 2000 14:01:14 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RULE vs. SEQUENCE"
},
{
"msg_contents": "\nOn Mon, 4 Sep 2000, Jan Wieck wrote:\n\n> > I have a question... why RULE call nexval() and data in RULE statement are\n> > differend than data in original stmt.\n> \n> It's a known \"feature\", and I don't know any way of changing\n> it.\n\n IMHO docs is quiet about it... \n\n> The problem is, that NEW.attname in a rule means, \"whatever\n> is in the targetlist of the INSERT when applying the rule\".\n> In your example, it'll be a call to nextval(). The rule\n> system doesn't know that this targetlist expression has a\n> side-effect (incrementing the sequence).\n\n But, why 'NEW' tuple is in the rewriter created again, why is not used \noriginal tuple from original statement ... like in triggers? \n\n Ooops yes, rewriter is before executor...hmm...\n\n> Thus, the rule creates a second query which does it's own\n> calls to nextval() when executed.\n\n But executor can knows that somethig was already executed, we can mark \nsome already executed expressions in rewriter and not execute it again in \nfinal executor... like:\n\ntypedef some_expr {\n\tbool\texecuted;\n\tDatum\t*result;\n\t....\n} some_expr;\n\n\n IMHO this is a good point for 7.2 ...\n\n\n\t\t\t\t\t\tKarel\n\n",
"msg_date": "Mon, 4 Sep 2000 13:16:45 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RULE vs. SEQUENCE"
},
{
"msg_contents": "Karel Zak wrote:\n>\n>\n> Hi,\n>\n> I have a question... why RULE call nexval() and data in RULE statement are\n> differend than data in original stmt.\n\n It's a known \"feature\", and I don't know any way of changing\n it.\n\n The problem is, that NEW.attname in a rule means, \"whatever\n is in the targetlist of the INSERT when applying the rule\".\n In your example, it'll be a call to nextval(). The rule\n system doesn't know that this targetlist expression has a\n side-effect (incrementing the sequence).\n\n Thus, the rule creates a second query which does it's own\n calls to nextval() when executed.\n\n> It's interesting feature (6.5, 7.0, 7.1...). How is a possible in RULE\n> obtain same data as in 'aa' table for a default data from the sequence.\n\n The query rewrite rule system behaves like this since 4.2 (or\n even earlier). Since 6.4 it does the right things on UPDATE\n and DELETE too. Don't know when we introduced sequences or\n better \"functions that have such nasty side-effects\".\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Mon, 4 Sep 2000 07:02:46 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RULE vs. SEQUENCE"
},
{
"msg_contents": "> > But executor can knows that somethig was already executed, we can mark\n> > some already executed expressions in rewriter and not execute it again in\n> > final executor... like:\n> ...\n> > \n> > IMHO this is a good point for 7.2 ...\n> \n> But if instead of nextval() you had random(), would you still want to execute\n> it \n> only once ? And how should postgres know ?\n\n Talking you still about RULEs? \n\n ...I don't undestand you. What is a 'NEW' in RULE? I (and probably more \nusers) expect that new data from tuple which go into original table. Right?\n\nNot ... if you use sequence. IMHO it's not \"feature\" but nice bug that\ncrash your data integrity...\n\n\t\t\t\tKarel \n\n",
"msg_date": "Mon, 4 Sep 2000 14:05:05 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RULE vs. SEQUENCE"
},
{
"msg_contents": "Karel Zak wrote:\n>\n> On Mon, 4 Sep 2000, Jan Wieck wrote:\n>\n> > The problem is, that NEW.attname in a rule means, \"whatever\n> > is in the targetlist of the INSERT when applying the rule\".\n> > In your example, it'll be a call to nextval(). The rule\n> > system doesn't know that this targetlist expression has a\n> > side-effect (incrementing the sequence).\n>\n> But, why 'NEW' tuple is in the rewriter created again, why is not used\n> original tuple from original statement ... like in triggers?\n>\n> Ooops yes, rewriter is before executor...hmm...\n\n More Ooops: the rewriter doesn't create any tuples. He\n creates another query tree, which is then optimized, planned\n and finally executed (to produce tuples).\n\n>\n> > Thus, the rule creates a second query which does it's own\n> > calls to nextval() when executed.\n>\n> But executor can knows that somethig was already executed, we can mark\n> some already executed expressions in rewriter and not execute it again in\n> final executor... like:\n>\n> typedef some_expr {\n> bool executed;\n> Datum *result;\n> ....\n> } some_expr;\n>\n>\n> IMHO this is a good point for 7.2 ...\n\n Impossible - period.\n\n Think about this (a little longer - sorry):\n\n CREATE TABLE category (\n cat_id serial,\n cat_name text\n );\n\n CREATE TABLE prod_attrs (\n pa_prodid integer,\n pa_attkey integer,\n pa_attval text\n );\n\n CREATE TABLE prod_attdefaults (\n pdef_catid integer,\n pdef_attkey integer,\n pdef_attval text\n );\n\n CREATE TABLE product (\n prod_id serial,\n prod_category integer,\n prod_name text\n );\n\n CREATE TABLE new_products (\n new_category integer,\n new_name text\n );\n\n So far, so good. For each product we store in \"product\", a\n variable number of attributes can be stored in \"prod_attrs\".\n At the time of \"INSERT INTO product\", the rows from\n \"prod_attdefaults\" where \"pdef_catid = NEW.prod_category\"\n should be copied into \"prod_attrs\".\n\n The \"NOT WORKING\" rule for doing so would look like\n\n CREATE RULE attdefaults AS ON INSERT TO product DO\n INSERT INTO prod_attrs\n SELECT NEW.prod_id, D.pdef_attkey, D.pdef_attval\n FROM prod_attdefaults D\n WHERE D.pdef_catid = NEW.prod_category;\n\n Now let's have in \"prod_attdefaults\" 7 rows for category 1, 5\n rows for category 2, 6 rows for category 3 and no rows for\n category 4. And we do\n\n INSERT INTO new_products VALUES (1, 'chair');\n INSERT INTO new_products VALUES (1, 'table');\n INSERT INTO new_products VALUES (1, 'sofa');\n INSERT INTO new_products VALUES (1, 'cupboard');\n INSERT INTO new_products VALUES (2, 'shirt');\n INSERT INTO new_products VALUES (2, 'shoe');\n INSERT INTO new_products VALUES (3, 'butter');\n INSERT INTO new_products VALUES (4, 'shampoo');\n\n The query\n\n INSERT INTO product (prod_category, prod_name)\n SELECT new_category, new_name FROM new_products;\n\n must then create 8 new rows in \"product\" and 44 rows in\n \"prod_attrs\". The first 7 with the nextval() allocated for\n the chair, the next 7 with the nextval() for the table, etc.\n\n I can't see how this should be doable with the rewriter on\n the querylevel.\n\n This is something for a trigger.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Mon, 4 Sep 2000 09:16:42 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RULE vs. SEQUENCE"
},
{
"msg_contents": "Karel Zak wrote:\n> > > But executor can knows that somethig was already executed, we can mark\n> > > some already executed expressions in rewriter and not execute it again in\n> > > final executor... like:\n> > ...\n> > >\n> > > IMHO this is a good point for 7.2 ...\n> >\n> > But if instead of nextval() you had random(), would you still want to execute\n> > it\n> > only once ? And how should postgres know ?\n>\n> Talking you still about RULEs?\n\n Yes, he is.\n\n>\n> ...I don't undestand you. What is a 'NEW' in RULE? I (and probably more\n> users) expect that new data from tuple which go into original table. Right?\n\n Most people would expect that - but it is the targetlist\n expression of this column from the query which fired the\n rule! That's a little difference.\n\n> Not ... if you use sequence. IMHO it's not \"feature\" but nice bug that\n> crash your data integrity...\n\n The PostgreSQL rule system is based on a general productional\n query rewrite rule system, designed decades ago without\n thinking about column values like nextval() or random(). The\n usage of those expressions in a query firing rules leads to\n unpredictable results.\n\n To understand how rules work in detail you should read\n chapter 8 of the programmers manual.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Mon, 4 Sep 2000 09:34:03 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RULE vs. SEQUENCE"
}
] |
[
{
"msg_contents": "When is the next release due to go out?\n\nWith the recent move, and work having to take priority, I've been sort of\nout of the loop a bit. Anyhow, I've got a lot of catching up to do,\nespecially with a few patches that need to be checked and committed.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support Officer, Maidstone Borough Council\nEmail: [email protected]\nWWW: http://www.maidstone.gov.uk\nAll views expressed within this email are not the views of Maidstone Borough\nCouncil\n\n",
"msg_date": "Mon, 4 Sep 2000 15:18:50 +0100 ",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "The next release"
},
{
"msg_contents": "\nlooking at Beta in late October or so ... as always, not set in stone\n... :)\n\n\nOn Mon, 4 Sep 2000, Peter Mount wrote:\n\n> When is the next release due to go out?\n> \n> With the recent move, and work having to take priority, I've been sort of\n> out of the loop a bit. Anyhow, I've got a lot of catching up to do,\n> especially with a few patches that need to be checked and committed.\n> \n> Peter\n> \n> -- \n> Peter Mount\n> Enterprise Support Officer, Maidstone Borough Council\n> Email: [email protected]\n> WWW: http://www.maidstone.gov.uk\n> All views expressed within this email are not the views of Maidstone Borough\n> Council\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 6 Sep 2000 01:16:50 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The next release"
},
{
"msg_contents": "Peter Mount <[email protected]> writes:\n> When is the next release due to go out?\n\nI believe the plan for 7.1 is beta ~ 1 Oct, release ~ 1 Nov,\nLord willin' an' the creek don't rise...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Sep 2000 01:18:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The next release "
}
] |
[
{
"msg_contents": "Hello again.\n\nI was wondering if anyone has made some .dsp .dsw files (Visual \nStudio project and workspace files) which correspond to the Win32 \nmakefiles for the .dll and .lib files (in src/interfaces/libpq), and \nperhaps also for the psql ultility (in src/bin/psql). I would prefer \nVisual Studio 6 files, but could perhaps use other versions too (the \nmakefiles are most likely made with Version 5, but works fine with \n6.0 nmake).\n\nI have very little experience using Visual Studio (well, espcially \nwith the C++ part), so I don't know exactly how to make them myself, \nbut would of course also like any hint that you can supply to me.\n\nYours faithfully.\nFinn Kettner.\n",
"msg_date": "Tue, 5 Sep 2000 14:56:36 +0100",
"msg_from": "\"Finn Kettner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Visual Studio 6 project/workspace files"
}
] |
[
{
"msg_contents": "I have installed the libraries from RPM\n(postgresql-devel-7.0.2-2.i386.rpm). I couldn't compile a project using\nthe C++ library because in some config.h there's a line saying\n\n#include \"os.h\"\n\nthat points to a link to a non-existent linux.h. Any ideas? I just\ncommented the line and worked fine so far, but I don't like it a bit.\n\nLeandro Fanzone\nCompañía HASAR\nBuenos Aires\nArgentina",
"msg_date": "Tue, 05 Sep 2000 11:41:23 -0300",
"msg_from": "Leandro Fanzone <[email protected]>",
"msg_from_op": true,
"msg_subject": "C++ library probs"
},
{
"msg_contents": "Leandro Fanzone <[email protected]> writes:\n> --------------158FC9AA6F4DB960E871948D\n> I have installed the libraries from RPM\n> (postgresql-devel-7.0.2-2.i386.rpm). I couldn't compile a project using\n> the C++ library because in some config.h there's a line saying\n> #include \"os.h\"\n> that points to a link to a non-existent linux.h. Any ideas? I just\n> commented the line and worked fine so far, but I don't like it a bit.\n\nHmm. Are you speaking of installed headers (stored in something like\n/usr/local/include/pgsql/) or are you looking at a full Postgres\nsource-code tree?\n\nIn the source tree, os.h is a symlink made during the configure process,\nbut in the installed tree it ought to be a copy of the linked-to file.\nAt least that's how it's always worked for me.\n\nI wonder whether this RPM was made with an \"install\" script that tries\nto copy symlinks as symlinks rather than copying the underlying file.\nIf so, we need to change the install process to prevent that from\nhappening.\n\nLamar, Peter, any thoughts here?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Sep 2000 11:40:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "RPMs and symlinks (was Re: [NOVICE] C++ library probs)"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Hmm. Are you speaking of installed headers (stored in something like\n> /usr/local/include/pgsql/) or are you looking at a full Postgres\n> source-code tree?\n\nInstalled headers on /usr/include/pgsql. Didn't installed the source. The\nlink actually points to .././include/port/linux.h which doesn't exist. On the\nother hand I have some \"port\" directory /usr/include/pgsql (it's not the\ndirectory where the link is pointing, needless to say) but has no linux.h\ninside, anyway.\n\nLeandro Fanzone\nCompañía HASAR\nBuenos Aires\nArgentina\n\n",
"msg_date": "Tue, 05 Sep 2000 12:47:48 -0300",
"msg_from": "Leandro Fanzone <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RPMs and symlinks (was Re: [NOVICE] C++ library probs)"
},
{
"msg_contents": "Leandro Fanzone <[email protected]> writes:\n> Installed headers on /usr/include/pgsql. Didn't installed the source. The\n> link actually points to .././include/port/linux.h which doesn't exist.\n\nI figured as much --- that's what the symlink should look like, in the\nsource tree, but it ought not get installed that way. Looks like we\nhave a bug in the RPM build process. (Fairly recent bug too, I bet,\nor it would've been noticed before.)\n\nI've attached a copy of 7.0.2's port/linux.h, which you can use to\nreplace the os.h symlink so you can get some work done meanwhile.\n\n\t\t\tregards, tom lane\n\n\n/* __USE_POSIX, __USE_BSD, and __USE_BSD_SIGNAL used to be defined either\n here or with -D compile options, but __ macros should be set and used by C\n library macros, not Postgres code. __USE_POSIX is set by features.h,\n __USE_BSD is set by bsd/signal.h, and __USE_BSD_SIGNAL appears not to\n be used.\n*/\n#define JMP_BUF\n#define USE_POSIX_TIME\n\n#if defined(__i386__)\ntypedef unsigned char slock_t;\n\n#define HAS_TEST_AND_SET\n\n#elif defined(__sparc__)\ntypedef unsigned char slock_t;\n\n#define HAS_TEST_AND_SET\n\n#elif defined(__powerpc__)\ntypedef unsigned int slock_t;\n\n#define HAS_TEST_AND_SET\n\n#elif defined(__alpha__)\ntypedef long int slock_t;\n\n#define HAS_TEST_AND_SET\n\n#elif defined(__mips__)\ntypedef unsigned int slock_t;\n\n#define HAS_TEST_AND_SET\n\n#elif defined(__arm__)\ntypedef unsigned char slock_t\n\n#define HAS_TEST_AND_SET\n\n#endif\n\n#if defined(__GLIBC__) && (__GLIBC__ >= 2)\n#ifdef HAVE_INT_TIMEZONE\n#undef HAVE_INT_TIMEZONE\n#endif\n#endif\n\n#if defined(__powerpc__)\n#undef HAVE_INT_TIMEZONE\n#endif\n",
"msg_date": "Tue, 05 Sep 2000 11:52:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RPMs and symlinks (was Re: [NOVICE] C++ library probs) "
}
] |
[
{
"msg_contents": "I can tell you the results Informix produces:\n\n> Am I right in thinking that the WHERE clause of a query must logically\n> be applied *after* any joins specified in the FROM clause?\n> \n> For example, suppose that we have table t1 (x int) containing the\n> values 1, 2, 3, 4, and table t2 (y int) containing the values 1, 2, 4.\n> It's clear that the result of\n> \tSELECT * FROM t1 LEFT JOIN t2 ON (x = y);\n> should be\n> \tx\ty\n> \n> \t1\t1\n> \t2\t2\n> \t3\tNULL\n> \t4\t4\n\nsame\n\n> \n> But suppose we make the query\n> \tSELECT * FROM t1 LEFT JOIN t2 ON (x = y) WHERE y <> 2;\n> It seems to me this should yield\n> \tx\ty\n> \n> \t1\t1\n> \t3\tNULL\n> \t4\t4\n> \n> and not\n> \tx\ty\n> \n> \t1\t1\n> \t2\tNULL\n> \t3\tNULL\n> \t4\t4\n\n x y\n\n 1 1\n 4 4\n\n> \n> which is what you'd get if the y=2 tuple were filtered out before\n> reaching the left-join stage. Does anyone read the spec differently,\n> or get the latter result from another implementation?\n> \n> The reason this is interesting is that this example breaks a rather\n> fundamental assumption in our planner/optimizer, namely that WHERE\n> conditions can be pushed down to the lowest level at which all the\n> variables they mention are available. Thus the planner would normally\n> apply \"y <> 2\" during its bottom-level scan of t2, which \n> would cause the\n> LEFT JOIN to decide that x = 2 is an unmatched value, and thus produce\n> a \"2 NULL\" output row.\n> \n> An even more interesting example is\n> \tSELECT * FROM t1 FULL JOIN t2 ON (x = y AND y <> 2);\n> My interpretation is that this should produce\n> \tx\ty\n> \n> \t1\t1\n> \t2\tNULL\n> \tNULL\t2\n> \t3\tNULL\n> \t4\t4\n\n x y\n\n 1 1\n 4 4\n\n> since both t1's x=2 and t2's y=2 tuple will appear \"unmatched\".\n> This is *not* the same output you'd get from\n> \tSELECT * FROM t1 FULL JOIN t2 ON (x = y) WHERE y <> 2;\n> which I think should yield\n> \tx\ty\n> \n> \t1\t1\n> \t3\tNULL\n> \t4\t4\n> This shows that JOIN/ON conditions for outer joins are not \n> semantically\n> interchangeable with WHERE conditions.\n\n x y\n\n 1 1\n 4 4\n\n> \n> This is going to be a bit of work to fix, so I thought I'd better\n> confirm that I'm reading the spec correctly before I dive into it.\n\nNo idea if they interpret correctly, but seems they hand it interchangeably.\nSomeone want to check Oracle and MS Sql ?\n\nAndreas\n",
"msg_date": "Tue, 5 Sep 2000 16:56:59 +0200 ",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: A fine point about OUTER JOIN semantics"
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n>> But suppose we make the query\n>> SELECT * FROM t1 LEFT JOIN t2 ON (x = y) WHERE y <> 2;\n>> It seems to me this should yield\n>> x\ty\n>> \n>> 1\t1\n>> 3\tNULL\n>> 4\t4\n\n> x y\n\n> 1 1\n> 4 4\n\nOh, my mistake, I forgot that the WHERE clause would filter out NULLs.\nTry\nSELECT * FROM t1 LEFT JOIN t2 ON (x = y) WHERE y <> 2 OR y IS NULL;\n\n>> An even more interesting example is\n>> SELECT * FROM t1 FULL JOIN t2 ON (x = y AND y <> 2);\n>> My interpretation is that this should produce\n>> x\ty\n>> \n>> 1\t1\n>> 2\tNULL\n>> NULL\t2\n>> 3\tNULL\n>> 4\t4\n\n> x y\n\n> 1 1\n> 4 4\n\nHere I believe Informix is broken. Their result clearly does not\nagree with the spec's definition of a FULL JOIN ... indeed it looks\nexactly like an inner join.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Sep 2000 11:30:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: A fine point about OUTER JOIN semantics "
}
] |
[
{
"msg_contents": "Hello everyone. Hopefully someone can give me an answer and workaround on\nthis...\n\nI have a table with an array like so:\n\n\tcreate table foo (\n\t\tid\tint4,\n\t\tnames\tvarchar(80)[]\n\t);\n\nand would like to place an index on name[1]. I try\n\n\tcreate table foo_idx1 on foo (names[1]);\n\nbut I get...\n\n\tERROR: parser: parse error at or near \"[\"\n\nThoughts and suggestions are welcome.\n\n- K\n\nKristofer Munn * KMI * 732-254-9305 * AIM KrMunn * http://www.munn.com/\n\n\n",
"msg_date": "Tue, 5 Sep 2000 13:39:20 -0400 (EDT)",
"msg_from": "Kristofer Munn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Indexes on Arrays"
},
{
"msg_contents": "Kristofer Munn <[email protected]> writes:\n> I have a table with an array like so:\n> \tcreate table foo (\n> \t\tid\tint4,\n> \t\tnames\tvarchar(80)[]\n> \t);\n> and would like to place an index on name[1].\n\nRight now the only way is to make a function that extracts names[1]\nand create a functional index on yourfunction(names).\n\nAlthough functional indexes are a perfectly good general solution from\nan academic point of view, this is still a pain in the neck :-(.\nAlso kinda slow, unless you code the function in C.\n\nI recall talking to someone who was going to look at supporting more\ngeneral expressions for index values, but nothing's come of the idea\nso far.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Sep 2000 01:02:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes on Arrays "
}
] |
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\nSorry to bother you with this question, but I've found a bug with operators:\nExample:\n\ncreate table dummy ( a numeric(12,0), b float8);\ninsert into dummy (a,b) values (1, 2);\ninsert into dummy (a,b) values (7, 7);\ninsert into dummy (a,b) values (3, 2);\ninsert into dummy (a,b) values (4, 2);\n\nNow try:\nselect * from dummy where a=b;\nERROR: Unable to identify an operator '=' for types 'numeric' and 'float8'\n You will have to retype this query using an explicit cast\n\nSo I tried to define an operator:\ncreate function num_eq_float (numeric, float8) returns bool as \n 'select $1::float8 = $2::float8' language 'sql';\n\nselect * from dummy where num_eq_float(a,b)=true; \n a | b\n- ---+---\n 7 | 7\n(1 row) \n\nWorks fine so far. Now I tried:\ncreate operator = (\n leftarg = numeric, \n rightarg=float8, \n procedure = num_eq_float\n);\nCREATE\n\nAnd now I tried:\nselect * from dummy where a=b;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nObviously the backend process crashed, but I have no clue what I might be \ndoing wrong.\n\nRegards,\n\tMario Weilguni\n\n\n- -- \nWhy is it always Segmentation's fault?\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.3i\nCharset: noconv\n\niQCVAwUBObU8xwotfkegMgnVAQGuqAP/TFk6HYqVmKdmv5WqRiIlChYQbGNWnEDv\nBYG183EXfeYoPkCZXU2ZJVbYUZObHVssxrFNEmoXgOdVJ1BLaVoVwIA3UFjAkZ4f\nmPaS7kSSWYDf1EvGPMCiFc9TYdLDI0M1GsBUKNjeLEqwlAdXWiVEjrSLBgnMZXa+\n+e+3vMSv4Fc=\n=ok3E\n-----END PGP SIGNATURE-----\n",
"msg_date": "Tue, 5 Sep 2000 20:34:47 +0200",
"msg_from": "Mario Weilguni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Crash on userdefined operator"
},
{
"msg_contents": "Mario Weilguni <[email protected]> writes:\n> Sorry to bother you with this question, but I've found a bug with operators:\n\nOperators backed by SQL-language functions aren't supported in 7.0 (nor\nany previous version). You should be able to do this in plpgsql though.\n\nThis restriction is fixed for 7.1 btw...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Sep 2000 18:39:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Crash on userdefined operator "
}
] |
[
{
"msg_contents": "Greetings,\n I was trying to use arrays today, and can't seem to get it right. \n\nWhat am I doing wrong?\n\nler=# create table ia_standby (hsrp_group int2,\nler(# router_interfaces[] varchar(64),\nler(# routers[] varchar(64));\nERROR: parser: parse error at or near \"[\"\nler=# create table ia_standby (hsrp_group int2,\nler(# router_interfaces[] text,\nler(# routers[] text);\nERROR: parser: parse error at or near \"[\"\nler=#\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Tue, 5 Sep 2000 14:56:30 -0500",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.0.2: Arrays"
},
{
"msg_contents": "On Tue, 5 Sep 2000, Larry Rosenman wrote:\n\n> Greetings,\n> I was trying to use arrays today, and can't seem to get it right. \n> \n> What am I doing wrong?\n> \n> ler=# create table ia_standby (hsrp_group int2,\n> ler(# router_interfaces[] varchar(64),\n> ler(# routers[] varchar(64));\n\nWhat you want to do is...\n\ncreate table ia_standby (\n\thsrp_group int2,\n\trouter_interfaces varchar(64)[],\n\trouters varchar(64)[]\n);\n\n- K\n\nKristofer Munn * KMI * 732-254-9305 * AIM KrMunn * http://www.munn.com/\n\n",
"msg_date": "Tue, 5 Sep 2000 16:22:39 -0400 (EDT)",
"msg_from": "Kristofer Munn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 7.0.2: Arrays"
},
{
"msg_contents": "Ok, so I can't read. Thanks!\n\nLER\n\n* Kristofer Munn <[email protected]> [000905 15:27]:\n> On Tue, 5 Sep 2000, Larry Rosenman wrote:\n> \n> > Greetings,\n> > I was trying to use arrays today, and can't seem to get it right. \n> > \n> > What am I doing wrong?\n> > \n> > ler=# create table ia_standby (hsrp_group int2,\n> > ler(# router_interfaces[] varchar(64),\n> > ler(# routers[] varchar(64));\n> \n> What you want to do is...\n> \n> create table ia_standby (\n> \thsrp_group int2,\n> \trouter_interfaces varchar(64)[],\n> \trouters varchar(64)[]\n> );\n> \n> - K\n> \n> Kristofer Munn * KMI * 732-254-9305 * AIM KrMunn * http://www.munn.com/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Tue, 5 Sep 2000 15:30:20 -0500",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 7.0.2: Arrays"
},
{
"msg_contents": "Hi, there\n\nYour syntax is not correct, pls check the Pg documentatation, the\ncorrection as following.\n\n\nLarry Rosenman wrote:\n\n> Greetings,\n> I was trying to use arrays today, and can't seem to get it right.\n>\n> What am I doing wrong?\n>\n> ler=# create table ia_standby (hsrp_group int2,\n> ler(# router_interfaces[] varchar(64),\n\n==>router_interfaces varchar(64)[],\n\n>\n> ler(# routers[] varchar(64));\n> ERROR: parser: parse error at or near \"[\"\n> ler=# create table ia_standby (hsrp_group int2,\n> ler(# router_interfaces[] text,\n\n==>router_interfaces text[],\n\n>\n> ler(# routers[] text);\n> ERROR: parser: parse error at or near \"[\"\n> ler=#\n>\n> --\n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n--\nJie LIANG\n\nInternet Products Inc.\n\n10350 Science Center Drive\nSuite 100, San Diego, CA 92121\nOffice:(858)320-4873\n\[email protected]\nwww.ipinc.com\n\n\n\n",
"msg_date": "Tue, 05 Sep 2000 13:38:47 -0700",
"msg_from": "Jie Liang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.2: Arrays"
}
] |
[
{
"msg_contents": "\nTom, the following was said in the apache turbine project mailing list a\nwhile back regarding telling the difference between an ordinary oid\ncolumn and an oid column that points to a large object. Can you confirm\nif this is true regarding the reference to change in v 7.1 ?\n\n--\n\"That's why they didn't want my patch which changed the SQL Type\nreturned by the metadata from Integer to Varbinary, because it'll break\nanybody who uses it as an int (which is what it really is). But the\nproblem is that there is no datatype called, \"Image\" or \"varbinary,\" the\nonly way you can use a large object field is by setting the column\ndatatype to OID. It's ambiguous...I don't use that datatype for\nanything but large objects, so the patch works for me, but I understand\nwhy they don't want it in the main dist. of the driver.\n\nTom Lane from the pgsql team told me that in v 7.1 there will be a way\nto identify the difference between a binary column and an oid column.\"\n--\n",
"msg_date": "Wed, 06 Sep 2000 16:42:02 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tom Lane"
},
{
"msg_contents": "Chris <[email protected]> writes:\n> Tom, the following was said in the apache turbine project mailing list a\n> while back regarding telling the difference between an ordinary oid\n> column and an oid column that points to a large object. Can you confirm\n> if this is true regarding the reference to change in v 7.1 ?\n\n> Tom Lane from the pgsql team told me that in v 7.1 there will be a way\n> to identify the difference between a binary column and an oid column.\"\n\nI don't recall saying any such thing, sorry (at least not as far as the\nbackend is concerned --- the particular issue you are quoting seemed to\nbe just an ODBC driver question).\n\nThere already is a solution of sorts in contrib/lo, if you care to use\nit. I seem to recall speculating that it'd be a good idea to move that\ninto the mainstream, but nothing's been done about it.\n\nI think we are mostly waiting to see how much usage of LOs remains after\npeople get comfortable with TOAST --- it may be that improving the LO\nfacilities beyond where they are will just be gilding a dead lily.\n\nI do plan to check over and commit Denis Perchine's fix to store large\nobjects in a single table, instead of two files per LO (see patches list\nfor 6/27/00). That should solve our existing performance problems with\nthousands of LOs, and since he already did the work it'd be silly not to\ninclude it. Beyond that I'm in wait-and-see mode for more LO work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Sep 2000 10:26:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tom Lane "
},
{
"msg_contents": "> I think we are mostly waiting to see how much usage of LOs remains after\n> people get comfortable with TOAST --- it may be that improving the LO\n> facilities beyond where they are will just be gilding a dead lily.\n\nBTW, I'd really want to use BLOB/CLOB using TOAST. Anyone working on\nthis part?\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 07 Sep 2000 13:59:46 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tom Lane "
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> > I think we are mostly waiting to see how much usage of LOs remains after\n> > people get comfortable with TOAST --- it may be that improving the LO\n> > facilities beyond where they are will just be gilding a dead lily.\n>\n> BTW, I'd really want to use BLOB/CLOB using TOAST. Anyone working on\n> this part?\n\n Thinking, making plans. But it's nothing to be written down\n quickly.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Thu, 7 Sep 2000 04:50:13 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tom Lane"
}
] |
[
{
"msg_contents": "> From: Tim Perdue <[email protected]>\n>\n> Here's some fun new problems in Pgsql 7.0.2.\n> \n> My nightly job failed last night because supposedly the tables already\n> existed. If you do a \\d, you can see the tables. If you issue drop\n> table, look at the output below.\n> \n\nUnfortunately, PostgreSQL cannot rollback transactions with\nDDL statements in them. I suspect that what happened was\nthat the underlying file was unlinked, but the entry from pg_class\nwasn't marked deleted, because the backend performing the\nDROP TABLE crashed before the pg_class delete could be\ncommitted.\n\nUnlike eveything everyone else has told you about transactions,\nas of 7.0.2, I wouldn't run DDL statements in them, \nonly DML statements. Rolling back DDL statements properly\nin a MVCC transaction environment is very difficult, as \nyou can imagine. IIRC Oracle cheats, Informix and DEC Rdb\nlock the DDL target until transaction commit, etc. If \nthe PostgreSQL team implements their stated goal in this\narea, it will be far superior to its commercial counterparts.\n\nHope that helps, \n\nMike Mascari\n\n",
"msg_date": "Wed, 6 Sep 2000 01:47:05 -0400",
"msg_from": "\"Mike Mascari\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fragged State in 7.0.2"
},
{
"msg_contents": "\"Mike Mascari\" <[email protected]> writes:\n> Rolling back DDL statements properly\n> in a MVCC transaction environment is very difficult, as \n> you can imagine. IIRC Oracle cheats, Informix and DEC Rdb\n> lock the DDL target until transaction commit, etc. If \n> the PostgreSQL team implements their stated goal in this\n> area, it will be far superior to its commercial counterparts.\n\nAFAIK we intend to keep the current behavior of exclusively\nlocking any table you try to drop or modify. So it'll be\npretty much like the Informix/RDB behavior.\n\nBut yes, at the moment DROP or RENAME inside a transaction is\npretty risky (and 7.0 tells you so, with an annoying NOTICE).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Sep 2000 10:52:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fragged State in 7.0.2 "
},
{
"msg_contents": "It doesn't suprise me that it doesn't work, but I am surprised to get\ninto a half-baked state if an abort does happen for some reason.\n\nNOTICE: Caution: DROP TABLE cannot be rolled back, so don't abort now\n\nProbably what should happen is it should error out if you try to do\nthings like this in a transaction, or autocommit for you no matter what.\nWhatever it takes to not be in a half-baked state.\n\nTim\n\n\nTom Lane wrote:\n> \n> \"Mike Mascari\" <[email protected]> writes:\n> > Rolling back DDL statements properly\n> > in a MVCC transaction environment is very difficult, as\n> > you can imagine. IIRC Oracle cheats, Informix and DEC Rdb\n> > lock the DDL target until transaction commit, etc. If\n> > the PostgreSQL team implements their stated goal in this\n> > area, it will be far superior to its commercial counterparts.\n> \n> AFAIK we intend to keep the current behavior of exclusively\n> locking any table you try to drop or modify. So it'll be\n> pretty much like the Informix/RDB behavior.\n> \n> But yes, at the moment DROP or RENAME inside a transaction is\n> pretty risky (and 7.0 tells you so, with an annoying NOTICE).\n> \n> regards, tom lane\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n",
"msg_date": "Wed, 06 Sep 2000 08:14:42 -0700",
"msg_from": "Tim Perdue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fragged State in 7.0.2"
}
] |
[
{
"msg_contents": "\n> Oh, my mistake, I forgot that the WHERE clause would filter out NULLs.\n> Try\n> SELECT * FROM t1 LEFT JOIN t2 ON (x = y) WHERE y <> 2 OR y IS NULL;\n\n x y\n\n 1 1\n 3 NULL\n 4 4\n\nAndreas\n",
"msg_date": "Wed, 6 Sep 2000 17:02:01 +0200 ",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: A fine point about OUTER JOIN semantics "
}
] |
[
{
"msg_contents": "\n> On Sat, Sep 02, 2000 at 01:39:47PM -0400, Tom Lane wrote:\n> > > So what happens with \"WHERE name like 'Czec%`\" ?\n> > \n> > Our existing code fails because it generates WHERE name >= \n> 'Czec' AND\n> > name < 'Czed'; it will therefore not find names beginning 'Czech'\n> > because those are in another part of the index, between 'Czeh' and\n> > 'Czei'. But WHERE name >= 'Cze' AND name < 'Czf' would work.\n> \n> (OK, I haven't read the previous discussion. Guilty, m'lud)\n> \n> Why should it? If 'ch' is one letter, then surely 'czech' isn't LIKE\n> 'czec%'. Because 'czec%' has a second c, wheres, 'czech' only has one\n> 'c' and one 'ch'?\n\nIndeed an interesting interpretation, but what I guess makes it bogus is\nthat\nwords can exist that have a h after the c that do not represent the ch\ncharacter.\n\nAndreas\n",
"msg_date": "Wed, 6 Sep 2000 17:19:46 +0200 ",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Yet another LIKE-indexing scheme"
},
{
"msg_contents": "On Wed, Sep 06, 2000 at 05:19:46PM +0200, Zeugswetter Andreas SB wrote:\n> \n> > On Sat, Sep 02, 2000 at 01:39:47PM -0400, Tom Lane wrote:\n> > > > So what happens with \"WHERE name like 'Czec%`\" ?\n> > > \n> > > Our existing code fails because it generates WHERE name >= \n> > 'Czec' AND\n> > > name < 'Czed'; it will therefore not find names beginning 'Czech'\n> > > because those are in another part of the index, between 'Czeh' and\n> > > 'Czei'. But WHERE name >= 'Cze' AND name < 'Czf' would work.\n> > \n> > (OK, I haven't read the previous discussion. Guilty, m'lud)\n> > \n> > Why should it? If 'ch' is one letter, then surely 'czech' isn't LIKE\n> > 'czec%'. Because 'czec%' has a second c, wheres, 'czech' only has one\n> > 'c' and one 'ch'?\n> \n> Indeed an interesting interpretation, but what I guess makes it bogus is\n> that\n> words can exist that have a h after the c that do not represent the ch\n> character.\n\nThis is an excellent point.\n\nBut in that case, how is the collating system to cope? How can the\ncomputer know which 'ch's are 'ch's and which are 'c''h's (IYSWIM)?\n\nJules\n",
"msg_date": "Wed, 6 Sep 2000 16:22:18 +0100",
"msg_from": "Jules Bean <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another LIKE-indexing scheme"
}
] |
[
{
"msg_contents": "It seems to have vanished from my source after some updates from the\narchive.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 6 Sep 2000 18:20:22 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Where is ./configure?"
},
{
"msg_contents": "\nin /pgsql itself, instead of /pgsql/src ...\n\n\nOn Wed, 6 Sep 2000, Michael Meskes wrote:\n\n> It seems to have vanished from my source after some updates from the\n> archive.\n> \n> Michael\n> -- \n> Michael Meskes\n> [email protected]\n> Go SF 49ers! Go Rhein Fire!\n> Use Debian GNU/Linux! Use PostgreSQL!\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 6 Sep 2000 20:27:35 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Where is ./configure?"
},
{
"msg_contents": "On Wed, Sep 06, 2000 at 08:27:35PM -0300, The Hermit Hacker wrote:\n> in /pgsql itself, instead of /pgsql/src ...\n\nArgh! Of course. Stupid me.\n\nThanks Marc. That's what happens if you are trying to do some stuff with\njust 5 minutes of spare time.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 7 Sep 2000 09:22:30 -0700",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Where is ./configure?"
}
] |
[
{
"msg_contents": "I am experimenting with some new datatypes that require some\nmetadata to control their action (e.g., like the DATESTYLE does for\ndate/time types). It seems that some new runtime variables make most\nsense for this, though other suggestions are welcome.\n\nIs there any documentation on how to implement runtime variables like\nDATESTYLE? Presumably, I need to implement actions for the SET/SHOW\ncommands and some mechanism for retrieving values within backend code.\nAnything else? How is this accomplished?\n\nThanks for your help.\n\nCheers,\nBrook\n",
"msg_date": "Wed, 6 Sep 2000 12:04:10 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "new runtime variables"
}
] |
[
{
"msg_contents": "> Is there any way to take a look at the code?\n> Is it in the CVS ?\n\nLog manager code are in CVS. Heap redo/undo will be there\nsoon.\n\nVadim\n",
"msg_date": "Wed, 6 Sep 2000 11:24:37 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: WAL"
}
] |
[
{
"msg_contents": "\n* Disallow LOCK on view\n\tchange to\n* -Disallow LOCK on view (Mark H)\n\twell, at least when my patch is applied :)\n\n\n* Allow SQL function indexes\n\tThis seems to work in the CVS code, or I have misunderstood:\n\tCREATE TABLE t ( a int);\n\tCREATE FUNCTION mod5(int) RETURNS int AS 'select $1 % 5' LANGUAGE 'sql';\n\tCREATE INDEX sql_index ON t ( mod5(a) );\n\n\n* Add ALTER TABLE command to change table ownership\n\tDibs on this.\n\n\n-- \nMark Hollomon\[email protected]\n",
"msg_date": "Thu, 7 Sep 2000 10:19:57 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "TODO list updates"
},
{
"msg_contents": "At 10:19 7/09/00 -0400, Mark Hollomon wrote:\n>\n>* Add ALTER TABLE command to change table ownership\n>\tDibs on this.\n\nAny chance of doing more than just tables? pg_dump does many \\connect's to\nset ownership of things appropriately. If there were appropriate ALTER\ncommands, then it's job would be simpler.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 08 Sep 2000 09:28:41 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TODO list updates"
},
{
"msg_contents": "Thanks. TODO updated.\n\n> \n> * Disallow LOCK on view\n> \tchange to\n> * -Disallow LOCK on view (Mark H)\n> \twell, at least when my patch is applied :)\n> \n> \n> * Allow SQL function indexes\n> \tThis seems to work in the CVS code, or I have misunderstood:\n> \tCREATE TABLE t ( a int);\n> \tCREATE FUNCTION mod5(int) RETURNS int AS 'select $1 % 5' LANGUAGE 'sql';\n> \tCREATE INDEX sql_index ON t ( mod5(a) );\n> \n> \n> * Add ALTER TABLE command to change table ownership\n> \tDibs on this.\n> \n> \n> -- \n> Mark Hollomon\n> [email protected]\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 15:49:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TODO list updates"
}
] |
[
{
"msg_contents": "In the CVS as of ~ 13:00 EDT \n\nmake[4]: Entering directory `/home/mhh/src/pgsql.work/src/backend/access/common'\ngcc -c -I../../../../src/include -O2 -g -Wall -Wmissing-prototypes -Wmissing-declarations -MMD heaptuple.c -o heaptuple.o\nIn file included from ../../../../src/include/storage/lmgr.h:18,\n from ../../../../src/include/storage/buf_internals.h:18,\n from ../../../../src/include/storage/bufmgr.h:17,\n from ../../../../src/include/storage/bufpage.h:18,\n from ../../../../src/include/access/htup.h:17,\n from ../../../../src/include/access/heapam.h:18,\n from heaptuple.c:23:\n../../../../src/include/utils/rel.h:22: storage/relfilenode.h: No such file or directory\nIn file included from ../../../../src/include/access/heapam.h:18,\n from heaptuple.c:23:\n../../../../src/include/access/htup.h:18: storage/relfilenode.h: No such file or directory\nmake[4]: *** [heaptuple.o] Error 1 \n-- \nMark Hollomon\[email protected]\n",
"msg_date": "Thu, 7 Sep 2000 13:03:04 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "missing file relfilenode.h"
},
{
"msg_contents": "Mark Hollomon <[email protected]> writes:\n> In the CVS as of ~ 13:00 EDT \n> ../../../../src/include/access/htup.h:18: storage/relfilenode.h: No such file or directory\n\nLooks like Vadim forgot to 'cvs add' a file before his latest commit...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Sep 2000 02:00:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: missing file relfilenode.h "
}
] |
[
{
"msg_contents": "\nI have a database where, periodically, I get a query that is producing\npg_noname files that are >1gig in size ... according to syslog, for that\nprocess:\n\nSep 7 18:36:39 pgsql postgres[47078]: DEBUG: ExecRestrPos: node type 17 not supported\nSep 7 18:36:39 pgsql postgres[47078]: DEBUG: ExecRestrPos: node type 17 not supported\nSep 7 18:36:40 pgsql postgres[47078]: DEBUG: ExecRestrPos: node type 17 not supported\nSep 7 18:36:56 pgsql postgres[47078]: DEBUG: ExecMarkPos: node type 17 not supported\n%\n\nthe query that appears to be causing this, in this particular case, is:\n\nSELECT distinct s.gid, s.created , \n geo_distance(pd.location, '(-97.4382912597586,37.7021126098755)') \n FROM status s, personal_data pd, relationship_wanted rw, \n personal_ethnicity pe, personal_religion pr, personal_bodytype pb,\n personal_smoking ps \n WHERE s.active \n AND s.status != 0 \n AND (s.gid = pd.gid AND pd.gender = 0)\n AND (s.gid = rw.gid AND rw.gender = 1) \n AND geo_distance(pd.location, '(-97.4382912597586,37.7021126098755)') <= 500\nORDER BY geo_distance( pd.location, '(-97.4382912597586,37.7021126098755)'), \n s.created desc;\n\nnow, its a reasonable oft run query, and from a debugging log that I keep,\nit normally takes <1sec to run:\n\n[0.38 secs]: SELECT distinct s.gid, s.created , geo_distance(pd.location, '(-97.4382912597586,37.7021126098755)')\n FROM status s, personal_data pd, relationship_wanted rw\n WHERE s.active AND s.status != 0\n AND (s.gid = pd.gid AND pd.gender = 0)\n AND (s.gid = rw.gid AND rw.gender = 1 )\n AND geo_distance( pd.location, '(-97.4382912597586,37.7021126098755)' ) <= 500\n ORDER BY geo_distance( pd.location, '(-97.4382912597586,37.7021126098755)'), s.created desc;\n\nSo, I'm curious as to why it periodically just hangs ... how do you debug\nsomething like this? :( Its been happening ~once per day, so should be\nreasonably debugging (unless, of course, now that I mention something it\nnever comes back *sigh*) ...\n\nThoughts? \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n",
"msg_date": "Thu, 7 Sep 2000 20:56:10 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "[7.0.2] node type 17 not supported ..."
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> I have a database where, periodically, I get a query that is producing\n> pg_noname files that are >1gig in size ... according to syslog, for that\n> process:\n\n> Sep 7 18:36:39 pgsql postgres[47078]: DEBUG: ExecRestrPos: node type 17 not supported\n> Sep 7 18:36:39 pgsql postgres[47078]: DEBUG: ExecRestrPos: node type 17 not supported\n> Sep 7 18:36:40 pgsql postgres[47078]: DEBUG: ExecRestrPos: node type 17 not supported\n> Sep 7 18:36:56 pgsql postgres[47078]: DEBUG: ExecMarkPos: node type 17 not supported\n> %\n\nThis is the planner bug that I was just alluding to in other email ---\nthe planner is trying to use a nestloop as the inner input to a\nmergejoin, and that doesn't work :-(. But you only see the problem\nif the outer side contains multiple matches to a single inside tuple.\n\nI have a fix for current sources; let me see if I can retrofit it for\n7.0.*.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Sep 2000 20:13:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] node type 17 not supported ... "
},
{
"msg_contents": "On Thu, 7 Sep 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > I have a database where, periodically, I get a query that is producing\n> > pg_noname files that are >1gig in size ... according to syslog, for that\n> > process:\n> \n> > Sep 7 18:36:39 pgsql postgres[47078]: DEBUG: ExecRestrPos: node type 17 not supported\n> > Sep 7 18:36:39 pgsql postgres[47078]: DEBUG: ExecRestrPos: node type 17 not supported\n> > Sep 7 18:36:40 pgsql postgres[47078]: DEBUG: ExecRestrPos: node type 17 not supported\n> > Sep 7 18:36:56 pgsql postgres[47078]: DEBUG: ExecMarkPos: node type 17 not supported\n> > %\n> \n> This is the planner bug that I was just alluding to in other email ---\n> the planner is trying to use a nestloop as the inner input to a\n> mergejoin, and that doesn't work :-(. But you only see the problem if\n> the outer side contains multiple matches to a single inside tuple.\n> \n> I have a fix for current sources; let me see if I can retrofit it for\n> 7.0.*.\n\nthat would be perfect ... if we can get that retrofit'd, I'd be quite\ntempted to put out a 7.0.3 for this, considering that its obviously not an\nisolated incident ;(\n\nThanks ...\n\n",
"msg_date": "Thu, 7 Sep 2000 21:17:05 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [7.0.2] node type 17 not supported ... "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Thu, 7 Sep 2000, Tom Lane wrote:\n>> This is the planner bug that I was just alluding to in other email ---\n>> the planner is trying to use a nestloop as the inner input to a\n>> mergejoin, and that doesn't work :-(. But you only see the problem if\n>> the outer side contains multiple matches to a single inside tuple.\n>> \n>> I have a fix for current sources; let me see if I can retrofit it for\n>> 7.0.*.\n\n> that would be perfect ... if we can get that retrofit'd, I'd be quite\n> tempted to put out a 7.0.3 for this, considering that its obviously not an\n> isolated incident ;(\n\nI have committed a fix into REL7_0 branch. Although it seems to work,\nI don't trust it really far because it depends on heap_markpos() and\nheap_restrpos(), which haven't been used in a long time and are full\nof alarmed-sounding comments. (The equivalent fix in current sources\ndoes not use these routines, but that's because nodeMaterial.c has been\ncompletely rewritten, so back-patching that code doesn't seem like a\nrisk-free choice either.)\n\nI'd suggest running the REL7_0 sources on your machine for awhile before\ndeciding it's safe to call it 7.0.3.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Sep 2000 22:18:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] node type 17 not supported ... "
},
{
"msg_contents": "On Thu, 7 Sep 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > On Thu, 7 Sep 2000, Tom Lane wrote:\n> >> This is the planner bug that I was just alluding to in other email ---\n> >> the planner is trying to use a nestloop as the inner input to a\n> >> mergejoin, and that doesn't work :-(. But you only see the problem if\n> >> the outer side contains multiple matches to a single inside tuple.\n> >> \n> >> I have a fix for current sources; let me see if I can retrofit it for\n> >> 7.0.*.\n> \n> > that would be perfect ... if we can get that retrofit'd, I'd be quite\n> > tempted to put out a 7.0.3 for this, considering that its obviously not an\n> > isolated incident ;(\n> \n> I have committed a fix into REL7_0 branch. Although it seems to work,\n> I don't trust it really far because it depends on heap_markpos() and\n> heap_restrpos(), which haven't been used in a long time and are full\n> of alarmed-sounding comments. (The equivalent fix in current sources\n> does not use these routines, but that's because nodeMaterial.c has been\n> completely rewritten, so back-patching that code doesn't seem like a\n> risk-free choice either.)\n> \n> I'd suggest running the REL7_0 sources on your machine for awhile before\n> deciding it's safe to call it 7.0.3.\n\nOkay, I'm going to upgrade to it on Friday night, most likely, and will\nlet her run for a few days ...\n\nDo you have any thoughts as to what sorts of problems *might*\narise? Like, are we talking database corruption possibilities, or bad\nresults, or ... ? Just want to have an idea of what to try and keep an\neye out for ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 7 Sep 2000 23:41:39 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [7.0.2] node type 17 not supported ... "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Thu, 7 Sep 2000, Tom Lane wrote:\n>> I have committed a fix into REL7_0 branch. Although it seems to work,\n>> I don't trust it really far because it depends on heap_markpos() and\n>> heap_restrpos(), which haven't been used in a long time and are full\n>> of alarmed-sounding comments.\n\n> Do you have any thoughts as to what sorts of problems *might*\n> arise? Like, are we talking database corruption possibilities, or bad\n> results, or ... ? Just want to have an idea of what to try and keep an\n> eye out for ...\n\nI may be overstating the cause for worry. All of the \"alarmed-sounding\ncomments\" appear to date back to the original Postgres95 sources, and\nare probably obsolete. The only thing I really have any concern about\nis whether buffer pin/unpin bookkeeping is correct. If it's not,\nyou'd see an Assert failure from too many unpins (you are running with\n--enable-cassert I hope) or \"Buffer Leak\" notices in the log from too\nmany pins.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Sep 2000 23:02:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] node type 17 not supported ... "
},
{
"msg_contents": "On Thu, 7 Sep 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > On Thu, 7 Sep 2000, Tom Lane wrote:\n> >> I have committed a fix into REL7_0 branch. Although it seems to work,\n> >> I don't trust it really far because it depends on heap_markpos() and\n> >> heap_restrpos(), which haven't been used in a long time and are full\n> >> of alarmed-sounding comments.\n> \n> > Do you have any thoughts as to what sorts of problems *might*\n> > arise? Like, are we talking database corruption possibilities, or bad\n> > results, or ... ? Just want to have an idea of what to try and keep an\n> > eye out for ...\n> \n> I may be overstating the cause for worry. All of the \"alarmed-sounding\n> comments\" appear to date back to the original Postgres95 sources, and\n> are probably obsolete. The only thing I really have any concern about\n> is whether buffer pin/unpin bookkeeping is correct. If it's not,\n> you'd see an Assert failure from too many unpins (you are running with\n> --enable-cassert I hope) or \"Buffer Leak\" notices in the log from too\n> many pins.\n\nHaven't been running it with cassert, but will enable it *nod*\n\nThanks for the backpatch ...:)\n\n\n",
"msg_date": "Fri, 8 Sep 2000 00:06:38 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [7.0.2] node type 17 not supported ... "
},
{
"msg_contents": "Marc - \nIf you're going to consider a point release, should we try to collect up\nany other small patches that would go cleanly into the 7.0 tree? Just for\ninstance, the revised view rule name truncation patch I posted to PATCHES\nthat no one has commented on, yeah or neah. I don't think there are very\nmany of these: it seems to me that most small fixes that could be patched\nto both trees, have been, but I haven't been trying to keep an accurate\ncount. (Hmm, Bruce's has been quiet this week. Is he on vacation?)\n\nRoss\n\nOn Fri, Sep 08, 2000 at 12:06:38AM -0300, The Hermit Hacker wrote:\n> On Thu, 7 Sep 2000, Tom Lane wrote:\n> \n> > The Hermit Hacker <[email protected]> writes:\n> > > On Thu, 7 Sep 2000, Tom Lane wrote:\n> > >> I have committed a fix into REL7_0 branch. Although it seems to work,\n> > >> I don't trust it really far because it depends on heap_markpos() and\n> > >> heap_restrpos(), which haven't been used in a long time and are full\n> > >> of alarmed-sounding comments.\n> > \n> \n> Thanks for the backpatch ...:)\n> \n> \n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Fri, 8 Sep 2000 09:53:12 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [7.0.2] node type 17 not supported ..."
},
{
"msg_contents": "On Fri, 8 Sep 2000, Ross J. Reedstrom wrote:\n\n> Marc - \n\n> If you're going to consider a point release, should we try to collect\n> up any other small patches that would go cleanly into the 7.0 tree?\n> Just for instance, the revised view rule name truncation patch I\n> posted to PATCHES that no one has commented on, yeah or neah. I don't\n> think there are very many of these: it seems to me that most small\n> fixes that could be patched to both trees, have been, but I haven't\n> been trying to keep an accurate count. (Hmm, Bruce's has been quiet\n> this week. Is he on vacation?)\n\nUnless its considered critical to the stable running of the server, like\nthe patch Tom just committed is, it won't go in ... I'm planning on\nrunning this patch through the weekend and watching things, if all goes\nwell by Mon/Tues, I'll put out v7.0.3 ... since the above named patch\nisn't even in the -CURRENT tree yet, I'm leary of slapping it into\nsomething we consider to be stable, no? :)\n\n\n\n > > Ross\n> \n> On Fri, Sep 08, 2000 at 12:06:38AM -0300, The Hermit Hacker wrote:\n> > On Thu, 7 Sep 2000, Tom Lane wrote:\n> > \n> > > The Hermit Hacker <[email protected]> writes:\n> > > > On Thu, 7 Sep 2000, Tom Lane wrote:\n> > > >> I have committed a fix into REL7_0 branch. Although it seems to work,\n> > > >> I don't trust it really far because it depends on heap_markpos() and\n> > > >> heap_restrpos(), which haven't been used in a long time and are full\n> > > >> of alarmed-sounding comments.\n> > > \n> > \n> > Thanks for the backpatch ...:)\n> > \n> > \n> \n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 8 Sep 2000 11:58:54 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [7.0.2] node type 17 not supported ..."
}
] |
[
{
"msg_contents": "Syntax:\n\nALTER TABLE <table> OWNER TO <newowner>\n\nSecurity:\n\nThe owner of a table will be able to change the owner to any other user.\nThe superuser will NOT have special privileges.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Fri, 08 Sep 2000 09:26:12 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proposal : changing table ownership"
},
{
"msg_contents": "\"Mark Hollomon\" <[email protected]> writes:\n> ALTER TABLE <table> OWNER TO <newowner>\n\n> The owner of a table will be able to change the owner to any other user.\n\nDoesn't this create risks parallel to file give-away (chown) in Unix?\nA lot of Unices disallow chown except to the superuser.\n\nTables aren't currently active objects, but we've been talking about\nthings like making trigger functions run \"setuid\" to the table owner.\nIf that happens then table ownership giveaway is a big security hole.\n\n> The superuser will NOT have special privileges.\n\nSay *what* ? That's just silly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Sep 2000 10:43:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal : changing table ownership "
},
{
"msg_contents": "On Fri, 8 Sep 2000, Tom Lane wrote:\n\n> \"Mark Hollomon\" <[email protected]> writes:\n> > ALTER TABLE <table> OWNER TO <newowner>\n> \n> > The owner of a table will be able to change the owner to any other user.\n> \n> Doesn't this create risks parallel to file give-away (chown) in Unix?\n> A lot of Unices disallow chown except to the superuser.\n\nAgreed ...\n\n> Tables aren't currently active objects, but we've been talking about\n> things like making trigger functions run \"setuid\" to the table owner.\n> If that happens then table ownership giveaway is a big security hole.\n> \n> > The superuser will NOT have special privileges.\n> \n> Say *what* ? That's just silly.\n\n*Only* superuser should be able to run the above command ... \n\n",
"msg_date": "Fri, 8 Sep 2000 11:54:30 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal : changing table ownership "
},
{
"msg_contents": "The Hermit Hacker wrote:\n>\n> > \"Mark Hollomon\" <[email protected]> writes:\n> > > ALTER TABLE <table> OWNER TO <newowner>\n>\n> *Only* superuser should be able to run the above command ...\n\n\nFine with me.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Fri, 08 Sep 2000 11:30:37 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proposal : changing table ownership"
}
] |
[
{
"msg_contents": "\nI've got a situation while looking at \nmatch partial fks that I figured I'd ask for\nsuggestions about, in case there's a better\nsolution than the one I've come up with.\n\nI believe for referential actions on match\npartial, we need to lock all referencing rows\nin the fk table as well as all rows that those\nrows reference in the pk table. The thing that\nI'd be worried about is:\n fk table has a row that is a referencing row\n of two rows in the pk table.\n one transaction tries to delete one of them\n while another tries to delete the other.\n each one thinks the fk row is not a unique\n referencing row (since the other row still\n exists in their world)\n\nI would just do a check in an exists clause,\nbut then I don't think I can lock the rows (since\nfor update doesn't seem to exist in subclauses\nand an outside one doesn't seem to affect rows\nin the subclauses either) I'm also worried\nabout potential deadlocks (I'm assuming that\nordering the select for update's output won't\naffect the order the rows are locked in).\n\nSo, I've been thinking I can do something like:\n get all the referencing rows for this table\n for each one,\n get its referenced rows (other than this one)\n if none, do any action necessary\n\nBut this seems less than optimal and I'm reasonably\ncertain I'm just missing something obvious (my brain's\ndecided to take a vacation recently :( ).\n\n",
"msg_date": "Fri, 8 Sep 2000 09:14:24 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Looking for suggestions "
}
] |
[
{
"msg_contents": "Email was fried, so one more time...\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n---------- Forwarded message ----------\nDate: Wed, 6 Sep 2000 16:19:08 +0200 (CEST)\nFrom: Peter Eisentraut <[email protected]>\nTo: PostgreSQL Development <[email protected]>\nSubject: \"setuid\" functions, a solution to the RI privilege problem\n\nWith the code cleanup that is just coming through it is now easily\npossible to change the current user id during a session/connection. Hence\nwe can now attack the issue of \"setuid\" functions, which would also\nprovide a means to circumvent the well-known RI privilege problem.\n\nI haven't looked closely, but I envision it working like this: Add a\nboolean attribute to pg_proc, maybe \"setuid\", but I'd rather avoid that\nname. If it is false then everything happens as usual. If it is true then\nthe function manager saves the uid before the function call, sets the\ncurrent uid to the uid of the function creator, and restores it\nafterwards. It might end up touching only a few dozen lines in fmgr.c.\n\nAs for syntax, we can easily do with the \"CREATE FUNCTION WITH\" mechanism,\nuntil we implement the standard syntax.\n\nWhat this means in particular for the RI triggers is that they would then\nalways execute with the permission of the bootstrap user (usually\n\"postgres\"), which would give them a free ticket. OTOH, that would commit\nus that the \"postgres\" user always has to be a superuser, which should be\nokay I should think.\n\nFor those interested in the standards, I append here a relevant section.\nNote that it actually requires SQL language functions to be \"setuid\" by\ndefault, but I think we can safely ignore that little artifact.\n\n[4.23]\n When the <routine body> of an SQL-invoked routine is executed and\n the new SQL-session context for the SQL-session is created, the\n SQL-session user identifier in the new SQL-session context is set\n to the current user identifier in the SQL-session context that was\n active when the SQL-session caused the execution of the <routine\n body>. The authorization stack of this new SQL-session context is\n initially set to empty and a new pair of identifiers is immediately\n appended to the authorization stack such that:\n \n - The user identifier is the newly initialized SQL-session user\n identifier.\n \n - The role name is the current role name of the SQL-session\n context that was active when the SQL-session caused the\n execution of the <routine\n body>.\n \n The identifiers in this new entry of the authorization stack\n are then modified depending on whether the SQL-invoked routine\n is an SQL routine or an external routine. If the SQL-invoked\n routine is an SQL routine, then, if the routine authorization\n identifier is a user identifier, the user identifier is set to\n the routine authorization identifier and the role name is set to\n null; otherwise, the role name is set to the routine authorization\n and the user identifier is set to null.\n \n If the SQL-invoked routine is an external routine, then the\n identifiers are determined according to the external security\n characteristic of the SQL-invoked routine:\n - If the external security characteristic is DEFINER, then:\n \n o If the routine authorization identifier is a user identifier,\n then the user identifier is set to the routine authorization\n identifier and the role name is set to the null value.\n \n o Otherwise, the role name is set to the routine authorization\n identifier and the user identifier is set to the null value.\n \n - If the external security characteristic is INVOKER, then the\n identifiers remain unchanged.\n \n - If the external security characteristic is IMPLEMENTATION\n DEFINED, then the identifiers are set to implementation-defined\n values.\n\n[11.49]\n <external security clause> ::=\n EXTERNAL SECURITY DEFINER\n | EXTERNAL SECURITY INVOKER\n | EXTERNAL SECURITY IMPLEMENTATION DEFINED\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 8 Sep 2000 19:14:54 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"setuid\" functions, a solution to the RI privilege problem"
},
{
"msg_contents": "On Fri, Sep 08, 2000 at 07:14:54PM +0200, Peter Eisentraut wrote:\n> Date: Wed, 6 Sep 2000 16:19:08 +0200 (CEST)\n> From: Peter Eisentraut <[email protected]>\n> To: PostgreSQL Development <[email protected]>\n> Subject: \"setuid\" functions, a solution to the RI privilege problem\n> \n> With the code cleanup that is just coming through it is now easily\n> possible to change the current user id during a session/connection. Hence\n> we can now attack the issue of \"setuid\" functions, which would also\n> provide a means to circumvent the well-known RI privilege problem.\n> \n\nThis sounds good.\n\n> I haven't looked closely, but I envision it working like this: Add a\n> boolean attribute to pg_proc, maybe \"setuid\", but I'd rather avoid that\n> name. If it is false then everything happens as usual. If it is true then\n> the function manager saves the uid before the function call, sets the\n> current uid to the uid of the function creator, and restores it\n> afterwards. It might end up touching only a few dozen lines in fmgr.c.\n> \n\nGood for functions. Rather than a boolean, how about something to store\nthe three standard defined behaviors DEFINER,INVOKER,IMPLEMENTATION\nDEFINED: \"proauth\" int, with #DEFINES, perhaps? Or, we could store\nthe userid that this procedure will run as, with null signifying\ninvoker. (BTW, that's the first time I've seen 'IMPLEMENTATION DEFINED'\nin a standard leaking out into the defined grammar!)\n\nI have some concerns about views, see below.\n\n> \n> For those interested in the standards, I append here a relevant section.\n> Note that it actually requires SQL language functions to be \"setuid\" by\n> default, but I think we can safely ignore that little artifact.\n> \n\nWell, currently, views access the tables in their FROM clause with the\npriviliges of the creating user, which means 'setuid' by default. As I\nrecently found out, subselects in a view definition do _not_ run as the\ncreating user, however.\n\nI wonder if your approach might also be useful for views? I realize this\nis off topic for your suggestion for functions. And I have a sneaking\nsuspicion that the only fix for VIEWs requires the planner rewrite Tom's\nbeen working on.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Fri, 8 Sep 2000 13:56:46 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"setuid\" functions, a solution to the RI privilege problem"
},
{
"msg_contents": "Ross J. Reedstrom writes:\n\n> This sounds good.\n\nThe sad part of this story is that, while setuid functions work well for\nme in my tree, they cannot be used for the RI problem after all. The\nproblem is that the lookup table for builtin function is already generated\nat compile time (Gen_fmgrtab.sh), whereas we don't know the user id of\ntheir owner until initdb at the earliest. Hence setuid functions don't\nwork for builtins, currently.\n\n(With 7.2 I plan to get rid of pg_shadow.usesysid and identify users via\npg_shadow.oid and the superuser oid will be hard-coded into\ninclude/catalog/pg_shadow.h, so at that point they will work.)\n\nAn alternative answer would be to tie the user id not to the owner of the\nfunction but to the owner of the trigger, as an additional feature.\n\n\n> Good for functions. Rather than a boolean, how about something to store\n> the three standard defined behaviors DEFINER,INVOKER,IMPLEMENTATION\n> DEFINED: \"proauth\" int, with #DEFINES, perhaps? Or, we could store\n> the userid that this procedure will run as, with null signifying\n> invoker.\n\nWell, the standards defines these three behaviours, in terms of its\n\"authorization stack\":\n\n* INVOKER -- nothing changes when the function is called\n\n* DEFINER -- push function owner's identifier on top of stack\n\n* IMPLEMENTATION DEFINED -- put whatever you want on the stack\n\nSince IMPLEMENTATION DEFINED is the default it has to be what we have now,\nwhich in turn is INVOKER. So ISTM that we do not have 3 options really.\n\n> (BTW, that's the first time I've seen 'IMPLEMENTATION DEFINED'\n> in a standard leaking out into the defined grammar!)\n\nI have a sneaking suspicion that this was done because certain vendors\nwith large interests in the SQL specification process had completely\nincompatible behaviours by default that would not fit in with the SQL\nmodel at all, so they could only be described by \"do what you want\". ;-)\n\n> I have some concerns about views, see below.\n\nNo clue about views... :-\\\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 9 Sep 2000 15:58:25 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"setuid\" functions, a solution to the RI privilege problem"
}
] |
[
{
"msg_contents": "I just read across the code in command/trigger.c:ExecCallTriggerFunc() and\napparently the trigger function is called unconditionally even if the\n\"strict\" flag is set. Perhaps this should be amended somewhere.\n\nFor coding clarity and convenience I'd suggest that we add another\nfunction as a wrapper around FunctionCallInvoke() which does the right\nthing with \"strict\". We could call that FunctionCallInvoke(), and call the\ncurrent version FunctionCallInvokeNoNulls() or some such.\n\nBtw., FunctionCallInvoke() would look to be the most prominent place to\nhook in the \"setuid\" feature. For that purpose I'd make the macro an\ninline function instead.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 8 Sep 2000 19:17:28 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trigger functions don't obey \"strict\" setting?"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I just read across the code in command/trigger.c:ExecCallTriggerFunc() and\n> apparently the trigger function is called unconditionally even if the\n> \"strict\" flag is set. Perhaps this should be amended somewhere.\n\nSince triggers take no parameters, I'm not sure this is wrong. But if\nit is wrong, then the code at fault is ExecCallTriggerFunc, not\nFunctionCallInvoke. FunctionCallInvoke is just for *invoking*, not\nfor deciding whether to invoke.\n\n> Btw., FunctionCallInvoke() would look to be the most prominent place to\n> hook in the \"setuid\" feature. For that purpose I'd make the macro an\n> inline function instead.\n\nUgh. The performance cost would be excessive. Instead, when fmgr is\nsetting up to call a setuid function, have it insert an extra level of\nfunction handler that does the save/setup/restore of current UID.\nThat way, the cost is zero when you're not using the feature (which is\nnearly always).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Sep 2000 18:22:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger functions don't obey \"strict\" setting? "
},
{
"msg_contents": "Tom Lane writes:\n\n> > Btw., FunctionCallInvoke() would look to be the most prominent place to\n> > hook in the \"setuid\" feature. For that purpose I'd make the macro an\n> > inline function instead.\n> \n> Ugh. The performance cost would be excessive.\n\nIn the path of a \"normal\" function call is only one extra `if (bool)'\nstatement. There are certainly more \"excessive\" performance problems than\nthat, no?\n\n> Instead, when fmgr is setting up to call a setuid function, have it\n> insert an extra level of function handler that does the\n> save/setup/restore of current UID.\n\nI don't quite understand. Do you mean like a PL function handler? But then\nthis thing wouldn't work for external PL's unless we either have a setuid\nversion of each or have nested handlers.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 9 Sep 2000 19:43:59 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Where to stick function setuid"
},
{
"msg_contents": "Where were we on this? Yes/No/Maybe?\n\nPeter Eisentraut writes:\n\n> Tom Lane writes:\n> \n> > > Btw., FunctionCallInvoke() would look to be the most prominent place to\n> > > hook in the \"setuid\" feature. For that purpose I'd make the macro an\n> > > inline function instead.\n> > \n> > Ugh. The performance cost would be excessive.\n> \n> In the path of a \"normal\" function call is only one extra `if (bool)'\n> statement. There are certainly more \"excessive\" performance problems than\n> that, no?\n> \n> > Instead, when fmgr is setting up to call a setuid function, have it\n> > insert an extra level of function handler that does the\n> > save/setup/restore of current UID.\n> \n> I don't quite understand. Do you mean like a PL function handler? But then\n> this thing wouldn't work for external PL's unless we either have a setuid\n> version of each or have nested handlers.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 17 Sep 2000 15:11:17 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Where to stick function setuid"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Where were we on this? Yes/No/Maybe?\n>> \n>>>> Instead, when fmgr is setting up to call a setuid function, have it\n>>>> insert an extra level of function handler that does the\n>>>> save/setup/restore of current UID.\n>> \n>> I don't quite understand. Do you mean like a PL function handler? But then\n>> this thing wouldn't work for external PL's unless we either have a setuid\n>> version of each or have nested handlers.\n\nSorry, I forgot to reply. Nested handlers were exactly what I was\nadvocating. Or more accurately, *a* nested handler; you'd only need\none regardless of the target function's language. I'm envisioning\nthat it'd have fn_extra pointing at a block that contains the UID to\nbe used for the call and the FmgrInfo for the underlying function.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Sep 2000 13:52:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Where to stick function setuid "
}
] |
[
{
"msg_contents": "Targets of the form\n\ndepend dep:\n $(CC) -MM $(CFLAGS) *.c >depend\n\nare wrong, there should be at least a $(CPPFLAGS) in there. In fact, the\nwhole CPPFLAGS vs CFLAGS issue is completely messed up, in case you ever\nwondered why you get duplicate `-I' options on the compile line.\n\nNow the variable naming issue I want to fix, but I wonder whether it's\nworth fixing the `depend' targets. After all, they have been replaced by\nsomething better now. To be clear: fixing the variable naming without\nfixing the depend targets would break the depend targets, it's just a\ndecision about the amount of work.\n\nAny comments?\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 8 Sep 2000 19:19:21 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "CFLAGS vs CPPFLAGS, or The future of `make depend'"
}
] |
[
{
"msg_contents": "Is there a compelling reason to continue to ship a \"pre-7\" version of the\nODBC catalog extension in contrib/odbc?\n\nIf not, could we not instead move the odbc.sql file into interfaces/odbc\nand install it with the odbc driver? (\"install\" here refers to installing\ninto the file system, not into the database system) That would also make\nthings easier for users of binary packages.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 8 Sep 2000 19:20:04 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"Pre-7\" odbc extension files"
},
{
"msg_contents": "Okay, so this will happen then...\n\nPeter Eisentraut writes:\n\n> Is there a compelling reason to continue to ship a \"pre-7\" version of the\n> ODBC catalog extension in contrib/odbc?\n> \n> If not, could we not instead move the odbc.sql file into interfaces/odbc\n> and install it with the odbc driver? (\"install\" here refers to installing\n> into the file system, not into the database system) That would also make\n> things easier for users of binary packages.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 17 Sep 2000 22:38:24 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"Pre-7\" odbc extension files"
},
{
"msg_contents": "> Is there a compelling reason to continue to ship a \"pre-7\" version of the\n> ODBC catalog extension in contrib/odbc?\n\nSounds OK. This was desirable for the 7.0 release (to allow a pre-7.0\nbackend to work with newer clients via ODBC), but the backward\ncompatibility issue should be less now that folks have had time to do a\ntransition.\n\n> If not, could we not instead move the odbc.sql file into interfaces/odbc\n> and install it with the odbc driver? (\"install\" here refers to installing\n> into the file system, not into the database system) That would also make\n> things easier for users of binary packages.\n\nOr should we think about how to make a full-up \"package\", which can be\ninstalled, uninstalled, etc etc. A problem with burying it down into the\nmain odbc area is that it may be unclear that it is an optional addition\nto the ODBC capabilities that needs to be installed into template1 or\ninto a specific database.\n\n - Thomas\n",
"msg_date": "Fri, 22 Sep 2000 14:01:28 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"Pre-7\" odbc extension files"
}
] |
[
{
"msg_contents": "A few weeks ago there was a discussion on how views should be\ntreated during dump/reload, and whether or not to change pg_dump\nto use CREATE VIEW instead of the current CREATE TABLE/CREATE\nRULE sequence. I suggested at that time that the dependencies\nissue could be resolved if PostgreSQL would allow for the\ncreation of function prototypes in conjunction with ALTER\nFUNCTION to supply the body of the function later.\n\nAlthough I don't see it on the TODO list, one of the FAQs on\nGeneral is the problem related to triggers or rules which are\nbased upon functions. If a user executes a DROP/CREATE sequence\nto change the function's body, those triggers or rules based upon\nthe function break, since the OID has changed.\n\nThe solution tossed around is to create a SQL command such as\n\"ALTER FUNCTION\" or \"CREATE OR REPLACE FUNCTION\", etc. Under the\nassumption that such a command were to exist, its sole purpose\nwould be to change the function implementation without changing\nthe OID. \n\nNow back to pg_dump. Since the temporary solution to eliminating\ndependency problems is to dump in OID order, with the current\ncode, things won't break. But with an ALTER FUNCTION, dumping in\nOID order could very well break the schema:\n\nCREATE TABLE employees (key int4);\n\nCREATE FUNCTION numpeople() RETURNS int4 AS\n 'SELECT COUNT(*) FROM employees' LANGUAGE 'plpgsql';\n\nCREATE VIEW AS SELECT numpeople();\n\nCREATE TABLE managers (key int4);\n\nALTER FUNCTION numpeople() AS \n 'SELECT COUNT(*) FROM managers' LANGUAGE 'plpgsql';\n\nSo what to do?\n\n1) Don't create an ALTER FUNCTION command?\n2) Change pg_dump to walk through dependencies?\n3) Or devise a manner by which pg_dump can dump objects in a\ncertain sequence such that dependencies never break?\n\nAny comments?\n\nMike Mascari\n\nP.S. The reason why I'm asking is I thought I could at least\ncontribute some trivial SQL commands such as a CREATE FUNCTION\n... 
AS NULL/ALTER FUNCTION, but it seems there are many\nconsequences to everything.\n",
"msg_date": "Fri, 08 Sep 2000 20:48:25 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "I remember why I suggested CREATE FUNCTION...AS NULL"
},
{
"msg_contents": "\nOn Fri, 8 Sep 2000, Mike Mascari wrote:\n> The solution tossed around is to create a SQL command such as\n> \"ALTER FUNCTION\" or \"CREATE OR REPLACE FUNCTION\", etc. Under the\n> assumption that such a command were to exist, its sole purpose\n> would be to change the function implementation without changing\n> the OID. \n> \n> Now back to pg_dump. Since the temporary solution to eliminating\n> dependency problems is to dump in OID order, with the current\n> code, things won't break. But with an ALTER FUNCTION, dumping in\n> OID order could very well break the schema:\n> \n> CREATE TABLE employees (key int4);\n> \n> CREATE FUNCTION numpeople() RETURNS int4 AS\n> 'SELECT COUNT(*) FROM employees' LANGUAGE 'plpgsql';\n> \n> CREATE VIEW AS SELECT numpeople();\n> \n> CREATE TABLE managers (key int4);\n> \n> ALTER FUNCTION numpeople() AS \n> 'SELECT COUNT(*) FROM managers' LANGUAGE 'plpgsql';\n\nActually, I think this would still work, because it\ndoesn't check the table name existance until it's \nused for the first time, not when it's created.\n\n> So what to do?\n> \n> 1) Don't create an ALTER FUNCTION command?\n> 2) Change pg_dump to walk through dependencies?\n> 3) Or devise a manner by which pg_dump can dump objects in a\n> certain sequence such that dependencies never break?\n\nWell, I was discussing something related a few weeks (?) ago\nfor constraints, some kind of system that kept track of\nwhat objects were dependent on what other objects. Unfortunately,\nit's not possible to do it completely without an awful lot\nof work because you could have arguments to functions that\nturned into objects to be dependent to.\n\nYou could limit the effect by defining the functions as NULL\n(if you were to do that) in oid order and changing their code\nlater in the dump. That way all of the objects are there\nbut may not be usable at that time.\n\n\n",
"msg_date": "Fri, 8 Sep 2000 18:21:48 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: I remember why I suggested CREATE FUNCTION...AS NULL"
},
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n> Now back to pg_dump. Since the temporary solution to eliminating\n> dependency problems is to dump in OID order, with the current\n> code, things won't break. But with an ALTER FUNCTION, dumping in\n> OID order could very well break the schema:\n\nYes, that's been understood all along to be the weak spot of dumping\nin OID order. But you can already break dump-in-OID-order with\nexisting commands like ALTER TABLE ADD CONSTRAINT. I think that the\nadvantages of ALTER FUNCTION are well worth the slightly increased risk\nof dump/reload difficulties.\n\n> 2) Change pg_dump to walk through dependencies?\n\nThe trouble with that is that dependency analysis is a monstrous job,\nand one that would make pg_dump even more fragile and backend-version-\ndependent than it is now. Besides, with ALTER it is possible to create\n*circular* dependencies, so even after you'd done the work you'd still\nnot have a bulletproof solution, just a 99.9% solution instead of a 99%\nsolution. (Exact numbers open to debate, obviously, but you see my\npoint.)\n\n> 3) Or devise a manner by which pg_dump can dump objects in a\n> certain sequence such that dependencies never break?\n\nIf you've got one, step right up to the plate and swing away ;-).\nYou might be on the right track with the notion of creating functions\nwith dummy bodies and then doing ALTER later. I haven't thought it\nthrough in detail, but maybe something based on that attack could work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Sep 2000 21:43:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: I remember why I suggested CREATE FUNCTION...AS NULL "
},
{
"msg_contents": "Stephan Szabo <[email protected]> writes:\n>> ALTER FUNCTION numpeople() AS \n>> 'SELECT COUNT(*) FROM managers' LANGUAGE 'plpgsql';\n\n> Actually, I think this would still work, because it\n> doesn't check the table name existance until it's \n> used for the first time, not when it's created.\n\nBut SQL-language function bodies are checked at entry, not only\nat first use. (I consider it a deficiency of plpgsql that it\ndoesn't do likewise.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Sep 2000 21:48:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: I remember why I suggested CREATE FUNCTION...AS NULL "
}
] |
[
{
"msg_contents": "> >> ALTER FUNCTION numpeople() AS \n> >> 'SELECT COUNT(*) FROM managers' LANGUAGE 'plpgsql';\n> \n > > Actually, I think this would still work, because it\n> > doesn't check the table name existance until it's \n> > used for the first time, not when it's created.\n> \n> But SQL-language function bodies are checked at entry, not only\n> at first use. (I consider it a deficiency of plpgsql that it\n> doesn't do likewise.)\n\nYeah (didn't think about SQL ones). Although the deferred \ncreation with like a nulled out prosrc would probably solve \nthese problems as long as you never wanted to actually \ncall the function until the restore was done, so we wouldn't\nwant any default values, rules or triggers that might use them\nto be activated.\n\n\n",
"msg_date": "Fri, 8 Sep 2000 19:02:42 -0700",
"msg_from": "\"Stephan Szabo\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fw: I remember why I suggested CREATE FUNCTION...AS NULL "
}
] |
[
{
"msg_contents": "The last sources cannot be complied because the file src/include/storage/relfilenode.h is absent\n\n",
"msg_date": "Sat, 9 Sep 2000 11:22:56 +0800",
"msg_from": "Alexey Raschepkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sources from the CVS cannot be compiled"
}
] |
[
{
"msg_contents": "I've searched Mosix, Dipc, Postgres mailing-lists and used google about the \npossibility to cluster ,load balance, Postgresql databases.\n\nMosix isn't good to clucter Postgresql because of shared memory.\n\nCan be Dipc (http://wallybox.cei.net/dipc/) suitable for this task without \nchanging postgres' sources (probably only cpu balancing, no data)?\n\n\nIf PostgreSQL Inc. will do a replication server, will be possible?\n\nAnd Mariposa (http://mariposa.CS.Berkeley.EDU/download.html) ?\n\nMy question isn't an academic one, i (probably WE) really need this feature.\n\nthank you in advance for you reply.\n\nvalter\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\nShare information about yourself, create your own public profile at \nhttp://profiles.msn.com.\n\n",
"msg_date": "Sat, 09 Sep 2000 15:34:52 CEST",
"msg_from": "\"Valter Mazzola\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Scalability, Clustering"
},
{
"msg_contents": "\nOn Sat, 9 Sep 2000, Valter Mazzola wrote:\n\n> If PostgreSQL Inc. will do a replication server, will be possible?\n> \n> And Mariposa (http://mariposa.CS.Berkeley.EDU/download.html) ?\n> \n> My question isn't an academic one, i (probably WE) really need this feature.\n> \n> thank you in advance for you reply.\n\n Depends on what kind of scalability you need. Replication does not\nusually equal scalability.\n\n I know that someone was working on a commercial extension to PostgreSQL\nto add clustering based on a shared disk system. Basically he was added a\nraw storage manager to PostgreSQL plus a lock manager to co-oridinate\naccess to the shared disk. That way the two nodes could co-ordinate\naccess to the shared disk. This is very similar to Oracle Parallel\nServer.\n\n Replication is a different beast.\n\nTom\n\n",
"msg_date": "Sat, 9 Sep 2000 13:25:22 -0700 (PDT)",
"msg_from": "Tom Samplonius <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scalability, Clustering"
}
] |
[
{
"msg_contents": "Seems I found the problem with the 'select pg_class from pg_class'\nerror. In fact this error arises not only with the pg_class table but\nwith any select statement where one of the select attributes is the\nsame as the table name and doesn't exist in the table.\n\nThe problem is in the transformIdent() function. The ident is first\nchecked whether it is a table and the check succeeds. After that\naccording to the precedence the check for an attribute with the name\nident->name occurs and fails. As a result the function returns ident\nwith isRel set to true and that confuses the engine afterwards.\n\nThe obvious proposal to eliminate the bug is to check the precedence\nfirst and if it is EXPR_COLUMN_FIRST not to try to check if the ident\nis a relation. I'm not an experienced PostgreSQL hacker though so I\ncannot predict the implications of such change (I did run the regress test\nwith the relation check commented out whatsoever and the results were\nthe same as for the unchanged engine). Perhaps someone will\nhave some comments on this issue?\n\n",
"msg_date": "Sun, 10 Sep 2000 00:03:30 +0800",
"msg_from": "Alexey Raschepkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "'select pg_class from pg_class' error"
}
] |
[
{
"msg_contents": "I found out during a pgdump/restore that not all tables were being\nbacked up. After some investigation I suspect that this was because the\ntables in question were not owned by anybody! I don't know how this\nhappened perhaps a user that owned those tabled was dropped. Another odd\nthing was that some of the tables had read permission for a user named\n99. There was no such user created that I know of (of course one of the\nother admins may have fiddled with it I don't know, nobody has taken\ncredit for it).\n\nIt seems to me the deletion of a user does not cascade to permissions\nand ownerships. I suggest that either dropping user ought to be\ndisalloved at this point or the tables revert to being owned by the\npostgres user.\n\nI am looking forward to being able to change owners in a more\nstraightforward matter.\n\nThanks for listening.\n\n",
"msg_date": "Sat, 09 Sep 2000 13:54:45 -0700",
"msg_from": "malcontent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd Happening."
}
] |
[
{
"msg_contents": ">> In the CVS as of ~ 13:00 EDT \n>> ../../../../src/include/access/htup.h:18:\n> storage/relfilenode.h: No such file or directory\n>\n> Looks like Vadim forgot to 'cvs add' a file before\n> his latest commit...\n\nThanks, fixed. BTW, what's the status of new table\nfilenaming?\n\nVadim\n\n-----------------------------------------------\nFREE! The World's Best Email Address @email.com\nReserve your name now at http://www.email.com\n\n\n",
"msg_date": "Sat, 9 Sep 2000 20:09:03 -0400 (EDT)",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: missing file relfilenode.h"
}
] |
[
{
"msg_contents": "It seems PL/pgSQL accepts only ASCII identifiers. This results in\ncolumn names of Europian or Asian languages are syntax errors for\nexample. Fix for this looks simple (see attached patches) but I would\nlike to know if there's any intentional reasons for this.\n--\nTatsuo Ishii\n\n*** scan.l~\tThu May 27 05:55:06 1999\n--- scan.l\tThu Sep 7 19:25:36 2000\n***************\n*** 48,55 ****\n #define YY_INPUT(buf,res,max)\tplpgsql_input(buf, &res, max)\n %}\n \n! WS\t[[:alpha:]_\"]\n! WC\t[[:alnum:]_\"]\n \n %x\tIN_STRING IN_COMMENT\n \n--- 48,55 ----\n #define YY_INPUT(buf,res,max)\tplpgsql_input(buf, &res, max)\n %}\n \n! WS\t[\\200-\\377_A-Za-z\"]\n! WC\t[\\200-\\377_A-Za-z0-9\"]\n \n %x\tIN_STRING IN_COMMENT\n \n",
"msg_date": "Sun, 10 Sep 2000 09:45:56 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "PL/pgSQL does not accept none ASCII identifiers"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> It seems PL/pgSQL accepts only ASCII identifiers. This results in\n> column names of Europian or Asian languages are syntax errors for\n> example. Fix for this looks simple (see attached patches) but I would\n> like to know if there's any intentional reasons for this.\n\n No other reason than that I just forgot about non-ascii\n identifiers when I coded that (long long ago). Go ahead and\n change it.\n\n\nJan\n\n> --\n> Tatsuo Ishii\n>\n> *** scan.l~ Thu May 27 05:55:06 1999\n> --- scan.l Thu Sep 7 19:25:36 2000\n> ***************\n> *** 48,55 ****\n> #define YY_INPUT(buf,res,max) plpgsql_input(buf, &res, max)\n> %}\n>\n> ! WS [[:alpha:]_\"]\n> ! WC [[:alnum:]_\"]\n>\n> %x IN_STRING IN_COMMENT\n>\n> --- 48,55 ----\n> #define YY_INPUT(buf,res,max) plpgsql_input(buf, &res, max)\n> %}\n>\n> ! WS [\\200-\\377_A-Za-z\"]\n> ! WC [\\200-\\377_A-Za-z0-9\"]\n>\n> %x IN_STRING IN_COMMENT\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Wed, 13 Sep 2000 02:42:54 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL does not accept none ASCII identifiers"
},
{
"msg_contents": "> > It seems PL/pgSQL accepts only ASCII identifiers. This results in\n> > column names of Europian or Asian languages are syntax errors for\n> > example. Fix for this looks simple (see attached patches) but I would\n> > like to know if there's any intentional reasons for this.\n> \n> No other reason than that I just forgot about non-ascii\n> identifiers when I coded that (long long ago). Go ahead and\n> change it.\n> \n> \n> Jan\n\nOk, I'll go ahead and commit the changes into the current and stable\ntree.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 13 Sep 2000 19:41:04 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL does not accept none ASCII identifiers"
}
] |
[
{
"msg_contents": "\nsomeone notice anything wrong with this query? :) *slap forehead*\n\nexplain\n SELECT distinct s.gid, s.created , geo_distance(pd.location, '(-90.3690233918754,38.7788148984854)')\n FROM status s, personal_data pd, relationship_wanted rw , personal_ethnicity pe , personal_religion pr , personal_bodytype pb\n WHERE s.active AND s.status != 0\n AND (s.gid = pd.gid AND pd.gender = 0)\n AND (s.gid = rw.gid AND rw.gender = 1)\n AND geo_distance( pd.location, '(-90.3690233918754,38.7788148984854)' ) <= 75\nORDER BY geo_distance( pd.location, '(-90.3690233918754,38.7788148984854)'), s.created desc;\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 10 Sep 2000 19:38:52 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "man, I feel like a beginner ..."
},
{
"msg_contents": "The Hermit Hacker wrote:\n>\n> someone notice anything wrong with this query? :) *slap forehead*\n>\n> explain\n> SELECT distinct s.gid, s.created , geo_distance(pd.location, '(-90.3690233918754,38.7788148984854)')\n> FROM status s, personal_data pd, relationship_wanted rw , personal_ethnicity pe , personal_religion pr , personal_bodytype pb\n> WHERE s.active AND s.status != 0\n> AND (s.gid = pd.gid AND pd.gender = 0)\n> AND (s.gid = rw.gid AND rw.gender = 1)\n> AND geo_distance( pd.location, '(-90.3690233918754,38.7788148984854)' ) <= 75\n> ORDER BY geo_distance( pd.location, '(-90.3690233918754,38.7788148984854)'), s.created desc;\n\n What's the purpose of joining it with \"pb\"?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n",
"msg_date": "Wed, 13 Sep 2000 02:58:02 -0500 (EST)",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: man, I feel like a beginner ..."
},
{
"msg_contents": "On Wed, 13 Sep 2000, Jan Wieck wrote:\n\n> The Hermit Hacker wrote:\n> >\n> > someone notice anything wrong with this query? :) *slap forehead*\n> >\n> > explain\n> > SELECT distinct s.gid, s.created , geo_distance(pd.location, '(-90.3690233918754,38.7788148984854)')\n> > FROM status s, personal_data pd, relationship_wanted rw , personal_ethnicity pe , personal_religion pr , personal_bodytype pb\n> > WHERE s.active AND s.status != 0\n> > AND (s.gid = pd.gid AND pd.gender = 0)\n> > AND (s.gid = rw.gid AND rw.gender = 1)\n> > AND geo_distance( pd.location, '(-90.3690233918754,38.7788148984854)' ) <= 75\n> > ORDER BY geo_distance( pd.location, '(-90.3690233918754,38.7788148984854)'), s.created desc;\n> \n> What's the purpose of joining it with \"pb\"?\n\nif the proper clause was in place, ooddles of purpose ... it wasn't until\nafter I upgraded to the newest code that Tom put the fix in for, and it\nwas *still* causing problems, that I clued into the fact that the AND\nclause that was supposed to be associated with 'pb' *wasn't* there ...\n\nFor the whole time we were debugging this, none of us clued into it :)\n\n",
"msg_date": "Wed, 13 Sep 2000 10:31:13 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: man, I feel like a beginner ..."
}
] |
[
{
"msg_contents": "\n Row versioning in the ODBC driver apparently does not work because of a\nmissing operator. The FAQ solution is contained below. The answer\nsuggests that this problem should be fixed in PostgreSQL 6.4, but I'm\nusing 7.0.2 and I still get the missing operator error if I attempt to\nenable row versioning.\n\n Is it possible to get this operator included in the next release?\n\n\nHow do I use the row versioning -OR- why do I get a message about no\noperator for xid and int4? \n\nSome of the operators are missing in the current release of Postgres so in\norder to use row versioning, you must overload the int4eq function for use\nwith the xid type. Also, you need to create an operator to compare xid to\nint4. You must do this for each database you want to use this feature on.\nThis will probably not be necessary in Postgres 6.4 since it will be\nadded. Here are the details: \ncreate function int4eq(xid,int4) \n returns bool \n as '' \n language 'internal'; \n\ncreate operator = ( \n leftarg=xid, \n rightarg=int4, \n procedure=int4eq, \n commutator='=', \n negator='<>', \n restrict=eqsel, \n join=eqjoinsel \n);\n\n\n\n\n",
"msg_date": "Sun, 10 Sep 2000 19:27:01 -0700 (PDT)",
"msg_from": "Tom Samplonius <[email protected]>",
"msg_from_op": true,
"msg_subject": "Row versioning in the ODBC driver..."
}
] |
[
{
"msg_contents": "Hi,\n\nI'm going to Bulgaria this week to setup FreeBSD server running\npostgres and would like to know if somebody has an experience\nwith postgres and bulgarian locale. Actually, I need\nbg_BG locale for FreeBSD. interesting that searching for\nsubject in internet doesn't provide any information.\nThe only thing I found is bg_BG locale for Linux (Redhat)\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 11 Sep 2000 10:40:31 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "need asap: bg_BG locale for FreeBSD"
},
{
"msg_contents": "Oleg Bartunov wrote:\n> \n> Hi,\n> \n> I'm going to Bulgaria this week to setup FreeBSD server running\n> postgres and would like to know if somebody has an experience\n> with postgres and bulgarian locale. Actually, I need\n> bg_BG locale for FreeBSD. interesting that searching for\n> subject in internet doesn't provide any information.\n> The only thing I found is bg_BG locale for Linux (Redhat)\n\nI created a Swedish LC_COLLATE locale part for FreeBSD a while\nback, with the specific purpose of using it with PostgreSQL,\nand it was not very hard.\n\nFor sorting stuff:\nCheck /usr/src/share/colldef (the sorting algoritm is defined\nby LC_COLLATE). I don't anything about bulgarian, though. Is it\nusing cyrillic characters? Try using one of the russion locales\nto start with.\n\nFor character representation (LC_CTYPE), I have no experience,\nbut it should also be fairly easy. Maybe Bulgarian uses a\nsimilar character set to for example Russian? You will only\nneed a character locale for each Check /usr/src/share/mklocale\n\nFor a complete locale, you would also need timedef\n(/usr/src/share/timedef).\n\ncolldef(1) is man page to check for sorting (collate)...\n\nmklocale(1) is for locale creation.\n\nGood luck!\nPalle\n",
"msg_date": "Mon, 11 Sep 2000 15:51:23 +0200",
"msg_from": "Palle Girgensohn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: need asap: bg_BG locale for FreeBSD"
},
{
"msg_contents": "Palle,\n\nthanks for the message. I've created bulgarian locale\nfrom similar russian sources (I believe it's CP1251 charset).\nHope it would be ok.\n\n\tRegards,\n\t\tOleg\n\nOn Mon, 11 Sep 2000, Palle Girgensohn wrote:\n\n> Date: Mon, 11 Sep 2000 15:51:23 +0200\n> From: Palle Girgensohn <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected], [email protected]\n> Subject: Re: [SQL] need asap: bg_BG locale for FreeBSD\n> \n> Oleg Bartunov wrote:\n> > \n> > Hi,\n> > \n> > I'm going to Bulgaria this week to setup FreeBSD server running\n> > postgres and would like to know if somebody has an experience\n> > with postgres and bulgarian locale. Actually, I need\n> > bg_BG locale for FreeBSD. interesting that searching for\n> > subject in internet doesn't provide any information.\n> > The only thing I found is bg_BG locale for Linux (Redhat)\n> \n> I created a Swedish LC_COLLATE locale part for FreeBSD a while\n> back, with the specific purpose of using it with PostgreSQL,\n> and it was not very hard.\n> \n> For sorting stuff:\n> Check /usr/src/share/colldef (the sorting algoritm is defined\n> by LC_COLLATE). I don't anything about bulgarian, though. Is it\n> using cyrillic characters? Try using one of the russion locales\n> to start with.\n> \n> For character representation (LC_CTYPE), I have no experience,\n> but it should also be fairly easy. Maybe Bulgarian uses a\n> similar character set to for example Russian? 
You will only\n> need a character locale for each Check /usr/src/share/mklocale\n> \n> For a complete locale, you would also need timedef\n> (/usr/src/share/timedef).\n> \n> colldef(1) is man page to check for sorting (collate)...\n> \n> mklocale(1) is for locale creation.\n> \n> Good luck!\n> Palle\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 11 Sep 2000 17:32:03 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: need asap: bg_BG locale for FreeBSD"
}
] |
[
{
"msg_contents": "\n> I know that someone was working on a commercial extension to PostgreSQL\n> to add clustering based on a shared disk system. Basically he was adding a\n> raw storage manager to PostgreSQL plus a lock manager to co-ordinate\n> access to the shared disk. That way the two nodes could co-ordinate\n> access to the shared disk. This is very similar to Oracle Parallel\nServer.\n\nThis is sad. Good cluster DB design is based on a shared-nothing architecture\nand \"function shipping\". OPS is known to have a bad and antiquated architecture\nthat only works well with extremely well-thought-out application design.\n\nAndreas\n",
"msg_date": "Mon, 11 Sep 2000 10:41:50 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Scalability, Clustering"
}
] |
[
{
"msg_contents": "\nWhat am I doing wrong ?\n\n[hannu@me cvs_downloads]$ cvs -d\n:pserver:[email protected]:/usr/local/cvsroot login\n(Logging in to [email protected])\nCVS password: \n\n## here I enter password: postgresql\n\ncvs [login aborted]: authorization failed: server postgresql.org\nrejected access\n\n\n------------\nHannu\n",
"msg_date": "Mon, 11 Sep 2000 11:48:13 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "I'm unable to access CVS"
},
{
"msg_contents": "Karel Zak wrote:\n> \n> On Mon, 11 Sep 2000, Hannu Krosing wrote:\n> \n> >\n> > What am I doing wrong ?\n> >\n> > [hannu@me cvs_downloads]$ cvs -d\n> > :pserver:[email protected]:/usr/local/cvsroot login\n> \n> Sure, right path is:\n> \n> postgresql.org:/home/projects/pgsql/cvsroot\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> > (Logging in to [email protected])\n> > CVS password:\n> >\n> > ## here I enter password: postgresql\n> \n> rather 'postgres'\n> \n> Karel\n\n\nThanks!\n\n\nCould someone fix it in docs ?\n\nhttp://www.postgresql.org/users-lounge/docs/7.0/programmer/cvs8365.htm\n",
"msg_date": "Mon, 11 Sep 2000 12:54:47 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] I'm unable to access CVS"
},
{
"msg_contents": "\nOn Mon, 11 Sep 2000, Hannu Krosing wrote:\n\n> \n> What am I doing wrong ?\n> \n> [hannu@me cvs_downloads]$ cvs -d\n> :pserver:[email protected]:/usr/local/cvsroot login\n\n Sure, right path is:\n\npostgresql.org:/home/projects/pgsql/cvsroot\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n> (Logging in to [email protected])\n> CVS password: \n> \n> ## here I enter password: postgresql\n\n rather 'postgres'\n\n\tKarel\n\n",
"msg_date": "Mon, 11 Sep 2000 12:00:23 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: I'm unable to access CVS"
}
] |
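The two corrections in this thread combine into a single working transcript. This is an illustrative sketch only: the repository path and the 'postgres' password come from Karel's reply, while the `checkout pgsql` step is an assumed follow-up not shown in the thread.

```shell
# Log in with the corrected repository path (password: postgres)
cvs -d :pserver:[email protected]:/home/projects/pgsql/cvsroot login
# Then check out the source tree
cvs -d :pserver:[email protected]:/home/projects/pgsql/cvsroot checkout pgsql
```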
[
{
"msg_contents": "\n> someone notice anything wrong with this query? :) *slap forehead*\n> \n> explain\n> SELECT distinct s.gid, s.created , \n> geo_distance(pd.location, '(-90.3690233918754,38.7788148984854)')\n> FROM status s, personal_data pd, relationship_wanted rw , \n> personal_ethnicity pe , personal_religion pr , personal_bodytype pb\n> WHERE s.active AND s.status != 0\n> AND (s.gid = pd.gid AND pd.gender = 0)\n> AND (s.gid = rw.gid AND rw.gender = 1)\n> AND geo_distance( pd.location, \n> '(-90.3690233918754,38.7788148984854)' ) <= 75\n> ORDER BY geo_distance( pd.location, \n> '(-90.3690233918754,38.7788148984854)'), s.created desc;\n\nYou have not restricted the join on pe, pr and pb leading to a cartesian\nproduct ?\n\nAndreas\n",
"msg_date": "Mon, 11 Sep 2000 10:52:05 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: man, I feel like a beginner ..."
}
] |
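Andreas's diagnosis can be illustrated by sketching the repaired query. This is a hypothetical fix: it assumes each personal_* table carries a gid column that joins back to status, which the original post does not show.

```sql
SELECT DISTINCT s.gid, s.created,
       geo_distance(pd.location, '(-90.3690233918754,38.7788148984854)')
FROM status s, personal_data pd, relationship_wanted rw,
     personal_ethnicity pe, personal_religion pr, personal_bodytype pb
WHERE s.active AND s.status != 0
  AND s.gid = pd.gid AND pd.gender = 0
  AND s.gid = rw.gid AND rw.gender = 1
  AND s.gid = pe.gid   -- assumed join keys: without these three
  AND s.gid = pr.gid   -- restrictions, every row of pe, pr and pb
  AND s.gid = pb.gid   -- multiplies the result (a cartesian product)
  AND geo_distance(pd.location, '(-90.3690233918754,38.7788148984854)') <= 75
ORDER BY geo_distance(pd.location, '(-90.3690233918754,38.7788148984854)'),
         s.created DESC;
```

If pe, pr and pb contribute nothing to this particular query, simply dropping them from the FROM list would work just as well.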
[
{
"msg_contents": "\n> (With 7.2 I plan to get rid of pg_shadow.usesysid and \n> identify users via\n> pg_shadow.oid and the superuser oid will be hard-coded into\n> include/catalog/pg_shadow.h, so at that point they will work.)\n\nImho it is fine to get rid of the usesysid in our internal authorization\nsystem,\nbut we should not get rid of the only field that can tie a db user \nto an os user. Imho we should not do a \"by name\" lookup\nand eliminate the field. The extra field adds additional flexibility,\nlike using one os user for many db users, or using different names \nfor os users.\n\nIn the long run we will need a tie to os users for os level setuid user\nfunctions.\n\nAndreas\n",
"msg_date": "Mon, 11 Sep 2000 11:06:59 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: \"setuid\" functions, a solution to the RI privilege\n\t problem"
},
{
"msg_contents": "Zeugswetter Andreas SB writes:\n\n> Imho it is fine to get rid of the usesysid in our internal\n> authorization system, but we should not get rid of the only field that\n> can tie a db user to an os user. Imho we should not do a \"by name\"\n> lookup and eliminate the field.\n\nUm, well, the only possible way to determine the session user when the\nbackend starts is to use the textual user name provided by the\nauthentication subsystem.\n\n> The extra field adds additional flexibility, like using one os user\n> for many db users, or using different names for os users.\n\n> In the long run we will need a tie to os users for os level setuid\n> user functions.\n\nBut the pg_shadow authentication is based on credentials provided by the\nclient whereas what you propose here would run on the server, so this\ndoesn't make sense. When we get around to these setuid user functions\n(while I have no idea how we would do so) we certainly need to have a\nseparate control mechanism.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 17 Sep 2000 12:14:49 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: \"setuid\" functions, a solution to the RI privilege problem"
}
] |
[
{
"msg_contents": "> > 2) Change pg_dump to walk through dependencies?\n> \n> The trouble with that is that dependency analysis is a monstrous job,\n> and one that would make pg_dump even more fragile and backend-version-\n> dependent than it is now. \n\nOne way to get around that might be to make the dumping routine a part\nof the backend instead of a frontend. This is what at least MS SQL does.\nSo I can do for example:\n\nBACKUP DATABASE mydb TO DISK 'c:\\foo.dump'\nor, if I want to send the backup directly to my backup program\nBACKUP DATABASE mydb TO PIPE 'somepipe'\n\nThen to reload it I just do\nRESTORE DATABASE mydb FROM DISK 'c:\\foo.dump'\n\n\nDoing this might also help with permissions issues, since the entire\nprocess can be run inside tbe backend (and skip security checks at some\npoints, assuming that it was a user with backup permissions who started\nthe operation)?\n\n\n//Magnus\n",
"msg_date": "Mon, 11 Sep 2000 12:57:14 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: I remember why I suggested CREATE FUNCTION...AS NUL\n\tL"
},
{
"msg_contents": "On Mon, 11 Sep 2000, Magnus Hagander wrote:\n\n> > > 2) Change pg_dump to walk through dependencies?\n> > \n> > The trouble with that is that dependency analysis is a monstrous job,\n> > and one that would make pg_dump even more fragile and backend-version-\n> > dependent than it is now. \n> \n> One way to get around that might be to make the dumping routine a part\n> of the backend instead of the frontend. This is what at least MS SQL does.\n> So I can do, for example:\n> \n> BACKUP DATABASE mydb TO DISK 'c:\\foo.dump'\n> or, if I want to send the backup directly to my backup program\n> BACKUP DATABASE mydb TO PIPE 'somepipe'\n> \n> Then to reload it I just do\n> RESTORE DATABASE mydb FROM DISK 'c:\\foo.dump'\n> \n> \n> Doing this might also help with permissions issues, since the entire\n> process can be run inside the backend (and skip security checks at some\n> points, assuming that it was a user with backup permissions who started\n> the operation)?\n\nOne issue with this comes to mind ... if I allocate X meg for a database,\ngiven Y meg extra for temp tables, pg_sort, etc ... if someone has the\nability to dump to the server itself (assuming the server is separate from\nthe client machine), doesn't that run a major risk of filling up disk space\nawfully quick? For instance, the dba for that database decides to backup\nbefore and after making changes, or before/after a large update ... what\nsorts of checks/balances are you proposing to prevent disk space problems? \n\nAlso, this is going to require database level access controls,\nno? Something we don't have right now ... so that too is going to have to\nbe implemented ... basically, you're gonna want some sort of 'GRANT\nBACKUP to dba;' command so that there can be more than one person with\ncomplete access to the database, but only one person with access to\ninitiate a backup ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 11 Sep 2000 11:44:41 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: I remember why I suggested CREATE FUNCTION...AS NUL\n L"
}
] |
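For comparison, here is roughly what the proposed server-side BACKUP/RESTORE maps to with the client-side tools PostgreSQL already ships. A sketch, not the proposal itself: 'mydb' comes from Magnus's example, the file names are invented, and restoring with psql assumes the target database already exists (e.g. created with createdb).

```shell
pg_dump mydb > /tmp/mydb.dump        # like BACKUP DATABASE ... TO DISK
pg_dump mydb | gzip > /tmp/mydb.gz   # like BACKUP DATABASE ... TO PIPE
createdb mydb                        # restore needs an (empty) database
psql -d mydb -f /tmp/mydb.dump       # like RESTORE DATABASE ... FROM DISK
```

The key difference Magnus points out remains: these run in the client, so the dump travels over the client connection instead of being written directly by the backend.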
[
{
"msg_contents": "I'm running postgresql 7.0.2 on Solaris 8 and I get errors when I try to grant to a user. The query and error are like this:\n\nhorde=# GRANT SELECT, INSERT, UPDATE ON active_sessions TO martin;\nERROR: aclparse: non-existent user \"martin\"\nhorde=# \n\nNow, user martin exists as a system user (it's my personal user account),\nand I'm running the query as the postgres user (the database superuser).\n\nWhat can be wrong?\n\nThanks!\n\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Mon, 11 Sep 2000 08:14:24 -0300",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "problems with GRANT on Solaris 8"
},
{
"msg_contents": "> What can be wrong?\n\nPostgres needs to be told about martin:\n\ncreateuser martin\n\n - Thomas\n",
"msg_date": "Mon, 11 Sep 2000 15:45:15 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with GRANT on Solaris 8"
},
{
"msg_contents": "On Mon, 11 Sep 2000, Thomas Lockhart wrote:\n\n> > What can be wrong?\n> \n> Postgres needs to be told about martin:\n> \n> createuser martin\n\nSorry for the stupid question, but I come from Informix 7.30 which\ndoesn't have a user database (AFAIK, at least it doesn't have a create user\nquery).\n\nThanks for the answer!!!\n\n\n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n\n",
"msg_date": "Mon, 11 Sep 2000 18:42:29 -0300 (ART)",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: problems with GRANT on Solaris 8"
}
] |
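The whole exchange reduces to one missing step. A minimal sketch, run as the postgres superuser; the database and table names are the ones from Martin's session:

```shell
createuser martin    # register martin in pg_shadow; a system account alone is not enough
psql -d horde -c 'GRANT SELECT, INSERT, UPDATE ON active_sessions TO martin;'
```

Equivalently, `CREATE USER martin;` can be issued from within psql before running the GRANT.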
[
{
"msg_contents": "I'm sure you're aware of these limitations, but I thought I'd mention\nthem just in case, and to see if you have plans to sort them out:\n\nI have a query of the form:\n\nSELECT * FROM .... WHERE (now()-date1) > interval '1 day';\n\n...i.e. all rows 'older' than 1 day. This could be efficiently\nprocessed using the index on date1, but sadly pg doesn't know this ;-(\nThis transforms an operation which should be O(1) into O(rows)....\n\nMore worryingly, when I investigated the above, I found it doesn't even\nuse the index for\n\nSELECT * FROM .... \nWHERE date1 > '2000-09-11 00:00:00'::datetime - '1 hour'::interval;\n\n...so it doesn't realise that constant-constant is constant,\nnotwithstanding the more complex issue that now() is pseudo-constant.\n\nThis could be fixed by constant folding, I guess; any plans for that?\n\nJules\n",
"msg_date": "Mon, 11 Sep 2000 13:26:13 +0100",
"msg_from": "Jules Bean <[email protected]>",
"msg_from_op": true,
"msg_subject": "Constant propagation and similar issues"
},
{
"msg_contents": "Jules Bean <[email protected]> writes:\n> I have a query of the form:\n> SELECT * FROM .... WHERE (now()-date1) > interval '1 day';\n> ...i.e. all rows 'older' than 1 day. This could be efficiently\n> processed using the index on date1, but sadly pg doesn't know this ;-(\n\nNo, and I don't think it should. Should we implement a general\nalgebraic equation solver, and fire it up for every single query,\nin order to see if the user has written an indexable condition in\na peculiar form? I don't think we want to expend either the development\neffort or the runtime on that. If you are concerned about the performance\nof this sort of query, you'll need to transform it to\n\n\tSELECT * FROM .... WHERE date1 < now() - interval '1 day';\n\nOf course that still leaves you with problem (b),\n\n> SELECT * FROM .... \n> WHERE date1 > '2000-09-11 00:00:00'::datetime - '1 hour'::interval;\n\n> ...so it doesn't realise that constant-constant is constant,\n> notwithstanding the more complex issue that now() is pseudo-constant.\n\nMost of the datetime operations are not considered constant-foldable.\nThe reason is that type timestamp has a special value CURRENT that\nis a symbolic representation of current time (this is NOT what now()\nproduces, but might be thought of as a data-driven way of invoking\nnow()). This value will get reduced to a simple constant when it is\nfed into an arithmetic operation. Hence, premature evaluation changes\nthe results and would not be a correct optimization.\n\nAFAIK hardly anyone actually uses CURRENT, and I've been thinking of\nproposing that we eliminate it to make the world safe for constant-\nfolding timestamp operations. (Thomas, any comments here?)\n\nIn the meantime, there is a workaround that's been discussed on the\nmailing lists before --- create a function that hides the\n\"unsafe-to-fold\" operations and mark it iscachable:\n\n\tcreate function ago(interval) returns timestamp as\n\t'select now() - $1' language 'sql' with (iscachable);\n\nThen something like\n\n\tSELECT * FROM .... WHERE date1 < ago('1 day');\n\nwill be considered indexable. You can shoot yourself in the foot with\nthis --- don't try to write ago(constant) in a rule or function\ndefinition --- but in interactive queries it'll get the job done.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Sep 2000 11:15:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Constant propagation and similar issues "
},
{
"msg_contents": "On Mon, Sep 11, 2000 at 11:15:58AM -0400, Tom Lane wrote:\n> Jules Bean <[email protected]> writes:\n> > I have a query of the form:\n> > SELECT * FROM .... WHERE (now()-date1) > interval '1 day';\n> > ...i.e. all rows 'older' than 1 day. This could be efficiently\n> > processed using the index on date1, but sadly pg doesn't know this ;-(\n> \n> No, and I don't think it should. Should we implement a general\n> algebraic equation solver, and fire it up for every single query,\n> in order to see if the user has written an indexable condition in\n> a peculiar form? I don't think we want to expend either the development\n> effort or the runtime on that. If you are concerned about the performance\n> of this sort of query, you'll need to transform it to\n> \n> \tSELECT * FROM .... WHERE date1 < now() - interval '1 day';\n\nWell, I shall speak quietly and timidly, for I'm not offering to do\nthe work, and I respect that other tasks are both more interesting and\nmore important. However, it does seem to me that PostgreSQL /should/\nbe able to make these transformations (at least, it should IMO\nrecognise that given an expression of the form a + b - c + d < e - f\n+ g where exactly one of a..g is a column name, and the rest are\nconstant, that is a candidate for using the index).\n\n> \n> Of course that still leaves you with problem (b),\n> \n> > SELECT * FROM .... \n> > WHERE date1 > '2000-09-11 00:00:00'::datetime - '1 hour'::interval;\n> \n> > ...so it doesn't realise that constant-constant is constant,\n> > notwithstanding the more complex issue that now() is pseudo-constant.\n> \n> Most of the datetime operations are not considered constant-foldable.\n> The reason is that type timestamp has a special value CURRENT that\n> is a symbolic representation of current time (this is NOT what now()\n> produces, but might be thought of as a data-driven way of invoking\n> now()). This value will get reduced to a simple constant when it is\n> fed into an arithmetic operation. Hence, premature evaluation changes\n> the results and would not be a correct optimization.\n> \n> AFAIK hardly anyone actually uses CURRENT, and I've been thinking of\n> proposing that we eliminate it to make the world safe for constant-\n> folding timestamp operations. (Thomas, any comments here?)\n\nYes. I came across CURRENT in some examples somewhere, got very\nconfused, decided I didn't like it, and used now() instead ;-) I now\nunderstand the problem. Personally, I'm thinking drop CURRENT, but\nonly because I've never used it myself...\n\n> \n> In the meantime, there is a workaround that's been discussed on the\n> mailing lists before --- create a function that hides the\n> \"unsafe-to-fold\" operations and mark it iscachable:\n> \n> \tcreate function ago(interval) returns timestamp as\n> \t'select now() - $1' language 'sql' with (iscachable);\n> \n> Then something like\n> \n> \tSELECT * FROM .... WHERE date1 < ago('1 day');\n> \n> will be considered indexable. You can shoot yourself in the foot with\n> this --- don't try to write ago(constant) in a rule or function\n> definition --- but in interactive queries it'll get the job done.\n\nThanks very much. I shall try that.\n\nJules\n",
"msg_date": "Mon, 11 Sep 2000 16:45:36 +0100",
"msg_from": "Jules Bean <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Constant propagation and similar issues"
},
{
"msg_contents": "On Mon, Sep 11, 2000 at 11:15:58AM -0400, Tom Lane wrote:\n> \n> Most of the datetime operations are not considered constant-foldable.\n> The reason is that type timestamp has a special value CURRENT that\n> is a symbolic representation of current time (this is NOT what now()\n> produces, but might be thought of as a data-driven way of invoking\n> now()). This value will get reduced to a simple constant when it is\n> fed into an arithmetic operation. Hence, premature evaluation changes\n> the results and would not be a correct optimization.\n> \n> AFAIK hardly anyone actually uses CURRENT, and I've been thinking of\n> proposing that we eliminate it to make the world safe for constant-\n> folding timestamp operations. (Thomas, any comments here?)\n> \n\nI checked the ansi SQL'99 docs, and CURRENT as a date special constant\nis not a part of the standard (although CURRENT is a keyword: it is \nused in the context of cursors)\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Mon, 11 Sep 2000 10:47:04 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Constant propagation and similar issues"
},
{
"msg_contents": "Jules Bean <[email protected]> writes:\n> However, it does seem to me that PostgreSQL /should/\n> be able to make these transformations (at least, it should IMO\n> recognise that given an expression of the form a + b - c + d < e - f\n> + g where exactly one of a..g is a column name, and the rest are\n> constant, that is a candidate for using the index).\n\nMumble. I think that'd be a very difficult thing to do without losing\nthe datatype extensibility of the system. Right now, the only reason\nthat \"a < b\" is considered indexable is that the optimizer has a table\nthat tells it \"<\" is an indexable operator for btree indexes with\ncertain datatypes (opclasses). Neither the optimizer nor the btree code\nhas any real understanding of the relationships between \"<\" and \"-\", say.\nThere is no part of the system anywhere with understanding of algebraic\nidentities like \"a - b < c can be transformed to a < b + c\", and no way\nI can see to add such knowledge without making it *substantially* harder\nto add new datatypes and operators.\n\nBetween that and the runtime that would be wasted during typical queries\n(IMHO searching for rearrangeable clauses would usually be fruitless),\nI really doubt that this is a good goal to pursue in Postgres.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Sep 2000 12:22:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Constant propagation and similar issues "
},
{
"msg_contents": "On Mon, Sep 11, 2000 at 12:22:39PM -0400, Tom Lane wrote:\n> Jules Bean <[email protected]> writes:\n> > However, it does seem to me that PostgreSQL /should/\n> > be able to make these transformations (at least, it should IMO\n> > recognise that given an expression of the form a + b - c + d < e - f\n> > + g where exactly one of a..g is a column name, and the rest are\n> > constant, that is a candidate for using the index).\n> \n> Mumble. I think that'd be a very difficult thing to do without losing\n> the datatype extensibility of the system. Right now, the only reason\n> that \"a < b\" is considered indexable is that the optimizer has a table\n> that tells it \"<\" is an indexable operator for btree indexes with\n> certain datatypes (opclasses). Neither the optimizer nor the btree code\n> has any real understanding of the relationships between \"<\" and \"-\", say.\n> There is no part of the system anywhere with understanding of algebraic\n> identities like \"a - b < c can be transformed to a < b + c\", and no way\n> I can see to add such knowledge without making it *substantially* harder\n> to add new datatypes and operators.\n\nYes, actually something like this occurred to me after I sent the\nabove email. I had forgotten about the (rather pretty) extensible\ntype system; I can see that makes spotting optimisations such as the\nabove much more difficult. Seems like it might make a nice subject for\na paper, actually.\n\n> \n> Between that and the runtime that would be wasted during typical queries\n> (IMHO searching for rearrangeable clauses would usually be fruitless),\n> I really doubt that this is a good goal to pursue in Postgres.\n\nI'm afraid I can't buy that second argument ;-) The time it takes to\noptimise a query is asymptotically irrelevant, after all...\n\nJules\n\n",
"msg_date": "Mon, 11 Sep 2000 17:28:31 +0100",
"msg_from": "Jules Bean <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Constant propagation and similar issues"
},
{
"msg_contents": "On Mon, Sep 11, 2000 at 10:47:04AM -0500, Ross J. Reedstrom wrote:\n> On Mon, Sep 11, 2000 at 11:15:58AM -0400, Tom Lane wrote:\n> > \n> > Most of the datetime operations are not considered constant-foldable.\n> > The reason is that type timestamp has a special value CURRENT that\n> > is a symbolic representation of current time (this is NOT what now()\n> > produces, but might be thought of as a data-driven way of invoking\n> > now()). This value will get reduced to a simple constant when it is\n> > fed into an arithmetic operation. Hence, premature evaluation changes\n> > the results and would not be a correct optimization.\n> > \n> > AFAIK hardly anyone actually uses CURRENT, and I've been thinking of\n> > proposing that we eliminate it to make the world safe for constant-\n> > folding timestamp operations. (Thomas, any comments here?)\n> > \n> \n> I checked the ansi SQL'99 docs, and CURRENT as a date special constant\n> is not a part of the standard (although CURRENT is a keyword: it is \n> used in the context of cursors)\n> \n\nFollowing up to myself: \n\nAh, I had forgotten that CURRENT is a magic value, like 'infinity'.\n\nThe standard does specify in section 6.19:\n\nCURRENT_DATE, CURRENT_TIME, LOCALTIME, CURRENT_TIMESTAMP, and LOCALTIMESTAMP\n\nas <datetime value function>\n\nwhich are currently implemented as generating the special value 'CURRENT',\nwhich then gets stored in the column. This strikes me as _not_ standards\ncompliant. What do other DBs do with these? I think that they should\nbe equivalent to now(), returning a static date that is stored.\n\nI do find the timestamp special values 'infinity' and '-infinity' very\nuseful, but have never found a use for 'current'.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Mon, 11 Sep 2000 11:33:19 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Constant propagation and similar issues"
},
{
"msg_contents": "> AFAIK hardly anyone actually uses CURRENT, and I've been thinking of\n> proposing that we eliminate it to make the world safe for constant-\n> folding timestamp operations. (Thomas, any comments here?)\n\nWell, it is a feature from \"the old days\". Pretty neat one at that, and\nis an example of a useful feature not found in other DBs or in\nstandards, but which might show up someday because they are useful.\nThrowing those things away one at a time will end us up at the lowest\ncommon denominator, eventually :(\n\nAnother way of looking at the problem is to ask how we could retain this\nfeature in the face of the other optimization \"desirements\". istm that\ntypes which have multiple behaviors could be queried for the behavior of\na particular example by the optimizer. For most types, a \"query\" would\nnot be necessary (so there is minimal overhead), but for this case a\nfunction could return the property of an example as either cachable or\nnot.\n\nPerhaps a true \"serial type\" would need similar behaviors, as might\nother future types.\n\n - Thomas\n",
"msg_date": "Mon, 11 Sep 2000 16:37:01 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Constant propagation and similar issues"
},
{
"msg_contents": "Jules Bean <[email protected]> writes:\n> Presumably then, the optimizer doesn't even know that + is\n> commutative, so it can't fold the constant in (5 + a + 4).\n\nActually, it does know that + is commutative, courtesy of the oprcom\ncolumn in pg_operator. But it doesn't know that + is associative,\nwhich is something you must also assume to transform 5 + a + 4\n(really (5 + a) + 4 in the parser output) into a + (5 + 4).\n\nSince, in general, the associative law does NOT hold for computer\narithmetic (think about intermediate-result overflow, not to mention\nroundoff error in floating-point cases), I'm hesitant to want to put\nsuch assumptions in.\n\nBut the more serious problem is the search space involved in discovering\nthat there is a path of manipulations that leads to a more completely\nreducible expression. With a large expression that could be an\nexponential time cost --- even if the resultant savings is zero.\n\nI do not buy your previous comment that the cost of optimization is\nnegligible. We've already had to put in heuristics to prevent the\nsystem from blowing up in cnfify() for moderately large WHERE clauses\n--- it used to be that a few dozen OR (a=1 AND b=2) OR (a=4 AND b=5)\nkinds of clauses would bring the optimizer to its knees. And that's\nfor a well-understood deterministic simplification that affects only\nAND/OR/NOT operators, with no search involved to decide what to do\nwith them. What you're proposing would affect many more operators\nand require a combinatorial search to see what could be done to the\nexpression.\n\nSo I stand by my opinion that this isn't likely to be a profitable\npath to pursue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Sep 2000 16:55:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Constant propagation and similar issues "
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> AFAIK hardly anyone actually uses CURRENT, and I've been thinking of\n>> proposing that we eliminate it to make the world safe for constant-\n>> folding timestamp operations. (Thomas, any comments here?)\n\n> Well, it is a feature from \"the old days\". Pretty neat one at that, and\n> is an example of a useful feature not found in other DBs or in\n> standards, but which might show up someday because they are useful.\n\nI'm not convinced that it is useful. What I think it is is a good way\nof shooting yourself in the foot, because it's so hard to control when\n'CURRENT' will be reduced to a specific time value.\n\nI have no problem with the datetime input converters accepting the input\nstring 'CURRENT' and immediately replacing it with the current time.\nThat behavior is clearly useful and creates no semantic issues. But\nI don't think that a special data value that symbolically represents\ncurrent time is either useful or well-defined.\n\nJust to give one example of why the concept is broken: consider an index\non a timestamp column that contains some CURRENT values. Today the\nindex might look like\n\t2000-01-01 11:33:05-04\n\t2000-09-17 14:39:44-04\n\tCURRENT\n\t2000-09-18 14:11:07-04\nwhich is fine. But twenty-four hours from now, this index will be out\nof order and hence broken. (The btree routines do not cope at all\ngracefully with logically-inconsistent indexes.)\n\nSo I still recommend that we remove the special value CURRENT. Then we\ncan mark the datetime-related operators constant-foldable, which will\neliminate a complaint that we can otherwise expect to hear constantly\n(saw another instance today in pgsql-sql).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Sep 2000 14:46:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Constant propagation and similar issues "
},
{
"msg_contents": "At 16:37 11/09/00 +0000, Thomas Lockhart wrote:\n>> AFAIK hardly anyone actually uses CURRENT, and I've been thinking of\n>> proposing that we eliminate it to make the world safe for constant-\n>> folding timestamp operations. (Thomas, any comments here?)\n>\n>Well, it is a feature from \"the old days\". Pretty neat one at that, and\n>is an example of a useful feature not found in other DBs or in\n>standards, but which might show up someday because they are useful.\n\nWell, Dec RDB has it: the concept of a 'computed by' field. You can define\nview-like elements in a table, eg:\n\n Create Table t1 ( \n f1 integer, \n f2 real, \n f12_avg computed by (f1 + f2)/2, \n f4 computed by current_timestamp).\n\nIt's actually quite a useful feature, but unlike PGSQL, Dec RDB does not\nallow indexes to be created on the fields. Clearly one can just define a\nview to get similar results, but it is not as clean.\n\n\n>Another way of looking at the problem is to ask how we could retain this\n>feature in the face of the other optimization \"desirements\". istm that\n>types which have multiple behaviors could be queried for the behavior of\n>a particular example by the optimizer. For most types, a \"query\" would\n>not be necessary (so there is minimal overhead), but for this case a\n>function could return the property of an example as either cachable or\n>not.\n>\n>Perhaps a true \"serial type\" would need similar behaviors, as might\n>other future types.\n\nThis seems like a nice and extensible idea - allowing types to define a\nfunction for testing constancy. At least to allow:\n\n field < (current_timestamp - interval '1 hour')\n\nto use an index properly.\n\nI guess an alternative would be to define a special 'iscacheable' function:\neg.\n\n function constant(arg <all-types>) returns <same-type-as-arg>\n\nso we could have:\n\n field < constant(current_timestamp - interval '1 hour')\n\nalthough this looks suspiciously like an optimizer hint which I think\npeople have been opposed to in the past...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 18 Sep 2000 19:05:18 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Constant propagation and similar issues"
},
{
"msg_contents": "At 19:05 18/09/00 +1000, Philip Warner wrote:\n>\n>I guess an alternative would be to define a special 'iscacheable' function:\n>eg.\n>\n> function constant(arg <all-types>) returns <same-type-as-arg>\n>\n>so we could have:\n>\n> field < constant(current_timestamp - interval '1 hour')\n>\n>although this looks suspiciously like an optimizer hint which I think\n>people have been opposed to in the past...\n>\n\nI just saw the flaw with this: iscacheable is not enough when the args are\ndeemed non-constant. I guess we'd need some kind of 'evaluate_once' flag,\nwhich is getting a little obtuse.\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 18 Sep 2000 19:34:35 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Constant propagation and similar issues"
},
{
"msg_contents": "> So I still recommend that we remove the special value CURRENT. Then we\n> can mark the datetime-related operators constant-foldable, which will\n> eliminate a complaint that we can otherwise expect to hear constantly\n> (saw another instance today in pgsql-sql).\n\nOK.\n\n - Thomas\n",
"msg_date": "Mon, 18 Sep 2000 16:41:09 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Constant propagation and similar issues"
}
] |
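The index-ordering hazard Tom describes in this thread can be sketched in SQL. Table and column names here are invented, and the symbolic 'CURRENT' input only exists on servers of this era that still accept it:

```sql
-- Hypothetical sketch: why a symbolic CURRENT value breaks a btree index.
CREATE TABLE events (stamp timestamp);
CREATE INDEX events_stamp_idx ON events (stamp);

INSERT INTO events VALUES ('2000-01-01 11:33:05-04');
INSERT INTO events VALUES ('2000-09-17 14:39:44-04');
INSERT INTO events VALUES ('CURRENT');  -- stored symbolically, re-evaluated on read
INSERT INTO events VALUES ('2000-09-18 14:11:07-04');

-- At index-build time CURRENT happens to sort between its neighbors; once
-- enough wall-clock time passes it no longer does, so an index scan such as
--   SELECT * FROM events WHERE stamp > '2000-09-18';
-- can silently misbehave, since btree assumes a stable ordering.
```

By contrast, an input string like 'now' (or 'CURRENT' under Tom's proposal) is reduced to a fixed instant by the input converter, so the stored value never changes and the index stays consistent.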
[
{
"msg_contents": "Mariposa (http://mariposa.CS.Berkeley.EDU/download.html) has a BSD licence \nbut it refers to Postgres95.\nMariposa is a patch against the postgres sources (an alpha release); there are a \nlot of papers describing it.\nI've compiled it under Linux, but with no success.\n\nIs there a solution or a trick to load-balance \nPostgres (at least CPU load balancing, with a central high-speed location for \nthe databases)?\n\n\nThank you for your reply.\n\nvalter\n\n>From: Zeugswetter Andreas SB <[email protected]>\n>To: \"'Tom Samplonius'\" <[email protected]>, Valter Mazzola <[email protected]>\n>CC: [email protected]\n>Subject: AW: [HACKERS] Scalability, Clustering\n>Date: Mon, 11 Sep 2000 10:41:50 +0200\n>\n>\n> > I know that someone was working on a commercial extension to \n>PostgreSQL\n> > to add clustering based on a shared disk system. Basically he was added \n>a\n> > raw storage manager to PostgreSQL plus a lock manager to co-oridinate\n> > access to the shared disk. That way the two nodes could co-ordinate\n> > access to the shared disk. This is very similar to Oracle Parallel\n>Server.\n>\n>This is sad. Good Cluster DB design is based on shared nothing architecture\n>and \"function shipping\". OPS is known to have a bad and antiquated\n>architecture\n>that only works well with extremely well thought out application design.\n>\n>Andreas\n\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\nShare information about yourself, create your own public profile at \nhttp://profiles.msn.com.\n\n",
"msg_date": "Mon, 11 Sep 2000 13:33:26 CEST",
"msg_from": "\"Valter Mazzola\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: Scalability, Clustering"
}
] |
[
{
"msg_contents": "Hi,\n\nI am trying to use the QNX version of PostgreSQL 7.0.0.\n\nI have QNX 4.25 and TCP/IP.\n\nI have compiled postgres using gcc.\nI used these commands:\ngmake all\ngmake install\n\nThen I started postgres with the -D and -i options.\n\nEvery time I execute the createdb command I get a SIGSEGV error.\n\nDo I have to configure something in PostgreSQL, in TCP/IP, or in the kernel?\n\nWhere am I going wrong?\n\nI have attached a log file with the error.\n\n\nThanks.\nDREAMTECH\nMaurizio Cauci",
"msg_date": "Mon, 11 Sep 2000 15:55:09 +0200",
"msg_from": "\"Maurizio\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgressql QNX version"
}
] |
[
{
"msg_contents": "I sent the following patch to the PATCHES list.\n\n\nThis patch implements the following command:\n\nALTER TABLE <tablename> OWNER TO <username>\n\nOnly a superuser may execute the command.\n\n-- \nMark Hollomon\[email protected]\n",
"msg_date": "Mon, 11 Sep 2000 10:10:42 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "ALTER TABLE OWNER patch"
},
{
"msg_contents": "I know that this feature is often asked for, but be aware that it is\nincompatible with the future, namely SQL schema support. Under that\nscheme, a schema has an owner and all objects belong to a schema. I guess\nat that point we'd have to disable this command with a reference to some\nsort of command to move objects between schemas.\n\n\nMark Hollomon writes:\n\n> I sent to the following patch to the PATCHES list.\n> \n> \n> This patch implements the following command:\n> \n> ALTER TABLE <tablename> OWNER TO <username>\n> \n> Only a superuser may execute the command.\n> \n> \n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 16 Sep 2000 20:00:20 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE OWNER patch"
}
] |
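A minimal usage sketch of the command the patch adds; the table and user names are invented, and the catalog check uses the 7.0-era `pg_class`/`pg_user` columns:

```sql
-- Must be run by a superuser; 'webadmin' must already exist as a user.
ALTER TABLE accounts OWNER TO webadmin;

-- Verify the change against the catalogs:
SELECT relname, usename
  FROM pg_class c, pg_user u
 WHERE c.relowner = u.usesysid
   AND relname = 'accounts';
```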
[
{
"msg_contents": "I have a script that dumps the content of a 7.0.2 database using\npg_dump. I dump the data using the -acD flags, then the schema using\nthe -scD flags. For databases with no user defined types, this works\nfine.\n\nHowever, I have one database with a user defined type and get the\nfollowing error from pg_dump:\n\n pg_dump -sc -D test > pg_dump.schema || true\n failed sanity check, type with oid 3516132 was not found\n\nAny clues about this? Does it matter?\n\nThanks for your help.\n\nCheers,\nBrook\n",
"msg_date": "Mon, 11 Sep 2000 08:37:14 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump failed sanity check and user defined types"
},
{
"msg_contents": "Brook Milligan <[email protected]> writes:\n> However, I have one database with a user defined type and get the\n> following error from pg_dump:\n> pg_dump -sc -D test > pg_dump.schema || true\n> failed sanity check, type with oid 3516132 was not found\n> Any clues about this? Does it matter?\n\nSounds like you dropped a user type without remembering to drop all\nthe functions/operators defined for it. Unfortunately there's no\nsafety cross-check in DROP TYPE (probably there should be).\n\nIt does matter, since IIRC pg_dump aborts when it finds such an\ninconsistency; so you're getting an incomplete dump.\n\nYou should be able to find the offending entries by searching through\nthe system catalogs with queries like\n\tselect * from pg_operator where oprleft = 3516132\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Sep 2000 11:48:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump failed sanity check and user defined types "
},
{
"msg_contents": " > pg_dump -sc -D test > pg_dump.schema || true\n > failed sanity check, type with oid 3516132 was not found\n\n Sounds like you dropped a user type without remembering to drop all\n the functions/operators defined for it. Unfortunately there's no\n safety cross-check in DROP TYPE (probably there should be).\n\nThat's what I would have guessed, but I'm pretty sure that is not the\ncase (but I'm new to UDTs, so bear with me; maybe I'm not constructing\nmy script right). See the script below that does the installation of\nthe types and functions. The problem occurs after running this script\nfollowed by the pg_dump above.\n\nIs there some order dependency for dropping types and functions?\nShould I not be dropping these before creating them (I do this to\nallow rerunning the script)? Does it have anything to do with the\nfact that a single object.so provides all the entry points?\n\n You should be able to find the offending entries by searching through\n the system catalogs with queries like\n\t select * from pg_operator where oprleft = 3516132\n\nThere are no rows found.\n\nCheers,\nBrook\n\n===========================================================================\n\n-- type_xxx\n\nDROP TYPE type_xxx;\n\nDROP FUNCTION type_xxx_in (opaque);\n\nCREATE FUNCTION type_xxx_in (opaque)\n RETURNS type_xxx\n AS '/path/to/object.so', 'type_xxx_in'\n LANGUAGE 'c';\n\nDROP FUNCTION type_xxx_out(opaque);\n\nCREATE FUNCTION type_xxx_out(opaque)\n RETURNS opaque\n AS '/path/to/object.so', 'type_xxx_out'\n LANGUAGE 'c';\n\nCREATE TYPE type_xxx (\n internallength = 72,\n input = type_xxx_in,\n output = type_xxx_out\n);\n\n\n-- type_yyy\n\nDROP TYPE type_yyy;\n\nDROP FUNCTION type_yyy_in (opaque);\n\nCREATE FUNCTION type_yyy_in (opaque)\n RETURNS type_yyy\n AS '/path/to/object.so', 'type_yyy_in'\n LANGUAGE 'c';\n\nDROP FUNCTION type_yyy_out(opaque);\n\nCREATE FUNCTION type_yyy_out(opaque)\n RETURNS opaque\n AS '/path/to/object.so', 'type_yyy_out'\n LANGUAGE 
'c';\n\nCREATE TYPE type_yyy (\n internallength = 76,\n input = type_yyy_in,\n output = type_yyy_out\n);\n\n-- type_zzz\n\nDROP TYPE type_zzz;\n\nDROP FUNCTION type_zzz_in (opaque);\n\nCREATE FUNCTION type_zzz_in (opaque)\n RETURNS type_zzz\n AS '/path/to/object.so', 'type_zzz_in'\n LANGUAGE 'c';\n\nDROP FUNCTION type_zzz_out(opaque);\n\nCREATE FUNCTION type_zzz_out(opaque)\n RETURNS opaque\n AS '/path/to/object.so', 'type_zzz_out'\n LANGUAGE 'c';\n\nCREATE TYPE type_zzz (\n internallength = 112,\n input = type_zzz_in,\n output = type_zzz_out\n);\n\n-- conversions\n\nDROP FUNCTION type_xxx (type_yyy);\nCREATE FUNCTION type_xxx (type_yyy)\n RETURNS type_xxx\n AS '/path/to/object.so', 'type_xxx_from_type_yyy'\n LANGUAGE 'c';\n\nDROP FUNCTION type_xxx (type_zzz);\nCREATE FUNCTION type_xxx (type_zzz)\n RETURNS type_xxx\n AS '/path/to/object.so', 'type_xxx_from_type_zzz'\n LANGUAGE 'c';\n\nDROP FUNCTION type_yyy (type_xxx);\nCREATE FUNCTION type_yyy (type_xxx)\n RETURNS type_yyy\n AS '/path/to/object.so', 'type_yyy_from_type_xxx'\n LANGUAGE 'c';\n\nDROP FUNCTION type_yyy (type_zzz);\nCREATE FUNCTION type_yyy (type_zzz)\n RETURNS type_yyy\n AS '/path/to/object.so', 'type_yyy_from_type_zzz'\n LANGUAGE 'c';\n\nDROP FUNCTION type_zzz (type_xxx);\nCREATE FUNCTION type_zzz (type_xxx)\n RETURNS type_zzz\n AS '/path/to/object.so', 'type_zzz_from_type_xxx'\n LANGUAGE 'c';\n\nDROP FUNCTION type_zzz (type_yyy);\nCREATE FUNCTION type_zzz (type_yyy)\n RETURNS type_zzz\n AS '/path/to/object.so', 'type_zzz_from_type_yyy'\n LANGUAGE 'c';\n\n",
"msg_date": "Mon, 11 Sep 2000 10:22:10 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump failed sanity check and user defined types"
},
{
"msg_contents": " > pg_dump -sc -D test > pg_dump.schema || true\n > failed sanity check, type with oid 3516132 was not found\n\nThe problem seems to be related to trying to install conversion\nfunctions from one user defined type to another. Scripts like the\nfollowing are fine:\n\n DROP TYPE xxx;\n\n DROP FUNCTION xxx_in (opaque);\n CREATE FUNCTION xxx_in (opaque) RETURNS xxx AS '_OBJWD_/xxx.so', 'xxx_in' LANGUAGE 'c';\n\n DROP FUNCTION xxx_out(opaque);\n CREATE FUNCTION xxx_out(opaque) RETURNS opaque AS '_OBJWD_/xxx.so', 'xxx_out' LANGUAGE 'c';\n\n CREATE TYPE xxx (internallength = 8, input = xxx_in, output = xxx_out);\n\n DROP TYPE yyy;\n\n DROP FUNCTION yyy_in (opaque);\n CREATE FUNCTION yyy_in (opaque) RETURNS yyy AS '_OBJWD_/xxx.so', 'yyy_in' LANGUAGE 'c';\n\n DROP FUNCTION yyy_out(opaque);\n CREATE FUNCTION yyy_out(opaque) RETURNS opaque AS '_OBJWD_/xxx.so', 'yyy_out' LANGUAGE 'c';\n\n CREATE TYPE yyy (internallength = 8, input = yyy_in, output = yyy_out);\n\nBut as soon as I add a conversion like the following to the end (I\npresume conversion functions must follow the type definitions), I get\nfailed sanity checks.\n\n DROP FUNCTION xxx (yyy);\n CREATE FUNCTION xxx (yyy) RETURNS xxx AS '_OBJWD_/xxx.so', 'xxx_int' LANGUAGE 'c';\n\nI presume that notices like the following\n\n NOTICE: ProcedureCreate: type 'xxx' is not yet defined\n\nare fine, because you must create the I/O functions before the type.\n\nSo, how is one really supposed to create user defined types with\nconversion functions without tripping on failed sanity checks?\nWhere else in the system catalogs can I look to find references to\nthe missing OIDs?\n\nThanks again for your help.\n\nCheers,\nBrook\n",
"msg_date": "Mon, 11 Sep 2000 11:16:03 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump failed sanity check and user defined types"
},
{
"msg_contents": "Brook Milligan <[email protected]> writes:\n>> failed sanity check, type with oid 3516132 was not found\n\n> Sounds like you dropped a user type without remembering to drop all\n> the functions/operators defined for it. Unfortunately there's no\n> safety cross-check in DROP TYPE (probably there should be).\n\n> That's what I would have guessed, but I'm pretty sure that is not the\n> case (but I'm new to UDTs, so bear with me; maybe I'm not constructing\n> my script right). See the script below that does the installation of\n> the types and functions. The problem occurs after running this script\n> followed by the pg_dump above.\n\nI can't duplicate that, either in current sources or 7.0.2. Are you\nsure you're blaming the right bit of script?\n\n> Is there some order dependency for dropping types and functions?\n> Should I not be dropping these before creating them (I do this to\n> allow rerunning the script)? Does it have anything to do with the\n> fact that a single object.so provides all the entry points?\n\nWhat you showed looks fine.\n\n> You should be able to find the offending entries by searching through\n> the system catalogs with queries like\n> \t select * from pg_operator where oprleft = 3516132\n\n> There are no rows found.\n\nYou may need to dig into pg_dump and see exactly what it's complaining\nabout ... it's getting that OID from someplace ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Sep 2000 16:37:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump failed sanity check and user defined types "
},
{
"msg_contents": "Brook Milligan <[email protected]> writes:\n> But as soon as I add a conversion like the following to the end (I\n> presume conversion functions must follow the type definitions), I get\n> failed sanity checks.\n\n> DROP FUNCTION xxx (yyy);\n\nSure. By the time you execute that, you've already deleted the old\nyyy type and created a new one. So this is trying to delete a function\nnamed xxx that takes the *new* yyy type, which there isn't one of (and\nDROP FUNCTION complains accordingly).\n\nThe old function xxx(old-yyy-type) is still in the catalogs, and will\nconfuse pg_dump. Moreover, there's no way to specify that function by\nname, because there's no longer any name for its argument type. If\nyou don't want to drop the whole DB, you'll have to delete the pg_proc\ntuple by OID, after you figure out which one it is. Try\n\tselect oid,* from pg_proc where not exists\n\t(select 1 from pg_type where oid = proargtypes[0]);\n(ditto for prorettype and the other proargtypes entries).\n\nMy advice would be to make your script drop all the function definitions\nbefore you drop the type names.\n\nWhat we really need is some sort of \"DROP TYPE foo CASCADE\" command\nthat will clean up all the relevant entries at once ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Sep 2000 17:20:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump failed sanity check and user defined types "
}
] |
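Tom's catalog queries from this thread can be widened into a single orphan hunt before deleting by OID. A hedged sketch against the 7.0-era catalogs (run as superuser, and review the results first — unused `proargtypes` slots hold zero, which a naive type lookup would flag, so the check is restricted to nonzero OIDs):

```sql
-- Functions whose return type no longer exists:
SELECT oid, proname FROM pg_proc p
 WHERE NOT EXISTS (SELECT 1 FROM pg_type t WHERE t.oid = p.prorettype);

-- Functions whose first argument type no longer exists (nonzero slots only;
-- repeat for the other proargtypes entries):
SELECT oid, proname FROM pg_proc p
 WHERE p.proargtypes[0] <> 0
   AND NOT EXISTS (SELECT 1 FROM pg_type t WHERE t.oid = p.proargtypes[0]);

-- Once the culprit is confirmed, remove it by the function's own oid
-- (the oid pg_dump reported is the missing *type*, not the function):
-- DELETE FROM pg_proc WHERE oid = <oid returned above>;
```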
[
{
"msg_contents": "> > > > 2) Change pg_dump to walk through dependencies?\n> > > \n> > > The trouble with that is that dependency analysis is a \n> monstrous job,\n> > > and one that would make pg_dump even more fragile and \n> backend-version-\n> > > dependent than it is now. \n> > \n> > One way to get around that might be to make the dumping \n> routine a part\n> > of the backend instead of a frontend. This is what at least \n> MS SQL does.\n> > So I can do for example:\n> > \n> > BACKUP DATABASE mydb TO DISK 'c:\\foo.dump'\n> > or, if I want to send the backup directly to my backup program\n> > BACKUP DATABASE mydb TO PIPE 'somepipe'\n> > \n> > Then to reload it I just do\n> > RESTORE DATABASE mydb FROM DISK 'c:\\foo.dump'\n> > \n> > \n> > Doing this might also help with permissions issues, since the entire\n> > process can be run inside the backend (and skip security \n> checks at some\n> > points, assuming that it was a user with backup permissions \n> who started\n> > the operation)?\n> \n> One issue with this comes to mind ... if I allocate X meg for \n> a database,\n> given Y meg extra for temp tables, pg_sort, etc ... if someone has the\n> ability to dump to the server itself (assuming server is \n> separate from\n> client machine), doesn't that run a major risk of filling up \n> disk space\n> awfully quick? For instance, the dba for that database \n> decides to backup\n> before and after making changes, or before/after a large \n> update ... what\n> sorts of checks/balances are you proposing to prevent disk \n> space problems? \nHmm. I was actually not thinking of that part :-) More in the way of \"if he\nhas permissions to dump the db, he should check for space\".\n\nBut say there are different permissions for BACKUP TO DISK and BACKUP TO\nPIPE (or whatever it would be called). 
So you could let somebody dump it\nthrough a TCP/IP connection without needing to trust them with handling\nlocal disk?\n\n\n> also, this is going to require database level access controls,\n> no? something we don't have right now ... so that too is \n> going to have to\n> be implemented ... basically, you're gonna want some sort of 'GRANT\n> BACKUP to dba;' command so that there can be more than one person with\n> complete access to the database, but only one person with access to\n> initiate a backup ...\nUmm. Yeah, that's probably right. One could make a \"cheap\" solution by\nsaying only superuser can do it. But I guess there can be situations out\nthere where non-superusers need to run backups.. Hmm.\n\nIn MS SQL, the commands BACKUP and RESTORE can be executed by any member of\nthe database-wide roles db_owner (who is like the postgres superuser, I\nthink, except on a per-database level) or the db_backupoperator role. I\ndon't know how SQL-standards-compliant the \"roles\" handling in MSSQL is, but\nit is a very neat tool to work with :-)\n\n\nAdding database level access controls probably wouldn't be the hardest part\nof doing it, I think...\n\n//Magnus\n",
"msg_date": "Mon, 11 Sep 2000 17:22:17 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: I remember why I suggested CREATE FUNCTION...AS NUL\n\t L"
}
] |
[
{
"msg_contents": "I have defined various operators for some user defined types. How do\nI use them for setting up an index on the types?\n\nThe naive approach yields the following\n\n\tERROR: DefineIndex: type geocentric_point has no default operator class\n\nso presumably one must define a default operator. How is this done\nand what exactly are the semantics of a default operator?\n\nThanks for your help.\n\nCheers,\nBrook\n",
"msg_date": "Mon, 11 Sep 2000 12:22:59 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": true,
"msg_subject": "operators and indexes"
},
{
"msg_contents": "Brook Milligan <[email protected]> writes:\n> I have defined various operators for some user defined types. How do\n> I use them for setting up an index on the types?\n\n> The naive approach yields the following\n\n> \tERROR: DefineIndex: type geocentric_point has no default operator class\n\n> so presumably one must define a default operator. How is this done\n> and what exactly are the semantics of a default operator?\n\nYou parsed it wrong, what you need is an index \"operator class\" for the\ntype, which you then mark as being the default opclass for the type.\nThe operator class is just an abstract concept that binds together a\nspecific set of index-related operators and functions for a particular\ndatatype.\n\nTo support btree indexes on a datatype, you need a 3-way comparison\nfunction for the type, plus all six standard relational operators for\nthe type (< = > <= <> >=). Then define an index opclass for the type\n(in pg_opclass) and make entries associating it to the comparison\nfunction (in pg_amproc) and to the relational operators (in pg_amop).\nThere is a tutorial about this in the SGML docs, or see the existing\nentries for any standard datatype.\n\nThe motivation for opclasses is that you might have more than one\nreasonable ordering for a datatype; if so, you can associate multiple\nopclasses with the type so that different indexes can follow different\norderings. For example, a complex-number datatype might possibly want\nopclasses associated with absolute-value ordering and with real-part\nordering. I can imagine multiple opclasses for a point datatype too...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Sep 2000 17:08:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: operators and indexes "
}
] |
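A partial sketch of the setup Tom describes, for the hypothetical `geocentric_point` type from the question. This assumes the 7.0-era catalogs (there is no CREATE OPERATOR CLASS statement yet, so the opclass row is a raw catalog insert), and the function names are invented:

```sql
-- One of the six relational operators (repeat for <=, =, >=, >, <>):
CREATE OPERATOR < (
   leftarg    = geocentric_point,
   rightarg   = geocentric_point,
   procedure  = geocentric_point_lt,
   commutator = >, negator = >=,
   restrict   = scalarltsel, join = scalarltjoinsel
);

-- Register the operator class and mark it as the type's default
-- (assumed pg_opclass layout for this era):
INSERT INTO pg_opclass (opcname, opcdeftype)
  SELECT 'geocentric_point_ops', oid
    FROM pg_type WHERE typname = 'geocentric_point';

-- Remaining steps (not shown): pg_amop rows binding the six operators to the
-- btree access method under strategy numbers 1..5, and a pg_amproc row
-- pointing at the 3-way comparison function, per the SGML tutorial Tom cites.
```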
[
{
"msg_contents": "There seems to be a race condition where, if you're\nrunning, let's say, pg_dumpall and happen to drop a table mid-dump,\npg_dumpall will die because it loses the table.\n\nWould it make sense to use a transaction system so that when a table\nis renamed/dropped it doesn't actually go away until all transactions\nthat started before the drop take place?\n\nOne could probably implement this using refcounts and translating\ndropped tables into temporary mangled names.\n\n\ntable foo\n begin transaction\ndrop table foo\n foo becomes foo~1 \n for all transactions\n started before the drop\n\n end transaction\nfoo~1 and mapping are\ndropped.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Mon, 11 Sep 2000 13:54:45 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "bug with dropping tables and transactions."
},
{
"msg_contents": "> -----Original Message-----\n> From: Alfred Perlstein\n> \n> There seems to a race condition somewhere where that if you're\n> running let's say pg_dumpall and happen to drop a table mid-dump\n> pg_dumpall will die because it looses the table.\n> \n> Would it make sense to use a transaction system so that when a table\n> is renamed/dropped it doesn't actually go away until all transactions\n> that started before the drop take place?\n> \n> one could do probably implement this using refcounts and translating\n> dropped tables into temporary mangled names.\n> \n\nYour proposal seems to be an extension of how to commit/rollback\nDDL (drop/alter/rename etc ..) commands properly. There has been\na long discussion about it but unfortunately we have no consensus\nfor it AFAIK.\n\nThere may be another way.\npg_dump(all) may be able to acquire e.g. a share lock on pg_class\nto prevent drop/rename/.. operations by other backends. Of course,\nDDL (drop/rename/..) commands should acquire a row exclusive\nlock on pg_class.\n\nRegards.\n\nHiroshi Inoue\n\n",
"msg_date": "Tue, 12 Sep 2000 16:47:58 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: bug with dropping tables and transactions."
},
{
"msg_contents": "* Hiroshi Inoue <[email protected]> [000912 00:45] wrote:\n> > -----Original Message-----\n> > From: Alfred Perlstein\n> > \n> > There seems to a race condition somewhere where that if you're\n> > running let's say pg_dumpall and happen to drop a table mid-dump\n> > pg_dumpall will die because it looses the table.\n> > \n> > Would it make sense to use a transaction system so that when a table\n> > is renamed/dropped it doesn't actually go away until all transactions\n> > that started before the drop take place?\n> > \n> > one could do probably implement this using refcounts and translating\n> > dropped tables into temporary mangled names.\n> > \n> \n> Your proposal seems to be an extension of how to commit/rollback\n> DDL (drop/alter/rename etc ..) commands properly. There has been\n> a long discussion about it but unfortunately we have no consensus\n> for it AFAIK.\n> \n> There may be another way.\n> pg_dump(all) may be able to acquire a e.g share lock for pg_class\n> to prevent drop/rename/.. operations of other backends. Of cource\n> DDL(drop/rename/..) commands should acquire a row exclusive\n> lock on pg_class.\n\nAvoiding long-term locks is better than a locking system: I'd prefer to\nbe able to proceed as normal. Since a transaction exists 'at a point\nin time', there's no reason to delay a drop that happens in the\nfuture so long as there's still something for the old transaction\nto grab onto.\n\nYour solution may be simpler (and I thought of something like it\nalready), but honestly it's not what I'd like to see implemented.\n\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Tue, 12 Sep 2000 01:32:23 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bug with dropping tables and transactions."
}
] |
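Hiroshi's share-lock workaround, sketched from the dump side. This assumes the backend permits an explicit LOCK TABLE on system catalogs, which may vary by version:

```sql
BEGIN;
-- Block concurrent DROP/RENAME for the life of the dump: under Hiroshi's
-- scheme those commands would need a conflicting row exclusive lock
-- on pg_class, so they wait until this transaction ends.
LOCK TABLE pg_class IN SHARE MODE;

-- ... read schema and table contents here ...

COMMIT;  -- releases the lock; DDL in other backends can proceed
```

As Alfred notes, the cost is that DDL in other sessions stalls for the whole dump, which is why he prefers a reference-counted rename scheme instead.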
[
{
"msg_contents": "Hello,\n\nI don't know if this is worth mentioning, but when I tried to\nbuild the Sept. 11 snapshot on a machine which has the 7.0.2 RPMS\ninstalled, I did a:\n\n./configure\ngmake\n\nand got the following error:\n\ngmake[4]: Entering directory\n`/usr/src/pgsql/src/backend/storage/ipc'\ngcc -c -I../../../../src/include -O2 -Wall -Wmissing-prototypes\n-Wmissing-declarations ipc.c -o ipc.o\nipc.c: In function `IPCPrivateSemaphoreKill':\nipc.c:240: storage size of `semun' isn't known\nipc.c:240: warning: unused variable `semun'\nipc.c: In function `IpcSemaphoreCreate':\nipc.c:293: storage size of `semun' isn't known\nipc.c:293: warning: unused variable `semun'\nipc.c: In function `IpcSemaphoreKill':\nipc.c:392: storage size of `semun' isn't known\nipc.c:392: warning: unused variable `semun'\nipc.c: In function `IpcSemaphoreGetCount':\nipc.c:495: storage size of `dummy' isn't known\nipc.c:495: warning: unused variable `dummy'\nipc.c: In function `IpcSemaphoreGetValue':\nipc.c:506: storage size of `dummy' isn't known\nipc.c:506: warning: unused variable `dummy'\ngmake[4]: *** [ipc.o] Error 1\ngmake[4]: Leaving directory\n`/usr/src/pgsql/src/backend/storage/ipc'\ngmake[3]: *** [ipc-recursive] Error 2\ngmake[3]: Leaving directory `/usr/src/pgsql/src/backend/storage'\ngmake[2]: *** [storage-recursive] Error 2\ngmake[2]: Leaving directory `/usr/src/pgsql/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/usr/src/pgsql/src'\ngmake: *** [all] Error 2\n\nIt seems to be that HAVE_UNION_SEMUN is set by configure, because\nit appears in the file /usr/include/pgsql/storage/ipc.h, which is\nsomehow included in the configure test. During the build process,\nhowever, the RPM headers are, properly, not included. Moving\n/usr/include/pgsql to /tmp allows for the build to take place, so\nno harm no foul. 
I'm not sure if this is a problem or not, but\nits nice to be able to run snapshots on the same machine as\nRPM-based production versions for development.\n\nFor what its worth, \n\nMike Mascari\n",
"msg_date": "Tue, 12 Sep 2000 01:42:38 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "FYI - Build problems when an RPM version is installed"
},
{
"msg_contents": "> It seems to be that HAVE_UNION_SEMUN is set by configure, because\n> it appears in the file /usr/include/pgsql/storage/ipc.h, which is\n> somehow included in the configure test. During the build process,\n> however, the RPM headers are, properly, not included. Moving\n> /usr/include/pgsql to /tmp allows for the build to take place, so\n> no harm no foul. I'm not sure if this is a problem or not, but\n> its nice to be able to run snapshots on the same machine as\n> RPM-based production versions for development.\n\nHmm. I've been building in the same kind of environment (7.0.2 RPMs\ninstalled; cvs snapshot in my home directory) and do not see this\nsymptom. I did a \"make distclean\" this evening so had a fairly fresh\nstart.\n\nbtw, I had to blow away all of my installation area to get things to\ninitdb this evening. Not sure what was not getting\nreplaced/updated/removed by the normal installation process, but\nwhatever it was led to core dumps when building the template database.\n\n - Thomas\n",
"msg_date": "Tue, 12 Sep 2000 06:14:51 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FYI - Build problems when an RPM version is installed"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > It seems to be that HAVE_UNION_SEMUN is set by configure, because\n> > it appears in the file /usr/include/pgsql/storage/ipc.h, which is\n> > somehow included in the configure test. During the build process,\n> > however, the RPM headers are, properly, not included. Moving\n> > /usr/include/pgsql to /tmp allows for the build to take place, so\n> > no harm no foul. I'm not sure if this is a problem or not, but\n> > its nice to be able to run snapshots on the same machine as\n> > RPM-based production versions for development.\n> \n> Hmm. I've been building in the same kind of environment (7.0.2 RPMs\n> installed; cvs snapshot in my home directory) and do not see this\n> symptom. I did a \"make distclean\" this evening so had a fairly fresh\n> start.\n> \n> btw, I had to blow away all of my installation area to get things to\n> initdb this evening. Not sure what was not getting\n> replaced/updated/removed by the normal installation process, but\n> whatever it was led to core dumps when building the template database.\n> \n> - Thomas\n\nSorry. It looks like config.cache is being distributed with the\nsnapshot. I must have performed a make distclean after moving the\ninclude directory and concluded incorrectly regarding the ipc.h\nheader.\n\nMike Mascari\n",
"msg_date": "Tue, 12 Sep 2000 06:03:43 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FYI - Build problems when an RPM version is installed"
}
] |
[
{
"msg_contents": "I've just committed changes to the main tree which fixes date to\ntimestamp conversion around daylight savings time boundaries. A\npreviously-uncalled system routine, mktime(), is used for this\nconversion, so we need to keep an eye out for portability issues.\n\nI've added explicit regression tests for the date, time, and time with\ntime zone data types.\n\nI also uncovered a formatting bug when printing a zero time interval\nusing the ISO format, and have fixed it. The bug led to three zeros in\nthe minutes field of the interval.\n\nAll regression tests pass on my Linux box (except for the usual\ngeometric type rounding errors).\n\n - Thomas\n",
"msg_date": "Tue, 12 Sep 2000 06:10:27 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Changes to date->timestamp"
}
] |
[
{
"msg_contents": "I'm nearly ready to commit fairly wide-ranging changes for outer join\nsupport. The spurt of commits today took me far out of sync again, so\nbefore I try to merge my changes --- is anyone about to commit a bunch\nmore stuff?\n\nI currently have uncommitted changes in the attached list of files.\nPlease let me know if you have pending changes in these files...\n\n\t\t\tregards, tom lane\n\nsrc/backend/catalog/heap.c\nsrc/backend/commands/command.c\nsrc/backend/commands/creatinh.c\nsrc/backend/commands/explain.c\nsrc/backend/commands/remove.c\nsrc/backend/commands/view.c\nsrc/backend/executor/execMain.c\nsrc/backend/executor/execTuples.c\nsrc/backend/executor/execUtils.c\nsrc/backend/executor/nodeHashjoin.c\nsrc/backend/executor/nodeMergejoin.c\nsrc/backend/executor/nodeNestloop.c\nsrc/backend/nodes/copyfuncs.c\nsrc/backend/nodes/equalfuncs.c\nsrc/backend/nodes/list.c\nsrc/backend/nodes/outfuncs.c\nsrc/backend/nodes/print.c\nsrc/backend/nodes/readfuncs.c\nsrc/backend/optimizer/README\nsrc/backend/optimizer/geqo/geqo_eval.c\nsrc/backend/optimizer/path/allpaths.c\nsrc/backend/optimizer/path/indxpath.c\nsrc/backend/optimizer/path/joinpath.c\nsrc/backend/optimizer/path/joinrels.c\nsrc/backend/optimizer/path/orindxpath.c\nsrc/backend/optimizer/path/pathkeys.c\nsrc/backend/optimizer/plan/createplan.c\nsrc/backend/optimizer/plan/initsplan.c\nsrc/backend/optimizer/plan/planmain.c\nsrc/backend/optimizer/plan/planner.c\nsrc/backend/optimizer/plan/setrefs.c\nsrc/backend/optimizer/plan/subselect.c\nsrc/backend/optimizer/prep/prepkeyset.c\nsrc/backend/optimizer/prep/prepunion.c\nsrc/backend/optimizer/util/clauses.c\nsrc/backend/optimizer/util/pathnode.c\nsrc/backend/optimizer/util/relnode.c\nsrc/backend/optimizer/util/restrictinfo.c\nsrc/backend/optimizer/util/var.c\nsrc/backend/parser/analyze.c\nsrc/backend/parser/gram.y\nsrc/backend/parser/parse_agg.c\nsrc/backend/parser/parse_clause.c\nsrc/backend/parser/parse_expr.c\nsrc/backend/parser/parse_func.c\nsrc/backe
nd/parser/parse_node.c\nsrc/backend/parser/parse_relation.c\nsrc/backend/parser/parse_target.c\nsrc/backend/parser/parser.c\nsrc/backend/parser/scan.l\nsrc/backend/rewrite/locks.c\nsrc/backend/rewrite/rewriteHandler.c\nsrc/backend/rewrite/rewriteManip.c\nsrc/backend/storage/buffer/buf_init.c\nsrc/backend/storage/buffer/bufmgr.c\nsrc/backend/storage/buffer/freelist.c\nsrc/backend/utils/adt/ruleutils.c\nsrc/include/catalog/catversion.h\nsrc/include/executor/execdebug.h\nsrc/include/executor/execdefs.h\nsrc/include/executor/executor.h\nsrc/include/nodes/execnodes.h\nsrc/include/nodes/nodes.h\nsrc/include/nodes/parsenodes.h\nsrc/include/nodes/pg_list.h\nsrc/include/nodes/plannodes.h\nsrc/include/nodes/primnodes.h\nsrc/include/nodes/relation.h\nsrc/include/optimizer/clauses.h\nsrc/include/optimizer/pathnode.h\nsrc/include/optimizer/paths.h\nsrc/include/optimizer/planmain.h\nsrc/include/optimizer/restrictinfo.h\nsrc/include/parser/gramparse.h\nsrc/include/parser/parse_clause.h\nsrc/include/parser/parse_func.h\nsrc/include/parser/parse_node.h\nsrc/include/parser/parse_relation.h\nsrc/include/parser/parsetree.h\nsrc/include/rewrite/rewriteHandler.h\nsrc/include/rewrite/rewriteManip.h\nsrc/include/storage/buf_internals.h\nsrc/test/regress/expected/geometry-cygwin-precision.out\nsrc/test/regress/expected/geometry-i86-gnulibc.out\nsrc/test/regress/expected/geometry-positive-zeros-bsd.out\nsrc/test/regress/expected/geometry-positive-zeros.out\nsrc/test/regress/expected/geometry-powerpc-aix4.out\nsrc/test/regress/expected/geometry-powerpc-linux-gnulibc1.out\nsrc/test/regress/expected/geometry-solaris-precision.out\nsrc/test/regress/expected/geometry.out\nsrc/test/regress/expected/join.out\nsrc/test/regress/expected/rules.out\nsrc/test/regress/sql/join.sql\nsrc/test/regress/sql/rules.sql\n",
"msg_date": "Tue, 12 Sep 2000 02:58:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dibs for upcoming commit"
},
{
"msg_contents": "> I'm nearly ready to commit fairly wide-ranging changes for outer join\n> support. The spurt of commits today took me far out of sync again, so\n> before I try to merge my changes --- is anyone about to commit a bunch\n> more stuff?\n\nOK here...\n\n - Thomas\n",
"msg_date": "Tue, 12 Sep 2000 13:54:39 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dibs for upcoming commit"
},
{
"msg_contents": "> I'm nearly ready to commit fairly wide-ranging changes for outer join\n> support. The spurt of commits today took me far out of sync again, so\n> before I try to merge my changes --- is anyone about to commit a bunch\n> more stuff?\n\nI am caught up, at least in patches.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Sep 2000 10:12:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dibs for upcoming commit"
}
] |
[
{
    "msg_contents": "\n> There seems to be a race condition somewhere: if you're\n> running, let's say, pg_dumpall and happen to drop a table mid-dump,\n> pg_dumpall will die because it loses the table.\n> \n> Would it make sense to use a transaction system so that when a table\n> is renamed/dropped it doesn't actually go away until all transactions\n> that started before the drop take place?\n> \n> one could probably implement this using refcounts and translating\n> dropped tables into temporary mangled names.\n\nImho if I dropped a table I would not like another session to still\naccess it, so we should imho rather fix pg_dump.\n\nAndreas\n",
"msg_date": "Tue, 12 Sep 2000 09:37:02 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: bug with dropping tables and transactions."
},
{
    "msg_contents": "* Zeugswetter Andreas SB <[email protected]> [000912 00:37] wrote:\n> \n> > There seems to be a race condition somewhere: if you're\n> > running, let's say, pg_dumpall and happen to drop a table mid-dump,\n> > pg_dumpall will die because it loses the table.\n> > \n> > Would it make sense to use a transaction system so that when a table\n> > is renamed/dropped it doesn't actually go away until all transactions\n> > that started before the drop take place?\n> > \n> > one could probably implement this using refcounts and translating\n> > dropped tables into temporary mangled names.\n> \n> Imho if I dropped a table I would not like another session to still access\n> it,\n> so we should imho rather fix pg_dump.\n\nNot a session, but a transaction. I'm not averse to an option that\nextends DROP to behave the way I'd like it to rather than having\npg_dump fail, but I'm not happy with pg_dump locking up my database;\nI'm already hacking around way too much to avoid deadlocks and stalls\ndue to vacuum, and I'd really rather not have pg_dump become my\nnew nemesis.\n\nthanks,\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Tue, 12 Sep 2000 01:34:46 -0700",
"msg_from": "\"'Alfred Perlstein'\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug with dropping tables and transactions."
}
] |
[
{
"msg_contents": "\n> > 2. The executor complains if a DELETE or\n> > \tINSERT references a view.\n\nI think this is for new todo items:\n\tcreate insert, update and delete rules for simple one table views\n\tchange elog for complex view ins|upd|del to \"cannot {ins|upd|del}\n[into|from] complex view without an on {ins|upd|del} rule\"\n\tadd the functionality for \"with check option\" clause of create view\n\nAndreas\n",
"msg_date": "Tue, 12 Sep 2000 09:42:41 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: new relkind for view"
},
{
"msg_contents": "On Tue, Sep 12, 2000 at 09:42:41AM +0200, Zeugswetter Andreas SB wrote:\n>\n> \tadd the functionality for \"with check option\" clause of create view\n>\n\nI'm not familiar with this. What does it do?\n\n-- \nMark Hollomon\[email protected]\n",
"msg_date": "Fri, 15 Sep 2000 13:59:10 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: new relkind for view"
},
{
"msg_contents": "Added to TODO>\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> > > 2. The executor complains if a DELETE or\n> > > \tINSERT references a view.\n> \n> I think this is for new todo items:\n> \tcreate insert, update and delete rules for simple one table views\n> \tchange elog for complex view ins|upd|del to \"cannot {ins|upd|del}\n> [into|from] complex view without an on {ins|upd|del} rule\"\n> \tadd the functionality for \"with check option\" clause of create view\n> \n> Andreas\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 17:14:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: new relkind for view"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Added to TODO>\n>> \n>> I think this is for new todo items:\n>> create insert, update and delete rules for simple one table views\n>> change elog for complex view ins|upd|del to \"cannot {ins|upd|del}\n>> [into|from] complex view without an on {ins|upd|del} rule\"\n>> add the functionality for \"with check option\" clause of create view\n\nThe second of these three items is done already (in the rewriter,\nnot the executor):\n\nregression=# create view vv1 as select * from int4_tbl;\nCREATE\nregression=# insert into vv1 values (33);\nERROR: Cannot insert into a view without an appropriate rule\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Oct 2000 18:02:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: new relkind for view "
},
{
"msg_contents": "TODO updated.\n\n> Bruce Momjian <[email protected]> writes:\n> > Added to TODO>\n> >> \n> >> I think this is for new todo items:\n> >> create insert, update and delete rules for simple one table views\n> >> change elog for complex view ins|upd|del to \"cannot {ins|upd|del}\n> >> [into|from] complex view without an on {ins|upd|del} rule\"\n> >> add the functionality for \"with check option\" clause of create view\n> \n> The second of these three items is done already (in the rewriter,\n> not the executor):\n> \n> regression=# create view vv1 as select * from int4_tbl;\n> CREATE\n> regression=# insert into vv1 values (33);\n> ERROR: Cannot insert into a view without an appropriate rule\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 18:03:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: new relkind for view"
}
] |
[
{
"msg_contents": "Under both 6.5 and 7.0:\n----------------------\nstocks=# create table test (key int4);\nCREATE\nstocks=# create function crap(int4) returns int4 as \n'select sum(key) from test' language 'sql';\nCREATE\nstocks=# select version();\n \nversion \n---------------------------------------------------------------------\n PostgreSQL 7.0.0 on i686-pc-linux-gnu, compiled by gcc\negcs-2.91.66\n\n\nUnder the snapshot from yesterday:\n---------------------------------\n\ntemplate1=# create table test (key int4);\nCREATE\ntemplate1=# create function crap(int4) returns int4 \nas 'select sum(key) from test' language 'sql';\nERROR: return type mismatch in function: declared to return\nint4, returns numeric\ntemplate1=# select version();\n \nversion \n------------------------------------------------------------------------\n PostgreSQL 7.1devel on i586-pc-linux-gnu, compiled by GCC\negcs-2.91.66\n\n\nIs this correct behavior? All of the regression tests pass on the\nsnapshot version, BTW. \n\nMike Mascari\n",
"msg_date": "Tue, 12 Sep 2000 07:06:28 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Weird function behavior from Sept 11 snapshot"
},
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n> Under the snapshot from yesterday:\n> ---------------------------------\n\n> template1=# create function crap(int4) returns int4 \n> as 'select sum(key) from test' language 'sql';\n> ERROR: return type mismatch in function: declared to return\n> int4, returns numeric\n\nI changed sum() on integer types to return numeric as a way of\navoiding overflow. Also avg() on integers now returns numeric\nso that you can get some fractional precision. If you think this\nwas a bad idea, there's still time to debate it ... but we've had\nrepeated complaints about both of those issues.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Sep 2000 09:53:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird function behavior from Sept 11 snapshot "
},
{
"msg_contents": "> Is this correct behavior? All of the regression tests pass on the\n> snapshot version, BTW.\n\nThis is the expected behavior, and is \"correct\". There was a change\nrecently to the aggregate functions to make them more robust. So\nsum(int4) now calculates and returns a numeric result rather than an\nint4.\n\nThe problem is that numeric is extremely slow compared to an int4\ncalculation, and I'd like us to consider doing the calculation in int4\n(downside: silent overflow when dealing with non-trivial data), int8\n(downside: no support on a few platforms), or float8 (downside: silent\ntruncation on non-trivial data).\n\nTom, do you recall measuring the performance difference on aggregate\nfunctions between int4 and numeric for small-value cases? We probably\ndon't want to take order-of-magnitude performance hits to get this more\ncorrect behavior, but I'm not sure what the performance actually is.\n\nbtw, Mike's function works when defined as\n\ncreate function c(int4) returns int4\n as 'select cast(sum(key) as int4) from test' language 'sql';\n\n - Thomas\n",
"msg_date": "Tue, 12 Sep 2000 14:10:03 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird function behavior from Sept 11 snapshot"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Tom, do you recall measuring the performance difference on aggregate\n> functions between int4 and numeric for small-value cases? We probably\n> don't want to take order-of-magnitude performance hits to get this more\n> correct behavior, but I'm not sure what the performance actually is.\n\nI have not tried to measure it --- I was sort of assuming that for\nrealistic-size problems, disk I/O would swamp any increase in CPU time\nanyway. Does anyone want to check the time for sum() or avg() on an\nint4 column over a large table, under both 7.0.* and current?\n\n> The problem is that numeric is extremely slow compared to an int4\n> calculation, and I'd like us to consider doing the calculation in int4\n> (downside: silent overflow when dealing with non-trivial data), int8\n> (downside: no support on a few platforms), or float8 (downside: silent\n> truncation on non-trivial data).\n\nActually, using a float8 accumulator would work pretty well; assuming\nIEEE float8, you'd only start to get roundoff error when the running\nsum exceeds 2^52 or so. However the SQL92 spec is insistent that sum()\ndeliver an exact-numeric result when applied to exact-numeric data,\nand with a float accumulator we'd be at the mercy of the quality of the\nlocal implementation of floating point.\n\nI could see offering variant aggregates, say \"sumf\" and \"avgf\", that\nuse float8 accumulation. Right now the user can get the same result\nby writing \"sum(foo::float8)\" but it might be wise to formalize the\nidea ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Sep 2000 10:24:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird function behavior from Sept 11 snapshot "
},
{
    "msg_contents": "> ... Does anyone want to check the time for sum() or avg() on an\n> int4 column over a large table, under both 7.0.* and current?\n\nFor 262144 rows on the current tree, I get the following:\n\nsum(int4): 12.0 seconds\nsum(float8): 5.2 seconds\nsum(cast(int4 as float8)): 5.7 seconds\n\nThese figures include startup costs, etc., and are the minimum times\nfrom several runs (there is pretty wide variability, presumably due to\ndisk caching, swapping, etc on my laptop). It is a safe bet that the\noriginal int4 implementation was as fast or faster than the float8\nresult above (int4 does not require palloc() calls).\n\n> Actually, using a float8 accumulator would work pretty well; assuming\n> IEEE float8, you'd only start to get roundoff error when the running\n> sum exceeds 2^52 or so. However the SQL92 spec is insistent that sum()\n> deliver an exact-numeric result when applied to exact-numeric data,\n> and with a float accumulator we'd be at the mercy of the quality of the\n> local implementation of floating point.\n\nA problem with float8 is that it is possible to reach a point in the\naccumulation where subsequent input values are ignored in the sum. This\nis different from just roundoff error, since it degrades ungracefully\nfrom that point on.\n\n> I could see offering variant aggregates, say \"sumf\" and \"avgf\", that\n> use float8 accumulation. Right now the user can get the same result\n> by writing \"sum(foo::float8)\" but it might be wise to formalize the\n> idea ...\n\nHow about using int8 for the accumulator (on machines which support it\nof course)? Falling back to float8 or numeric on other machines? Or\nperhaps we could have an option (runtime??) to switch accumulator modes.\n\nI like the idea of something like \"sumf\" to get alternative algorithms,\nbut it would be nice if basic sum() could be a bit more optimized than\ncurrently.\n\n - Thomas\n",
"msg_date": "Tue, 12 Sep 2000 14:58:12 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird function behavior from Sept 11 snapshot"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> How about using int8 for the accumulator (on machines which support it\n> of course)? Falling back to float8 or numeric on other machines?\n\nint8 would still pose some overflow risk (at least for int8 input),\nand would likely be no faster than a float8 implementation, since\nboth would require palloc().\n\nYour test suggests that the performance differential is *at most*\n2X --- probably much less in real-world situations where the disk\npages aren't already cached. I can't get excited about introducing\nplatform-dependent behavior and overflow risk for that. If it were\n10X then I would, but right now I think we are OK as is. I think\nany speedup efforts here would be better put into making NUMERIC\nops go faster ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Sep 2000 11:14:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird function behavior from Sept 11 snapshot "
},
{
"msg_contents": "> int8 would still pose some overflow risk (at least for int8 input),\n> and would likely be no faster than a float8 implementation, since\n> both would require palloc().\n\nRight. On 32-bit machines, int8 is likely to be substantially slower,\nsince the int8 math is done in a library rather than in a single machine\ninstruction.\n\n> Your test suggests that the performance differential is *at most*\n> 2X --- probably much less in real-world situations where the disk\n> pages aren't already cached.\n\nHmm. sum(int4) on the same table is 1.8 seconds for 7.0.2 (vs 12.5 for\nsnapshot). But I *am* compiling with asserts turned on for the other\ntests (with maybe some other differences too), so maybe it is not (yet)\na fair comparison. Still a pretty big performance difference for\nsomething folks expect to be a fast operation.\n\n - Thomas\n",
"msg_date": "Tue, 12 Sep 2000 15:37:00 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird function behavior from Sept 11 snapshot"
},
{
"msg_contents": "> Your test suggests that the performance differential is *at most*\n> 2X --- probably much less in real-world situations where the disk\n> pages aren't already cached. I can't get excited about introducing\n> platform-dependent behavior and overflow risk for that. If it were\n> 10X then I would, but right now I think we are OK as is. I think\n> any speedup efforts here would be better put into making NUMERIC\n> ops go faster ...\n\nAnother followup: on 7.0.2, with different optimizations etc,\nsum(float8) takes 1.95 seconds, rather than the 5.2 on the current tree.\nI'd better look at the compilation optimizations; is there another\nexplanation for the factor of 2.6 difference (!!)?\n\nSo I'd expect int4 to be closer to float8 in performance than my\nprevious mail suggested.\n\n - Thomas\n",
"msg_date": "Tue, 12 Sep 2000 15:45:08 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird function behavior from Sept 11 snapshot"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Another followup: on 7.0.2, with different optimizations etc,\n> sum(float8) takes 1.95 seconds, rather than the 5.2 on the current tree.\n> I'd better look at the compilation optimizations; is there another\n> explanation for the factor of 2.6 difference (!!)?\n\nIf you are running with --enable-cassert then there is a whole bunch\nof memory-stomp debugging overhead turned on in current sources,\nincluding such time-consuming stuff as clearing every pfree'd block.\n7.0.*'s --enable-cassert is not nearly as expensive.\n\nI plan to make that stuff not-default when we go beta, but right now\nit seems like a good idea to have it on for testing...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Sep 2000 12:39:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird function behavior from Sept 11 snapshot "
},
{
"msg_contents": "Hmm. I recompiled the current snapshot with the optimizations from my\nMandrake RPM (using the Mandrake defaults, except for disabling\n\"fast-math\"), and get the following:\n\n7.0.2 current test\n 1.8 5.3\t sum(i)\n 1.95 1.77\t sum(f)\n 2.3\t 1.9\t sum(cast(i as float8))\n\nMy previous tests on the current tree were with -O0, asserts enabled,\nand few other options specified (mostly, the defaults for the Postgres\nLinux build).\n\nThe Linux defaults in the Postgres tarball are:\n\n -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n\nwhereas the defaults for Mandrake (with fast-math turned off since it\ngives rounding trouble in date/time math):\n\n -O3 -fomit-frame-pointer -fno-exceptions -fno-rtti -pipe -s\n -mpentiumpro -mcpu=pentiumpro -march=pentiumpro\n -fexpensive-optimizations\n -malign-loops=2 -malign-jumps=2 -malign-functions=2\n -mpreferred-stack-boundary=2 -fno-fast-math\n\nI'll do some more tests with the default compiler options. The good news\nis that the new fmgr interface is apparently as fast or faster than the\nold one :)\n\n - Thomas\n",
"msg_date": "Tue, 12 Sep 2000 16:45:28 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird function behavior from Sept 11 snapshot"
}
] |
[
{
    "msg_contents": "yes, i can find this useful\n\nvalter\n\n>From: Bruce Momjian <[email protected]>\n>To: [email protected]\n>CC: Postgres Hacker Lister <[email protected]>\n>Subject: Re: [HACKERS] Patch for TNS services\n>Date: Tue, 12 Sep 2000 01:13:33 -0400 (EDT)\n>\n>Sounds like people want it. Can you polish it off, add SGML docs and\n>send it over?\n>\n> > -----BEGIN PGP SIGNED MESSAGE-----\n> >\n> > Last week I created a patch for the Postgres client side libraries to \nallow\n> > something like a (not so mighty) form of Oracle TNS, but nobody showed \nany\n> > interest. Currently, the patch is not perfect yet, but works fine for \nus. I\n> > want to avoid improving the patch if there is no interest in it, so if \nyou\n> > think it might be a worthy improvement please drop me a line.\n\n............\n............\n",
"msg_date": "Tue, 12 Sep 2000 11:27:11 CEST",
"msg_from": "\"Valter Mazzola\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Patch for TNS services"
}
] |
[
{
    "msg_contents": "Hi,\n\nI experience a strange error with 7.0.2. I cannot get any results with\ncertain queries. For example, a foo table is defined with a few columns;\nit has a\n\nid_string varchar(100)\n\ncolumn, too. I fill this table; it contains, e.g., a row with\n'something' in the id_string column. I give the next query:\n\n> select * from foo where id_string = 'something';\n\nI get no result.\n\n> select * from foo where id_string like '%something';\n\nI get the row. Strange. Then, if I try to check the result:\n\n> select substr(id_string,1,1) from foo where id_string like '%something';\n\nnow I get 's' as expected... After dumping the database out and\nrestoring it, the problem doesn't appear anymore... for a while... I\ncannot give an exact report, but usually this bug occurs when I stop\nthe database and start it again.\n\nDid anybody experience such a behaviour?\n\nTIA, Zoltan\n\n Kov\\'acs, Zolt\\'an\n [email protected]\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n",
"msg_date": "Tue, 12 Sep 2000 14:28:34 +0200 (CEST)",
"msg_from": "Kovacs Zoltan <[email protected]>",
"msg_from_op": true,
"msg_subject": "strange behaviour (bug)"
},
{
"msg_contents": "Kovacs Zoltan <[email protected]> writes:\n> now I will get 's' as expected... Dumping the database out and bringing it\n> back the problem doesn't appear anymore... for a while... I cannot give\n> an exact report, but usually this bug occurs when I stop the database\n> and I start it again.\n\nHmm. Is it possible that when you restart the postmaster, you are\naccidentally starting it with a different environment --- in particular,\ndifferent LOCALE or LC_xxx settings --- than it had before?\n\nIf there is an index on id_string then\n> select * from foo where id_string = 'something';\nwould try to use the index, and so could get messed up by a change\nin LOCALE; the index would now appear to be out of order according to\nthe new LOCALE value.\n\nWe really ought to fix things so that all the LOCALE settings are saved\nby \"initdb\" and then re-established during postmaster start, rather than\nrelying on the user always to start the postmaster with the same\nenvironment. People have been burnt by this before :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Sep 2000 09:59:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange behaviour (bug) "
},
{
    "msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> \n> Kovacs Zoltan <[email protected]> writes:\n> > now I will get 's' as expected... Dumping the database out and \n> bringing it\n> > back the problem doesn't appear anymore... for a while... I cannot give\n> > an exact report, but usually this bug occurs when I stop the database\n> > and I start it again.\n> \n> Hmm. Is it possible that when you restart the postmaster, you are\n> accidentally starting it with a different environment --- in particular,\n> different LOCALE or LC_xxx settings --- than it had before?\n> \n> If there is an index on id_string then\n> > select * from foo where id_string = 'something';\n> would try to use the index, and so could get messed up by a change\n> in LOCALE; the index would now appear to be out of order according to\n> the new LOCALE value.\n>\n\nThere could be another cause.\nIf a B-tree page A was split into the page A (changed) and a page B but\nthe transaction was rolled back, the pages A and B would not be written\nto disc, and the following could occur, for example:\n1) The changed non-leaf page of A and B may be written to disc later.\n2) An index entry may be inserted into the page B and committed later.\n\nI don't know how often those could occur.\n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Wed, 13 Sep 2000 06:24:38 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: strange behaviour (bug) "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> If a B-tree page A was splitted to the page A(changed) and a page B but\n> the transaction was rolled back,the pages A,B would not be written to\n> disc and the followings could occur for example.\n\nYes. I have been thinking that it's a mistake not to write changed\npages to disk at transaction abort, because that just makes for a longer\nwindow where a system crash might leave you with corrupted indexes.\nI don't think fsync is really essential, but leaving the pages unwritten\nin shared memory is bad. (For example, if we next shut down the\npostmaster, then the pages will NEVER get written.)\n\nSkipping the update is a bit silly anyway; we aren't really that\nconcerned about optimizing performance of abort, are we?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Sep 2000 17:39:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange behaviour (bug) "
},
{
    "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > If a B-tree page A was split into the page A (changed) and a page B but\n> > the transaction was rolled back, the pages A and B would not be written\n> > to disc, and the following could occur, for example.\n> \n> Yes. I have been thinking that it's a mistake not to write changed\n> pages to disk at transaction abort, because that just makes for a longer\n> window where a system crash might leave you with corrupted indexes.\n> I don't think fsync is really essential, but leaving the pages unwritten\n> in shared memory is bad. (For example, if we next shut down the\n> postmaster, then the pages will NEVER get written.)\n> \n> Skipping the update is a bit silly anyway; we aren't really that\n> concerned about optimizing performance of abort, are we?\n>\n\nWAL would probably solve this phenomenon by actually rolling back the\ncontents of disc and shared buffers.\nHowever, if another 7.0.x is released, we had better change the\nbufmgr, IMHO.\n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Wed, 13 Sep 2000 08:26:17 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: strange behaviour (bug) "
}
] |
[
{
    "msg_contents": "Hello,\nI have encountered problems with a particular query, so I\nstarted to dig into the sources. I have two questions/ideas:\n\n1) when the optimizer computes the size of a join it does it as\n card(R1)*card(R2)*selectivity. Suppose two relations\n (R1 & R2), each 10000 rows. If you (inner) join them\n using the equality operator, the result is at most 10000\n rows (min(card(R1),card(R2))). But pg estimates\n 1 000 000 (uses selectivity 0.01 here).\n Then when computing the cost it will result in a very high\n cost in case of hash and loop joins BUT a low (right)\n cost for merge join. It is because for hash and loop\n joins the cost is estimated from the row count but merge\n join uses another estimation (as it always knows that\n merge join can be done only on equality op).\n It then leads to use of mergejoin for the majority of joins.\n Unfortunately I found that in the majority of such cases\n the hash join is two times faster.\n I tested it using SET ENABLE_MERGEJOIN=OFF ...\n What about changing the cost estimator to use min(card(R1),\n card(R2)) instead of card(R1)*card(R2)*selectivity in\n the case where R1 and R2 are connected using equality ?\n It should lead to much faster plans for the majority of SQLs.\n\n2) suppose we have a relation R1(id,name) and an index ix(id,name)\n on it. In a query like: select id,name from R1 order by id\n the planner will prefer to do seqscan+sort (although R1\n is rather big). And yes, it is really faster than using\n an indexscan.\n But an indexscan always looks up the actual record in the heap\n even if all needed attributes are contained in the index.\n Oracle and even MSSQL read attributes directly from the index\n without looking for the actual tuple in the heap.\n Is there any need to do it in such an inefficient way ?\n\nregards, devik\n\n",
"msg_date": "Tue, 12 Sep 2000 14:30:09 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Performance improvement hints"
},
{
"msg_contents": "On Tue, Sep 12, 2000 at 02:30:09PM +0200, [email protected] wrote:\n> Hello,\n> I have encountered problems with particular query so that\n> a started to dug into sources. I've two questions/ideas:\n> \n> 1) when optimizer computes size of join it does it as\n> card(R1)*card(R2)*selectivity. Suppose two relations\n> (R1 & R2) each 10000 rows. If you (inner) join them\n> using equality operator, the result is at most 10000\n> rows (min(card(R1),card(R2)). But pg estimates\n> 1 000 000 (uses selectivity 0.01 here).\n\nSurely not. If you inner join, you can get many more than min\n(card(R1),card(R2)), if you are joining over non-unique keys (a common\ncase). For example:\n\nemployee:\n\nname\tjob\n\nJon\tProgrammer\nGeorge\tProgrammer\n\njob_drinks\n\njob\t\tdrink\n\nProgrammer\tJolt\nProgrammer\tCoffee\nProgrammer\tBeer\n\n\nThe natural (inner) join between these two tables results in 6 rows,\ncard(R1)*card(R2). \n\nI think you mean that min(card(R1),card(R2)) is the correct figure\nwhen the join is done over a unique key in both tables. \n\n\n> \n> 2) suppose we have relation R1(id,name) and index ix(id,name)\n> on it. In query like: select id,name from R1 order by id\n> planner will prefer to do seqscan+sort (althought the R1\n> is rather big). And yes it is really faster than using\n> indexscan.\n> But indexscan always lookups actual record in heap even if\n> all needed attributes are contained in the index.\n> Oracle and even MSSQL reads attributes directly from index\n> without looking for actual tuple at heap.\n> Is there any need to do it in such ineffecient way ?\n\n\nI believe this is because PgSQL doesn't remove entries from the index\nat DELETE time, thus it is always necessary to refer to the main table\nin case the entry found in the index has since been deleted.\nPresumably this speeds up deletes (but I find this behaviour surprising\ntoo).\n\nJules\n",
"msg_date": "Tue, 12 Sep 2000 13:45:45 +0100",
"msg_from": "Jules Bean <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance improvement hints"
},
{
"msg_contents": "[email protected] writes:\n> 1) when optimizer computes size of join it does it as\n> card(R1)*card(R2)*selectivity. Suppose two relations\n> (R1 & R2) each 10000 rows. If you (inner) join them\n> using equality operator, the result is at most 10000\n> rows (min(card(R1),card(R2)). But pg estimates\n> 1 000 000 (uses selectivity 0.01 here).\n\n0.01 is only the default estimate used if you've never done a VACUUM\nANALYZE (hint hint). After ANALYZE, there are column statistics\navailable that will give a better estimate.\n\nNote that your claim above is incorrect unless you are joining on unique\ncolumns, anyway. In the extreme case, if all the entries have the same\nvalue in the column being used, you'd get card(R1)*card(R2) output rows.\nI'm unwilling to make the system assume column uniqueness without\nevidence to back it up, because the consequences of assuming an overly\nsmall output row count are a lot worse than assuming an overly large\none.\n\nOne form of evidence that the planner should take into account here is\nthe existence of a UNIQUE index on a column --- if one has been created,\nwe could assume column uniqueness even if no VACUUM ANALYZE has ever\nbeen done on the table. This is on the to-do list, but I don't feel\nit's real high priority. The planner's results are pretty much going\nto suck in the absence of VACUUM ANALYZE stats anyway :-(\n\n> Then when computing cost it will result in very high\n> cost in case of hash and loop join BUT low (right)\n> cost for merge join. It is because for hash and loop\n> joins the cost is estimated from row count but merge\n> join uses another estimation (as it always know that\n> merge join can be done only on equality op).\n> It then leads to use of mergejoin for majority of joins.\n> Unfortunately I found that in majority of such cases\n> the hash join is two times faster.\n\nThe mergejoin cost calculation may be overly optimistic. The cost\nestimates certainly need further work.\n\n> But indexscan always lookups actual record in heap even if\n> all needed attributes are contained in the index.\n> Oracle and even MSSQL reads attributes directly from index\n> without looking for actual tuple at heap.\n\nDoesn't work in Postgres' storage management scheme --- the heap\ntuple must be consulted to see if it's still valid.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Sep 2000 10:12:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance improvement hints "
},
{
"msg_contents": "> > using equality operator, the result is at most 10000\n> > rows (min(card(R1),card(R2)). But pg estimates\n> > 1 000 000 (uses selectivity 0.01 here).\n> \n> Surely not. If you inner join, you can get many more than min\n> (card(R1),card(R2)), if you are joining over non-unique keys (a common\n> case). For example:\n\nOhh yes, you are right. Also I found that my main problem\nwas not running VACUUM ANALYZE, so I had an invalid value\nfor the column's disbursion.\nI ran it, and now the hash join estimates the row count correctly.\n\n> > But indexscan always lookups actual record in heap even if\n> > all needed attributes are contained in the index.\n> > Oracle and even MSSQL reads attributes directly from index\n> > without looking for actual tuple at heap.\n> \n> I believe this is because PgSQL doesn't remove entries from the index\n> at DELETE time, thus it is always necessary to refer to the main table\n> in case the entry found in the index has since been deleted.\n\nHmm, it looks reasonable. But it still should not prevent us\nfrom retrieving data directly from the index where possible. What\ndo you think? The only problem I can imagine is that it might have\nsomething to do with locking ..\n\nregards, devik\n\n",
"msg_date": "Tue, 12 Sep 2000 17:56:50 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Performance improvement hints"
},
{
"msg_contents": "> > But indexscan always lookups actual record in heap even if\n> > all needed attributes are contained in the index.\n> > Oracle and even MSSQL reads attributes directly from index\n> > without looking for actual tuple at heap.\n> \n> Doesn't work in Postgres' storage management scheme --- the heap\n> tuple must be consulted to see if it's still valid.\n\nyes, I just spent another day looking into the sources, and\nit seems that we need the xmin, xmax stuff.\nWhat do you think about this approach:\n\n1) add all validity & tx fields from the heap tuple into \n the index tuple too\n2) when generating a plan for an index scan, try to determine\n whether we can satisfy the target list using only data\n from index tuples; if yes, then compute the cost without\n accounting for random heap page reads - it will lead to a\n much lower cost\n3) whenever you update/delete a heap tuple's tx fields, update\n them also in the indices (you don't have to delete them from\n the index)\n\nIt will cost more storage space and slightly more work when\nupdating indices but should give excellent performance when\nan index is used. \n\nMeasurements:\nI have a table with about 2 mil. rows declared as\nbigrel(name varchar(50), cnt integer, sale datetime). \nI regularly need to run this query against it:\nselect name, sum(cnt) from bigrel group by name;\nIt took (in seconds):\n\nServer\\Index YES NO\npg7.01 linux 58 264\nMSSQL7 winnt 17 22\n\nI compared on the same machine (PII/375, 128MB RAM) using\nWINNT under VMWARE and native Linux 2.2. pg was \nvacuum analyzed.\nWhy is pgsql so slow? The mssql plan without an index uses\nhash aggregation, but pg sorts the whole relation.\nWith an index, in pg there is the big overhead of heap tuple\nreading - mssql uses data directly from the scanned index.\n\nAlso I noticed another problem: when I added \nwhere name<'0' it took 110ms on pg when I used\nset enable_seqscan=off;.\nWithout it, the planner still tried to use seqscan+sort,\nwhich took 27s in this case.\n\nI'm not sure how complex the proposed changes are. Another\nway would be to implement another aggregator like HashAgg\nwhich will use hashing. \nBut it could be even more complicated, as one has to use\na temp relation to store all the hash buckets ..\n\nStill I think that direct index reads should give us a huge\nspeed improvement for all indexed queries.\nI'm prepared to implement it, but I'd like to know your\nhints/complaints.\n\nRegards, devik\n\n",
"msg_date": "Wed, 13 Sep 2000 15:37:24 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Performance improvement hints + measurement"
},
{
"msg_contents": "[email protected] writes:\n> What do you think about this approach:\n\n> 1) add all validity & tx fields from heap tuple into \n> index tuple too\n\nNon-starter I'm afraid. That would mean that whenever we update a\ntuple, we'd have to find and update all the index entries that refer to\nit. You'd be taking a tremendous performance hit on all update\noperations in the hope of saving time on only a relatively small number\nof inquiries.\n\nThis has been discussed before (repeatedly, IIRC). Please peruse the\npghackers archives.\n\n> I regulary need to run this query against it:\n> select nazev,sum(cnt) from bigrel group by name;\n> With index, in pg there is a big overhead of heap tuple\n> reading - mssql uses data directly from scanned index.\n\nHow exactly is MSSQL going to do that with only an index on \"name\"?\nYou need to have access to the cnt field as well, which wouldn't be\npresent in an index entry for name.\n\n> I'm not sure how complex the proposed changes are. Another\n> way would be to implement another aggregator like HashAgg\n> which will use hashing. \n\nThat would be worth looking at --- we have no such plan type now.\n\n> But it could be even more complicated as one has to use\n> temp relation to store all hash buckets ..\n\nYou could probably generalize the existing code for hashjoin tables\nto support hash aggregation as well. Now that I think about it, that\nsounds like a really cool idea. Should put it on the TODO list.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2000 10:47:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance improvement hints + measurement "
},
{
"msg_contents": "[email protected] writes:\n>> You could probably generalize the existing code for hashjoin tables\n>> to support hash aggregation as well. Now that I think about it, that\n>> sounds like a really cool idea. Should put it on the TODO list.\n\n> Yep. It should be easy. It could be used as part of Hash\n> node by extending ExecHash to return all hashed rows and\n> adding value{1,2}[nbuckets] to HashJoinTableData.\n\nActually I think what we want is a hash table indexed by the\ngrouping-column value(s) and storing the current running aggregate\nstates for each agg function being computed. You wouldn't really\nneed to store any of the original tuples. You might want to form\nthe agg states for each entry into a tuple just for convenience of\nstorage though.\n\n> By the way, what is the \"portal\" and \"slot\" ?\n\nAs far as the hash code is concerned, a portal is just a memory\nallocation context. Destroying the portal gets rid of all the\nmemory allocated therein, without the hassle of finding and freeing\neach palloc'd block individually.\n\nAs for slots, you are probably thinking of tuple table slots, which\nare used to hold the tuples returned by plan nodes. The input\ntuples read by the hash node are stored in a slot that's filled \nby the child Plan node each time it's called. Similarly, the hash\njoin node has to return a new tuple in its output slot each time\nit's called. It's a pretty simplistic form of memory management,\nbut it works fine for plan node output tuples.\n\nIf you are interested in working on this idea, you should be looking\nat current sources --- both the memory management for hash tables\nand the implementation of aggregate state storage have changed\nmaterially since 7.0, so code based on 7.0 would need a lot of work\nto be usable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Sep 2000 20:17:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance improvement hints + measurement "
}
] |
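Tom's suggestion above — a hash table keyed by the grouping column(s), holding running aggregate states rather than stored tuples — is easy to sketch outside the executor. A toy version of `select name, sum(cnt) from bigrel group by name` in plain Python (no PostgreSQL internals; all names here are illustrative):

```python
def hash_aggregate(rows, key, agg_init, agg_step):
    """One pass, no sort: bucket rows by the grouping key and fold each
    row into that bucket's running aggregate state."""
    states = {}
    for row in rows:
        k = row[key]
        if k not in states:
            states[k] = agg_init()          # fresh aggregate state
        states[k] = agg_step(states[k], row)  # transition function
    return states

# sum(cnt) grouped by name:
rows = [{'name': 'a', 'cnt': 1}, {'name': 'b', 'cnt': 2},
        {'name': 'a', 'cnt': 3}]
totals = hash_aggregate(rows, 'name', lambda: 0,
                        lambda s, r: s + r['cnt'])
# totals == {'a': 4, 'b': 2}
```

The point of the design is that only one aggregate state per distinct group value is kept in memory — nothing needs to be sorted, and the input tuples themselves are never retained.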
[
{
"msg_contents": "Where are we in subj?\nOid.version or UniqueId?\nIf no one is going to implement it soon then I'll have to\nchange the code to OID file naming for WAL.\n\nVadim\n",
"msg_date": "Tue, 12 Sep 2000 09:12:03 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Status of new relation file naming"
},
{
"msg_contents": "[ Charset KOI8-R unsupported, converting... ]\n> Where are we in subj?\n> Oid.version or UniqueId?\n> If noone is going to implement it soon then I'll have to\n> change code to OID file naming for WAL.\n\nWell, we can move to one of these, but we need tools so people can find\nthe real table names that go with the files, and even then, it will\nnever be as good as the system we currently have.\n\nMy idea was to append a version number or oid on to the end of the file\nname, and use that somehow.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Sep 2000 12:31:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of new relation file naming"
},
{
"msg_contents": "On Tue, 12 Sep 2000, Bruce Momjian wrote:\n\n> [ Charset KOI8-R unsupported, converting... ]\n> > Where are we in subj?\n> > Oid.version or UniqueId?\n> > If noone is going to implement it soon then I'll have to\n> > change code to OID file naming for WAL.\n> \n> Well, we can move to one of these, but we need tools so people can find\n> the real table names that go with the files, and even then, it will\n> never be as good as the system we currently have.\n\nI thought it was generally agreed that this wasn't a requirement, and that\nif someone felt it was required in the future, then, like any open source\nproject, they could ante up the time to build it ...\n\n\n",
"msg_date": "Tue, 12 Sep 2000 17:59:56 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of new relation file naming"
},
{
"msg_contents": "> On Tue, 12 Sep 2000, Bruce Momjian wrote:\n> \n> > [ Charset KOI8-R unsupported, converting... ]\n> > > Where are we in subj?\n> > > Oid.version or UniqueId?\n> > > If noone is going to implement it soon then I'll have to\n> > > change code to OID file naming for WAL.\n> > \n> > Well, we can move to one of these, but we need tools so people can find\n> > the real table names that go with the files, and even then, it will\n> > never be as good as the system we currently have.\n> \n> I thought it was generally agreed that this wasn't a requirement, and that\n> is someone felt it was required in the future, like any open source\n> project, they could ante up the time to build it ...\n\nWell, if we release 7.1 without those tools, we can expect lots of\ncomplaints.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Sep 2000 17:02:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of new relation file naming"
},
{
"msg_contents": "On Tue, 12 Sep 2000, Bruce Momjian wrote:\n\n> > On Tue, 12 Sep 2000, Bruce Momjian wrote:\n> > \n> > > [ Charset KOI8-R unsupported, converting... ]\n> > > > Where are we in subj?\n> > > > Oid.version or UniqueId?\n> > > > If noone is going to implement it soon then I'll have to\n> > > > change code to OID file naming for WAL.\n> > > \n> > > Well, we can move to one of these, but we need tools so people can find\n> > > the real table names that go with the files, and even then, it will\n> > > never be as good as the system we currently have.\n> > \n> > I thought it was generally agreed that this wasn't a requirement, and that\n> > is someone felt it was required in the future, like any open source\n> > project, they could ante up the time to build it ...\n> \n> Well, if we release 7.1 without those tools, we can expect lots of\n> complaints.\n\nstock answer \"we look forward to seeing patches to correct this\nproblem\" :)\n\n\n",
"msg_date": "Tue, 12 Sep 2000 18:05:46 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of new relation file naming"
},
{
"msg_contents": "> > > I thought it was generally agreed that this wasn't a requirement, and that\n> > > is someone felt it was required in the future, like any open source\n> > > project, they could ante up the time to build it ...\n> > \n> > Well, if we release 7.1 without those tools, we can expect lots of\n> > complaints.\n> \n> stock answer \"we look forward to seeing patches to correct this\n> problem\" :)\n\nThe problem is that I have no idea how to suggest writing such a tool\nthat fits all needs.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Sep 2000 17:07:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of new relation file naming"
},
{
"msg_contents": "On Tue, 12 Sep 2000, Bruce Momjian wrote:\n\n> > > > I thought it was generally agreed that this wasn't a requirement, and that\n> > > > is someone felt it was required in the future, like any open source\n> > > > project, they could ante up the time to build it ...\n> > > \n> > > Well, if we release 7.1 without those tools, we can expect lots of\n> > > complaints.\n> > \n> > stock answer \"we look forward to seeing patches to correct this\n> > problem\" :)\n> \n> The problem is that I have no idea how to suggest writing such a tool\n> that fits all needs.\n\nIMHO, we have a choice ... we either move forward with a change that I\n*believe* everyone agrees has to happen, which will prompt someone to come\nup with a solution to (again, what I believe) is the only drawback ... or,\nwe don't implement while waiting for someone to come up with a solution\n...\n\nif we wait, again, IMHO, it will never happen, since there is no impetus\nfor someone to \"fix the problem\" ...\n\ndepending on what it takes to implement, if we implement it now, that\nleaves ~1.5mos for someone to come up with a solution before release ...\n\n\n",
"msg_date": "Tue, 12 Sep 2000 18:19:59 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of new relation file naming"
},
{
"msg_contents": "> > > stock answer \"we look forward to seeing patches to correct this\n> > > problem\" :)\n> > \n> > The problem is that I have no idea how to suggest writing such a tool\n> > that fits all needs.\n> \n> IMHO, we have a choice ... we either move forward with a change that I\n> *believe* everyone agrees has to happen, which will prompt someone to come\n> up with a solution to (again, what I believe) is the only drawback ... or,\n> we don't implement while waiting for someone to come up with a solution\n> ...\n> \n> if we wait, again, IMHO, it will never happen, since there is no impetus\n> for someone to \"fix the problem\" ...\n> \n> depending on what it takes to implement, if we implement it now, that\n> leaves ~1.5mos for someone to come up with a solution before release ...\n\nWell, we are 18 days from beta. Do we think we can handle that at this\ntime? If so, let's do it. I guess my hope was that WAL could record\nthe file/version# name in its logs, and use those to find the files,\nrather than making the file names themselves just numbers, but I know\nsome thought that there was a chicken-and-egg problem in doing that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Sep 2000 17:24:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of new relation file naming"
},
{
"msg_contents": "On Tue, 12 Sep 2000, Bruce Momjian wrote:\n\n> > > > stock answer \"we look forward to seeing patches to correct this\n> > > > problem\" :)\n> > > \n> > > The problem is that I have no idea how to suggest writing such a tool\n> > > that fits all needs.\n> > \n> > IMHO, we have a choice ... we either move forward with a change that I\n> > *believe* everyone agrees has to happen, which will prompt someone to come\n> > up with a solution to (again, what I believe) is the only drawback ... or,\n> > we don't implement while waiting for someone to come up with a solution\n> > ...\n> > \n> > if we wait, again, IMHO, it will never happen, since there is no impetus\n> > for someone to \"fix the problem\" ...\n> > \n> > depending on what it takes to implement, if we implement it now, that\n> > leaves ~1.5mos for someone to come up with a solution before release ...\n> \n> Well, we are 18 days from beta. Do we think we can handle that at this\n> time? \n\nMy feeling was that the implementation was pretty much decided upon, it\nwas just a matter of saying \"go ahead, do it\" ... so, \"go ahead, do\nit\" ... we have 18 days before beta, and then another 30 or so until\nrelease, which should have us releasing, what, around Jan 1, 2001? :)\n\n\n",
"msg_date": "Tue, 12 Sep 2000 18:32:26 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of new relation file naming"
},
{
"msg_contents": "> > > depending on what it takes to implement, if we implement it now, that\n> > > leaves ~1.5mos for someone to come up with a solution before release ...\n> > \n> > Well, we are 18 days from beta. Do we think we can handle that at this\n> > time? \n> \n> My feeling was that the implementation was pretty much decided upon, it\n> was just a matter of saying \"go ahead, do it\" ... so, \"go ahead, do\n> it\" ... we have 18 days before beta, and then another 30 or so until\n> release, which should have us releasing, what, around Jan 1, 2001? :)\n\nYes, Jan 1 would be my guess. My only question is whether we are making\nthings harder for administrators and whether putting some smarts in WAL\nto keep readable file names would be easier.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Sep 2000 17:35:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of new relation file naming"
},
{
"msg_contents": "On Tue, 12 Sep 2000, Bruce Momjian wrote:\n\n> > > > depending on what it takes to implement, if we implement it now, that\n> > > > leaves ~1.5mos for someone to come up with a solution before release ...\n> > > \n> > > Well, we are 18 days from beta. Do we think we can handle that at this\n> > > time? \n> > \n> > My feeling was that the implementation was pretty much decided upon, it\n> > was just a matter of saying \"go ahead, do it\" ... so, \"go ahead, do\n> > it\" ... we have 18 days before beta, and then another 30 or so until\n> > release, which should have us releasing, what, around Jan 1, 2001? :)\n> \n> Yes, Jan 1 would be my guess. My only question is whether we are making\n> things harder for administrators and whether putting some smarts in WAL\n> to keep readable file names would be easier.\n\nguess we'll find out ... considering what I've seen of Oracle and its\nlayouts, I doubt we're making anything harder than most are used to :)\n\nnow, guess the next question is ... who is the one implementing this and\nhow soon can we get it in place?\n\n",
"msg_date": "Tue, 12 Sep 2000 18:42:14 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of new relation file naming"
}
] |
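The tooling gap this thread worries about — mapping numeric on-disk file names back to table names for administrators — amounts to a join between a directory listing and the system catalog. A hypothetical sketch (pure Python; it assumes a catalog that pairs a relation's numeric file name with its relname, as the Oid/UniqueId proposals would provide, and the convention that large relations are segmented as `NNNN.1`, `NNNN.2`, ...):

```python
def resolve_filenames(catalog_rows, data_dir_files):
    """Map numeric relation file names back to table names.

    catalog_rows: iterable of (file_id, relname) pairs, as would come
    from the system catalog once files are named by number.
    data_dir_files: file names found in a database directory.
    Returns {file_name: relname or None}; None flags orphaned files
    that no catalog entry claims.
    """
    catalog = {str(file_id): relname for file_id, relname in catalog_rows}
    # Strip any '.N' segment suffix before the catalog lookup.
    return {f: catalog.get(f.split('.')[0]) for f in data_dir_files}
```

For example, `resolve_filenames([(16384, 'rtest_system')], ['16384', '16384.1', '999'])` reports both segments of 16384 as `rtest_system` and flags `999` as orphaned — which is the kind of answer an admin tool would need to print.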
[
{
"msg_contents": " Date: Tuesday, September 12, 2000 @ 14:56:03\nAuthor: momjian\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/src/interfaces/jdbc/org/postgresql/jdbc1\n from hub.org:/home/projects/pgsql/tmp/cvs-serv77884/pgsql/src/interfaces/jdbc/org/postgresql/jdbc1\n\nModified Files:\n\tDatabaseMetaData.java \n\n----------------------------- Log Message -----------------------------\n\nAs if my JDBC patch hasn't already caused enough grief, there is now a\none-line change necessary. Due to the Mark Holloman \"New Relkind for\nViews\" patch, my support for views in the driver will need to be updated\nto match. The change to DatabaseMetaData.getTableTypes[][] is as\nfollows:\n\n- {\"VIEW\", \"(relkind='r' and relhasrules='t' and relname !~\n'^pg_' and relname !~ '^xinv')\"},\n+ {\"VIEW\", \"(relkind='v' and relname !~ '^pg_' and relname\n!~ '^xinv')\"},\n\nChristopher Cain\n\n",
"msg_date": "Tue, 12 Sep 2000 14:56:04 -0400 (EDT)",
"msg_from": "Bruce Momjian - CVS <momjian>",
"msg_from_op": true,
"msg_subject": "pgsql/src/interfaces/jdbc/org/postgresql/jdbc1\n (DatabaseMetaData.java)"
},
{
"msg_contents": "> -----Original Message-----\n> From: Behalf Of Bruce Momjian - CVS\n>\n\n[snip] \n\n- ----------------------------- Log Message -----------------------------\n> \n> As if my JDBC patch hasn't already caused enough grief, there is now a\n> one-line change necessary. Due to the Mark Holloman \"New Relkind for\n> Views\" patch, my support for views in the driver will need to be updated\n> to match. The change to DatabaseMetaData.getTableTypes[][] is as\n> follows:\n> \n> - {\"VIEW\", \"(relkind='r' and relhasrules='t' and relname !~\n> '^pg_' and relname !~ '^xinv')\"},\n> + {\"VIEW\", \"(relkind='v' and relname !~ '^pg_' and relname\n> !~ '^xinv')\"},\n>\n\nThe current jdbc driver seems to be unable to get any VIEW information\nfrom any RELEASE version of the backend.\n\nHmm, it seems that client apps/libs don't mind backward incompatibility.\nDo I need to worry about the backward incompatibility that would be\ncaused by my change?\nIf so, I would commit my change for ALTER TABLE DROP COLUMN.\n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Wed, 13 Sep 2000 08:46:39 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: pgsql/src/interfaces/jdbc/org/postgresql/jdbc1\n\t(DatabaseMetaData.java)"
},
{
"msg_contents": "> > As if my JDBC patch hasn't already caused enough grief, there is now a\n> > one-line change necessary. Due to the Mark Holloman \"New Relkind for\n> > Views\" patch, my support for views in the driver will need to be updated\n> > to match. The change to DatabaseMetaData.getTableTypes[][] is as\n> > follows:\n> > \n> > - {\"VIEW\", \"(relkind='r' and relhasrules='t' and relname !~\n> > '^pg_' and relname !~ '^xinv')\"},\n> > + {\"VIEW\", \"(relkind='v' and relname !~ '^pg_' and relname\n> > !~ '^xinv')\"},\n> >\n> \n> Current jdbc driver seems to be able to get no VIEW information\n> from any RELEASE version of backends.\n> \n> Hmm,it seems that client app/libs don't mind backward incompatibility.\n> Don't I have to bother about backward incomatibility which would be\n> caused by my change ?\n> If so,I would commit my change about ALTER TABLE DROP COLUMN.\n\nSo the issue is how to make the 7.1 jdbc driver handle views from >=7.1\ndatabases, and <7.1 databases. Good question.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Sep 2000 19:59:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: pgsql/src/interfaces/jdbc/org/postgresql/jdbc1\n\t(DatabaseMetaData.java)"
},
{
"msg_contents": "\n\nOn Tue, 12 Sep 2000, Bruce Momjian wrote:\n\n> > > As if my JDBC patch hasn't already caused enough grief, there is now a\n> > > one-line change necessary. Due to the Mark Holloman \"New Relkind for\n> > > Views\" patch, my support for views in the driver will need to be updated\n> > > to match. The change to DatabaseMetaData.getTableTypes[][] is as\n> > > follows:\n> > > \n> > > - {\"VIEW\", \"(relkind='r' and relhasrules='t' and relname !~\n> > > '^pg_' and relname !~ '^xinv')\"},\n> > > + {\"VIEW\", \"(relkind='v' and relname !~ '^pg_' and relname\n> > > !~ '^xinv')\"},\n> > >\n> > \n> > Current jdbc driver seems to be able to get no VIEW information\n> > from any RELEASE version of backends.\n> > \n> > Hmm,it seems that client app/libs don't mind backward incompatibility.\n> > Don't I have to bother about backward incomatibility which would be\n> > caused by my change ?\n> > If so,I would commit my change about ALTER TABLE DROP COLUMN.\n> \n> So the issue is how to make the 7.1 jdbc driver handle views from >=7.1\n> databases, and <7.1 databases. Good question.\n\nIn the past, I've tried to keep compatibility within x.y.z releases (where\nz is the only changing value), but not when x or y change.\n\nie, 6.4.x drivers would be compatible with each other but not with\n6.2.x. Same with 7.0.x and 6.5.x\n\nSo if 7.1 has this change, then it shouldn't have to be compatible with\n7.0.x (going from past history).\n\nPeter (getting lost in a maze of patches)...\n\n;-)\n\n",
"msg_date": "Wed, 13 Sep 2000 02:17:05 +0100 (BST)",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RE: pgsql/src/interfaces/jdbc/org/postgresql/jdbc1\n\t(DatabaseMetaData.java)"
}
] |
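The version dependency discussed in this thread — views are `relkind='v'` only from 7.1 on, while earlier servers marked them as `relkind='r'` with `relhasrules='t'` — can be isolated in one place if a driver ever needs to span both server generations. A sketch in Python rather than Java (hypothetical helper; the two predicate strings are the before/after forms from the patch above):

```python
def view_predicate(major, minor):
    """Return the pg_class WHERE clause that selects views for a
    server of the given version."""
    if (major, minor) < (7, 1):
        # Before 7.1, a view was a plain relation carrying rewrite rules.
        return ("(relkind='r' and relhasrules='t' and relname !~ "
                "'^pg_' and relname !~ '^xinv')")
    # 7.1 and later have a dedicated relkind for views.
    return "(relkind='v' and relname !~ '^pg_' and relname !~ '^xinv')"
```

This mirrors Peter's compatibility policy in reverse: instead of pairing driver releases to server releases, a single driver branches on the reported server version.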
[
{
"msg_contents": "With latest CVS, rules regress test is failing with\n\n -- Test for constraint updates/deletes\n --\n insert into rtest_system values ('orion', 'Linux Jan Wieck');\n+ ERROR: You can't change view relation rtest_system\n insert into rtest_system values ('notjw', 'WinNT Jan Wieck (notebook)');\n+ ERROR: You can't change view relation rtest_system\n insert into rtest_system values ('neptun', 'Fileserver');\n+ ERROR: You can't change view relation rtest_system\n insert into rtest_interface values ('orion', 'eth0');\n insert into rtest_interface values ('orion', 'eth1');\n insert into rtest_interface values ('notjw', 'eth0');\n insert into rtest_interface values ('neptun', 'eth0');\n insert into rtest_person values ('jw', 'Jan Wieck');\n+ ERROR: You can't change view relation rtest_person\n insert into rtest_person values ('bm', 'Bruce Momjian');\n+ ERROR: You can't change view relation rtest_person\n insert into rtest_admin values ('jw', 'orion');\n insert into rtest_admin values ('jw', 'notjw');\n insert into rtest_admin values ('bm', 'neptun');\n update rtest_system set sysname = 'pluto' where sysname = 'neptun';\n+ NOTICE: mdopen: couldn't open rtest_system: No such file or directory\n+ ERROR: cannot open relation rtest_system\n select * from rtest_interface;\n sysname | ifname \n\nand it goes downhill from there...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Sep 2000 16:14:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Doesn't anyone around here run the regression tests on patches?"
},
{
"msg_contents": "Hello,\nWhy is there a cmax in the heap tuple? cmin/cmax are used to determine\nwhether a tuple was inserted/deleted by the current command or a\npast command. Because one command can't both insert\nand delete the same tuple, only something like \"cupd\"\nmight be needed, plus a flag which tells you whether cupd\nis the time of insert or delete. This would save 4 bytes from the\nheader .. \n\ndevik\n\n",
"msg_date": "Wed, 25 Oct 2000 10:28:36 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Unneccessary cmax in heap tuple ?"
},
{
"msg_contents": "\n\[email protected] wrote:\n\n> Hello,\n> Why is there cmax in tuple ? cxxx is used to determine\n> if tuple was inserted/deleted by current command or\n> past command. Because one command can't both insert\n> and delete the same tuple, only something like \"cupd\"\n> might be needed and flag which tells you whether cupd\n> is time of insert or delete. This saves 4byte from\n> header ..\n>\n\nIf a tuple was inserted and updated in current transaction,\nhow could we judge if the tuple was valid for a given\nScanCommandId ?\nHowever there could be other improvements.\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Wed, 25 Oct 2000 18:49:05 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unneccessary cmax in heap tuple ?"
},
{
"msg_contents": "\n\[email protected] wrote:\n\n> > > Why is there cmax in tuple ? cxxx is used to determine\n> > > if tuple was inserted/deleted by current command or\n> > > past command. Because one command can't both insert\n> > > and delete the same tuple, only something like \"cupd\"\n> > > might be needed and flag which tells you whether cupd\n> > > is time of insert or delete. This saves 4byte from\n> > > header ..\n> >\n> > If a tuple was inserted and updated in current transaction,\n> > how could we judge if the tuple was valid for a given\n> > ScanCommandId ?\n> > However there could be other improvements.\n>\n> Ahh I did not know that there is need to test tuple for\n> validity for some past cid. I thought that we only need\n> to know whether tuple has been updated by current cid\n> to ensure that it will not be scanned again in the same\n> cid... Where am I wrong ?\n\nFor example,INSENSITIVE cursors(though not implemented) ?\nINSENSITIVE cursors see changes made by neither other\nbackends nor the backend itself.\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Wed, 25 Oct 2000 19:32:52 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unneccessary cmax in heap tuple ?"
},
{
"msg_contents": "> > Why is there cmax in tuple ? cxxx is used to determine\n> > if tuple was inserted/deleted by current command or\n> > past command. Because one command can't both insert\n> > and delete the same tuple, only something like \"cupd\"\n> > might be needed and flag which tells you whether cupd\n> > is time of insert or delete. This saves 4byte from\n> > header ..\n> \n> If a tuple was inserted and updated in current transaction,\n> how could we judge if the tuple was valid for a given\n> ScanCommandId ?\n> However there could be other improvements.\n\nAhh I did not know that there is need to test tuple for\nvalidity for some past cid. I thought that we only need\nto know whether tuple has been updated by current cid\nto ensure that it will not be scanned again in the same\ncid... Where am I wrong ?\ndevik\n\n\n",
"msg_date": "Wed, 25 Oct 2000 14:06:12 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Unneccessary cmax in heap tuple ?"
},
{
"msg_contents": "[email protected] writes:\n> Ahh I did not know that there is need to test tuple for\n> validity for some past cid. I thought that we only need\n> to know whether tuple has been updated by current cid\n> to ensure that it will not be scanned again in the same\n> cid... Where am I wrong ?\n\nIn situations like SQL function calls, it may be necessary to suspend\na table scan while we go off and do other commands, then come back and\nresume the table scan. So there can be multiple scans with different\ncommand IDs in progress within a transaction.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Oct 2000 14:04:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unneccessary cmax in heap tuple ? "
},
{
"msg_contents": "Tom Lane wrote:\n> In situations like SQL function calls, it may be necessary to suspend\n> a table scan while we go off and do other commands, then come back and\n> resume the table scan. So there can be multiple scans with different\n> command IDs in progress within a transaction.\n\nOhh yes .. you are right. Thanks for explanation.\ndevik\n\n",
"msg_date": "Thu, 26 Oct 2000 12:48:33 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Unneccessary cmax in heap tuple ?"
}
] |
[
{
"msg_contents": "I have committed initial support for outer joins. There's still more to do,\nbut I was getting antsy to get back in sync with the CVS repository. Also,\nI'm hoping to get other people to do some of the remaining work ;-)\n\nWhat works: ISO \"JOIN\" syntax for INNER, LEFT, RIGHT, FULL, CROSS joins.\nTable and column name aliases applied to JOIN results sort of work (see\nbelow). All regression tests pass, including some rule cases that 7.0\ngets wrong.\n\nThings that need to be fixed before 7.1:\n\n* Update user docs\n\n* GEQO planner is untested, possibly broken\n\n* Rule rewriter fails for views that are used inside JOIN clauses.\n Cleanest solution for this would be to implement sub-select in FROM\n so that a view can be translated to a subselect. I'm going to work\n on that next. If it seems too hard for 7.1, I'll hack up a patch\n instead.\n\n* parse_clause.c should check validity of JOIN/ON condition (make sure it\n doesn't refer to any rels not in the JOIN)\n\n* Scope of aliases in sub-joins needs work. For example, this fails:\n\tselect * from (int4_tbl i1(a) join int4_tbl i2(b) on (a<b));\n because sub-join aliases aren't visible in the jointree when the ON\n condition is analyzed.\n\n* I suspect ruleutils.c doesn't always choose an appropriate alias for\n displaying a variable that has multiple aliases.\n\n* Inheritance vs outer joins: existing scheme of repeating the whole plan\n for each inherited table is certainly no good if inherited table is used\n inside an outer join. Must do the Append plan at bottom level, not top,\n in that case. Probably means we must distinguish top-join-level and\n not-top-level RTEs that are inherited.\n\nI will take care of the first three of these to-do items, and I was hoping\nto talk Thomas into looking at the next three. \nPerhaps Chris (or someone\nwho cares more about inheritance than I do ;-)) would like to take up the\nlast one.\n\nThings I plan to leave for some future release:\n\n* UNION JOIN is not implemented yet. Should fix in 7.2 when we redo\n querytrees --- UNION is too much of a mess right now.\n\n* FULL JOIN is only supported with mergejoinable join conditions.\n Not clear that non-mergejoinable join conditions can ever be supported\n efficiently.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Sep 2000 17:08:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "First cut at OUTER JOINs committed"
}
] |
[
{
"msg_contents": "> Probably WAL would solve this phenomenon by rolling\n> back the content of disc and shared buffer in reality.\n> However if 7.0.x would be released we had better change \n> bufmgr IMHO.\n\nI'm going to handle btree split but currently there is no way\nto rollback it - we unlock splitted pages after parent\nis locked and concurrent backend may update one/both of\nsiblings before we get our locks back.\nWe have to continue with split or could leave parent unchanged\nand handle \"my bits moved...\" (ie continue split in another\nxaction if we found no parent for a page) ... or we could hold\nlocks on all splitted pages till some parent updated without\nsplit, but I wouldn't do this.\n\nVadim\n\n",
"msg_date": "Tue, 12 Sep 2000 16:36:23 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: strange behaviour (bug) "
},
{
"msg_contents": "> -----Original Message-----\n> From: Mikheev, Vadim [mailto:[email protected]]\n> \n> > Probably WAL would solve this phenomenon by rolling\n> > back the content of disc and shared buffer in reality.\n> > However if 7.0.x would be released we had better change \n> > bufmgr IMHO.\n> \n> I'm going to handle btree split but currently there is no way\n> to rollback it - we unlock splitted pages after parent\n> is locked and concurrent backend may update one/both of\n> siblings before we get our locks back.\n> We have to continue with split or could leave parent unchanged\n> and handle \"my bits moved...\" (ie continue split in another\n> xaction if we found no parent for a page) ... or we could hold\n> locks on all splitted pages till some parent updated without\n> split, but I wouldn't do this.\n>\n\nIt seems to me that btree split operations must always be\nrolled forward even in case of abort/crash. DO you have\nother ideas ?\n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Thu, 14 Sep 2000 08:51:12 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: strange behaviour (bug) "
}
] |
[
{
"msg_contents": "> My idea was to append a version number or oid on to the end \n> of the file name, and use that somehow.\n\nYou'll lose all you would buy as soon as we'll begin to store many\nrelations in single file... and I would like to implement this in 7.1\n\nVadim\n",
"msg_date": "Tue, 12 Sep 2000 16:40:05 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Status of new relation file naming"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> > My idea was to append a version number or oid on to the end \n> > of the file name, and use that somehow.\n> \n> You'll lose all you would buy as soon as we'll begin to store many\n> relations in single file... and I would like to implement this in 7.1\n\nYes, that is the key. You want to implement a new storage manager, so\nif that is coming, we may as well take the hit now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Sep 2000 20:41:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of new relation file naming"
},
{
"msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > My idea was to append a version number or oid on to the end\n> > of the file name, and use that somehow.\n> \n> You'll lose all you would buy as soon as we'll begin to store many\n> relations in single file...\n\nPerhaps we could then use the name of DATASPACE = filename ?\n\nOr will the fact that some relations are stored in the same file \nbe completely invisible to the user ?\n\n> and I would like to implement this in 7.1\n\nWill this new storage manager replace the current one or will one be \nable to choose which storage manager to use (at compile time, at \nstartup, for each table)?\n\nPostgreSQL started as an extensible ORDBMS, but IIRC at some stage \nall other SMs were thrown out.\n\nI don't think it would be a good idea to completely abandon the \nnotion of storage manager as a replacable component.\n\n\nOTOH, the idea of storing single-inheritance hierarchies \n(SQL3 CREATE UNDER) in one file would almost automatically get us \nmany benefits, like shared primary keys and automatic index inheriting.\n\n--------------\nHannu\n",
"msg_date": "Wed, 13 Sep 2000 09:44:42 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of new relation file naming"
},
{
"msg_contents": "At 09:44 13/09/00 +0300, Hannu Krosing wrote:\n>\"Mikheev, Vadim\" wrote:\n>> \n>> > My idea was to append a version number or oid on to the end\n>> > of the file name, and use that somehow.\n>> \n>> You'll lose all you would buy as soon as we'll begin to store many\n>> relations in single file...\n>\n>Perhaps we could then use the name of DATASPACE = filename ?\n>\n\nI don't want to (re)^n ignite the the debate, but file naming has been\ndiscussed many times before and my recollection of the result of the last\ndebate was that the names should either be random or OID based. It seems\nthat adding a different meaning to filenames at this stage would be a bad\nidea, and doom us to repeat the file naming debate again in a year or two.\n\nMy vote is for a random number, and then someone can write the tools to\ndisplay the file info. I'll even volunteer to work on them...\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 13 Sep 2000 19:02:22 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of new relation file naming"
}
] |
[
{
"msg_contents": "It seems current source is broken if MB is enabled.\n\ngcc -c -I../../../src/interfaces/libpq -I../../../src/include -I../../../src/interfaces/libpq -O2 -Wall -Wmissing-prototypes -Wmissing-declarations pg_dump.c -o pg_dump.o\npg_dump.c: In function `isViewRule':\npg_dump.c:267: parse error before `int'\npg_dump.c:268: `len' undeclared (first use in this function)\npg_dump.c:268: (Each undeclared identifier is reported only once\npg_dump.c:268: for each function it appears in.)\npg_dump.c:268: warning: implicit declaration of function `pg_mbcliplen'\nmake[3]: *** [pg_dump.o] Error 1\n\nThe problem is:\n\n#ifdef MULTIBYTE\n\tint len;\n\tlen = pg_mbcliplen(rulename,strlen(rulename),NAMEDATALEN-1);\n\trulename[len] = '\\0';\n#else\n\t:\n\t:\n\nMoreover, pg_mbcliplen cannot be used in frontend. It seems what we\nneed here is new backendside functiontion what does same as\npg_mbcliplen. Will look into this...\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 13 Sep 2000 13:07:22 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "current is broken"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> It seems current source is broken if MB is enabled.\n> gcc -c -I../../../src/interfaces/libpq -I../../../src/include -I../../../src/interfaces/libpq -O2 -Wall -Wmissing-prototypes -Wmissing-declarations pg_dump.c -o pg_dump.o\n> pg_dump.c: In function `isViewRule':\n> pg_dump.c:267: parse error before `int'\n\nI just fixed one of these in the backend --- looks like someone was\ntesting with a C++ compiler instead of an ANSI-C-compliant compiler.\nNeed to put the 'int len;' declaration at the top of the function.\n\n> pg_dump.c:268: warning: implicit declaration of function `pg_mbcliplen'\n\n> Moreover, pg_mbcliplen cannot be used in frontend.\n\nOoops. I guess libpq needs to supply a copy of this function?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2000 00:35:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: current is broken "
},
{
"msg_contents": "> I just fixed one of these in the backend --- looks like someone was\n> testing with a C++ compiler instead of an ANSI-C-compliant compiler.\n> Need to put the 'int len;' declaration at the top of the function.\n\nOk.\n\n> > pg_dump.c:268: warning: implicit declaration of function `pg_mbcliplen'\n> \n> > Moreover, pg_mbcliplen cannot be used in frontend.\n> \n> Ooops. I guess libpq needs to supply a copy of this function?\n\nSimply copying the function won't work since the way to know what\nencoding is used for this session is different between backend and\nfrontend.\n\nEven better idea would be creating a new function that returns the\nactual rule name (after being shorten) from given view name. I don't\nthink it's a good idea to have codes to get an actual rule name in two\nseparate places.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 13 Sep 2000 13:56:46 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: current is broken "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> Ooops. I guess libpq needs to supply a copy of this function?\n\n> Simply copying the function won't work since the way to know what\n> encoding is used for this session is different between backend and\n> frontend.\n\nGood point --- in fact, the encoding itself might be different between\nthe backend and frontend. That seems to imply that \"truncate to\nNAMEDATALEN bytes\" could yield different results in the frontend than\nwhat the backend would get.\n\n> Even better idea would be creating a new function that returns the\n> actual rule name (after being shorten) from given view name. I don't\n> think it's a good idea to have codes to get an actual rule name in two\n> separate places.\n\nGiven the above point about encoding differences, I think we *must*\ndo the truncation in the backend ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2000 01:02:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: current is broken "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> \n> Tatsuo Ishii <[email protected]> writes:\n> >> Ooops. I guess libpq needs to supply a copy of this function?\n> \n> > Simply copying the function won't work since the way to know what\n> > encoding is used for this session is different between backend and\n> > frontend.\n> \n> Good point --- in fact, the encoding itself might be different between\n> the backend and frontend. That seems to imply that \"truncate to\n> NAMEDATALEN bytes\" could yield different results in the frontend than\n> what the backend would get.\n> \n> > Even better idea would be creating a new function that returns the\n> > actual rule name (after being shorten) from given view name. I don't\n> > think it's a good idea to have codes to get an actual rule name in two\n> > separate places.\n> \n> Given the above point about encoding differences, I think we *must*\n> do the truncation in the backend ...\n>\n\nI agree with Tatsuo.\nHowever we already have relkind for views.\nWhy must we rely on rulename to implement isViewRule()\nin the first place ? \n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Wed, 13 Sep 2000 14:31:51 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: current is broken "
},
{
"msg_contents": "At 13:07 13/09/00 +0900, Tatsuo Ishii wrote:\n>It seems current source is broken if MB is enabled.\n>\n>gcc -c -I../../../src/interfaces/libpq -I../../../src/include\n-I../../../src/interfaces/libpq -O2 -Wall -Wmissing-prototypes\n-Wmissing-declarations pg_dump.c -o pg_dump.o\n>pg_dump.c: In function `isViewRule':\n>pg_dump.c:267: parse error before `int'\n>pg_dump.c:268: `len' undeclared (first use in this function)\n>pg_dump.c:268: (Each undeclared identifier is reported only once\n>pg_dump.c:268: for each function it appears in.)\n>pg_dump.c:268: warning: implicit declaration of function `pg_mbcliplen'\n>make[3]: *** [pg_dump.o] Error 1\n>\n\nI haven't looked at the code yet, but isViewRule is going to change to use\nthe new reltype for views. Maybe this will side-step the problem?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 13 Sep 2000 15:44:25 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: current is broken"
},
{
"msg_contents": "> > Good point --- in fact, the encoding itself might be different between\n> > the backend and frontend. That seems to imply that \"truncate to\n> > NAMEDATALEN bytes\" could yield different results in the frontend than\n> > what the backend would get.\n> > \n> > > Even better idea would be creating a new function that returns the\n> > > actual rule name (after being shorten) from given view name. I don't\n> > > think it's a good idea to have codes to get an actual rule name in two\n> > > separate places.\n> > \n> > Given the above point about encoding differences, I think we *must*\n> > do the truncation in the backend ...\n> >\n> \n> I agree with Tatsuo.\n> However we already have relkind for views.\n> Why must we rely on rulename to implement isViewRule()\n> in the first place ? \n\nOh, I forgot about that.\n\nBTW, does anybody know about the status of pg_dump? It seems tons of\nfeatures have been added, but I wonder if all of them are going to\nappear in 7.1. Especially now it seems to have an ability to dump\nBlobs, but the flag (-b) to enable the feature has been disabled. Why?\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 13 Sep 2000 19:41:36 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: current is broken "
},
{
"msg_contents": "> >BTW, does anybody know about the status of pg_dump? It seems tons of\n> >features have been added, but I wonder if all of them are going to\n> >appear in 7.1. Especially now it seems to have an ability to dump\n> >Blobs, but the flag (-b) to enable the feature has been disabled. Why?\n> \n> Is it? AFAIK, it should not have been disabled. The long params version is\n> --blob - does that work?\n\nOk, long params works. It seems just adding 'b' to the third argument\nof getopt() solves \"-b\" option problem. Do you wnat to fix by yourself?\n\n> Personally, I'd like to see them in 7.1, unless the testing period\n> reveals a swag of major flaws... Once CVS is up again, I intend to\n> do a little more work on pg_dump, so now would be a good time to let\n> me know if there are problems.\n\nNew pg_dump looks great. Could you add docs for it so that we can test\nit out?\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 13 Sep 2000 21:35:28 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: current is broken "
},
{
"msg_contents": "At 19:41 13/09/00 +0900, Tatsuo Ishii wrote:\n>\n>BTW, does anybody know about the status of pg_dump? It seems tons of\n>features have been added, but I wonder if all of them are going to\n>appear in 7.1. Especially now it seems to have an ability to dump\n>Blobs, but the flag (-b) to enable the feature has been disabled. Why?\n\nIs it? AFAIK, it should not have been disabled. The long params version is\n--blob - does that work?\n\nPersonally, I'd like to see them in 7.1, unless the testing period reveals\na swag of major flaws...\n\nOnce CVS is up again, I intend to do a little more work on pg_dump, so now\nwould be a good time to let me know if there are problems.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 13 Sep 2000 22:41:37 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: current is broken "
},
{
"msg_contents": "> >New pg_dump looks great. Could you add docs for it so that we can test\n> >it out?\n> It's on the list - I'm still waiting for an upgrade to Framemaker, since I\n> can't stomach raw SGML. If it doesn't arrive soon, I'll just have to do it\n> the hard way.\n\nIf necessary, you can type straight into the existing docs without doing\nmuch about markup, and I'll fix up the sgml tags later.\n\nafaik I've never seen sgml from FM; it will be interesting to see how it\nturns out.\n\n - Thomas\n",
"msg_date": "Wed, 13 Sep 2000 13:42:40 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: current is broken"
},
{
"msg_contents": "At 21:35 13/09/00 +0900, Tatsuo Ishii wrote:\n>> >BTW, does anybody know about the status of pg_dump? It seems tons of\n>> >features have been added, but I wonder if all of them are going to\n>> >appear in 7.1. Especially now it seems to have an ability to dump\n>> >Blobs, but the flag (-b) to enable the feature has been disabled. Why?\n>> \n>> Is it? AFAIK, it should not have been disabled. The long params version is\n>> --blob - does that work?\n>\n>Ok, long params works. It seems just adding 'b' to the third argument\n>of getopt() solves \"-b\" option problem. Do you wnat to fix by yourself?\n\nMay as well, since I'll be doing some other stuff.\n\n\n>> Personally, I'd like to see them in 7.1, unless the testing period\n>> reveals a swag of major flaws... Once CVS is up again, I intend to\n>> do a little more work on pg_dump, so now would be a good time to let\n>> me know if there are problems.\n>\n>New pg_dump looks great. Could you add docs for it so that we can test\n>it out?\n\nIt's on the list - I'm still waiting for an upgrade to Framemaker, since I\ncan't stomach raw SGML. If it doesn't arrive soon, I'll just have to do it\nthe hard way.\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 14 Sep 2000 00:17:02 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: current is broken "
},
{
"msg_contents": "On Wed, Sep 13, 2000 at 03:44:25PM +1000, Philip Warner wrote:\n> At 13:07 13/09/00 +0900, Tatsuo Ishii wrote:\n> >It seems current source is broken if MB is enabled.\n\nGah, I Was afraid of this. My patch, I'm afraid.\n\n> >\n> \n> I haven't looked at the code yet, but isViewRule is going to change to use\n> the new reltype for views. Maybe this will side-step the problem?\n> \n\nYes, it should. However, I've just done a quick test in a non-MB compile, and\nit looks like char(n) = 'a string constant' returns true if the first n chars\nmatch. If this is correct behavior, and holds in the multibyte case, then\nyou can strip out _all_ the rulename truncation from pg_dump.\nHere's the example:\n\nI've got a view named: \" this_is_a_really_really_long_vi\", with matching rule:\n\"_RETthis_is_a_really_really_lon\"\n\nWithout truncation, pgdump generates (with addition of relname for\nreadability) this query (hand wrapped)\n\nreedstrm=# select relname,rulename from pg_class, pg_rewrite \n where pg_class.oid = ev_class and pg_rewrite.ev_type = '1' \n and rulename = '_RETthis_is_a_really_really_long_vi';\n\n relname | rulename \n---------------------------------+---------------------------------\n this_is_a_really_really_long_vi | _RETthis_is_a_really_really_lon\n(1 row)\n\nreedstrm=# \n\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Thu, 14 Sep 2000 17:40:57 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: current is broken"
},
{
"msg_contents": "> -----Original Message-----\n> From: Ross J. Reedstrom\n> \n> On Wed, Sep 13, 2000 at 03:44:25PM +1000, Philip Warner wrote:\n> > At 13:07 13/09/00 +0900, Tatsuo Ishii wrote:\n> > >It seems current source is broken if MB is enabled.\n> \n> Gah, I Was afraid of this. My patch, I'm afraid.\n> \n> > >\n> > \n> > I haven't looked at the code yet, but isViewRule is going to \n> change to use\n> > the new reltype for views. Maybe this will side-step the problem?\n> > \n> \n> Yes, it should. However, I've just done a quick test in a non-MB \n> compile, and\n> it looks like char(n) = 'a string constant' returns true if the \n> first n chars\n> match. If this is correct behavior, and holds in the multibyte case, then\n> you can strip out _all_ the rulename truncation from pg_dump.\n\nIsn't it a problem of backend side ?\nIt seems quite strange to me that clients should/could assume\nsuch a complicated rule. I was surprized to see how many \napplications have used complicated(but incomplete in some\ncases) definiton(criterion ?)s of views to see if a table is a\nview. \nNow we have a relkind for views and in addtion haven't we\nalready had pg_views view to encapsulate the definition of\nviews. \n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Fri, 15 Sep 2000 10:40:13 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: current is broken"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> The only thing that's missing is a 'rulekind' for rules - it would be very\n> nice if pg_dump could use a simple method (that didn't involve munging\n> names) to determin is a rule is a 'view rule'.\n\nOh, I finally see the problem: when you come to dump out the rules, you\nneed to avoid dumping the rules that correspond to views because you're\ngoing to emit the CREATE VIEW commands separately.\n\nYou don't really need a rulekind though. If it's an ON SELECT rule for\na relation that you've determined to be a view, then the rule is a\nview rule. Otherwise, you print the rule.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Sep 2000 22:23:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: current is broken "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> Philip Warner <[email protected]> writes:\n> > The only thing that's missing is a 'rulekind' for rules - it \n> would be very\n> > nice if pg_dump could use a simple method (that didn't involve munging\n> > names) to determin is a rule is a 'view rule'.\n> \n> Oh, I finally see the problem: when you come to dump out the rules, you\n> need to avoid dumping the rules that correspond to views because you're\n> going to emit the CREATE VIEW commands separately.\n> \n> You don't really need a rulekind though. If it's an ON SELECT rule for\n> a relation that you've determined to be a view, then the rule is a\n> view rule. Otherwise, you print the rule.\n>\n\nIs it guaranteed that ON SELECT rule is unique per view in the future ?\n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Fri, 15 Sep 2000 11:36:16 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: current is broken "
},
{
"msg_contents": "At 10:40 15/09/00 +0900, Hiroshi Inoue wrote:\n>\n>Now we have a relkind for views and in addtion haven't we\n>already had pg_views view to encapsulate the definition of\n>views. \n>\n\nThe only thing that's missing is a 'rulekind' for rules - it would be very\nnice if pg_dump could use a simple method (that didn't involve munging\nnames) to determin is a rule is a 'view rule'.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 15 Sep 2000 12:44:52 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: current is broken"
},
{
"msg_contents": "At 22:23 14/09/00 -0400, Tom Lane wrote:\n>\n>Oh, I finally see the problem: when you come to dump out the rules, you\n>need to avoid dumping the rules that correspond to views because you're\n>going to emit the CREATE VIEW commands separately.\n>\n>You don't really need a rulekind though. If it's an ON SELECT rule for\n>a relation that you've determined to be a view, then the rule is a\n>view rule. Otherwise, you print the rule.\n>\n\nIt looks like some nice person has defined the 'pg_rules' view which does\nnot return rules used in views. I'll try using this since it removes\nanother layer of backend knowledge from pg_dump.\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 15 Sep 2000 14:16:43 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: current is broken "
},
{
"msg_contents": "\nI've now committed the latest pg_dump. The following is the list of changes:\n\n- Support for relkind = RELKIND_VIEW.\n- Use symbols for tests on relkind (ie. use RELKIND_VIEW, not 'v')\n- Fix bug in support for -b option (== --blobs).\n- Dump views as views (using 'create view').\n- Remove 'isViewRule' since we check the relkind when getting tables.\n- Now uses temp table 'pgdump_oid' rather than 'pg_dump_oid' (errors\notherwise).\n- Added extra param for specifying handling of OID=0 and which typename to\noutput.\n- Fixed bug in SQL scanner when SQL contained braces. (in rules)\n- Use format_type function wherever possible\n\nIt works on all my DBs, and on the regression DB, so I am at least mildly\noptimistic. The main issue that I think might arise are from the use of\nformat_type in function definitions. It's easy to change if we have to go\nback to using typname.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 15 Sep 2000 15:43:25 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "New pg_dump committed..."
},
{
"msg_contents": "> Done. Do you know often the web-based version of the documentation get\n> updated?\n\nShould be twice a day. afaik you can go to hub.org:~thomas/CURRENT and\nrun ./docbuild. Make sure your umask is set to 2 (so I can update files\nafter that) and you may want to detach the command and log it to a file\nsince it will take 5-10min to run.\n\n - Thomas\n",
"msg_date": "Wed, 18 Oct 2000 13:55:43 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump docs"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Done. Do you know often the web-based version of the documentation get\n>> updated?\n\n> Should be twice a day.\n\nWhere is this auto-updated copy hiding? The bookmark I have,\n\thttp://www.postgresql.org/docs/postgres/index.html\nis pointing at files that haven't updated for months...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Oct 2000 13:28:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_dump docs "
},
{
"msg_contents": "> > Should be twice a day.\n> Where is this auto-updated copy hiding? The bookmark I have,\n> http://www.postgresql.org/docs/postgres/index.html\n> is pointing at files that haven't updated for months...\n\nRight. Last updated at the last release.\n\nThe developer's versions from the current tree are at\n\n http://www.postgresql.org/devel-corner/docs/postgres\n\n(and admin,programmer,tutorial,user)\n\nI don't see a reference from the developer's corner web page, which\nseems to point back to the released version instead.\n\n - Thomas\n",
"msg_date": "Thu, 19 Oct 2000 04:39:00 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_dump docs"
},
{
"msg_contents": "On Thu, 19 Oct 2000, Thomas Lockhart wrote:\n\n> > > Should be twice a day.\n> > Where is this auto-updated copy hiding? The bookmark I have,\n> > http://www.postgresql.org/docs/postgres/index.html\n> > is pointing at files that haven't updated for months...\n> \n> Right. Last updated at the last release.\n> \n> The developer's versions from the current tree are at\n> \n> http://www.postgresql.org/devel-corner/docs/postgres\n> \n> (and admin,programmer,tutorial,user)\n> \n> I don't see a reference from the developer's corner web page, which\n> seems to point back to the released version instead.\n\nThat's strange... I see it :) ...now\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 19 Oct 2000 05:56:27 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_dump docs"
},
{
"msg_contents": "> > I don't see a reference from the developer's corner web page, which\n> > seems to point back to the released version instead.\n> That's strange... I see it :) ...now\n\nGreat! But I don't, maybe due to caching somewhere? Should I be seeing \n\n http://www.postgresql.org/devel-corner/\n -> Current documentation\n\npointing at\n\n http://www.postgresql.org/devel-corner/docs/index.html\n\n(afaik this index.html does not yet exist, but could point to the\nvarious flavors of pages and tarballs) rather than\n\n http://www.postgresql.org/docs/index.html\n\nwhich is what I see now?\n\n - Thomas\n",
"msg_date": "Thu, 19 Oct 2000 13:16:55 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_dump docs"
},
{
"msg_contents": "On Thu, 19 Oct 2000, Thomas Lockhart wrote:\n\n> > > I don't see a reference from the developer's corner web page, which\n> > > seems to point back to the released version instead.\n> > That's strange... I see it :) ...now\n> \n> Great! But I don't, maybe due to caching somewhere? Should I be seeing \n> \n> http://www.postgresql.org/devel-corner/\n> -> Current documentation\n> \n> pointing at\n> \n> http://www.postgresql.org/devel-corner/docs/index.html\n> \n> (afaik this index.html does not yet exist, but could point to the\n> various flavors of pages and tarballs) rather than\n> \n> http://www.postgresql.org/docs/index.html\n> \n> which is what I see now?\n\nI thought /docs/index.html was to be for the current docs. Since they're\nnot, what ARE they pointing to?? Anyway, I've now got it pointing to\ndevel-contrib/docs/index.html and created an index. If you ever need to \nupdate the index, look at the script makeindex in that directory.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 19 Oct 2000 11:13:03 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_dump docs"
},
{
"msg_contents": "On Fri, 20 Oct 2000, Thomas Lockhart wrote:\n\n> Vince Vielhaber wrote:\n> > \n> > On Thu, 19 Oct 2000, Thomas Lockhart wrote:\n> > \n> > > > > I don't see a reference from the developer's corner web page, which\n> > > > > seems to point back to the released version instead.\n> > > > That's strange... I see it :) ...now\n> > >\n> > > Great! But I don't, maybe due to caching somewhere? Should I be seeing\n> > >\n> > > http://www.postgresql.org/devel-corner/\n> > > -> Current documentation\n> > >\n> > > pointing at\n> > >\n> > > http://www.postgresql.org/devel-corner/docs/index.html\n> > >\n> > > (afaik this index.html does not yet exist, but could point to the\n> > > various flavors of pages and tarballs) rather than\n> > >\n> > > http://www.postgresql.org/docs/index.html\n> > >\n> > > which is what I see now?\n> > \n> > I thought /docs/index.html was to be for the current docs. Since they're\n> > not, what ARE they pointing to?? Anyway, I've now got it pointing to\n> > devel-contrib/docs/index.html and created an index. If you ever need to\n> > update the index, look at the script makeindex in that directory.\n> \n> docs/index.html *are* the \"current docs\". They correspond to the current\n> released version, which would be 7.0.x (though it is possible that no\n> one updated them for 7.0.2).\n> \n> For developers (-hackers developers, not application developers using\n> the current release), the \"current docs\" correspond to the docs built\n> nightly (actually twice a day), which reflect the current development\n> tree.\n\nI see the problem. You're not of BSD and I was thinking you were. I'm \nguessing Marc is catching on right now. To me, \"current\" means the\ndocs that correspond to the current (or development) code. The release\nI refer to is either stable or (more appropriately) \"release\" docs.\n\nEither way, I think I have things figured out now.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 19 Oct 2000 22:14:04 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_dump docs"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> \n> On Thu, 19 Oct 2000, Thomas Lockhart wrote:\n> \n> > > > I don't see a reference from the developer's corner web page, which\n> > > > seems to point back to the released version instead.\n> > > That's strange... I see it :) ...now\n> >\n> > Great! But I don't, maybe due to caching somewhere? Should I be seeing\n> >\n> > http://www.postgresql.org/devel-corner/\n> > -> Current documentation\n> >\n> > pointing at\n> >\n> > http://www.postgresql.org/devel-corner/docs/index.html\n> >\n> > (afaik this index.html does not yet exist, but could point to the\n> > various flavors of pages and tarballs) rather than\n> >\n> > http://www.postgresql.org/docs/index.html\n> >\n> > which is what I see now?\n> \n> I thought /docs/index.html was to be for the current docs. Since they're\n> not, what ARE they pointing to?? Anyway, I've now got it pointing to\n> devel-contrib/docs/index.html and created an index. If you ever need to\n> update the index, look at the script makeindex in that directory.\n\ndocs/index.html *are* the \"current docs\". They correspond to the current\nreleased version, which would be 7.0.x (though it is possible that no\none updated them for 7.0.2).\n\nFor developers (-hackers developers, not application developers using\nthe current release), the \"current docs\" correspond to the docs built\nnightly (actually twice a day), which reflect the current development\ntree.\n\n - Thomas\n",
"msg_date": "Fri, 20 Oct 2000 02:15:56 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_dump docs"
},
{
"msg_contents": "At 11:13 19/10/00 -0400, Vince Vielhaber wrote:\n>\n>I thought /docs/index.html was to be for the current docs. Since they're\n>not, what ARE they pointing to?? Anyway, I've now got it pointing to\n>devel-contrib/docs/index.html and created an index. If you ever need to \n>update the index, look at the script makeindex in that directory.\n>\n\nIt looks like all the HTML links on\nhttp://www.postgresql.org/devel-corner/docs/index.html seem to be broken.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 20 Oct 2000 13:07:42 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_dump docs"
},
{
"msg_contents": "On Fri, 20 Oct 2000, Philip Warner wrote:\n\n> At 11:13 19/10/00 -0400, Vince Vielhaber wrote:\n> >\n> >I thought /docs/index.html was to be for the current docs. Since they're\n> >not, what ARE they pointing to?? Anyway, I've now got it pointing to\n> >devel-contrib/docs/index.html and created an index. If you ever need to \n> >update the index, look at the script makeindex in that directory.\n> >\n> \n> I looks like all the HTML links on\n> http://www.postgresql.org/devel-corner/docs/index.html seem to be broken.\n\noops! Fixed.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 19 Oct 2000 23:39:46 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_dump docs"
},
{
"msg_contents": "> [Not to list]\n\nBack on list; thanks though for protecting me from ridicule ;)\n\n> >For developers (-hackers developers, not application developers using\n> >the current release), the \"current docs\" correspond to the docs built\n> >nightly (actually twice a day), which reflect the current development\n> >tree.\n> If http://www.postgresql.org/devel-corner/docs/user/ is supposed to be\n> based on CVS, then I must have done something wrong; the new pg_restore\n> entry has not appeared (it's now > 24 hours old).\n> Any hints would be appreciated.\n\nAh, the build has been failing for at least the last few days due to\nsmall problems in new content. Since I receive ~700 logs of doc builds\neach year (well, that is the annual rate but I've only stepped up to\ntwice daily since ~April), I get sloppy about looking through them\ncarefully, and instead tend to look for the *length* of the log as a\nmeasure of success while rarely examining the end of the log to see how\nit actually went. In this case I missed the failure.\n\nbtw, the build log is updated and posted at\n\n http://www.postgresql.org/devel-corner/docs/docbuild.log\n\nVince, could we get a cross reference to this on the developer's page?\n\n\nThe current problem is in using underscores in the \"id\" field of header\ntags; this is an illegal character in this context per DocBook (a\nfeature not at all obvious, but which can be seen by omission in our\nother docs).\n\npg_restore.sgml also had quite a few ^M's at the end of lines; I've got\na utility to clean those up so I applied those fixes also.\n\nWhile tracking down another problem with replicated ID fields in\nruntime.sgml, I caught a duplicated section and removed the apparently\nolder version.\n\nI've also applied a couple of fixes suggested by Laser Henry.\n\nThings now build without errors on my local machine, and should do the\nsame on postgresql.org.\n\n - Thomas\n",
"msg_date": "Fri, 20 Oct 2000 14:21:06 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_dump docs"
},
{
"msg_contents": "On Fri, 20 Oct 2000, Thomas Lockhart wrote:\n\n> Ah, the build has been failing for at least the last few days due to\n> small problems in new content. Since I receive ~700 logs of doc builds\n> each year (well, that is the annual rate but I've only stepped up to\n> twice daily since ~April), I get sloppy about looking through them\n> carefully, and instead tend to look for the *length* of the log as a\n> measure of success while rarely examining the end of the log to see how\n> it actually went. In this case I missed the failure.\n> \n> btw, the build log is updated and posted at\n> \n> http://www.postgresql.org/devel-corner/docs/docbuild.log\n> \n> Vince, could we get a cross reference to this on the developer's page?\n\nI was gonna ask you if you wanted that. Anyway it's now there. And\nbefore anyone says it, yes I know about the duplicate link. :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 20 Oct 2000 11:51:00 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_dump docs"
}
] |
[
{
"msg_contents": "You have to tell us whether you plan to implement \na safe file rename in WAL ? If yes a simple filename\nwithout version would be possible and better.\n\nAndreas\n\n> Where are we in subj?\n> Oid.version or UniqueId?\n> If noone is going to implement it soon then I'll have to\n> change code to OID file naming for WAL.\n> \n> Vadim\n> \n",
"msg_date": "Wed, 13 Sep 2000 10:24:01 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Status of new relation file naming"
}
] |
[
{
"msg_contents": "Hello,\n\nI'd like to know if anyone can reproduce a strange error in V7.02\n(debian i386). It happens when an index-scan is done in an index with\nmore than 80 entries, using a like-match, where the %-wildcard\ndirectly follows a '/'. A simple example (which creates a database\ntestdb) is appended. In my installation the first count is 99, the\nsecond 0.\n\nciao\nAndreas\n\n#! /bin/sh\nset -e\npsql <<.\n\\set ON_ERROR_STOP 1\ncreate database testdb;\n\\c testdb\ncreate table test (a text);\n.\ni=1\nwhile test $i -lt 100; do\n echo \"insert into test values('/a');\"\n i=`expr $i + 1`\ndone | psql -d testdb\npsql -d testdb <<.\nselect count(*) from test where a like '/%';\ncreate index test1 on test(a);\nselect count(*) from test where a like '/%';\n\\c template1\ndrop database testdb;\n",
"msg_date": "13 Sep 2000 10:36:54 +0200",
"msg_from": "Andreas Degert <[email protected]>",
"msg_from_op": true,
"msg_subject": "like-operator on index-scan"
},
{
"msg_contents": "Andreas Degert <[email protected]> writes:\n> I'd like to know if anyone can reproduce a strange error in V7.02\n> (debian i386). It happens when an index-scan is done in an index with\n> more than 80 entries, using a like-match, where the %-wildcard\n> directly follows a '/'.\n\nSomeone else just reported this a couple days ago. Very odd. I suspect\nthe problem is locale-related; what LOCALE do you run the postmaster in?\n\nAlso, it might help to look at the output of EXPLAIN VERBOSE for\nthe misbehaving query. That would let us see what indexscan limits\nare being generated.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2000 11:22:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: like-operator on index-scan "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Andreas Degert <[email protected]> writes:\n> > I'd like to know if anyone can reproduce a strange error in V7.02\n> > (debian i386). It happens when an index-scan is done in an index with\n> > more than 80 entries, using a like-match, where the %-wildcard\n> > directly follows a '/'.\n> \n> Someone else just reported this a couple days ago. Very odd. I suspect\n> the problem is locale-related; what LOCALE do you run the postmaster in?\n\nyes, i just found that out myself (should check my mail more often :)).\n\nIt seems it doesn't make any difference which locale it is; I tried\nde_DE (that's what I'm using normally), en_US, en_GB, ...\n\n> \n> Also, it might help to look at the output of EXPLAIN VERBOSE for\n> the misbehaving query. That would let us see what indexscan limits\n> are being generated.\n\nThis is the output of\n\nexplain verbose select count(*) from test where a like '/%';\n\n(hand-formatted.. first and last time i've done that :))\n\n{ AGG\n :startup_cost 2.02 :total_cost 2.02 :rows 1 :width 4 :state <>\n :qptargetlist\n ({ TARGETENTRY\n :resdom { RESDOM\n :resno 1 :restype 23 :restypmod -1\n :resname count :reskey 0 :reskeyop 0\n :ressortgroupref 0 :resjunk false }\n :expr { AGGREG\n :aggname count :basetype 0 :aggtype 23\n :target { CONST :consttype 23 :constlen 4\n :constisnull false\n :constvalue 4 [ 1 0 0 0 ]\n :constbyval true }\n :usenulls false :aggstar true\n :aggdistinct false }\n })\n :qpqual <>\n :lefttree\n { INDEXSCAN\n :startup_cost 0.00 :total_cost 2.01 :rows 1 :width 4\n :state <> :qptargetlist <>\n :qpqual\n ({ EXPR\n :typeOid 16 :opType op\n :oper { OPER :opno 1209 :opid 850 :opresulttype 16 }\n :args\n ({ VAR :varno 1 :varattno 1 :vartype 25 :vartypmod -1\n :varlevelsup 0 :varnoold 1 :varoattno 1}\n { CONST :consttype 25 :constlen -1 :constisnull false\n :constvalue 6 [ 6 0 0 0 47 37 ] :constbyval false }\n )\n })\n :lefttree <> :righttree <> :extprm () :locprm () :initplan <>\n :nprm 0 :scanrelid 1 :indxid ( 30208)\n :indxqual\n (({ EXPR\n :typeOid 16 :opType op\n :oper { OPER :opno 667 :opid 743 :opresulttype 16 }\n :args ({ VAR :varno 1 :varattno 1 :vartype 25 :vartypmod -1\n :varlevelsup 0 :varnoold 1 :varoattno 1}\n { CONST :consttype 25 :constlen -1 :constisnull false\n :constvalue 5 [ 5 0 0 0 47 ] :constbyval false }\n )}\n { EXPR\n :typeOid 16 :opType op\n :oper { OPER :opno 664 :opid 740 :opresulttype 16 }\n :args ({ VAR :varno 1 :varattno 1 :vartype 25 :vartypmod -1\n :varlevelsup 0 :varnoold 1 :varoattno 1}\n { CONST :consttype 25 :constlen -1 :constisnull false\n :constvalue 5 [ 5 0 0 0 48 ] :constbyval false }\n )\n }\n ))\n :indxqualorig\n (({ EXPR :typeOid 16 :opType op\n :oper { OPER :opno 667 :opid 743 :opresulttype 16 }\n :args ({ VAR :varno 1 :varattno 1 :vartype 25\n :vartypmod -1 :varlevelsup 0 :varnoold 1\n :varoattno 1}\n { CONST :consttype 25 :constlen -1 :constisnull false\n :constvalue 5 [ 5 0 0 0 47 ]\n :constbyval false }\n )\n }\n { EXPR :typeOid 16 :opType op\n :oper { OPER :opno 664 :opid 740 :opresulttype 16 }\n :args ({ VAR :varno 1 :varattno 1 :vartype 25 :vartypmod -1\n :varlevelsup 0 :varnoold 1 :varoattno 1}\n { CONST :consttype 25 :constlen -1 :constisnull false\n :constvalue 5 [ 5 0 0 0 48 ]\n :constbyval false }\n )\n }\n ))\n :indxorderdir 1\n }\n :righttree <> :extprm () :locprm () :initplan <> :nprm 0\n}\n\nIt looks like the like-match is converted into a \"x >= a and x < b\"\nform of expression (opno 664 is text_lt and opno 667 is text_ge, and\n47 == ascii(/)), which doesn't work with collating order in most\nlocales? But it must be more compicated, because the query works when\nthere are less then 80 entries in the index. How many go onto a page?\n\nciao\nAndreas\n",
"msg_date": "13 Sep 2000 22:51:37 +0200",
"msg_from": "Andreas Degert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: like-operator on index-scan"
},
{
"msg_contents": "Andreas Degert <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> Also, it might help to look at the output of EXPLAIN VERBOSE for\n>> the misbehaving query. That would let us see what indexscan limits\n>> are being generated.\n\n> This is the output of\n> explain verbose select count(*) from test where a like '/%';\n> (hand-formatted.. first and last time i've done that :))\n\n> :indxqual\n> (({ EXPR\n> :typeOid 16 :opType op\n> :oper { OPER :opno 667 :opid 743 :opresulttype 16 }\n> :args ({ VAR :varno 1 :varattno 1 :vartype 25 :vartypmod -1\n> :varlevelsup 0 :varnoold 1 :varoattno 1}\n> { CONST :consttype 25 :constlen -1 :constisnull false\n> :constvalue 5 [ 5 0 0 0 47 ] :constbyval false }\n> )}\n> { EXPR\n> :typeOid 16 :opType op\n> :oper { OPER :opno 664 :opid 740 :opresulttype 16 }\n> :args ({ VAR :varno 1 :varattno 1 :vartype 25 :vartypmod -1\n> :varlevelsup 0 :varnoold 1 :varoattno 1}\n> { CONST :consttype 25 :constlen -1 :constisnull false\n> :constvalue 5 [ 5 0 0 0 48 ] :constbyval false }\n> )\n> }\n> ))\n\n> It looks like the like-match is converted into a \"x >= a and x < b\"\n> form of expression (opno 664 is text_lt and opno 667 is text_ge, and\n> 47 == ascii(/)), which doesn't work with collating order in most\n> locales?\n\nWhat we've got here is x >= '/' AND x < '0', which should work as far\nas I can see, unless your machine uses a really peculiar collation\norder.\n\nWhat happens if you try the query in the form\n\nselect count(*) from test where a >= '/' and a < '0'\n\nDo you get the same behavior? If so, try changing the index bounds to\nsee where it works and stops working.\n\n> But it must be more compicated, because the query works when\n> there are less then 80 entries in the index. How many go onto a page?\n\nMore than that, I'd think, at least for strings as short as you showed\nin your example. An index item only has about a dozen bytes of\noverhead, so for short strings you ought to get three or four hundred\nper index page. You can check this by looking to see if the index\nfile has grown past its minimum size of 2 pages (16K).\n\nThe whole thing is quite peculiar. Your example works fine for me;\ncan anyone else duplicate the failure, and if so on what platform?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2000 20:31:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: like-operator on index-scan "
}
] |
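The optimization under discussion in the thread above — rewriting a prefix pattern `a LIKE '/%'` into the range `a >= '/' AND a < '0'` by bumping the last byte of the prefix — can be sketched in Python. This is only an illustration of the idea under a byte-wise (C locale) collation, not the backend's actual implementation; all helper names are made up:

```python
def prefix_to_range(prefix):
    """Rewrite LIKE 'prefix%' into range bounds, assuming byte-wise
    (C locale) collation: x LIKE 'p%'  <=>  p <= x < p', where p' is
    p with its last character bumped to the next code point."""
    return prefix, prefix[:-1] + chr(ord(prefix[-1]) + 1)

def like_prefix_match(x, prefix):
    # Semantics of LIKE 'prefix%'
    return x.startswith(prefix)

def range_match(x, prefix):
    # Semantics of the rewritten indexable range qual
    lo, hi = prefix_to_range(prefix)
    return lo <= x < hi

# '/%' becomes >= '/' AND < '0', matching the EXPLAIN output
# (ascii('/') == 47, ascii('0') == 48):
assert prefix_to_range("/") == ("/", "0")

# Under byte-wise collation the two predicates select the same rows:
rows = ["/a", "/b", "/zzz", "/", "0a", ".z"]
assert [r for r in rows if like_prefix_match(r, "/")] == \
       [r for r in rows if range_match(r, "/")]
```

Under a locale whose collation order is not byte-wise, the equivalence can break, which is consistent with the locale suspicion raised in the thread.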
[
{
"msg_contents": "Hello Hackers,\n\nwith V7.02, it seems when a constraint evalutes to 'null', it behaves\nlike 'true'. I'm rather sure this behaviour changed from V6.x, though I \ncan't check it.\n\nexample:\n\n create table test (a int4 check (a > 0));\n\nallows a to be null. But\n\n select * from test where a > 0;\n\ndoesn't return any null values.\n\nIs this the intended behaviour?\n\ncheers\nAndreas\n",
"msg_date": "13 Sep 2000 10:44:46 +0200",
"msg_from": "Andreas Degert <[email protected]>",
"msg_from_op": true,
"msg_subject": "null in constraints"
},
{
"msg_contents": "Andreas Degert <[email protected]> writes:\n> with V7.02, it seems when a constraint evalutes to 'null', it behaves\n> like 'true'. I'm rather sure this behaviour changed from V6.x, though I \n> can't check it.\n\nYes, it did change. The previous behavior was not compliant with SQL92:\n\n 4.10.2 Table constraints\n\n A table constraint is either a unique constraint, a referential\n constraint or a table check constraint.\n\t [ snip ]\n A table check constraint is satisfied if and only if the specified\n <search condition> is not false for any row of a table.\n\n\"Not false\" is the spec's way of saying \"true or unknown (ie, NULL)\".\n\nIt's not particularly consistent with the behavior of WHERE clauses,\nwherein NULL is treated like FALSE:\n\n 7.6 <where clause>\n\n 1) The <search condition> is applied to each row of T. The result\n of the <where clause> is a table of those rows of T for which\n the result of the <search condition> is true.\n\nNote the difference in wording. \"true\" and \"not false\" are not the same\nthing in 3-valued boolean logic.\n\n> Is this the intended behaviour?\n\nWell, it does mean that you can put on a constraint like \"X > 0\" without\nautomatically requiring X to be non-null, as it did in our earlier code.\nIf you also want to constrain X to be non-null, you can specify NOT NULL\nalong with the constraint clause. So it's more flexible this way. Or\nat least I suppose that was the SQL committee's reasoning.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2000 11:10:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: null in constraints "
}
] |
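Tom's distinction above — SQL92 says a CHECK constraint is satisfied when its condition is *not false*, while WHERE keeps a row only when its condition is *true* — can be modeled in a few lines of Python using `None` for SQL NULL. This is a sketch of the three-valued logic, not PostgreSQL code:

```python
def sql_gt(a, b):
    """Three-valued '>': any NULL (None) operand yields unknown (None)."""
    if a is None or b is None:
        return None
    return a > b

def check_satisfied(cond):
    """SQL92 4.10.2: a CHECK constraint is satisfied iff its condition
    is *not false* -- both true and unknown pass."""
    return cond is not False

def where_keeps(cond):
    """SQL92 7.6: WHERE keeps a row only when its condition is *true*."""
    return cond is True

# create table test (a int4 check (a > 0)); insert a NULL row:
assert check_satisfied(sql_gt(None, 0))    # NULL passes the CHECK...
assert not where_keeps(sql_gt(None, 0))    # ...but 'WHERE a > 0' drops it
assert check_satisfied(sql_gt(5, 0)) and where_keeps(sql_gt(5, 0))
assert not check_satisfied(sql_gt(-1, 0))  # false still violates the CHECK
```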
[
{
"msg_contents": "\n> with V7.02, it seems when a constraint evalutes to 'null', it behaves\n> like 'true'. I'm rather sure this behaviour changed from \n> V6.x, though I \n> can't check it.\n> \n> example:\n> \n> create table test (a int4 check (a > 0));\n> \n> allows a to be null. But\n\nyes\n\n> \n> select * from test where a > 0;\n> \n> doesn't return any null values.\n\nyes\n\n> \n> Is this the intended behaviour?\n\nYes, previous behavior was wrong.\n\nAndreas\n",
"msg_date": "Wed, 13 Sep 2000 10:53:20 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: null in constraints"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm not sure if this has been already discussed..\n\nRenaming a database table doesn't change automatically created index\nnames, so that if you do\n\ncreate table t(a int primary key);\nalter table t rename to t1;\ncreate table t(a int primary key);\n\nthen the second create will fail. Would it be possible to have the\nalter table command to also rename the index? The same goes for\ncolumns with unique.\n\nciao\nAndreas\n",
"msg_date": "13 Sep 2000 10:59:30 +0200",
"msg_from": "Andreas Degert <[email protected]>",
"msg_from_op": true,
"msg_subject": "alter table rename to .."
}
] |
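The collision Andreas reports happens because the implicitly created index keeps a name derived from the *old* table name. A toy Python model of the bookkeeping — the `<table>_pkey` naming scheme is an assumption about how the implicit primary-key index is named, and the catalog is just a set:

```python
def implicit_pkey_index(table):
    # Assumed naming scheme for the index backing PRIMARY KEY
    return table + "_pkey"

indexes = set()

# create table t(a int primary key);
indexes.add(implicit_pkey_index("t"))

# alter table t rename to t1;  -- the table is renamed, the index is not,
# so nothing happens to 'indexes' here, which is exactly the problem.

# create table t(a int primary key);  -- wants to create "t_pkey" again:
assert implicit_pkey_index("t") in indexes  # name clash, second create fails
```

Having ALTER TABLE ... RENAME also rename the implicit index would amount to replacing the old entry with `implicit_pkey_index("t1")`, removing the clash.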
[
{
"msg_contents": "Hello,\n\nI wrote this mail to pgsql-general, but have no mail from it since\nyesterday. If someone replied to me, please resend the reply privately.\n\n---------- Forwarded Message ----------\nSubject: Problems inserting data\nDate: Tue, 12 Sep 2000 17:02:57 +0700\nFrom: Denis Perchine <[email protected]>\n\n\nHello,\n\nI have really strange problem with insert.\nIt worked before...\n\nSep 12 04:48:34 mx postgres[25768]: DEBUG: query: insert into listmembers (server_id,email,name) values(12836,'[email protected]','LAN IV')\nSep 12 04:48:34 mx postgres[25768]: ERROR: Index 13853499 does not exist\n\nWhat does this error mean???\nFrom the source code I can get that this mean that relation with this oid is\ninvalid, but problem is that I do not have such relation in pg_class.\n\nwebmailstation=> select * from pg_class where oid=13853499;\n relname | reltype | relowner | relam | relpages | reltuples | rellongrelid | relhasindex | relisshared | relkind | relnatts | relchecks | reltriggers | relukeys | relfkeys | relrefs | relhaspkey | relhasrules | relhassubclass | relacl\n---------+---------+----------+-------+----------+-----------+--------------+-------------+-------------+---------+----------+-----------+-------------+----------+----------+---------+------------+-------------+----------------+--------\n(0 rows)\n\nAnd I can not found it in pg_index.\n\nwebmailstation=> select * from pg_index where indexrelid=13853499;\n indexrelid | indrelid | indproc | indkey | indclass | indisclustered | indislossy | indhaskeytype | indisunique | indisprimary | indreference | indpred\n------------+----------+---------+--------+----------+----------------+------------+---------------+-------------+--------------+--------------+---------\n(0 rows)\n\nI have rebuild indices for listmembers table, but this does not help.\nAny thoughts of what is this can be?\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n-------------------------------------------------------\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Wed, 13 Sep 2000 18:07:56 +0700",
"msg_from": "Denis Perchine <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Problems inserting data"
},
{
"msg_contents": "> -----Original Message-----\n> From: Denis Perchine\n> \n> Hello,\n> \n> I wrote this mail to pgsql-general, but have no mail from it since\n> yesterday. If someone replied to me, please resend the reply privately.\n>\n\nI don't have any mail from pgsql-general for 2 days either.\n \n> \n> I have really strange problem with insert.\n> It worked before...\n> \n> Sep 12 04:48:34 mx postgres[25768]: DEBUG: query: insert into \n> listmembers (server_id,email,name) \n> values(12836,'[email protected]','LAN IV')\n> Sep 12 04:48:34 mx postgres[25768]: ERROR: Index 13853499 does not exist\n> \n> What does this error mean???\n> From the source code I can get that this mean that relation with \n> this oid is\n> invalid, but problem is that I do not have such relation in pg_class.\n> \n> webmailstation=> select * from pg_class where oid=13853499;\n\nCould you try \n\tselect oid from pg_class;\nand find an oid=13853499 entry ?\n\n> \n> And I can not found it in pg_index.\n> \n> webmailstation=> select * from pg_index where indexrelid=13853499;\n\nCould you also try\n\tselect indexrelid from pg_index;\nand find an indexrelid=13853499 entry ?\n\nIf you could find such an entry,your system indexes may be broken.\n\nRegards.\n\nHiroshi Inoue\n\t\n \n",
"msg_date": "Thu, 14 Sep 2000 06:17:32 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Fwd: Problems inserting data"
}
] |
[
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> I thought I would test the latest changes to pg_dump on the regression db,\n> and got the following output for an aggregate:\n\n> CREATE AGGREGATE newcnt (BASETYPE = opaque, SFUNC = int4inc, STYPE =\n> int4, INITCOND = '0' );\n\n> Unfortunately, the backend produces the following error when this statement\n> is executed:\n\n> ERROR: AggregateCreate: Type 'opaque' undefined\n\nThe command needs to read \"basetype = any\". I guess you'll have to\nspecial-case this in pg_dump (or more accurately, change the special\ncase that's probably there now for aggbasetype = 0). I think I changed\nthe aggregate regression test to exercise basetype = any not long ago.\nIt didn't before, which is why you didn't see the failure before.\n\n\n> I vaguely recall seeing something about pg_dump not working of the\n> regression db, but would be interested to know if this is the known\n> problem,\n\nNo, the known problem is that ALTER TABLE on a inheritance hierarchy\nscrews up the column ordering of the child tables:\n\n* create parent table w/columns a,b,c\n* create child table adding columns d,e to parent\n* alter parent* add column f\n\nAt this point the parent has columns a,b,c,f and the child has\na,b,c,d,e,f --- in that order.\n\npg_dump will now produce a script that creates parent with a,b,c,f\nand then creates child adding d,e, so that the child table has\ncolumns a,b,c,f,d,e --- in that order. Unfortunately the COPY output\nfor the child has the columns in order a,b,c,d,e,f, so the data reload\nfails.\n\nIMHO this is not pg_dump's fault, it's a bug in ALTER TABLE. 
See the\narchives for prior discussions of how ALTER TABLE might be fixed so that\nthe child has the \"correct\" column order a,b,c,f,d,e right off the bat.\n\nIn the meantime, it's possible to work around this if you use pg_dump's\nmost verbose form of data dumping, where the data is reloaded by INSERT\ncommands with called-out column names (I forget what the option name\nis). You should find that pg_dump will work on the regression database\nif you use that option.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2000 12:34:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump of regression db? "
},
{
"msg_contents": "\nI thought I would test the latest changes to pg_dump on the regression db,\nand got the following output for an aggregate:\n\n CREATE AGGREGATE newcnt (BASETYPE = opaque, SFUNC = int4inc, STYPE =\nint4, INITCOND = '0' );\n\nUnfortunately, the backend produces the following error when this statement\nis executed:\n\n ERROR: AggregateCreate: Type 'opaque' undefined\n\nI vaguely recall seeing something about pg_dump not working of the\nregression db, but would be interested to know if this is the known\nproblem, and if there is any value in trying to fix it. \n\npsql shows the following:\n\nregression=# \\da newcnt\n List of aggregates\n Name | Type | Description\n--------+-------------+-------------\n newcnt | (all types) |\n(1 row)\n\nwhereas in 7.0.2, it shows int4, so my guess is that this is a problem with\npg_dump and the new function manager.\n\nAny suggestions would be appreciated.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 14 Sep 2000 02:55:15 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "pg_dump of regression db?"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> At 12:34 13/09/00 -0400, Tom Lane wrote:\n>> The command needs to read \"basetype = any\".\n\n> The particular piece of code (findTypeByOid) that does this is used to\n> display types other places (eg. function return types). My guess is that I\n> should use the new 'format_type' function in these as well, and have a flag\n> for the specific case of the aggregate dumping code.\n\n> So I would build the type info table with a new column that contains\n> 'typedefn', which is just the output of format_type(typeid, NULL), and\n> pass an 'opaque as any' flag when dumping aggregates.\n\n> Does this sound reasonable?\n\nThat would solve the immediate issue, but you might want to look a\nlittle further ahead. The real problem here is that a zero typeid is\n(mis) used for several different purposes in the existing backend code.\nIn this context we see two of them: zero representing \"any input\ndatatype is accepted by this function\" and zero representing \"opaque\ntype\". When you look at the uses of \"opaque\" you find that that has\nseveral different meanings as well. Eventually I would like to clean\nthis up by inventing distinct typeids for each shade of meaning ---\nwe already have one special typeid of this sort (UNKNOWN) and it seems\nlike having several of them would be a cleaner way to proceed than\noverloading zero with such wild abandon.\n\nI'm not entirely sure what that means for pg_dump; maybe it means that\nthe problem goes away and just printing the format_type output will\nwork in all contexts. But for now you should consider that there are\nseveral different possible interpretations of typeid zero. In short,\npass an enum not a boolean...\n\n>> No, the known problem is that ALTER TABLE on a inheritance hierarchy\n>> screws up the column ordering of the child tables:\n\n> Am I correct that someone was working on allowing a column order to be\n> specified in COPY commands? 
If so, this would fix the problem, I think.\n\nNo, that is a kluge that would allow pg_dump to work around ALTER\nTABLE's fundamental inadequacy. It's not a fix unless you think it's OK\nto expect every application forevermore to take special care with column\norders.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2000 22:43:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump of regression db? "
},
{
"msg_contents": "At 12:34 13/09/00 -0400, Tom Lane wrote:\n>\n>The command needs to read \"basetype = any\". I guess you'll have to\n>special-case this in pg_dump (or more accurately, change the special\n>case that's probably there now for aggbasetype = 0). I think I changed\n>the aggregate regression test to exercise basetype = any not long ago.\n>It didn't before, which is why you didn't see the failure before.\n\nThe particular piece of code (findTypeByOid) that does this is used to\ndisplay types other places (eg. function return types). My guess is that I\nshould use the new 'format_type' function in these as well, and have a flag\nfor the specific case of the aggregate dumping code.\n\nSo I would build the type info table with a new column that contains\n'typedefn', which is just the output of format_type(typeid, NULL), and\npass an 'opaque as any' flag when dumping aggregates.\n\nDoes this sound reasonable?\n\n\n>\n>> I vaguely recall seeing something about pg_dump not working of the\n>> regression db, but would be interested to know if this is the known\n>> problem,\n>\n>No, the known problem is that ALTER TABLE on a inheritance hierarchy\n>screws up the column ordering of the child tables:\n>\n...\n>\n>IMHO this is not pg_dump's fault, it's a bug in ALTER TABLE. See the\n>archives for prior discussions of how ALTER TABLE might be fixed so that\n>the child has the \"correct\" column order a,b,c,f,d,e right off the bat.\n>\n\nAm I correct that someone was working on allowing a column order to be\nspecified in COPY commands? If so, this would fix the problem, I think.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 14 Sep 2000 13:12:15 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump of regression db? "
},
{
"msg_contents": "At 12:34 13/09/00 -0400, Tom Lane wrote:\n>\n>The command needs to read \"basetype = any\". I guess you'll have to\n\nDoes this apply to any other parts of 'CREATE AGGREGATE' (or anywhere else?)\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 14 Sep 2000 13:39:47 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump of regression db? "
},
{
"msg_contents": "At 22:43 13/09/00 -0400, Tom Lane wrote:\n>\n>> Am I correct that someone was working on allowing a column order to be\n>> specified in COPY commands? If so, this would fix the problem, I think.\n>\n>No, that is a kluge that would allow pg_dump to work around ALTER\n>TABLE's fundamental inadequacy. It's not a fix unless you think it's OK\n>to expect every application forevermore to take special care with column\n>orders.\n>\n\nI suppose from an application programming point of view, I am used to\nhaving to specify column order in my 'select' statements (ie. I don't tend\nto let 'select * from...' into production code, and usually consider it bad\nform to do so), so I thought the issue was confined to COPY & pg_dump. And\nin the case of pg_dump, I would just explicitly set the column order when\nthey are dumped/restored.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 14 Sep 2000 14:13:32 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump of regression db? "
}
] |
[
{
"msg_contents": "> You have to tell us whether you plan to implement \n> a safe file rename in WAL ? If yes a simple filename\n> without version would be possible and better.\n\nWhat do you mean?\n\nVadim\n",
"msg_date": "Wed, 13 Sep 2000 10:29:37 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Status of new relation file naming"
}
] |
[
{
"msg_contents": "> Will this new storage manager replace the current one or will one be \n> able to choose which storage manager to use (at compile time, at \n> startup, for each table)?\n\nThis would be possible, but no way if new smgr will be overwriting one\n(smgr nature affects access methods).\n\n> PostgreSQL started as an extensible ORDBMS, but IIRC at some stage \n> all other SMs were thrown out.\n\nThere was just one additional smgr for stable memory. If someone has\nthis feature in comp then he could try to resurrect it.\n\n> I don't think it would be a good idea to completely abandon the \n> notion of storage manager as a replacable component.\n\nSmgr wrapper is still in place.\n\nVadim\n",
"msg_date": "Wed, 13 Sep 2000 10:36:31 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Status of new relation file naming"
}
] |
[
{
"msg_contents": "> My vote is for a random number, and then someone can write \n> the tools to display the file info. I'll even volunteer to\n> work on them...\n\nOk. If someone will decide to implement this please try to use\nRelFileNode structure defined in storage/relfilenode.h.\nIt's just place holder for the moment - I needed in *some*\nstructure as \"file pointer\" in log records - so feel free\nto change context.\n\nVadim\n \n",
"msg_date": "Wed, 13 Sep 2000 10:48:13 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Status of new relation file naming"
},
{
"msg_contents": "At 10:48 13/09/00 -0700, Mikheev, Vadim wrote:\n>> My vote is for a random number, and then someone can write \n>> the tools to display the file info. I'll even volunteer to\n>> work on them...\n>\n>Ok. If someone will decide to implement this please try to use\n>RelFileNode structure defined in storage/relfilenode.h.\n\nJust to clarify, I was offering to write the file info tools, not the\nchanges to the file handling in PG. Although I would be willing to help in\nthe latter for obvious reasons.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 14 Sep 2000 13:02:16 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Status of new relation file naming"
}
] |
[
{
"msg_contents": "I wasn't too sure where to mail this.\n\nI have noticed that there are some identical files in\npostgresql-7.0.2/src/test/regress/expected/\n\n> diff float8-cygwin.out float8-small-is-zero.out #I recommend deleting\nfloat8-cygwin.out\n> diff geometry-cygwin-precision.out geometry-solaris-precision.out #I\nrecommend deleting geometry-cygwin-precision.out\n\nbelow is the diff of postgresql-7.0.2/src/test/regress/resultmap\nthat has the above files deleted plus the addition of an alpha regression\ntest built with alphaev56-dec-osf4.0e/2.95.2/ . The alpha geometry\nregression file is attached\n\n11c11\n< float8/i.86-pc-cygwin*=float8-cygwin\n---\n> float8/i.86-pc-cygwin*=float8-small-is-zero\n18c18\n< geometry/i.86-pc-cygwin*=geometry-cygwin-precision\n---\n> geometry/i.86-pc-cygwin*=geometry-solaris-precision\n21a22\n> geometry/alpha.*-dec-osf=geometry-alpha-precision\n\n\n\nRicardo Muggli\nSystems Manager\nInformation and Technology Services\nMinnesota State University, Mankato",
"msg_date": "Wed, 13 Sep 2000 13:50:32 -0500 (CDT)",
"msg_from": "Ricardo Muggli <[email protected]>",
"msg_from_op": true,
"msg_subject": "minor fixes for regress"
},
{
"msg_contents": "Applied and updated.\n\n\n\n> I wasn't too sure where to mail this.\n> \n> I have noticed that there are some identical files in\n> postgresql-7.0.2/src/test/regress/expected/\n> \n> > diff float8-cygwin.out float8-small-is-zero.out #I recommend deleting\n> float8-cygwin.out\n> > diff geometry-cygwin-precision.out geometry-solaris-precision.out #I\n> recommend deleting geometry-cygwin-precision.out\n> \n> below is the diff of postgresql-7.0.2/src/test/regress/resultmap\n> that has the above files deleted plus the addition of an alpha regression\n> test built with alphaev56-dec-osf4.0e/2.95.2/ . The alpha geometry\n> regression file is attached\n> \n> 11c11\n> < float8/i.86-pc-cygwin*=float8-cygwin\n> ---\n> > float8/i.86-pc-cygwin*=float8-small-is-zero\n> 18c18\n> < geometry/i.86-pc-cygwin*=geometry-cygwin-precision\n> ---\n> > geometry/i.86-pc-cygwin*=geometry-solaris-precision\n> 21a22\n> > geometry/alpha.*-dec-osf=geometry-alpha-precision\n> \n> \n> \n> Ricardo Muggli\n> Systems Manager\n> Information and Technology Services\n> Minnesota State University, Mankato\nContent-Description: \n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 18:37:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: minor fixes for regress"
}
] |
[
{
"msg_contents": "I noticed today that current sources are not able to use an index\nfor a query like\n\tselect * from foo where bar like 'xyz%';\n\nIt seems that the parser now emits some kind of function call for LIKE\nexpressions, whereas the optimizer's code to use indexes for LIKE is\nlooking for an operator.\n\nI have more pressing things to do than try to teach the optimizer about\nlooking for function calls as well as operators, so I recommend\nreverting the parser's output to what it was.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Sep 2000 15:25:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Indexing of LIKE queries is broken in current sources"
},
{
"msg_contents": "> It seems that the parser now emits some kind of function call for LIKE\n> expressions, whereas the optimizer's code to use indexes for LIKE is\n> looking for an operator.\n> I have more pressing things to do than try to teach the optimizer about\n> looking for function calls as well as operators, so I recommend\n> reverting the parser's output to what it was.\n\nOh, that's bad. I changed it to the function call to allow\nthree-parameter LIKE clauses, which is a perfectly reasonable thing to\ndo imho.\n\nThis LIKE shortcircuit stuff is a hack anyway, but where should I look\nin the optimizer?\n\n - Thomas\n",
"msg_date": "Thu, 14 Sep 2000 05:03:41 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexing of LIKE queries is broken in current sources"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> It seems that the parser now emits some kind of function call for LIKE\n>> expressions, whereas the optimizer's code to use indexes for LIKE is\n>> looking for an operator.\n\n> Oh, that's bad. I changed it to the function call to allow\n> three-parameter LIKE clauses, which is a perfectly reasonable thing to\n> do imho.\n\nWell, it's a clean solution in and of itself, but it fails to take into\naccount what the rest of the system is able or not able to do.\n\n> This LIKE shortcircuit stuff is a hack anyway, but where should I look\n> in the optimizer?\n\nThe tip of this iceberg is the \"special index operator\" hacks in\nbackend/optimizer/path/indxpath.c. However, the true dimensions of\nthe problem don't become apparent until you realize that *none* of\nthe optimizer does anything much with function calls as opposed to\noperators. To get back up to the level of functionality we had for\nLIKE before, you'd also need to devise a way of doing selectivity\nestimation for function calls --- ie, define an API for function\nestimators, add appropriate column(s) to pg_proc, then start actually\nwriting some code. This would be a good thing to do someday, but\nI think we have considerably more pressing problems to deal with\nfor 7.1.\n\nBTW, none of the code that actually bashes LIKE/regexp patterns in\nindxpath.c/selfuncs.c knows anything about nonstandard ESCAPE characters\nfor patterns. It was this (and the code's lack of knowledge about\nILIKE) that I'd originally had as my to-do item --- it wasn't till I\nrealized you'd switched from operators to functions that I saw this was\nmore than a trivial problem to fix.\n\n\nAfter a little bit of thought I think I see a way out that avoids\nopening the function-selectivity can of worms. 
Let's translate LIKE\nto an operator same as we always did (and add an operator for ILIKE).\nThe forms with an ESCAPE clause can be translated to the same operators\nbut with a righthand argument that is a call of a new function\n\t\tescape_for_like(pattern, escape)\nescape_for_like would interpret the pattern with the given escape\ncharacter and translate it into a pattern that uses the standard escape\ncharacter, ie backslash. After constant folding, this looks just like\nthe old style of LIKE call and none of the optimizer code has to change\nat all, except to add a case for ILIKE which will be a trivial addition.\nLIKE itself gets simpler too, since the pattern matcher needn't cope\nwith nonstandard escape characters. Seems like a good subdivision of\nlabor within LIKE.\n\nThoughts? I'm willing to take responsibility for making this happen\nif you agree it's a good solution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Sep 2000 10:46:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Indexing of LIKE queries is broken in current sources "
},
{
"msg_contents": "> After a little bit of thought I think I see a way out that avoids\n> opening the function-selectivity can of worms. Let's translate LIKE\n> to an operator same as we always did (and add an operator for ILIKE).\n> The forms with an ESCAPE clause can be translated to the same operators\n> but with a righthand argument that is a call of a new function\n> escape_for_like(pattern, escape)\n> escape_for_like would interpret the pattern with the given escape\n> character and translate it into a pattern that uses the standard escape\n> character, ie backslash. After constant folding, this looks just like\n> the old style of LIKE call and none of the optimizer code has to change\n> at all, except to add a case for ILIKE which will be a trivial addition.\n> LIKE itself gets simpler too, since the pattern matcher needn't cope\n> with nonstandard escape characters. Seems like a good subdivision of\n> labor within LIKE.\n> Thoughts? I'm willing to take responsibility for making this happen\n> if you agree it's a good solution.\n\nHmm. It's a great solution, though I'm disappointed that the seemingly\nmore direct function-call strategy turned out to be a dead end (at least\nfor now).\n\nIt seems that \"~~*\" should be the operator for ILIKE.\n\n - Thomas\n\nbtw, I notice that psql has trouble with \\dd when you try to show any\noperator which contains an asterisk.\n\n\\dd *\n\nresults in an error.\n\n\\dd ~*\n\nshows every function, type, and operator.\n\nI've tried surrounding it with single- and double-quotes, but that\ndidn't seem to work around it.\n",
"msg_date": "Thu, 14 Sep 2000 15:30:25 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexing of LIKE queries is broken in current sources"
}
] |
[
{
"msg_contents": "I was just doing some huge operations with PostgreSQL and it all crashed out\nwith a \"too many files open\" message plastered all over the place..\n\nNow in /proc/sys/fs/file-max there is only 4096, that limit could have\neasily been reached. Does changing the value in the file effectively change\nthe limit system-wide? I changed it and rebooted but it was set right back\nto 4096.. I've been out of the Linux loop for a long time (FreeBSD junkie\nnow) so I don't know how to set that up to permanently change the limit.\n\nMarc: You use Linux, don't you? Have you ever run into this?\n\nAny help is very appreciated, thanks!\n\n-Mitch\n\n\n",
"msg_date": "Wed, 13 Sep 2000 13:04:32 -0700",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Linux / PostgreSQL question"
},
{
"msg_contents": "\"Mitch Vincent\" <[email protected]> writes:\n\n> I was just doing some huge operations with PostgreSQL and it all crashed out\n> with a \"too many files open\" message plastered all over the place..\n> \n> Now in /proc/sys/fs/file-max there is only 4096, that limit could have\n> easily been reached. Does changing the value in the file effectively change\n> the limit system-wide? I changed it and rebooted but it was set right back\n> to 4096.. \n\nA reboot would reset values in proc...\n\nFor Red Hat Linux 6.2 and up, you would add a line to\n/etc/sysctl.conf:\n\n# Max open files:\nfs.file-max = 8192\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "13 Sep 2000 16:13:57 -0400",
"msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Linux / PostgreSQL question"
},
{
"msg_contents": "On Wed, Sep 13, 2000 at 01:04:32PM -0700, Mitch Vincent wrote:\n> I was just doing some huge operations with PostgreSQL and it all crashed out\n> with a \"too many files open\" message plastered all over the place..\n> \n\nThis is really a Linux question, not postgresql, but you knew that...\n\n(I'm keeping hackers on this message, so if it comes up again, the\nanswer's in the archives with the question)\n\n> Now in /proc/sys/fs/file-max there is only 4096, that limit could have\n> easily been reached. Does changing the value in the file effectively change\n> the limit system-wide? I changed it and rebooted but it was set right back\n> to 4096.. I've been out of the Linux loop for a long time (FreeBSD junkie\n> now) so I don't know how to set that up to permanently change the limit.\n\nAlmost right. Why'd you reboot? It's a runtime configuration. Proc is not\na file system, it's a pseudo-filesystem interface to kernel internals.\n\nJust do something like:\n\necho 32768 > /proc/sys/fs/file-max \n\nAnd you may need to up the number of inodes, too:\n\necho 65536 > /proc/sys/fs/inode-max \n\nYou'll probably want to put these in rc.boot, or rc.local, or something, \nto set this at boot time, as well.\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Wed, 13 Sep 2000 15:18:44 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux / PostgreSQL question"
},
{
"msg_contents": "I didn't see any mention of it on the TODO so I thought I'd ask if anyone\nhad thought about full test indexing for 7.1 (I'm guessing not)..\n\nIf not, I'd like to suggest it be put on the TODO -- if nothing else so\nsomeone could pick it up in the far future if they wanted to.. It doesn't\nseem like too many are worried about it so the request is pretty selfish,\nthough I'm sure it would help many people especially after 7.1 and TOAST\nmake text fields unlimited in size.\n\nThanks!\n\n-Mitch\n\n\n",
"msg_date": "Mon, 16 Oct 2000 09:37:58 -0700",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Full text indexing (Question/request)"
},
{
"msg_contents": "See contrib/fulltextindex.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> I didn't see any mention of it on the TODO so I thought I'd ask if anyone\n> had thought about full test indexing for 7.1 (I'm guessing not)..\n> \n> If not, I'd like to suggest it be put on the TODO -- if nothing else so\n> someone could pick it up in the far future if they wanted to.. It doesn't\n> seem like too many are worried about it so the request is pretty selfish,\n> though I'm sure it would help many people especially after 7.1 and TOAST\n> make text fields unlimited in size.\n> \n> Thanks!\n> \n> -Mitch\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 12:45:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Full text indexing (Question/request)"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> See contrib/fulltextindex.\n\nAn easy answer, but not a very good solution in the real world.\n\ncontrib/fulltextindex requires you to jump through hoops in developing\nqueries to retrieve your data. It's also very space-inefficient in that\na table with a fulltextindex on a field needs another table with a\ncomplete set of values for that field, as well as any substrings of that\nfield, and then it wants two indexes on that table. Add that up!\n\nIt would be nice to see a true index which was full text. It would be\nnice to see a true index which allowed an individual field to index to\nmany entries through a function interface. This would straightforwardly\nallow people to create their own simple functions to perform full-text,\nkeyword or other indexing schemes quite simply.\n\nIt naively appears to me that the function interface is moving closer to\nachieving this with the enhancements in 7.1 to the use of setof()\nreturns combined with the earlier enhancement to indexing on function\nresults.\n\nIf a function fulltextindex(text) returned a setof() the substrings in\nits text argument, how hard will it be to index on that return value and\nallow WHERE field=fulltextindex('substring') to use that index?\n\nOf course such a fulltextindex() function would have to know not to do\nany processing on the string when called in the second situation. Is it\npossible for functions to do this sort of trick? It seems a bit beyond\nthe pale!\n\nI would _love_ to see full-text or keyword indexing natively in\nPostgreSQL.\n\nRegards,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n",
"msg_date": "Tue, 17 Oct 2000 22:36:32 +1300",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Full text indexing (Question/request)"
},
{
"msg_contents": "Andrew McMillan wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > See contrib/fulltextindex.\n> \n> An easy answer, but not a very good solution in the real world.\n> \n> contrib/fulltextindex requires you to jump through hoops in developing\n> queries to retrieve your data. It's also very space-inefficient in that\n> a table with a fulltextindex on a field needs another table with a\n> complete set of values for that field, as well as any substrings of that\n> field, and then it wants two indexes on that table. Add that up!\n> \n> It would be nice to see a true index which was full text. It would be\n> nice to see a true index which allowed an individual field to index to\n> many entries through a function interface. This would straightforwardly\n> allow people to create their own simple functions to perform full-text,\n> keyword or other indexing schemes quite simply.\n> \n> It naively appears to me that the function interface is moving closer to\n> achieving this with the enhancements in 7.1 to the use of setof()\n> returns combined with the earlier enhancement to indexing on function\n> results.\n> \n> If a function fulltextindex(text) returned a setof() the substrings in\n> its text argument, how hard will it be to index on that return value and\n> allow WHERE field=fulltextindex('substring') to use that index?\n> \n> Of course such a fulltextindex() function would have to know not to do\n> any processing on the string when called in the second situation. Is it\n> possible for functions to do this sort of trick? It seems a bit beyond\n> the pale!\n> \n> I would _love_ to see full-text or keyword indexing natively in\n> PostgreSQL.\n\nI tottally agree with you. FTI is not a good solution. It seems natural\nthat PostgreSQL will have a built-in (and better) FTI, now that the\nTOAST project will be implemented in 7.1.\n\nPoul L. Christiansen\n",
"msg_date": "Tue, 17 Oct 2000 12:28:16 +0100",
"msg_from": "\"Poul L. Christiansen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Full text indexing (Question/request)"
},
{
"msg_contents": "Andrew McMillan <[email protected]> writes:\n> \n> I would _love_ to see full-text or keyword indexing natively in\n> PostgreSQL.\n> \n\nI would rather love to see a great fulltext engine integrated with\nPostgreSQL. It would be cool to be able to have ranked results and\ndifferent ways of searching the index(regex, soundex, synonym). Having a\nsolution that where pluggable for different languages would be nice. \n\nregards, \n\n\tGunnar\n",
"msg_date": "17 Oct 2000 14:54:58 +0100",
"msg_from": "Gunnar R|nning <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Full text indexing (Question/request)"
}
] |
[
{
"msg_contents": "NOTICE: Rel webhit_details_formatted: TID 7/17: OID IS INVALID. TUPGONE 0.\n\nThis happens after a while. Then I try:\n\n# select * into wdbak from webhit_details_formatted; \nFATAL 1: Memory exhausted in AllocSetAlloc()\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Succeeded.\n# select * into wdbak from webhit_details_formatted;\nFATAL 1: Memory exhausted in AllocSetAlloc()\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Succeeded.\n\n# vacuum verbose analyze webhit_details_formatted;\nNOTICE: --Relation webhit_details_formatted--\nNOTICE: Rel webhit_details_formatted: TID 7/17: OID IS INVALID. TUPGONE 0.\n\n85605 pgsql -2 0 1138M 765M biofre 0 0:21 7.32% 7.32% postgres\n\nArrrrrrrrrgh!\n\nkill -ABRT 85605\n\nUnfortunatly the core file I wanted was nuked because afterwards\nall backends cored. I've set up the system to dump to $name.$pid.core\nI'll try to get a more meaningful bug report.\n\nung. :(\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Wed, 13 Sep 2000 16:28:04 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "argh! NOTICE: Rel webhit_details_formatted: TID 7/17: OID IS INVALID.\n\tTUPGONE 0."
}
] |
[
{
"msg_contents": "Hi there,\n[I sent this email to pgsql-general yesterday, but it seems to be horked and \nthis looks like the next best place]\n \nI have a set of structually identical databases which are functionally \nindependant.\n\nHowever, I now want to be able to do a full text index search on a single (two \nfield) record across all of these databases. At the moment I only have a few \nof these databases and connecting to and scanning each one sequentially is \nsufficient. In the long term I expect the number of databases that will exist \nto make this unfeasable.\n\nThus I'd like to create a centralised database whose sole task is to enable \nfull text searches of this record across all the other databases.\n \nI believe that Postgres doesn't do cross-database stuff as it stands.\n \nI currently have this harebrained scheme whereby I write my own SQL function \nin C that links to libpq so that any of the other databases can connect to the \ncentral DB and perform the required insert/update/delete when triggered to do \nso.\n \nDoes that seem to be a reasonable thing to try? Has anyone else done such a\nthing? Is there a better way of doing it?\n \nPaul.\n\n------------------------------------------------------------\nThis e-mail has been sent to you courtesy of OperaMail, a\nfree web-based service from Opera Software, makers of\nthe award-winning Web Browser - http://www.operasoftware.com\n------------------------------------------------------------\n\n",
"msg_date": "Wed, 13 Sep 2000 19:32:00 -0400",
"msg_from": "Paul <[email protected]>",
"msg_from_op": true,
"msg_subject": "Replication of a small portion of data to another database"
}
] |
[
{
"msg_contents": "\nOkay, logically I think this makes sense, but its not working ... should\nit?\n\nglobalmatch=# insert into auth_info_new\nglobalmatch-# select ai.* from auth_info ai, auth_info_new ain\nglobalmatch-# where ai.username != ain.username;\nINSERT 0 0\n\nauth_info has 14k tuples, but some are duplicates ... I want to insert\ninto auth_info_new everything except those that have already been inserted\ninto auth_info_new ...\n\nnow, my first thought looking at the above would be that since\nauth_info_new is empty, all from auth_info should be copied over ...\nbasically, since each tuple isn't \"committed\" until the insert is\nfinished, then every username in ai is definitely not in ain ... but\nnothing is being copied over, so my first thought is definitely wrong ...\n\nbug? *raised eyebrow* \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 14 Sep 2000 01:18:02 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "using a join in an 'INSERT ... SELECT' ..."
},
{
"msg_contents": "On Thu, 14 Sep 2000, The Hermit Hacker wrote:\n\n> \n> Okay, logically I think this makes sense, but its not working ... should\n> it?\n> \n> globalmatch=# insert into auth_info_new\n> globalmatch-# select ai.* from auth_info ai, auth_info_new ain\n> globalmatch-# where ai.username != ain.username;\n> INSERT 0 0\n> \n> auth_info has 14k tuples, but some are duplicates ... I want to insert\n> into auth_info_new everything except those that have already been inserted\n> into auth_info_new ...\n> \n> now, my first thought looking at the above would be that since\n> auth_info_new is empty, all from auth_info should be copied over ...\n> basically, since each tuple isn't \"committed\" until the insert is\n> finished, then every username in ai is definitely not in ain ... but\n> nothing is being copied over, so my first thought is definitely wrong ...\n>\n> bug? *raised eyebrow* \n\nNah. Remember, you're doing a product with that join. If there are no\nrows in auth_info_new, there are no rows after the join to apply the where\nto.\n\nYou really want something like (untested):\ninsert into auth_info_new\n select ai.* from auth_info ai where not exists\n ( select * from auth_info_new ain where ai.username=ain.username );\n\n",
"msg_date": "Wed, 13 Sep 2000 22:17:03 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: using a join in an 'INSERT ... SELECT' ..."
}
] |
[
{
"msg_contents": "Hi,\n\nI'm developping a geographical object type, very close to the geographic\ntype of PG. For the moment it is set up as external functions...\n\nI would like to add indexing capabilities, and I have seen that indexing for\nPG geographical objects is on the TODO list for 7.1. \n\nI would like to get in touch with the person maintaining this part of the\ncode, and see if I could transfer some of these algorithms to my code...\n\nAt the end, these new geo objects could be incorporated in PG, but that up\nto the PG dev team...\n\nCheers..\n\nFranck Martin\nDatabase Development Officer\nSOPAC South Pacific Applied Geoscience Commission\nFiji\nE-mail: [email protected] <mailto:[email protected]> \nWeb site: http://www.sopac.org/ <http://www.sopac.org/> \n\nThis e-mail is intended for its recipients only. Do not forward this\ne-mail without approval. The views expressed in this e-mail may not be\nneccessarily the views of SOPAC.\n\n",
"msg_date": "Thu, 14 Sep 2000 17:14:45 +1200",
"msg_from": "Franck Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Indexing for geographical objects"
},
{
"msg_contents": "We certainly would like to have them. Can you send a patch that applies\nagainst our current CVS snapshot.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> Hi,\n> \n> I'm developping a geographical object type, very close to the geographic\n> type of PG. For the moment it is set up as external functions...\n> \n> I would like to add indexing capabilities, and I have seen that indexing for\n> PG geographical objects is on the TODO list for 7.1. \n> \n> I would like to get in touch with the person maintaining this part of the\n> code, and see if I could transfer some of these algorithms to my code...\n> \n> At the end, these new geo objects could be incorporated in PG, but that up\n> to the PG dev team...\n> \n> Cheers..\n> \n> Franck Martin\n> Database Development Officer\n> SOPAC South Pacific Applied Geoscience Commission\n> Fiji\n> E-mail: [email protected] <mailto:[email protected]> \n> Web site: http://www.sopac.org/ <http://www.sopac.org/> \n> \n> This e-mail is intended for its recipients only. Do not forward this\n> e-mail without approval. The views expressed in this e-mail may not be\n> neccessarily the views of SOPAC.\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Oct 2000 18:41:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexing for geographical objects"
}
] |
[
{
"msg_contents": "> > You have to tell us whether you plan to implement \n> > a safe file rename in WAL ? If yes a simple filename\n> > without version would be possible and better.\n> \n> What do you mean?\n\nThe previous discussion we had where we concluded, that \nan os rename of a file cannot be done without risc.\nBut that risc was imho acceptable to avoid the extra version in the filename\n\n(a rename back to the old name could fail when the tx is supposed \nto be rolled back).\n\nSearch the archive for \"file rename sync\".\n\nMy conlusion would be an oid only filename, or a mixture of\noid and tablename, where tablename can be wildcarded on a directory search,\nsince oid is already unique. No version in the name, we would do renames in\nthat case.\n\nIf I remember correctly a patch exists that does oid only filenames.\n\nAndreas\n\n\n> > WAL can solve the versioned relations problem.\n> > Remember that a sure new step in postmaster startup will be a\n> > rollforward of the WAL,\n> > since that will have the only sync write of our last txn's. \n> > Thus in this step it can also\n> > do any pending rename or delete of files.\n> \n> Hmm,don't you allow DDL commands inside transaction block ?\n> \n> If we allow DDL commands inside transaction block,WAL couldn't\n> postpone all rename/unlink operations until the end of transaction\n> without a resolution of the conflict of table file name.\n\nIt does not postpone anything. WAL only logs what it does:\n\n1. log i now start to rename file\n2. rename file\n3. log rename successful or abort txn\n\n> the old table file of t must vanish(using unlink() etc) \n> before 'create table t'\n> unless new file name is different from old one(OID file name would\n> resolve the conflict in this case).\n\nI was basing my statement on OID filenames being a factum.\nI am only arguing against the extra version in the filename.\n\n> To unlink/rename the table file immediately isn't a problem for the\n> rollforward functionality. 
It seems a problem of rollback \n> functionality.\n\nonly unlink cannot be done immediately a rename can be undone\nand thus be executed immediately.\n\n> \n> > If a rename or delete fails we\n> > bail out, since we don't want postmaster running under such \n> circumstances\n> > anyway.\n> \n> No there's a significant difference between the failure of 'delete'\n> and that of 'rename'. We would have no consistency problem even\n> though 'delete' fails and wouldn't have to stop postmaster. But we\n> wouldn't be able to see the renamed relation in case of 'rename'\n> failure and an excellent(??) dba would have to recover the \n> inconsistency.\n\nThe dba only fixes the underlying problem, like filesystem mounted readonly \nor wrong permissions on directory. He then simply starts the postmaster\nagain,\nthe rollforward with rename will then succeed.\n\nAndreas",
"msg_date": "Thu, 14 Sep 2000 09:47:07 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Status of new relation file naming"
}
] |
[
{
"msg_contents": "\n> > My vote is for a random number, and then someone can write \n> > the tools to display the file info. I'll even volunteer to\n> > work on them...\n\nWhat was the advantage of random number over oid [+version]\nin the light that there is an extra field in pg_class for other smgrs ?\nWe surely want readable names for tablespace files, no ?\n\nAndreas\n",
"msg_date": "Thu, 14 Sep 2000 10:00:39 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Status of new relation file naming"
}
] |
[
{
"msg_contents": "I was just about to ask the same question...\n\nmed v�nlig h�lsning\n/Dana\n\n-----Original Message-----\nFrom:\[email protected] [SMTP:[email protected]]\nSent:\tThursday, September 14, 2000 10:09 AM\nTo:\[email protected]; [email protected]\nSubject:\t[HACKERS] List funnies ?\n\nHas something happened to the list server ?\n\nI am only subscribed to the general list, but after two days of nothing I'm\nnow getting the hackers list stuff.\n\nSteve\n\n\n-- thorNET - Internet Consultancy, Services & Training\nPhone: 01454 854413\nFax: 01454 854412\nhttp://www.thornet.co.uk \n",
"msg_date": "Thu, 14 Sep 2000 10:12:27 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] List funnies ?"
},
{
"msg_contents": "Me too, and I'm getting some stuff two times, like I'm double subscribed\n(which I shouldn't be). Sometimes it takes few days for messages to appear\non list etc.\n\nAt 10:12 14.9.2000 , [email protected] wrote:\n>I was just about to ask the same question...\n>\n>med v�nlig h�lsning\n>/Dana\n>\n>-----Original Message-----\n>From:\[email protected] [SMTP:[email protected]]\n>Sent:\tThursday, September 14, 2000 10:09 AM\n>To:\[email protected]; [email protected]\n>Subject:\t[HACKERS] List funnies ?\n>\n>Has something happened to the list server ?\n>\n>I am only subscribed to the general list, but after two days of nothing I'm\n>now getting the hackers list stuff.\n>\n>Steve\n>\n>\n>-- thorNET - Internet Consultancy, Services & Training\n>Phone: 01454 854413\n>Fax: 01454 854412\n>http://www.thornet.co.uk \n\n\n",
"msg_date": "Thu, 14 Sep 2000 12:49:18 +0200",
"msg_from": "Zeljko Trogrlic <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] List funnies ?"
},
{
"msg_contents": "\nremoved \n\nOn Thu, 14 Sep 2000, Zeljko Trogrlic wrote:\n\n> Me too, and I'm getting some stuff two times, like I'm double subscribed\n> (which I shouldn't be). Sometimes it takes few days for messages to appear\n> on list etc.\n> \n> At 10:12 14.9.2000 , [email protected] wrote:\n> >I was just about to ask the same question...\n> >\n> >med v�nlig h�lsning\n> >/Dana\n> >\n> >-----Original Message-----\n> >From:\[email protected] [SMTP:[email protected]]\n> >Sent:\tThursday, September 14, 2000 10:09 AM\n> >To:\[email protected]; [email protected]\n> >Subject:\t[HACKERS] List funnies ?\n> >\n> >Has something happened to the list server ?\n> >\n> >I am only subscribed to the general list, but after two days of nothing I'm\n> >now getting the hackers list stuff.\n> >\n> >Steve\n> >\n> >\n> >-- thorNET - Internet Consultancy, Services & Training\n> >Phone: 01454 854413\n> >Fax: 01454 854412\n> >http://www.thornet.co.uk \n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 14 Sep 2000 09:39:08 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] List funnies ?"
},
{
"msg_contents": "Thanks!\n\nAt 14:39 14.9.2000 , The Hermit Hacker wrote:\n>\n>removed \n>\n>On Thu, 14 Sep 2000, Zeljko Trogrlic wrote:\n>\n>> Me too, and I'm getting some stuff two times, like I'm double subscribed\n>> (which I shouldn't be). Sometimes it takes few days for messages to appear\n>> on list etc.\n>\n\n",
"msg_date": "Thu, 14 Sep 2000 16:04:55 +0200",
"msg_from": "Zeljko Trogrlic <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] List funnies ?"
},
{
"msg_contents": "Forgive me if I am missing the obvious, but can someone please tell me how\nto show a list of triggers, or the code in a specific trigger, via the pgsql\nutility?\n\nThank you,\nBryan\n\n",
"msg_date": "Thu, 14 Sep 2000 09:13:16 -0600",
"msg_from": "\"Bryan Field-Elliot\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Show triggers in psql?"
},
{
"msg_contents": "\nYou can get a list of all triggers in the system\nas \nselect * from pg_trigger;\n\nYou can get the source for a PL function a trigger\ncalls with:\nselect prosrc from pg_trigger,pg_proc where\n pg_proc.oid=pg_trigger.tgfoid\n and pg_trigger.tgname = '<name>'\n\n[Note, in the case of C functions, I think this \nreturns the name of the function.]\n\n\nStephan Szabo\[email protected]\n\nOn Thu, 14 Sep 2000, Bryan Field-Elliot wrote:\n\n> Forgive me if I am missing the obvious, but can someone please tell me how\n> to show a list of triggers, or the code in a specific trigger, via the pgsql\n> utility?\n> \n> Thank you,\n> Bryan\n> \n\n",
"msg_date": "Thu, 14 Sep 2000 09:02:19 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show triggers in psql?"
},
{
"msg_contents": "\n\nI've got a database I would like to upgrade from 6.3.2+phpfi 2.0 to\n7.0.2+php 4.0. I've found the changes needed to be made to the php code,\nand that should not be a problem. The real issue I'm having at this point\nis getting the data into 7.0.2, or any newer release of PostgreSQL. I've\ntried the pg_upgrade, but it does not seem to work for 6.3.2 -> 6.5.3.\nShould I try 6.4 first, then 6.5.3, then 7.0? I've tried pg_dump and\npg_dumpall, but the data will not insert into the newer version\ndatabase. Do I need to provide some err info here? What would be helpful? \n\nNotes on this setup: I've inherited the DBA position for this database. I\nknow the people who designed and built it, but cannot get any help from\nthem. They added some special charactor types and some compiled C code to\nthe postgres install, and I think that may be what's throwing the process\nout. Any suggestions? \n\nThank you,\n\nChris Sterling\t\t\t\t\t\[email protected]\n\n\n",
"msg_date": "Thu, 14 Sep 2000 11:44:26 -0500 (CDT)",
"msg_from": "Chris Sterling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Upgrading from 6.3.2 to 7.0.2"
}
] |
[
{
"msg_contents": "At 10:00 14/09/00 +0200, Zeugswetter Andreas SB wrote:\n>\n>> > My vote is for a random number, and then someone can write \n>> > the tools to display the file info. I'll even volunteer to\n>> > work on them...\n>\n>What was the advantage of random number over oid [+version]\n>in the light that there is an extra field in pg_class for other smgrs ?\n\nNone other than it removes the temptation to write utilities that rely on\nthe internal representation of our data.\n\n\n>We surely want readable names for tablespace files, no ?\n\nSo long as we say from the outset that 'rename tablespace' must be done\noutside of transactions, and while the database (or at least the tables\nthat it contains) are not available.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 14 Sep 2000 19:28:25 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: Status of new relation file naming"
}
] |
[
{
"msg_contents": "\nContinuing to try to use format_type to output all types, I get the\nfollowing in the regression database:\n\nCREATE AGGREGATE newavg ( \n BASETYPE = integer, \n SFUNC = int4_accum, \n STYPE = \"numeric[]\", \n INITCOND = '{0,0,0}', \n FINALFUNC = numeric_avg\n);\n\nwhere the original source was:\n\nCREATE AGGREGATE newavg (\n sfunc = int4_accum, basetype = int4, \n stype = _numeric,\n finalfunc = numeric_avg,\n initcond1 = '{0,0,0}'\n);\n\nThe problem is the \"numeric[]\" type. Does this mean I should go back to\njust using typnam for aggregates? For all but table definitions? Or is\nthere an alternate solution.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 14 Sep 2000 21:09:49 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump of regression (again)"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> Continuing to try to use format_type to output all types, I get the\n> following in the regression database:\n\n> CREATE AGGREGATE newavg ( \n> BASETYPE = integer, \n> SFUNC = int4_accum, \n> STYPE = \"numeric[]\", \n> INITCOND = '{0,0,0}', \n> FINALFUNC = numeric_avg\n> );\n\n> where the original source was:\n\n> CREATE AGGREGATE newavg (\n> sfunc = int4_accum, basetype = int4, \n> stype = _numeric,\n> finalfunc = numeric_avg,\n> initcond1 = '{0,0,0}'\n> );\n\n> The problem is the \"numeric[]\" type.\n\nnumeric[] is a correct display of the type (and more intelligible than\n_numeric IMHO), but quoting it is not correct. If you are feeding the\noutput of format_type through something that believes it's quoting a\nsingle identifier, you are going to have lots of problems. It looks\nto me like format_type will supply quotes when needed, so you shouldn't\nadd more.\n\nUnfortunately, this won't work anyway for CREATE AGGREGATE, because I'm\npretty sure the parser only accepts simple identifiers and literals as\narguments in the list of keyword = value items. The type-declaration\nparser isn't invoked here, mainly because the grammar doesn't know\nanything about the semantics of the individual keyword items. So you\nhave to give the raw type name, no fancy fandangoes ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Sep 2000 11:44:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump of regression (again) "
},
{
"msg_contents": "At 11:44 14/09/00 -0400, Tom Lane wrote:\n>\n>> The problem is the \"numeric[]\" type.\n>\n>numeric[] is a correct display of the type (and more intelligible than\n>_numeric IMHO), but quoting it is not correct. If you are feeding the\n>output of format_type through something that believes it's quoting a\n>single identifier, you are going to have lots of problems. It looks\n>to me like format_type will supply quotes when needed, so you shouldn't\n>add more.\n\nYou're quite right - I left the original pg_dump ID quoting stuff in place.\n\n\n>So you have to give the raw type name, no fancy fandangoes ...\n\nOK - I'll use typname in CREATE AGGREGATE, and see how it hangs together.\n\nDo you know if the type parser is invoked in function declarations? If not\nI probably just need to limit use of format_type to table declarations.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 15 Sep 2000 11:22:01 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump of regression (again) "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> OK - I'll use typname in CREATE AGGREGATE, and see how it hangs together.\n\n> Do you know if the type parser is invoked in function declarations?\n\nSort of --- it looks like the production is for SimpleTypename not a\nfull typename. This is something that needs to be cleaned up in the\nbackend. In the very short run maybe you should avoid format_type here,\nbut I'm thinking this is something we need to fix for 7.1.\n\nIt looks like the main reason for avoiding full typename here is that\nthe production for Typename will fail on bogus types such as \"opaque\".\nThere are cleaner ways to deal with that --- and even more importantly,\ngram.y should not be doing table access under any circumstances,\nper our prior discussions about being able to syntax commands after\nthe current transaction has aborted. My thought at the moment is\nto postpone the check for \"setof\" until later in parse analysis.\nThomas, any comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Sep 2000 22:16:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump of regression (again) "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n>> So you have to give the raw type name, no fancy fandangoes ...\n\n> OK - I'll use typname in CREATE AGGREGATE, and see how it hangs together.\n\n> Do you know if the type parser is invoked in function declarations? If not\n> I probably just need to limit use of format_type to table declarations.\n\nBTW, type parsing is now done \"properly\" in CREATE FUNCTION, CREATE\nAGGREGATE, etc, so you should be able to use format_type more freely\nnow.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Oct 2000 19:56:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump of regression (again) "
}
] |
[
{
"msg_contents": "\n\n\npg_dump.c: In function `AsViewRule':\npg_dump.c:267: parse error before `int'\n/* 'int len;' not in declaration block */\npg_dump.c:268: 'len' undeclared (first use in this function)\npg_dump.c:268: (Each undeclared identifier is reported only once\npg_dump.c:268: for each function it appears in.)\npg_dump.c:268: warning: implicit declaration of function `pg_mbcliplen'\n\n\n-- \n Trurl McByte, Capt. of StasisCruiser \"Prince\"\n|InterNIC: AR3200 RIPE: AR1627-RIPE|\n|--98 C3 78 8E 90 E3 01 35 87 1F 3F EF FD 6D 84 B3--|\n\n",
"msg_date": "Thu, 14 Sep 2000 15:47:00 +0300 (EEST)",
"msg_from": "Trurl McByte <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump bug (if use multibyte)"
},
{
"msg_contents": "On Thu, 14 Sep 2000 (Today), Trurl McByte wrote:\n\n TM> \n TM> \n TM> \n TM> pg_dump.c: In function `AsViewRule':\n\nsorry - 'isViewRule'!\n \n TM> pg_dump.c:267: parse error before `int'\n TM> /* 'int len;' not in declaration block */\n TM> pg_dump.c:268: 'len' undeclared (first use in this function)\n TM> pg_dump.c:268: (Each undeclared identifier is reported only once\n TM> pg_dump.c:268: for each function it appears in.)\n TM> pg_dump.c:268: warning: implicit declaration of function `pg_mbcliplen'\n TM> \n TM> \n TM> \n\n-- \n Trurl McByte, Capt. of StasisCruiser \"Prince\"\n|InterNIC: AR3200 RIPE: AR1627-RIPE|\n|--98 C3 78 8E 90 E3 01 35 87 1F 3F EF FD 6D 84 B3--|\n\n",
"msg_date": "Thu, 14 Sep 2000 15:53:06 +0300 (EEST)",
"msg_from": "Trurl McByte <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump bug (if use multibyte)"
}
] |
[
{
"msg_contents": "I've made minor modifications to the abstime and horology regression\ntests, to move cross-type tests to horology and abstime-only tests to\nabstime. I slightly rearranged the order of regression tests in the\ndate/time area, and made the abstime test part of the \"parallel safe\"\ntest sequence.\n\nAll tests pass on my Linux box. I hand-patched the solaris and \"1947\"\nversions of the tests, but if someone could test to make sure that they\nmatch the actual results that would be good.\n\n - Thomas\n",
"msg_date": "Thu, 14 Sep 2000 16:59:01 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Touched-up regression tests"
}
] |
[
{
"msg_contents": "Rename... Why would we need in rename with OID filenames?\nOk, let's start with OID (*without tablename prefix|suffix*) filenames\nand we'll see later how it will work.\n\nSo, could someone implement OID filenames?\n(Please use RelFileNode structure).\n\nVadim\n\n> > > You have to tell us whether you plan to implement \n> > > a safe file rename in WAL ? If yes a simple filename\n> > > without version would be possible and better.\n> > \n> > What do you mean?\n> \n> The previous discussion we had where we concluded, that \n> an os rename of a file cannot be done without risc.\n> But that risc was imho acceptable to avoid the extra version \n> in the filename\n> \n> (a rename back to the old name could fail when the tx is supposed \n> to be rolled back).\n> \n> Search the archive for \"file rename sync\".\n> \n> My conlusion would be an oid only filename, or a mixture of\n> oid and tablename, where tablename can be wildcarded on a \n> directory search,\n> since oid is already unique. No version in the name, we would \n> do renames in\n> that case.\n> \n> If I remember correctly a patch exists that does oid only filenames.\n> \n> Andreas\n> \n> \n",
"msg_date": "Thu, 14 Sep 2000 10:08:40 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Status of new relation file naming"
},
{
"msg_contents": "\n\n\"Mikheev, Vadim\" wrote:\n\n> Rename... Why would we need in rename with OID filenames?\n\nAndreas seems to refer to in place replacement of OID files e.g.\nusing your *relink*.\n\nRegards.\n\nHiroshi Inoue\n\n",
"msg_date": "Fri, 15 Sep 2000 08:11:51 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of new relation file naming"
}
] |
[
{
"msg_contents": "> > I'm going to handle btree split but currently there is no way\n> > to rollback it - we unlock splitted pages after parent\n> > is locked and concurrent backend may update one/both of\n> > siblings before we get our locks back.\n> > We have to continue with split or could leave parent unchanged\n> > and handle \"my bits moved...\" (ie continue split in another\n> > xaction if we found no parent for a page) ... or we could hold\n> > locks on all splitted pages till some parent updated without\n> > split, but I wouldn't do this.\n> >\n> \n> It seems to me that btree split operations must always be\n> rolled forward even in case of abort/crash. DO you have\n> other ideas ?\n\nYes, it should, but hard to implement, especially for abort case.\nSo, for the moment, I would proceed with handling \"my bits moved...\":\nno reason to elog(FATAL) here - we can try to insert missed pointers\ninto parent page(s). WAL will guarantee that btitems moved to right\nsibling will not be lost (level consistency), and missing some pointers\nin parent level is acceptable - scans will work.\n\nVadim\n",
"msg_date": "Thu, 14 Sep 2000 10:17:56 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: strange behaviour (bug) "
},
{
"msg_contents": "> -----Original Message-----\n> From: Mikheev, Vadim [mailto:[email protected]]\n> \n> > > I'm going to handle btree split but currently there is no way\n> > > to rollback it - we unlock splitted pages after parent\n> > > is locked and concurrent backend may update one/both of\n> > > siblings before we get our locks back.\n> > > We have to continue with split or could leave parent unchanged\n> > > and handle \"my bits moved...\" (ie continue split in another\n> > > xaction if we found no parent for a page) ... or we could hold\n> > > locks on all splitted pages till some parent updated without\n> > > split, but I wouldn't do this.\n> > >\n> > \n> > It seems to me that btree split operations must always be\n> > rolled forward even in case of abort/crash. DO you have\n> > other ideas ?\n> \n> Yes, it should, but hard to implement, especially for abort case.\n> So, for the moment, I would proceed with handling \"my bits moved...\":\n> no reason to elog(FATAL) here - we can try to insert missed pointers\n> into parent page(s). WAL will guarantee that btitems moved to right\n> sibling will not be lost (level consistency), and missing some pointers\n> in parent level is acceptable - scans will work.\n>\n\nI looked into your XLOG stuff a little.\nIt seems that XLogFileOpen() isn't implemented yet.\nWould/should XLogFIleOpen() guarantee to open a Relation\nproperly at any time ?\n\nRegards.\n\nHiroshi Inoue\n",
"msg_date": "Fri, 15 Sep 2000 08:38:39 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: strange behaviour (bug) "
}
]
[
{
"msg_contents": " \n> > Rename... Why would we need in rename with OID filenames?\n> \n> Andreas seems to refer to in place replacement of OID files e.g.\n> using your *relink*.\n\nSorry, I've messed things for myself.\n\nOk. In short, I vote for UNIQUE_ID (unrelated to pg_class.oid) file names.\nI think that it's better to implement this (but neither OID nor OID.VERSION)\nright now\nbecause of this is like what we'll have in new smgr -\ntablespace_id.relation_file_node.\nPg_class' OID is kind of logical things, totaly unrelated to the issue\nhow/where to\nstore relation file.\n\nPlease comment ASAP.\n\nVadim\n",
"msg_date": "Thu, 14 Sep 2000 16:36:14 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Status of new relation file naming"
},
{
"msg_contents": "On Thu, 14 Sep 2000, Mikheev, Vadim wrote:\n\n> \n> > > Rename... Why would we need in rename with OID filenames?\n> > \n> > Andreas seems to refer to in place replacement of OID files e.g.\n> > using your *relink*.\n> \n> Sorry, I've messed things for myself.\n> \n> Ok. In short, I vote for UNIQUE_ID (unrelated to pg_class.oid) file names.\n> I think that it's better to implement this (but neither OID nor OID.VERSION)\n> right now\n> because of this is like what we'll have in new smgr -\n> tablespace_id.relation_file_node.\n> Pg_class' OID is kind of logical things, totaly unrelated to the issue\n> how/where to\n> store relation file.\n> \n> Please comment ASAP.\n\nsounds perfect to me \n\n",
"msg_date": "Thu, 14 Sep 2000 21:10:50 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Status of new relation file naming"
},
{
"msg_contents": "> -----Original Message-----\n> From: Mikheev, Vadim [mailto:[email protected]]\n>\n> > > Rename... Why would we need in rename with OID filenames?\n> >\n> > Andreas seems to refer to in place replacement of OID files e.g.\n> > using your *relink*.\n>\n> Sorry, I've messed things for myself.\n>\n> Ok. In short, I vote for UNIQUE_ID (unrelated to pg_class.oid) file names.\n> I think that it's better to implement this (but neither OID nor\n> OID.VERSION)\n> right now\n> because of this is like what we'll have in new smgr -\n> tablespace_id.relation_file_node.\n> Pg_class' OID is kind of logical things, totaly unrelated to the issue\n> how/where to\n> store relation file.\n>\n> Please comment ASAP.\n>\n\nPhilip Warner mentioned about the advantage of random number.\nIt's exactly what I've wanted to say.\n\n>> it removes the temptation to write utilities that rely on\n>> the internal representation of our data.\n\nIt is preferable that file naming rule is encapsulated so that we\ncan change it without notice.\n\nRegards.\n\nHiroshi Inoue\n\n",
"msg_date": "Fri, 15 Sep 2000 09:11:41 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Status of new relation file naming"
}
]