[
{
"msg_contents": "\nFYI,\n\n\nCERT Advisory CA-2001-01 Interbase Server Contains\nCompiled-in Back Door\nAccount\n\n Original release date: January 10, 2001\n Last revised: --\n Source: CERT/CC\n\n A complete revision history is at the end of this file.\n\nSystems Affected\n\n * Borland/Inprise Interbase 4.x and 5.x\n * Open source Interbase 6.0 and 6.01\n * Open source Firebird 0.9-3 and earlier\n\nOverview\n\n Interbase is an open source database package that had\npreviously been\n distributed in a closed source fashion by\nBorland/Inprise. Both the\n open and closed source verisions of the Interbase server\ncontain a\n compiled-in back door account with a known password.\n\nI. Description\n\n Interbase is an open source database package that is\ndistributed by\n Borland/Inprise at http://www.borland.com/interbase/ and\non\n SourceForge. The Firebird Project, an alternate Interbase\npackage, is\n also distributed on SourceForge. The Interbase server for\nboth\n distributions contains a compiled-in back door account\nwith a fixed,\n easily located plaintext password. The password and\naccount are\n contained in source code and binaries previously made\navailable at the\n following sites:\n\n http://www.borland.com/interbase/\n http://sourceforge.net/projects/interbase\n http://sourceforge.net/projects/firebird\n http://firebird.sourceforge.net\n http://www.ibphoenix.com\n http://www.interbase2000.com\n\n This back door allows any local user or remote user able\nto access\n port 3050/tcp [gds_db] to manipulate any database object\non the\n system. This includes the ability to install trapdoors or\nother trojan\n horse software in the form of stored procedures. In\naddition, if the\n database software is running with root privileges, then\nany file on\n the server's file system can be overwritten, possibly\nleading to\n execution of arbitrary commands as root.\n\n This vulnerability was not introduced by unauthorized\nmodifications to\n the original vendor's source. It was introduced by\nmaintainers of the\n code within Borland. The back door account password\ncannot be changed\n using normal operational commands, nor can the account be\ndeleted from\n existing vulnerable servers [see References].\n\n This vulnerability has been assigned the identifier\nCAN-2001-0008 by\n the Common Vulnerabilities and Exposures (CVE) group:\n\n \nhttp://cve.mitre.org/cgi-bin/cvename.cgi?name=CAN-2001-0008\n\n The CERT/CC has not received reports of this back door\nbeing exploited\n at the current time. We do recommend, however, that all\naffected sites\n and redistributors of Interbase products or services\nfollow the\n recommendations suggested in Section III, as soon as\npossible due to\n the seriousness of this issue.\n\nII. Impact\n\n Any local user or remote user able to access port\n3050/tcp [gds_db]\n can manipulate any database object on the system. This\nincludes the\n ability to install trapdoors or other trojan horse\nsoftware in the\n form of stored procedures. In addition, if the database\nsoftware is\n running with root privileges, then any file on the\nserver's file\n system can be overwritten, possibly leading to execution\nof arbitrary\n commands as root.\n\nIII. Solution\n\nApply a vendor-supplied patch\n\n Both Borland and The Firebird Project on SourceForge have\npublished\n fixes for this problem. Appendix A contains information\nprovided by\n vendors supplying these fixes. We will update the\nappendix as we\n receive more information. 
If you do not see your vendor's\nname, the\n CERT/CC did not hear from that vendor. Please contact\nyour vendor\n directly.\n\n Users who are more comfortable making their own changes\nin source code\n may find the new code available on SourceForge useful as\nwell:\n\n http://sourceforge.net/projects/interbase\n http://sourceforge.net/projects/firebird\n\nBlock access to port 3050/tcp\n\n This will not, however, prevent local users or users\nwithin a\n firewall's adminstrative boundary from accessing the back\ndoor\n account. In addition, the port the Interbase server\nlistens on may be\n changed dynamically at startup.\n\nAppendix A. Vendor Information\n\nBorland\n\n Please see:\n\n http://www.borland.com/interbase/\n\nIBPhoenix\n\n The Firebird project uncovered serious security problems\nwith\n InterBase. The problems are fixed in Firebird build 0.9.4\nfor all\n platforms. If you are running either InterBase V6 or\nFirebird 0.9.3,\n you should upgrade to Firebird 0.9.4.\n\n These security holes affect all version of InterBase\nshipped since\n 1994, on all platforms.\n\n For those who can not upgrade, Jim Starkey developed a\npatch program\n that will correct the more serious problems in any\nversion of\n InterBase on any platform. IBPhoenix chose to release the\nprogram\n without charge, given the nature of the problem and our\nrelationship\n to the community.\n\n At the moment, name service is not set up to the machine\nthat is\n hosting the patch, so you will have to use the IP number\nboth for the\n initial contact and for the ftp download.\n\n To start, point your browser at\n\n http://firebird.ibphoenix.com/\n\nApple\n\n The referenced database package is not packaged with Mac\nOS X or Mac\n OS X Server.\n\nFujitsu\n\n Fujitsu's UXP/V operating system is not affected by this\nproblem\n because we don't support the relevant database.\n\nReferences\n\n 1. VU#247371: Borland/Inprise Interbase SQL database\nserver contains\n backdoor superuser account with known password\nCERT/CC,\n 01/10/2001, https://www.kb.cert.org/vuls/id/247371\n \n_________________________________________________________________\n\n Author: This document was written by Jeffrey S Havrilla.\nFeedback on\n this advisory is appreciated.\n \n______________________________________________________________________\n\n This document is available from:\n http://www.cert.org/advisories/CA-2001-01.html\n \n______________________________________________________________________\n\nCERT/CC Contact Information\n\n Email: [email protected]\n Phone: +1 412-268-7090 (24-hour hotline)\n Fax: +1 412-268-6989\n Postal address:\n CERT Coordination Center\n Software Engineering Institute\n Carnegie Mellon University\n Pittsburgh PA 15213-3890\n U.S.A.\n\n CERT personnel answer the hotline 08:00-20:00 EST(GMT-5)\n/ EDT(GMT-4)\n Monday through Friday; they are on call for emergencies\nduring other\n hours, on U.S. holidays, and on weekends.\n\nUsing encryption\n\n We strongly urge you to encrypt sensitive information\nsent by email.\n Our public PGP key is available from\n\n http://www.cert.org/CERT_PGP.key\n\n If you prefer to use DES, please call the CERT hotline\nfor more\n information.\n\nGetting security information\n\n CERT publications and other security information are\navailable from\n our web site\n\n http://www.cert.org/\n\n To subscribe to the CERT mailing list for advisories and\nbulletins,\n send email to [email protected]. 
Please include in the\nbody of your\n message\n\n subscribe cert-advisory\n\n * \"CERT\" and \"CERT Coordination Center\" are registered in\nthe U.S.\n Patent and Trademark Office.\n \n______________________________________________________________________\n\n NO WARRANTY\n Any material furnished by Carnegie Mellon University and\nthe Software\n Engineering Institute is furnished on an \"as is\" basis.\nCarnegie\n Mellon University makes no warranties of any kind, either\nexpressed or\n implied as to any matter including, but not limited to,\nwarranty of\n fitness for a particular purpose or merchantability,\nexclusivity or\n results obtained from use of the material. Carnegie\nMellon University\n does not make any warranty of any kind with respect to\nfreedom from\n patent, trademark, or copyright infringement.\n \n_________________________________________________________________\n\n Conditions for use, disclaimers, and sponsorship\ninformation\n\n Copyright 2001 Carnegie Mellon University.\n",
"msg_date": "Wed, 10 Jan 2001 19:11:18 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Interesting CERT advisory"
},
{
"msg_contents": "> Both the open and closed source versions of the Interbase server\n> contain a compiled-in back door account with a known password.\n\nDarn. We are probably too late in beta to consider adding this feature;\nwe'll have to play catchup in 7.2 ;)\n\n - Thomas\n",
"msg_date": "Thu, 11 Jan 2001 03:28:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Interesting CERT advisory"
}
]
[
{
"msg_contents": "I just jumped in on this thread so I'm not sure where you're looking for the\nlogging, but postgreSQL has the following option when building (from\n'./configure --help'):\n\n--enable-syslog enable logging to syslog\n\nI saw that you're installing from RPM's so this won't help and I'm not even\nsure that this is the logging about which you're talking, but thought I'd\npost just in case!\n\nCheers,\nCraig\n\n-----Original Message-----\nFrom: Martin A. Marques [mailto:[email protected]]\nSent: Wednesday, January 10, 2001 6:19 PM\nTo: Oliver Elphick\nCc: pgsql-general; [email protected]\nSubject: [HACKERS] Re: still no log\n\n\nEl Mi� 10 Ene 2001 21:07, escribiste:\n> \"Martin A. Marques\" wrote:\n> >Sorry for the insistence, but after looking and looking again, I can't\n> > find out why the postgres logs are empty. The postgres database is up\n> > and working\n> >\n> >great, but nothing is getting logged.\n> >I'm on a RedHat Linux (6.0 with lot of upgrades)\n> >postgres 7.0.3 from rpm (downoaded from the postgres ftp server)\n> >\n> >Any ideas?\n>\n> If postmaster is started with -S, nothing gets logged. Is that your\n> problem?\n\nWell, I'm not sure. I can't recall checking that in the startup script, but\nI \nthink it's not there. Any way, why would the instalation make the entries in\n\nthe logroutate config files and then have a startup script that won't do \nlogging? I'm using the normal startup script in /etc/rc.d/init.d/.\n\nSaludos... :-)\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Wed, 10 Jan 2001 18:49:34 -0600",
"msg_from": "\"Craig L. Ching\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Re: still no log"
},
{
"msg_contents": "You need to edit /etc/rc.d/init.d/postgresql\n\nchange the line that looks like this:\n\nsu -l postgres -c \"/usr/bin/pg_ctl -D $PGDATA -p /usr/bin/postmaster start\n>/dev/null 2>&1\"\nto:\n\nsu -l postgres -c \"/usr/bin/pg_ctl -D $PGDATA -p /usr/bin/postmaster start\n>> /var/log/postgres 2>&1\"\n\nNotice the appended redirection (>>) and the file name (/var/log/postgres)\n\nI'm pretty sure the logrotate entries are disabled by default. They cannot\nrotate the logs unless the postmaster is stopped/restarted at the time of\nrotate and you may not want your posmaster going down unexpectedly in a 24x7\napplication.\n\nAs to why RPM's install this way, I have no idea. I'm just grateful they\nexist.\n\n--rob\n\n----- Original Message -----\nFrom: \"Craig L. Ching\" <[email protected]>\nTo: \"'Martin A. Marques'\" <[email protected]>; \"Oliver Elphick\"\n<[email protected]>\nCc: \"pgsql-general\" <[email protected]>;\n<[email protected]>\nSent: Wednesday, January 10, 2001 7:49 PM\nSubject: RE: [HACKERS] Re: still no log\n\n\n> I just jumped in on this thread so I'm not sure where you're looking for\nthe\n> logging, but postgreSQL has the following option when building (from\n> './configure --help'):\n>\n> --enable-syslog enable logging to syslog\n>\n> I saw that you're installing from RPM's so this won't help and I'm not\neven\n> sure that this is the logging about which you're talking, but thought I'd\n> post just in case!\n>\n> Cheers,\n> Craig\n>\n> -----Original Message-----\n> From: Martin A. Marques [mailto:[email protected]]\n> Sent: Wednesday, January 10, 2001 6:19 PM\n> To: Oliver Elphick\n> Cc: pgsql-general; [email protected]\n> Subject: [HACKERS] Re: still no log\n>\n>\n> El Mi� 10 Ene 2001 21:07, escribiste:\n> > \"Martin A. Marques\" wrote:\n> > >Sorry for the insistence, but after looking and looking again, I\ncan't\n> > > find out why the postgres logs are empty. The postgres database is\nup\n> > > and working\n> > >\n> > >great, but nothing is getting logged.\n> > >I'm on a RedHat Linux (6.0 with lot of upgrades)\n> > >postgres 7.0.3 from rpm (downoaded from the postgres ftp server)\n> > >\n> > >Any ideas?\n> >\n> > If postmaster is started with -S, nothing gets logged. Is that your\n> > problem?\n>\n> Well, I'm not sure. I can't recall checking that in the startup script,\nbut\n> I\n> think it's not there. Any way, why would the instalation make the entries\nin\n>\n> the logroutate config files and then have a startup script that won't do\n> logging? I'm using the normal startup script in /etc/rc.d/init.d/.\n>\n> Saludos... :-)\n>\n> --\n> System Administration: It's a dirty job,\n> but someone told I had to do it.\n> -----------------------------------------------------------------\n> Mart�n Marqu�s email: [email protected]\n> Santa Fe - Argentina http://math.unl.edu.ar/~martin/\n> Administrador de sistemas en math.unl.edu.ar\n> -----------------------------------------------------------------\n>\n\n",
"msg_date": "Thu, 11 Jan 2001 07:07:18 -0500",
"msg_from": "\"rob\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: still no log"
}
]
[
{
"msg_contents": "\nAfter almost a month since Beta1 was released, and skipping Beta2, the\nPostgreSQL Developers are pleased to announced Beta3 of v7.1.\n\nDue to the large number of changes made since Beta1 was released, we have\nincluded a Changelog file detailing all changes, that is viewable in the\nChangeLogs subdirectory. This file is available outside of the\ndistribution at:\n\nftp://ftp.postgresql.org/pub/ChangeLogs/ChangeLog-7.1beta1-to-7.1beta3\n\nAltho we anticipate at least one more beta release before full release, we\nwould like to encourage as many people as possible to download and test\nout this version on their various platforms. All problems should be\nreport to [email protected] ...\n\nThe tar files can be downloaded from:\n\n\tftp://ftp.postgresql.org/pub/dev/..\n\nAll mirror sites should have copies of this within the next 24 hours ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Thu, 11 Jan 2001 00:18:13 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL v7.1BETA3 Bundled and Available ..."
},
{
"msg_contents": "Will there be RPMs for this beta? (Whoever makes the RPMs ;))\n\n-mike\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of The Hermit\n> Hacker\n> Sent: Thursday, January 11, 2001 3:18 PM\n> To: [email protected]\n> Cc: [email protected]; [email protected]\n> Subject: [GENERAL] PostgreSQL v7.1BETA3 Bundled and Available ...\n>\n>\n>\n> After almost a month since Beta1 was released, and skipping Beta2, the\n> PostgreSQL Developers are pleased to announced Beta3 of v7.1.\n>\n> Due to the large number of changes made since Beta1 was released, we have\n> included a Changelog file detailing all changes, that is viewable in the\n> ChangeLogs subdirectory. This file is available outside of the\n> distribution at:\n>\n> ftp://ftp.postgresql.org/pub/ChangeLogs/ChangeLog-7.1beta1-to-7.1beta3\n>\n> Altho we anticipate at least one more beta release before full release, we\n> would like to encourage as many people as possible to download and test\n> out this version on their various platforms. All problems should be\n> report to [email protected] ...\n>\n> The tar files can be downloaded from:\n>\n> \tftp://ftp.postgresql.org/pub/dev/..\n>\n> All mirror sites should have copies of this within the next 24 hours ...\n>\n> Marc G. Fournier ICQ#7615664 IRC\n> Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org\n>\n>\n\n",
"msg_date": "Thu, 11 Jan 2001 18:28:34 +1100",
"msg_from": "\"Mike Cannon-Brookes\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: PostgreSQL v7.1BETA3 Bundled and Available ..."
},
{
"msg_contents": "Mike Cannon-Brookes wrote:\n> \n> Will there be RPMs for this beta? (Whoever makes the RPMs ;))\n\nI do, and am working on them. There are a few changes I have to\nintegrate from various sources, as well as stress-testing the build,\netc. Look for RPM's Sunday, possibly sooner.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 11 Jan 2001 11:43:05 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1BETA3 Bundled and Available ..."
},
{
"msg_contents": "On Thu, 11 Jan 2001, Peter Eisentraut wrote:\n\n> The Hermit Hacker writes:\n>\n> > Due to the large number of changes made since Beta1 was released, we have\n> > included a Changelog file detailing all changes, that is viewable in the\n> > ChangeLogs subdirectory.\n>\n> Shouldn't that be in the HISTORY file?\n\nChangeLogs is meant to be more detailed then HISTORY, for those that would\nlike to see results similar to 'cvs log', but without cvs access ...\n\n\n",
"msg_date": "Thu, 11 Jan 2001 18:54:46 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL v7.1BETA3 Bundled and Available ..."
},
{
"msg_contents": "The Hermit Hacker writes:\n\n> Due to the large number of changes made since Beta1 was released, we have\n> included a Changelog file detailing all changes, that is viewable in the\n> ChangeLogs subdirectory.\n\nShouldn't that be in the HISTORY file?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 11 Jan 2001 23:55:18 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1BETA3 Bundled and Available ..."
},
{
"msg_contents": "On Fri, 12 Jan 2001, Peter Eisentraut wrote:\n\n> The Hermit Hacker writes:\n>\n> > ChangeLogs is meant to be more detailed then HISTORY, for those that would\n> > like to see results similar to 'cvs log', but without cvs access ...\n>\n> In that case it would be more useful (and customary) to put the complete\n> ChangeLog (since the beginning of time) into *one* file called\n> 'ChangeLog'. (The version bumps will be apparent since some file will be\n> changed to reflect it.)\n\nThought of that, tried it, the resultant file was *humongous* with all of\nthe TODO changes and such ... what I included was a cvs2cl.pl of just\nthose changes since REL7_1 was tag'd, with a manual cleaning of the \"TODO\nupdated\" lines ...\n\n\n",
"msg_date": "Thu, 11 Jan 2001 19:23:25 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL v7.1BETA3 Bundled and Available ..."
},
{
"msg_contents": "The Hermit Hacker writes:\n\n> ChangeLogs is meant to be more detailed then HISTORY, for those that would\n> like to see results similar to 'cvs log', but without cvs access ...\n\nIn that case it would be more useful (and customary) to put the complete\nChangeLog (since the beginning of time) into *one* file called\n'ChangeLog'. (The version bumps will be apparent since some file will be\nchanged to reflect it.)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 12 Jan 2001 00:25:15 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1BETA3 Bundled and Available ..."
},
{
"msg_contents": "The Hermit Hacker writes:\n\n> > > ChangeLogs is meant to be more detailed then HISTORY, for those that would\n> > > like to see results similar to 'cvs log', but without cvs access ...\n> >\n> > In that case it would be more useful (and customary) to put the complete\n> > ChangeLog (since the beginning of time) into *one* file called\n> > 'ChangeLog'. (The version bumps will be apparent since some file will be\n> > changed to reflect it.)\n>\n> Thought of that, tried it, the resultant file was *humongous* with all of\n> the TODO changes and such ... what I included was a cvs2cl.pl of just\n> those changes since REL7_1 was tag'd, with a manual cleaning of the \"TODO\n> updated\" lines ...\n\nThose that like to see something similar to 'cvs log' are surely not\ninterested into just the changes since beta 1 but at least since the\nbranch from 7.0. If we're going to have a new changelog file for each\nsubrelease-to-subrelease then it's not going to be useful. Maybe make\nChangeLog_7_1, later ChangeLog_7_2, then ChangeLog_7_3 and remove\nChangeLog_7_1, so you make a reasonable compromise between storage space\nand preserving the traditional nature and functionality of ChangeLogs.\n\nShould be under doc/ too, I think.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 12 Jan 2001 01:03:12 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1BETA3 Bundled and Available ..."
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The Hermit Hacker writes:\n>> ChangeLogs is meant to be more detailed then HISTORY, for those that would\n>> like to see results similar to 'cvs log', but without cvs access ...\n\n> In that case it would be more useful (and customary) to put the complete\n> ChangeLog (since the beginning of time) into *one* file called\n> 'ChangeLog'.\n\nSaid file would currently amount to about 2.6 megabytes (I know,\nI had occasion to do a complete cvs2cl.pl run earlier this week).\nAnd the long-term trend is up.\n\nI submit that that's too much to stuff into the distribution.\n\nI think it's reasonable to make the changelog available on the\nwebsite (broken down into per-version segments, probably). I just \nhave doubts about forcing people to download it whether they\nwant it or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2001 19:19:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1BETA3 Bundled and Available ... "
},
{
"msg_contents": "On Thu, Jan 11, 2001 at 11:43:05AM -0500, Lamar Owen wrote:\n> Mike Cannon-Brookes wrote:\n> > \n> > Will there be RPMs for this beta? (Whoever makes the RPMs ;))\n> \n> I do, and am working on them. There are a few changes I have to\n> integrate from various sources, as well as stress-testing the build,\n> etc. Look for RPM's Sunday, possibly sooner.\n> \nHello Lamar,\n\nit would be nice if you would put the spec-file and any patches you apply\nseperately into the SRPM-directory as well. I am following the beta via CVS\nand like to have a working SPEC-file nonetheless.\n\nBest Regards\nMirko\n",
"msg_date": "Fri, 12 Jan 2001 01:23:24 +0100",
"msg_from": "Mirko Zeibig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1BETA3 Bundled and Available ..."
},
{
"msg_contents": "Tom Lane writes:\n\n> I think it's reasonable to make the changelog available on the\n> website (broken down into per-version segments, probably). I just\n> have doubts about forcing people to download it whether they\n> want it or not.\n\nI agree that the complete changelog is probably too long, I'm just against\nthe per-version segmenting. Changelogs are usually used (well, by me) to\nget an overview when and how segments of code were worked on. The primary\nkey here is not what (sub-)version the change happened in. The people\nthat look into this sort of thing probably use cvs or snapshots, and even\nif not it makes the assumption that they downloaded and used *all*\nintermediate releases and only those, otherwise reading the beta1-to-beta3\nchangelog is pretty pointless in terms of finding out what happened to the\ncode.\n\nHow about maintaining a file ChangeLog that goes back one year?\n\nAnother issue with ChangeLogs is that there's an implicit assumption that\nthey are up to date. These won't be.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 12 Jan 2001 17:22:05 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1BETA3 Bundled and Available ... "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I agree that the complete changelog is probably too long, I'm just against\n> the per-version segmenting. Changelogs are usually used (well, by me) to\n> get an overview when and how segments of code were worked on. The primary\n> key here is not what (sub-)version the change happened in. The people\n> that look into this sort of thing probably use cvs or snapshots,\n\nWell, if you have bothered to set up cvs access then you can run cvs2cl\nfor yourself and get exactly the (subset of) the changelog you want.\nMy understanding of what Marc had in mind was to provide info for people\nwho only download releases and want to know what happened since the last\nrelease they had.\n\nThe whole project might be a waste of effort though, since those folks\nare likely to only want the digested HISTORY file and not the\nblow-by-blow logs ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 11:24:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1BETA3 Bundled and Available ... "
}
]
[
{
"msg_contents": "Hi folks,\n\n\n I'm co-developer of Delphi and Borland C++ Builder Zeos library\n(http://www.zeos.dn.ua) by Sergey Seroukhov which includes support for\nPostgreSQL. In fact, I'm dedicated Zeos PostgreSQL developer.\n We've run into a problem that we can't properly free the PPGnotify\nhandle returned by PQnotifies() - simply because Delphi does NOT include\nfree() routine (it does have it's own memory management routines of course).\nWe could assume MSVC's malloc() is used and import free() from it, like\nthis:\n procedure free(P: Pointer); cdecl; external 'msvcrt.dll'; // (which\nis the Delphi/Pascal equivalent of C's free())\n\n- but that would be a crude hack and would probably not work in platforms\nother then windows (Kylix, Delphi's encarnation on Linux, is almost ready as\nmost of you should know) or libpq.dll compiled with other C compilers then\nthen currently being used.\n That's why we believe a PQnotifyFree() or similar function should be\nimplemented since not everybody using libpq.dll is also using MSVC compiler.\nThat could also lead to problems in other platforms using compilers with\ndifferent allocation schemes.\n Of course it is trivial to implement such a function and this should\nbe no trouble.\n\n Could you please consider it ? Of course nobody wants memory\ncorruption in their applications and we don't like having to let those\nrecords allocated, but we can't currently do much about it.\n Thanks.\n\n\nBest Regards,\nSteve Howe\nCapella Development Group.\n\n\n",
"msg_date": "Thu, 11 Jan 2001 02:41:53 -0200",
"msg_from": "\"Steve Howe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Must implement PQnotifyFree()"
},
{
"msg_contents": "\"Steve Howe\" <[email protected]> writes:\n> [ doesn't want to assume he knows which malloc() libpq is using ]\n\nYou have a point. This plea would be more compelling if it came\ncomplete with a code and documentation patch against current sources,\nhowever ;-). I doubt anyone else will see it as a high-priority\nproblem ... so if you want it done, do the legwork ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Jan 2001 19:55:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Must implement PQnotifyFree() "
}
]
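The request in this thread is for a libpq-owned deallocator, so that bindings such as Zeos/Delphi never have to guess which C runtime's free() pairs with the malloc() that PQnotifies() used. A minimal sketch of what such a function could look like if added inside libpq itself; the name PQnotifyFree is the one proposed in the thread, not an existing libpq entry point here, and it assumes PQnotifies() hands back a plain malloc()'d PGnotify.

/*
 * Hypothetical addition to libpq (e.g. fe-exec.c): release a PGnotify
 * struct returned by PQnotifies().  Because this code is compiled into
 * libpq, it calls the free() that matches the malloc() libpq used, which
 * is exactly what an external binding cannot know on its own.
 */
#include <stdlib.h>
#include "libpq-fe.h"

void
PQnotifyFree(PGnotify *notify)
{
	if (notify)
		free(notify);
}

A C caller would then pair every non-NULL result of PQnotifies() with PQnotifyFree() instead of a bare free(), and a binding only needs to import this one extra entry point from the library.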
[
{
"msg_contents": "\n> > > > On AIX mktime(3) leaves tm_isdst at -1 if it does not have timezone\n> > > > info for that particular year and returns -1.\n> > > > The following code then makes savings time out of the -1.\n> > > > tz = (tm->tm_isdst ? (timezone - 3600) : timezone);\n> > > Hmm. That description is consistant with what I see in the Linux man\n> > > page. So I should check for (tm->tm_isdst > 0) rather than\n> > > checking for non-zero?\n> > It is obviously not possible to determine tm_isdst with mktime for a\n> > negative time_t. Thus with above fix PST works, but PDT is then busted :-(\n> \n> Obvious to AIX only?\nYes. The whole subject only concerns AIX (at least so far).\n> My conclusion is that the AIX timezone database is\n> damaged or missing for pre-1970 dates, but that other systems bothered\n> to get it at least somewhat right. Is there another issue here that I'm\n> missing?\n\nThe tz db is neighter damaged nor missing anything (see below). Only mktime \ndoes not work for some (maybe even avoidable) reason for dates before 1970.\n\n> > localtime does convert a negative time_t correctly including dst.\n> > Is there another way to determine tm_isdst ?\n> \n> Yes. Replace AIX with Linux or something else, then recompile Postgres\n> ;)\n\nAs I see it, the Linux results are also not 100 % correct in respect to dates \nbefore 1970. (based on assumption that Solaris is correct)\n\ne.g.:\n1503c1503\n< | Sat May 10 23:59:12 1947 PST\n---\n> | Sat May 10 23:59:12 1947 PDT\n\nWas 1947 PDT or PST ? In eighter case one result is one hour off, Solaris or Linux.\n\nThis raises another issue. Why do we distribute expected files with bogus results \nin them ? Imho it would be better to only have expected files for rounding issues and \nthe like. Else the user feels that horology works fine on his machine, but as it looks it only\nworks on a few.\n\nAndreas\n",
"msg_date": "Thu, 11 Jan 2001 10:04:54 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: Re: tinterval - operator problems on AIX"
},
{
"msg_contents": "> As I see it, the Linux results are also not 100 % correct in respect to dates\n> before 1970. (based on assumption that Solaris is correct)\n> < | Sat May 10 23:59:12 1947 PST\n> > | Sat May 10 23:59:12 1947 PDT\n> Was 1947 PDT or PST ? In eighter case one result is one hour off, Solaris or Linux.\n\nYes, I've seen this before. As I mentioned, Solaris does a particularly\ngood job of including the variations in DST definitions around WWII and\nearlier. In the US, there are several different conventions used in\nthose years, presumably based on a need for energy conservation and\nperhaps to maximize production efficiency.\n\n> This raises another issue. Why do we distribute expected files with bogus results\n> in them?\n\nIt depends on what you mean by \"bogus\". imho we should (and do)\ndistribute \"expected\" files which reflect the results expected for a\nparticular machine -- based on a careful analysis of the results from\nthat machine from an expert, such as you are doing with AIX. Your\nresults are incremental differences from some other \"standard machine\",\nwhich has also been carefully analyzed. By definition, the \"standard\nmachine\" has been and is a Linux box, for the simple historical reason\nthat this was my machine at the time that scrappy and I resurrected the\nregression tests several years ago. But it is a good choice for the\nstandard, since that style of machine has a large installed base and the\ncost of entry for someone else wanting to participate is very low.\n\nIf I understand the alternatives you are considering, the other choice\nis to distribute \"expected\" files which reflect what we think a machine\nshould produce if it behaved the way we think it should. That doesn't\nreally help anyone, since a tester would have to derive from first\nprinciples the correctness of the test results each time they run the\ntest.\n\nInstead, we document the current behavior, and the regression tests can\nnow catch *changes* in behavior, to be analyzed just as you are doing.\nIf AIX fixes their mktime() and timezone behavior, you (or someone else\nrunning AIX) will see it, evaluate it, and adjust the regression\n\"expected\" files to reflect this.\n\n - Thomas\n",
"msg_date": "Thu, 11 Jan 2001 15:19:42 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Re: tinterval - operator problems on AIX"
}
]
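The root of the bug discussed in this thread is that tm_isdst is ternary, not boolean: positive means DST is in effect, zero means standard time, and negative means the library could not determine it (which is what AIX's mktime() leaves behind for pre-1970 dates). A short sketch of the distinction, reusing the variable names from the snippet quoted above; treating the "unknown" case as standard time is an assumption, the thread leaves that choice open.

#include <time.h>

/*
 * tm_isdst has three states: >0 DST, ==0 standard time, <0 unknown.
 * The original code treated any non-zero value (including -1) as DST,
 * which produced the wrong offset on AIX.  `timezone' is the XSI global
 * (seconds west of UTC); tzset() is assumed to have been called already.
 */
static long
tz_offset(const struct tm *tm)
{
	if (tm->tm_isdst > 0)
		return timezone - 3600;		/* daylight saving time */
	else if (tm->tm_isdst == 0)
		return timezone;		/* standard time */
	else
		return timezone;		/* unknown: assume standard time */
}

On platforms whose mktime() can determine DST for the given date, only the first two branches are ever taken; the third exists purely for the AIX pre-1970 case discussed above.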
[
{
"msg_contents": "\n> > > we've almost totally rewrite gist.c because old code and algorithm\n> > > were not suitable for variable size keys. I think it might be\n> > > submitted into 7.1 beta source tree.\n> >\n> > Urgh. Dropping in a total rewrite when we're already past \n> beta3 doesn't\n> > strike me as good project management practice --- especially if the\n> > rewrite was done to add features (ie variable-size keys) not merely\n> > fix bugs. I think it might be more prudent to hold this for 7.2.\n> \n> OK. If our changes will not go to 7.1, is't possible to create\n> feature archive and announce it somewhere. It would be nice if\n> people could test it. Anyway, I'll create web page with all\n> docs and patches. I'm afraid one more year to 7.2 is enough for\n> GiST to die :-)\n\nI think featureism is the the most prominent argument for PostgreSQL.\nThus standing before a decision to eighter fix GiST bugs and risc a new \nbug (limited to GiST) because of an added feature or shipping a known \nbroken GiST, my vote would definitely be to add Oleg's patch.\n\nAndreas\n",
"msg_date": "Thu, 11 Jan 2001 11:25:25 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "On Thu, 11 Jan 2001, Zeugswetter Andreas SB wrote:\n\n>\n> > > > we've almost totally rewrite gist.c because old code and algorithm\n> > > > were not suitable for variable size keys. I think it might be\n> > > > submitted into 7.1 beta source tree.\n> > >\n> > > Urgh. Dropping in a total rewrite when we're already past\n> > beta3 doesn't\n> > > strike me as good project management practice --- especially if the\n> > > rewrite was done to add features (ie variable-size keys) not merely\n> > > fix bugs. I think it might be more prudent to hold this for 7.2.\n> >\n> > OK. If our changes will not go to 7.1, is't possible to create\n> > feature archive and announce it somewhere. It would be nice if\n> > people could test it. Anyway, I'll create web page with all\n> > docs and patches. I'm afraid one more year to 7.2 is enough for\n> > GiST to die :-)\n>\n> I think featureism is the the most prominent argument for PostgreSQL.\n> Thus standing before a decision to eighter fix GiST bugs and risc a new\n> bug (limited to GiST) because of an added feature or shipping a known\n> broken GiST, my vote would definitely be to add Oleg's patch.\n\nDefinetely, our changes limited to GiST insert algorithm only.\nOther changes are bugfixes. I encourage people interested in GiST\nto test my submission. Our implementation of RD-Tree which we used\nto support of indexing of int4 arrays will works only with our\nversion of gist.c (actually our interest to GiST was motivated by\nindex support of int4 arrays).\n\n\tRegards,\n\n\t\tOleg\n\n>\n> Andreas\n>\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n",
"msg_date": "Thu, 11 Jan 2001 13:54:06 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> That's my vote too, specially if there will be some regression tests \n> accompanying the patches. The current (pre-patch) state of affairs with \n> GiST could probably be described as security-by-obscurity anyhow i.e. \n> \"we have't tried it so we think it probably works\" ;-)\n\nAu contraire, there *are* a few users of GiST out there now, Gene Selkov\nto name one. So there is a definite risk of breaking things that worked\nin 7.0 and before, in the name of adding new features.\n\nIf I thought that we had adequate ability to test the new GiST\nimplementation during the remaining beta period, I wouldn't be\nso worried. But at this point, Oleg's changes could not appear\nin the beta series before beta4, and between the late date, the\nlack of regression test, and the few interested people to test it,\nI doubt that we'll get any useful coverage.\n\nI would recommend that Oleg do like Ryan K. did for awhile with the\nAlpha patches: make them available as a set of diffs to be applied\nto the official distribution. We'll be happy to merge them in for\n7.2, but the calendar says it's too late for 7.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2001 10:01:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n>> ... the calendar says it's too late for 7.1.\n\n> Even for the _real_ bugfixes in gist.c ?\n\nIf he were submitting only bugfixes, we wouldn't be having this\ndiscussion.\n\nLook, I don't like postponing improvements either. But if we don't\nadhere to project management discipline, we are never going to get\nreleases out the door at all --- or if we do, they'll be too buggy\nto be reliable. It's not like \"no new features during beta\" is such\na draconian or difficult-to-understand rule.\n\nThe RelFileNodeEquals() bug we found on Monday proves that no one had\nyet done enough stress-testing on 7.1 to discover that multiple\ndatabases were broken. Think about that for awhile before you campaign\nfor inserting untested new features at this point. We need to focus on\nTESTING, people, not new features.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2001 11:06:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "Oleg Bartunov wrote:\n> \n> On Thu, 11 Jan 2001, Zeugswetter Andreas SB wrote:\n> \n> > I think featureism is the the most prominent argument for PostgreSQL.\n\nExactly! Altough it has already lost much of it ;( \n\n> > Thus standing before a decision to eighter fix GiST bugs and risc a new\n> > bug (limited to GiST) because of an added feature or shipping a known\n> > broken GiST, my vote would definitely be to add Oleg's patch.\n\nThat's my vote too, specially if there will be some regression tests \naccompanying the patches. The current (pre-patch) state of affairs with \nGiST could probably be described as security-by-obscurity anyhow i.e. \n\"we have't tried it so we think it probably works\" ;-)\n\nAlso I suspect there are still only a few users, most of them capable of \nfixing inside gist.c if something nasti turns up. \nThey will be much more motivated to do so if it is in \"official\" sources \nand not in the sources from postgresql-gist.org or somesuch ;)\n\n> Definetely, our changes limited to GiST insert algorithm only.\n> Other changes are bugfixes. \n\nSo applying _only_ bugfixes may also not be an option as they are not \nwell tested without the changed gist.c\n\n-----------------\nHannu\n",
"msg_date": "Thu, 11 Jan 2001 16:06:41 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n> > That's my vote too, specially if there will be some regression tests\n> > accompanying the patches. The current (pre-patch) state of affairs with\n> > GiST could probably be described as security-by-obscurity anyhow i.e.\n> > \"we have't tried it so we think it probably works\" ;-)\n> \n> Au contraire, there *are* a few users of GiST out there now, Gene Selkov\n> to name one.\n\nYes, he is the only one (except Oleg) whom I know to use it too ;)\n\n> So there is a definite risk of breaking things that worked\n> in 7.0 and before, in the name of adding new features.\n\nTrue. Could we ask Gene to test 7.1 with Oleg's patches ?\n\n> If I thought that we had adequate ability to test the new GiST\n> implementation during the remaining beta period, I wouldn't be\n> so worried. But at this point, Oleg's changes could not appear\n> in the beta series before beta4, and between the late date, the\n> lack of regression test, and the few interested people to test it,\n> I doubt that we'll get any useful coverage.\n\nOr if in fact there _are_ only a few people using it now we could \nget _all_ the coverage to be sufficiently sure we don't break anyones\ncode. GiST being such an obscure and underused feature I'm pretty sure \nthat most (all?) active users are on Hackers list and read everything \nthat has GiST in subject.\n\n> I would recommend that Oleg do like Ryan K. did for awhile with the\n> Alpha patches: make them available as a set of diffs to be applied\n> to the official distribution. We'll be happy to merge them in for\n> 7.2, but the calendar says it's too late for 7.1.\n\nEven for the _real_ bugfixes in gist.c ?\n\n----------------\nHannu\n",
"msg_date": "Thu, 11 Jan 2001 17:35:12 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "On Thu, 11 Jan 2001, Hannu Krosing wrote:\n\n> I make a personal promise to spend at least 5 hours of testing new GiST\n> functionality during this weekend if it is commited to 7.1 CVS.\n> (ok, I do it anyhow, just that currently I'm testing it using the\n> patches ;)\n\nHanny,\n\nlatest version is available at http://www.sai.msu.su/~megera/postgres/gist/\nnothing changed in code (in compare with my submission), just added some\ninfo and regression test. Let me know if you need some help.\n\n\tOleg\n\n>\n> -------------------------\n> Hannu\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 11 Jan 2001 22:22:09 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n> >> ... the calendar says it's too late for 7.1.\n> \n> > Even for the _real_ bugfixes in gist.c ?\n> \n> If he were submitting only bugfixes, we wouldn't be having this\n> discussion.\n\nBut he had very little incentive to fix bugs in the version he \nwould not use.\n\n> Look, I don't like postponing improvements either. But if we don't\n> adhere to project management discipline,\n\nBut should we do that _blindly_?\nI'd think that improving/fixing things in seldom-visited corners of\npostgres should be a little more tolerable than messing around in core.\n\n> we are never going to get\n> releases out the door at all --- or if we do, they'll be too buggy\n> to be reliable. It's not like \"no new features during beta\" is such\n> a draconian or difficult-to-understand rule.\n\nI'd rather describe his changes as \"a (bug)fix that required a major\nrewrite\" ;)\n\n> The RelFileNodeEquals() bug we found on Monday proves that no one had\n> yet done enough stress-testing on 7.1 to discover that multiple\n> databases were broken.\n\nBTW, What do people use for stress-testing ?\n \n> Think about that for awhile before you campaign for inserting untested\n> new features at this point. \n\nRather new variants of little-tested features ;)\n\n> We need to focus on TESTING, people, not new features.\n\nI make a personal promise to spend at least 5 hours of testing new GiST \nfunctionality during this weekend if it is commited to 7.1 CVS. \n(ok, I do it anyhow, just that currently I'm testing it using the\npatches ;)\n\n-------------------------\nHannu\n",
"msg_date": "Thu, 11 Jan 2001 21:10:01 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !!"
}
]
[
{
"msg_contents": "Suppose a function using table t1 as its argument:\n\ncreate table t1(...\ncreate fuction f1(t1) returns...\n\nAnd if I drop t1 then do pg_dump, I would got something like:\n\n\tfailed sanity check, type with oid 1905168 was not found\n\nThis is because the type t1 does not exist anynmore. Since not being\nable to make a back up of database is a critical problem, I think we\nhave to fix this.\n\n1) remove that proc entry from pg_proc if t1 is deleted\n\n2) fix pg_dump so that it ignores sunch a bogus entry\n\n3) do both 1) and 2)\n\nComments?\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 11 Jan 2001 21:42:10 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "drop table and pg_proc"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> \n> Suppose a function using table t1 as its argument:\n> \n> create table t1(...\n> create fuction f1(t1) returns...\n> \n> And if I drop t1 then do pg_dump, I would got something like:\n> \n> failed sanity check, type with oid 1905168 was not found\n> \n> This is because the type t1 does not exist anynmore. Since not being\n> able to make a back up of database is a critical problem, I think we\n> have to fix this.\n> \n> 1) remove that proc entry from pg_proc if t1 is deleted\n> \n> 2) fix pg_dump so that it ignores sunch a bogus entry\n> \n> 3) do both 1) and 2)\n\nI have the same problem with views. If I create a view, drop/recreate\nthe tables to which it references, pg_dump fails unless I also drop and\nrecreate the view. I have seen similar behavior with indexes based on\nuser functions, when a function is dropped and recreated.\n\nI suspect that this is because all these things get an OID, and the OIDs\nchange when things get modified. There should be a way to reassign\ndependencies, perhaps vacuum should be able to do this?\n\n\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Thu, 11 Jan 2001 07:58:34 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: drop table and pg_proc"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Suppose a function using table t1 as its argument:\n> create table t1(...\n> create fuction f1(t1) returns...\n> And if I drop t1 then do pg_dump, I would got something like:\n> \tfailed sanity check, type with oid 1905168 was not found\n> This is because the type t1 does not exist anynmore. Since not being\n> able to make a back up of database is a critical problem, I think we\n> have to fix this.\n\nThis is just one instance of the generic problem that we don't enforce\nreferential integrity across system catalogs. Since this issue has\nalways been there, I'm not inclined to panic about it (ie, I don't want\nto try to solve it for 7.1). But we should think about a long-term fix.\n\n> 1) remove that proc entry from pg_proc if t1 is deleted\n> 2) fix pg_dump so that it ignores sunch a bogus entry\n> 3) do both 1) and 2)\n\nUltimately we should probably do both. #2 looks easier and is probably\nthe thing to work on first. In general, pg_dump is fairly brittle when\nit comes to missing cross-references, eg, I think it fails to even\nnotice a table that has no corresponding owner in pg_shadow (it should\nbe doing an outer not inner join for that). It'd be worth fixing\npg_dump so that it issues warnings about such cases but tries to plow\nahead anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2001 12:25:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: drop table and pg_proc "
},
{
"msg_contents": "Add to TODO:\n\n\t* Enforce referential integrity for system tables\n\n> Tatsuo Ishii <[email protected]> writes:\n> > Suppose a function using table t1 as its argument:\n> > create table t1(...\n> > create fuction f1(t1) returns...\n> > And if I drop t1 then do pg_dump, I would got something like:\n> > \tfailed sanity check, type with oid 1905168 was not found\n> > This is because the type t1 does not exist anynmore. Since not being\n> > able to make a back up of database is a critical problem, I think we\n> > have to fix this.\n> \n> This is just one instance of the generic problem that we don't enforce\n> referential integrity across system catalogs. Since this issue has\n> always been there, I'm not inclined to panic about it (ie, I don't want\n> to try to solve it for 7.1). But we should think about a long-term fix.\n> \n> > 1) remove that proc entry from pg_proc if t1 is deleted\n> > 2) fix pg_dump so that it ignores sunch a bogus entry\n> > 3) do both 1) and 2)\n> \n> Ultimately we should probably do both. #2 looks easier and is probably\n> the thing to work on first. In general, pg_dump is fairly brittle when\n> it comes to missing cross-references, eg, I think it fails to even\n> notice a table that has no corresponding owner in pg_shadow (it should\n> be doing an outer not inner join for that). It'd be worth fixing\n> pg_dump so that it issues warnings about such cases but tries to plow\n> ahead anyway.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Jan 2001 00:12:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: drop table and pg_proc"
},
{
"msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > Suppose a function using table t1 as its argument:\n> > create table t1(...\n> > create fuction f1(t1) returns...\n> > And if I drop t1 then do pg_dump, I would got something like:\n> > \tfailed sanity check, type with oid 1905168 was not found\n> > This is because the type t1 does not exist anynmore. Since not being\n> > able to make a back up of database is a critical problem, I think we\n> > have to fix this.\n> \n> This is just one instance of the generic problem that we don't enforce\n> referential integrity across system catalogs. Since this issue has\n> always been there, I'm not inclined to panic about it (ie, I don't want\n> to try to solve it for 7.1). But we should think about a long-term fix.\n> \n> > 1) remove that proc entry from pg_proc if t1 is deleted\n> > 2) fix pg_dump so that it ignores sunch a bogus entry\n> > 3) do both 1) and 2)\n> \n> Ultimately we should probably do both. #2 looks easier and is probably\n> the thing to work on first. In general, pg_dump is fairly brittle when\n> it comes to missing cross-references, eg, I think it fails to even\n> notice a table that has no corresponding owner in pg_shadow (it should\n> be doing an outer not inner join for that). It'd be worth fixing\n> pg_dump so that it issues warnings about such cases but tries to plow\n> ahead anyway.\n> \n> \t\t\tregards, tom lane\n\nI'm working on #2. Here is a partial fix for pg_dump, FYI. If it looks\nok, I'll do more cleanup...\n\n$ cvs diff -c common.c pg_dump.c\nIndex: common.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/pg_dump/common.c,v\nretrieving revision 1.49\ndiff -c -r1.49 common.c\n*** common.c\t2001/01/12 15:41:29\t1.49\n--- common.c\t2001/01/21 01:38:48\n***************\n*** 86,95 ****\n \t\t}\n \t}\n \n! \t/* should never get here */\n! \tfprintf(stderr, \"failed sanity check, type with oid %s was not found\\n\",\n! \t\t\toid);\n! \texit(2);\n }\n \n /*\n--- 86,93 ----\n \t\t}\n \t}\n \n! \t/* no suitable type name was found */\n! \treturn(NULL);\n }\n \n /*\n***************\n*** 114,120 ****\n \t/* should never get here */\n \tfprintf(stderr, \"failed sanity check, opr with oid %s was not found\\n\",\n \t\t\toid);\n! \texit(2);\n }\n \n \n--- 112,120 ----\n \t/* should never get here */\n \tfprintf(stderr, \"failed sanity check, opr with oid %s was not found\\n\",\n \t\t\toid);\n! \n! \t/* no suitable operator name was found */\n! 
\treturn(NULL);\n }\n \n \nIndex: pg_dump.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/pg_dump/pg_dump.c,v\nretrieving revision 1.187\ndiff -c -r1.187 pg_dump.c\n*** pg_dump.c\t2001/01/12 15:41:29\t1.187\n--- pg_dump.c\t2001/01/21 01:38:56\n***************\n*** 2928,2933 ****\n--- 2928,2942 ----\n \t\t\tchar\t *elemType;\n \n \t\t\telemType = findTypeByOid(tinfo, numTypes, tinfo[i].typelem, zeroAsOpaque);\n+ \t\t\tif (elemType == NULL)\n+ \t\t\t{\n+ \t\t\t\tfprintf(stderr, \"Notice: type for oid %s is not dumped.\\n\",\n+ \t\t\t\t\t\ttinfo[i].typelem);\n+ \t\t\t\tresetPQExpBuffer(q);\n+ \t\t\t\tresetPQExpBuffer(delq);\n+ \t\t\t\tcontinue;\n+ \t\t\t}\n+ \n \t\t\tappendPQExpBuffer(q, \", element = %s, delimiter = \", elemType);\n \t\t\tformatStringLiteral(q, tinfo[i].typdelim);\n \t\t}\n***************\n*** 3086,3091 ****\n--- 3095,3101 ----\n \tchar\t\t*listSep;\n \tchar\t\t*listSepComma = \",\";\n \tchar\t\t*listSepNone = \"\";\n+ \tchar\t\t*rettypename;\n \n \tif (finfo[i].dumped)\n \t\treturn;\n***************\n*** 3147,3152 ****\n--- 3157,3177 ----\n \t\tchar\t\t\t*typname;\n \n \t\ttypname = findTypeByOid(tinfo, numTypes, finfo[i].argtypes[j], zeroAsOpaque);\n+ \t\tif (typname == NULL)\n+ \t\t{\n+ \t\t\tfprintf(stderr, \"Notice: function \\\"%s\\\" is not dumped\\n\",\n+ \t\t\t\t\tfinfo[i].proname);\n+ \n+ \t\t\tfprintf(stderr, \"Reason: the %d th arugument type name (oid %s) not found\\n\",\n+ \t\t\t\t\tj, finfo[i].argtypes[j]);\n+ \t\t\tresetPQExpBuffer(q);\n+ \t\t\tresetPQExpBuffer(fn);\n+ \t\t\tresetPQExpBuffer(delqry);\n+ \t\t\tresetPQExpBuffer(fnlist);\n+ \t\t\tresetPQExpBuffer(asPart);\n+ \t\t\treturn;\n+ \t\t}\n+ \n \t\tappendPQExpBuffer(fn, \"%s%s\", \n \t\t\t\t\t\t\t(j > 0) ? \",\" : \"\", \n \t\t\t\t\t\t\ttypname);\n***************\n*** 3159,3169 ****\n \tresetPQExpBuffer(delqry);\n \tappendPQExpBuffer(delqry, \"DROP FUNCTION %s;\\n\", fn->data );\n \n \tresetPQExpBuffer(q);\n \tappendPQExpBuffer(q, \"CREATE FUNCTION %s \", fn->data );\n \tappendPQExpBuffer(q, \"RETURNS %s%s %s LANGUAGE \",\n \t\t\t\t\t (finfo[i].retset) ? \"SETOF \" : \"\",\n! \t\t\t\t\t findTypeByOid(tinfo, numTypes, finfo[i].prorettype, zeroAsOpaque),\n \t\t\t\t\t asPart->data);\n \tformatStringLiteral(q, func_lang);\n \n--- 3184,3211 ----\n \tresetPQExpBuffer(delqry);\n \tappendPQExpBuffer(delqry, \"DROP FUNCTION %s;\\n\", fn->data );\n \n+ \trettypename = findTypeByOid(tinfo, numTypes, finfo[i].prorettype, zeroAsOpaque);\n+ \n+ \tif (rettypename == NULL)\n+ \t{\n+ \t\tfprintf(stderr, \"Notice: function \\\"%s\\\" is not dumped\\n\",\n+ \t\t\t\tfinfo[i].proname);\n+ \n+ \t\tfprintf(stderr, \"Reason: return type name (oid %s) not found\\n\",\n+ \t\t\t\tfinfo[i].prorettype);\n+ \t\t\tresetPQExpBuffer(q);\n+ \t\t\tresetPQExpBuffer(fn);\n+ \t\t\tresetPQExpBuffer(delqry);\n+ \t\t\tresetPQExpBuffer(fnlist);\n+ \t\t\tresetPQExpBuffer(asPart);\n+ \t\t\treturn;\n+ \t}\n+ \n \tresetPQExpBuffer(q);\n \tappendPQExpBuffer(q, \"CREATE FUNCTION %s \", fn->data );\n \tappendPQExpBuffer(q, \"RETURNS %s%s %s LANGUAGE \",\n \t\t\t\t\t (finfo[i].retset) ? \"SETOF \" : \"\",\n! \t\t\t\t\t rettypename,\n \t\t\t\t\t asPart->data);\n \tformatStringLiteral(q, func_lang);\n \n[t-ishii@srapc1474 pg_dump]$ \n",
"msg_date": "Sun, 21 Jan 2001 10:56:43 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: drop table and pg_proc "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I'm working on #2. Here is a partial fix for pg_dump, FYI. If it looks\n> ok, I'll do more cleanup...\n\nLooks OK as far as it goes. The other flavor of problems that pg_dump\nhas in this area are in doing inner joins across system catalogs ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Jan 2001 18:33:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: drop table and pg_proc "
}
]
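Tom's remaining point in this thread, that pg_dump also dies when a table's owner is missing from pg_shadow and should instead behave like an outer join, could follow the same pattern as the patch: have the lookup return NULL, warn, and keep dumping. A toy, self-contained illustration of that pattern; findOwnerBySysid and the UserInfo struct are invented for the example and are not actual pg_dump code.

#include <stdio.h>
#include <string.h>

typedef struct
{
	const char *usesysid;
	const char *usename;
} UserInfo;

/* Return NULL on a dangling reference instead of exiting, mirroring the
 * findTypeByOid() change in the patch above. */
static const char *
findOwnerBySysid(const UserInfo *users, int nusers, const char *sysid)
{
	int			i;

	for (i = 0; i < nusers; i++)
		if (strcmp(users[i].usesysid, sysid) == 0)
			return users[i].usename;
	return NULL;
}

int
main(void)
{
	UserInfo	users[] = {{"100", "postgres"}};
	const char *owner = findOwnerBySysid(users, 1, "105");

	if (owner == NULL)
	{
		/* warn and plow ahead: dump the object without ownership commands */
		fprintf(stderr, "Notice: owner with sysid 105 not found, "
				"object dumped without an owner\n");
		owner = "";
	}
	printf("owner = \"%s\"\n", owner);
	return 0;
}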
[
{
"msg_contents": "Zeugswetter Andreas SB writes:\n > Try the attachment with negative values, and tell us whether mktime\n > returns anything other that -1. Do you have an idea how else we\n > could determine daylight savings time ?\n\nmktime always returns -1 for tm's that might expect to return a\nnegative number. In those cases the tm is not normalized and\ntm_isdst is set to -1. When mktime returns zero or positive then tm\nis normalized and tm_isdst is set to 0 or 1.\n\nlocaltime sets all the fields of tm correctly, including tm_isdst, for\nall values of time_t, including negative ones. When I say correctly,\nthere is the usual limitation that the rules to specify when DST is in\nforce cannot express a variation from year to year. (You can specify\ne.g. the last Sunday in a month.)\n\nMy observations were consistent across AIX 4.1.5, 4.2.1, and 4.3.3.\n\n\nIf you have a time_t, then you can use localtime to determine DST. If\nyou have a tm then you cannot work out DST for dates before the epoch.\nOne workaround would be to add 4*n to tm_year and subtract (365*4+1)\n*24*60*60*n from the time_t returned. (All leap years are multiples\nof 4 in the range 1901 to 2038. If tm_wday is wanted, that will need\nto be adjusted as well.) But don't you do time interval arithmetic\nusing PostgreSQL date types rather than accepting the limitations of\nPOSIX/UNIX?\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWesternGeco -./\\.- by myself and does not represent\[email protected] -./\\.- opinion of Schlumberger, Baker\nhttp://www.crosswinds.net/~petef -./\\.- Hughes or their divisions.\n",
"msg_date": "Thu, 11 Jan 2001 13:22:06 +0000",
"msg_from": "Pete Forman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: AW: Re: tinterval - operator problems on AIX"
},
{
"msg_contents": "Pete Forman writes:\n > One workaround would be to add 4*n to tm_year and subtract (365*4+1)\n > *24*60*60*n from the time_t returned. (All leap years are multiples\n > of 4 in the range 1901 to 2038. If tm_wday is wanted, that will need\n > to be adjusted as well.)\n\nFWIW, that should be to add 28*n to tm_year and subtract (365*4+1)*7\n*24*60*60*n from the time_t returned. That calculates tm_wday\ncorrectly.\n\nAlso I should have been more explicit that this applies only to AIX\nand IRIX. Those return -1 from mktime(year < 1970) and do not allow\nDST rules to vary from year to year. Linux and Solaris have more\ncapable date libraries.\n-- \nPete Forman http://www.bedford.waii.com/wsdev/petef/PeteF_links.html\nWesternGeco http://www.crosswinds.net/~petef\nManton Lane, Bedford, mailto:[email protected]\nMK41 7PA, UK tel:+44-1234-224798 fax:+44-1234-224804\n",
"msg_date": "Thu, 11 Jan 2001 14:08:59 +0000",
"msg_from": "Pete Forman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: AW: Re: tinterval - operator problems on AIX"
},
{
"msg_contents": "> FWIW, that should be to add 28*n to tm_year and subtract (365*4+1)*7\n> *24*60*60*n from the time_t returned. That calculates tm_wday\n> correctly.\n> Also I should have been more explicit that this applies only to AIX\n> and IRIX. Those return -1 from mktime(year < 1970) and do not allow\n> DST rules to vary from year to year. Linux and Solaris have more\n> capable date libraries.\n\nOh, so AIX and IRIX have just one-line time zone databases? Yuck.\n\nHow about having some #if BROKEN_TIMEZONE_DATABASE code which uses both\nmktime() and localtime() to derive the correct time zone? That is, call\nmktime to get a time_t, then call localtime() to get the time zone info,\nbut only on platforms which do not get a complete result from just the\ncall to mktime(). In fact, we *could* check for tm->tm_isdst coming back\n\"-1\" for every platform, then call localtime() to make a last stab at\ngetting a good value.\n\nComments?\n\n - Thomas\n",
"msg_date": "Thu, 11 Jan 2001 15:31:24 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Re: tinterval - operator problems on AIX"
}
]
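Pete Forman's 28-year trick in the thread above is easy to get subtly wrong, so here is a minimal C sketch of it. This is an illustration only, not code from any PostgreSQL release: the function name is invented, and the DST flag it produces still follows the shifted year's rules, which is exactly the single-rule limitation he notes for AIX and IRIX.

    #include <time.h>

    /*
     * Hedged sketch of the workaround described above for platforms whose
     * mktime() returns -1 for years before 1970 (AIX, IRIX).  28 years is
     * seven 4-year leap cycles, i.e. (365*4+1)*7 days, so leap years and
     * weekdays both line up within the 1901-2038 range.
     */
    time_t
    mktime_pre_epoch(struct tm *tm)
    {
        const time_t shift = (time_t) (365 * 4 + 1) * 7 * 24 * 60 * 60;
        struct tm tmp = *tm;
        time_t t;

        tmp.tm_year += 28;          /* move into the 1970-2038 window */
        t = mktime(&tmp);
        if (t == (time_t) -1)
            return (time_t) -1;     /* still not representable */

        *tm = tmp;                  /* copy back the normalized fields ... */
        tm->tm_year -= 28;          /* ... but restore the original year */
        return t - shift;
    }

For example, mktime_pre_epoch() applied to 10 May 1947 really computes 10 May 1975 and then subtracts 883612800 seconds; tm_wday stays correct because those 10227 days are an exact number of weeks.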
|
[
{
"msg_contents": "Is UNDER being stripped out for 7.1? I'm looking at documentation and don't \nwant to write about it if it won't be in there.\n\n-- \n-------- Robert B. Easter [email protected] ---------\n-- CompTechNews Message Board http://www.comptechnews.com/ --\n-- CompTechServ Tech Services http://www.comptechserv.com/ --\n---------- http://www.comptechnews.com/~reaster/ ------------\n",
"msg_date": "Thu, 11 Jan 2001 09:20:47 -0500",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "UNDER?"
},
{
"msg_contents": "> Is UNDER being stripped out for 7.1? I'm looking at documentation and don't\n> want to write about it if it won't be in there.\n\nAlready gone. Check the recent archives for the discussion...\n\n - Thomas\n",
"msg_date": "Thu, 11 Jan 2001 15:32:13 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UNDER?"
},
{
"msg_contents": "\"Robert B. Easter\" <[email protected]> writes:\n> Is UNDER being stripped out for 7.1?\n\nIt's history.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2001 10:45:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UNDER? "
},
{
"msg_contents": "\"Robert B. Easter\" wrote:\n> \n> Is UNDER being stripped out for 7.1? I'm looking at documentation and don't\n> want to write about it if it won't be in there.\n\nThats' how I understand the outcome of a discussion about 1 week ago\nhere: \n\nTom Lane wrote on Tue Jan 2 20:19:18 2001:\n> Anyway, we seem to have a clear consensus to pull the UNDER clause from\n> the grammar and stick with INHERITS for 7.1. I will take care of that\n> in the next day or so.\n\n------------------\nHannu\n",
"msg_date": "Thu, 11 Jan 2001 17:10:11 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UNDER?"
}
]
|
[
{
"msg_contents": "Hi\n\nIm new here. I was wondering if anybody is working on ALTER TABLE to make it\nmore complete.\nMore specifically drop constraints\n\nSincerely\nPer-Olof Pettersson\n\n",
"msg_date": "Thu, 11 Jan 2001 16:17:21 +0100",
"msg_from": "\"Per-Olof Pettersson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Status of ALTER TABLE"
}
]
|
[
{
"msg_contents": "\n> > As I see it, the Linux results are also not 100 % correct \n> in respect to dates\n> > before 1970. (based on assumption that Solaris is correct)\n> > < | Sat May 10 23:59:12 1947 PST\n> > > | Sat May 10 23:59:12 1947 PDT\n> > Was 1947 PDT or PST ? In eighter case one result is one hour off, Solaris or Linux.\n> \n> Yes, I've seen this before. As I mentioned, Solaris does a particularly\n> good job of including the variations in DST definitions around WWII and\n> earlier.\n\nOne of the two results is still off by one hour if the noted hour is 23 in both the \nPDT and PST case. One issue is determining DST and the other is printing \nthe hour. For a given time in history the hour can't be the same if printed in\nPDT or PST.\n\nAndreas\n",
"msg_date": "Thu, 11 Jan 2001 16:40:23 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: AW: Re: tinterval - operator problems on AI\n\tX"
}
]
|
[
{
"msg_contents": "\n> Oh, so AIX and IRIX have just one-line time zone databases? Yuck.\n> \n> How about having some #if BROKEN_TIMEZONE_DATABASE code which uses both\n> mktime() and localtime() to derive the correct time zone? That is, call\n> mktime to get a time_t, then call localtime() to get the time zone info,\n> but only on platforms which do not get a complete result from just the\n> call to mktime(). In fact, we *could* check for tm->tm_isdst coming back\n> \"-1\" for every platform, then call localtime() to make a last stab at\n> getting a good value.\n\nHow would we construct a valid time_t from the struct tm without mktime?\n\nAndreas\n",
"msg_date": "Thu, 11 Jan 2001 17:06:17 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: Re: tinterval - operator problems on AIX"
},
{
"msg_contents": "> > How about having some #if BROKEN_TIMEZONE_DATABASE code which uses both\n> > mktime() and localtime() to derive the correct time zone? That is, call\n> > mktime to get a time_t, then call localtime() to get the time zone info,\n> > but only on platforms which do not get a complete result from just the\n> > call to mktime(). In fact, we *could* check for tm->tm_isdst coming back\n> > \"-1\" for every platform, then call localtime() to make a last stab at\n> > getting a good value.\n> How would we construct a valid time_t from the struct tm without mktime?\n\nIf I understand the info you have given previously, it should be\npossible to get a valid tm->tm_isdst by the following sequence of calls:\n\n// call mktime() which might return a \"-1\" for DST\ntime = mktime(tm);\n// time is now a correct GMT time\n// localtime() *is* allowed to return a good tm->tm_isdst\n// even for \"negative\" time_t values.\n// I thought I understood this from Andreas' info...\nnewtm = localtime(time);\n// use the new flag for something useful...\ndstflag = newtm->tm_isdst;\n\nYes?\n\n - Thomas\n",
"msg_date": "Thu, 11 Jan 2001 16:49:01 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Re: tinterval - operator problems on AIX"
}
]
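A compilable form of the mktime()-then-localtime() sequence Thomas sketches above might look like the following. It is only a sketch under his assumptions: the function name is made up here, and (as Andreas points out) it only helps when mktime() actually produced a usable time_t in the first place.

    #include <time.h>

    /*
     * If mktime() could not decide on DST (tm_isdst left at -1), fall back
     * to localtime() on the resulting time_t, which does consult the zone
     * rules, and copy its tm_isdst back.
     */
    int
    resolve_isdst(struct tm *tm)
    {
        time_t      t = mktime(tm);

        if (tm->tm_isdst < 0 && t != (time_t) -1)
        {
            struct tm  *lt = localtime(&t);

            if (lt != NULL)
                tm->tm_isdst = lt->tm_isdst;
        }
        return tm->tm_isdst;
    }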
|
[
{
"msg_contents": "Dear All,\nI currently developing a project using SNMP++ and posgresql (7.0.3)\nlibpq++\nSince both SNMP and posgresql define an Oid type\n\ntypedef unsigned int Oid (postgresql)\nstruct Oid (SNMP++)\n\nAt compilation time I get this error :\n\n/deliveries/external/SNMP++/3.4Patched/snmp++/include/oid.h:94:\nconflicting types for `struct Oid'\n/deliveries/external/postgres/7.0.3/include/postgres_ext.h:28: previous\ndeclaration as `typedef unsigned int Oid'\n\nI tryed to encampsulate my postgres derived classes into a namespace but\nthen I got a lot of other compilation errors\nwhere in my code I am using STL classes.\n\nIs there e simple solution to my problem ? is there a way to use\nnamespace in order to avoid conflicting types ?\n\nThanks in advance to everybody,\n\n -Corrado\n\n\n\n",
"msg_date": "Thu, 11 Jan 2001 18:18:45 +0100",
"msg_from": "Corrado Giacomini <[email protected]>",
"msg_from_op": true,
"msg_subject": "conflicting types for `struct Oid'"
}
]
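One way around the clash that does not depend on namespaces is to make sure the two headers never appear in the same translation unit: keep every libpq (or libpq++) call behind a small wrapper whose own header mentions no PostgreSQL types. The sketch below is a guess at what that could look like in plain C; the pgwrap names are invented for the example, and the same split works with a C++ class wrapping libpq++.

    /* pgwrap.h -- includes no PostgreSQL headers, so it can sit next to the
     * SNMP++ headers without the two Oid definitions ever meeting. */
    #ifndef PGWRAP_H
    #define PGWRAP_H

    typedef struct PgWrap PgWrap;   /* opaque connection handle */

    PgWrap *pgwrap_connect(const char *conninfo);
    int     pgwrap_exec(PgWrap *pg, const char *sql);   /* 0 on success */
    void    pgwrap_close(PgWrap *pg);

    #endif

    /* pgwrap.c -- the only file that includes libpq; SNMP++ headers are
     * never included here. */
    #include <stdlib.h>
    #include "libpq-fe.h"
    #include "pgwrap.h"

    struct PgWrap
    {
        PGconn     *conn;
    };

    PgWrap *
    pgwrap_connect(const char *conninfo)
    {
        PgWrap     *pg = (PgWrap *) malloc(sizeof(PgWrap));

        if (pg)
            pg->conn = PQconnectdb(conninfo);
        return pg;
    }

    int
    pgwrap_exec(PgWrap *pg, const char *sql)
    {
        PGresult   *res = PQexec(pg->conn, sql);
        int         ok = res != NULL &&
                         (PQresultStatus(res) == PGRES_COMMAND_OK ||
                          PQresultStatus(res) == PGRES_TUPLES_OK);

        if (res)
            PQclear(res);
        return ok ? 0 : -1;
    }

    void
    pgwrap_close(PgWrap *pg)
    {
        if (pg)
        {
            PQfinish(pg->conn);
            free(pg);
        }
    }

The SNMP++ side of the project then includes only pgwrap.h and never sees postgres_ext.h, so the conflicting Oid definitions are kept apart at the cost of one extra layer of calls.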
|
[
{
"msg_contents": "> In contrast the current alternatives appear to be either LOCK \n> the entire table (preventing ALL inserts and selects),\n\nSHARE ROW EXCLUSIVE mode doesn't prevent selects...\n\n> or to create a UNIQUE constraint (forcing complete rollbacks\n> and restarts in event of a collision :( ).\n\nHopefully, savepoints will be in 7.2\n\n> Any comments, suggestions or tips would be welcome. It looks \n> like quite a complex thing to do - I've only just started\n> looking at the postgresql internals and the lock manager.\n\nIt's very easy to do (from my PoV -:)) We need in yet another\npseudo table like one we use in XactLockTableInsert/XactLockTableWait\n- try to look there...\n\nVadim\n",
"msg_date": "Thu, 11 Jan 2001 09:20:19 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Lock on arbitrary string feature"
},
{
"msg_contents": "\n\n> Has anyone any input to offer on adding an arbitrary locking feature?\n\nHave you tried user locks? They don't block and span transactions but\nmaybe you can work this out in your application.\n\nUser locks are not in the parser but there is an add on module \nthat is in the contrib dir. Maybe that will work for you.\n\n\n\nMyron Scott\[email protected]\n\n\n\n",
"msg_date": "Thu, 11 Jan 2001 10:14:25 -0800 (PST)",
"msg_from": "Myron Scott <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Lock on arbitrary string feature"
}
]
|
[
{
"msg_contents": "> The RelFileNodeEquals() bug we found on Monday proves that no one had\n> yet done enough stress-testing on 7.1 to discover that multiple\n> databases were broken. Think about that for awhile before \n> you campaign for inserting untested new features at this point.\n> We need to focus on TESTING, people, not new features.\n\nI mostly sure that Oleg' changes touch *only* gist subdir (Oleg?)\nso *nothing* will be broken in other areas. That's why I don't\nobject new gist in 7.1.\n\nRelFileNodeEquals is quite another thing, thanks for fix again -:)\n\nVadim\n",
"msg_date": "Thu, 11 Jan 2001 09:35:07 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "On Thu, 11 Jan 2001, Mikheev, Vadim wrote:\n\n> > The RelFileNodeEquals() bug we found on Monday proves that no one had\n> > yet done enough stress-testing on 7.1 to discover that multiple\n> > databases were broken. Think about that for awhile before\n> > you campaign for inserting untested new features at this point.\n> > We need to focus on TESTING, people, not new features.\n>\n> I mostly sure that Oleg' changes touch *only* gist subdir (Oleg?)\n\nYes, and only one file - gist.c\n\n> so *nothing* will be broken in other areas. That's why I don't\n> object new gist in 7.1.\n>\n\nWe prepare regression test for RD-Tree in the same way as Gene\ndoes for his contribution. I put all files on\nhttp://www.sai.msu.su/~megera/postgres/gist/. btw, all Gene's\ntest for seg and cube in contrib area are passed. It would be better\nGene check his application himself.\n\nI'm sorry for trouble with my submission - I hoped we will be ready\nbefore beta2,3, but we spent too many time to get old insertion\nalgoritm works with variable size keys until we realized it's just\nnot suitable for this.\n\nI understand Tom's arguments and respect his experience, so I think it's\npossible to put link to my page in 7.1 docs for people interested in\nGiST features. Also, we found GiST part of postgres documentation\nis too short, so we'll try to contribute something sometime later.\n>From other side, GiST was too hidden for people, while it's very\npowerfull feature and many people for sure really needs GiST power.\nFrankly speaking I discovered GiST power myself by accident :-)\nNow we have many plans to use GiST in our real life applications such as\nWeb site management system, full text search (killer application !),\ndata mining and others.\n\nThere are several improvements and new features we plan to add to GiST\nwhich could be go to 7.2.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Thu, 11 Jan 2001 22:16:57 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> I understand Tom's arguments and respect his experience, so I think it's\n> possible to put link to my page in 7.1 docs for people interested in\n> GiST features.\n\nBear in mind that I only have one core vote ;-). We've already had some\nprivate core discussion about whether to accept this patch now, and so\nfar I think I'm outvoted.\n\nDid I understand you to say that you'd added some regression tests for\nGiST? That would lessen my unhappiness a little bit ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2001 16:02:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "On Thu, 11 Jan 2001, Tom Lane wrote:\n\n> Oleg Bartunov <[email protected]> writes:\n> > I understand Tom's arguments and respect his experience, so I think it's\n> > possible to put link to my page in 7.1 docs for people interested in\n> > GiST features.\n>\n> Bear in mind that I only have one core vote ;-). We've already had some\n> private core discussion about whether to accept this patch now, and so\n> far I think I'm outvoted.\n>\n\nThere are several Tom Lane, judge by your activity. You probably need\nseveral votes.\n\n> Did I understand you to say that you'd added some regression tests for\n> GiST? That would lessen my unhappiness a little bit ...\n\nYes, we did. Currently all files are available from my page\nhttp://www.sai.msu.su/~megera/postgres/gist/\nI could submit them to hackers list if CORE people got consensus\n\n\tRegards,\n\n\t\tOleg\n\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 12 Jan 2001 00:12:51 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "On Fri, 12 Jan 2001, Oleg Bartunov wrote:\n\n> On Thu, 11 Jan 2001, Tom Lane wrote:\n>\n> > Oleg Bartunov <[email protected]> writes:\n> > > I understand Tom's arguments and respect his experience, so I think it's\n> > > possible to put link to my page in 7.1 docs for people interested in\n> > > GiST features.\n> >\n> > Bear in mind that I only have one core vote ;-). We've already had some\n> > private core discussion about whether to accept this patch now, and so\n> > far I think I'm outvoted.\n> >\n>\n> There are several Tom Lane, judge by your activity. You probably need\n> several votes.\n>\n> > Did I understand you to say that you'd added some regression tests for\n> > GiST? That would lessen my unhappiness a little bit ...\n>\n> Yes, we did. Currently all files are available from my page\n> http://www.sai.msu.su/~megera/postgres/gist/\n> I could submit them to hackers list if CORE people got consensus\n\nOkay, if there are appropriate regression tests, I'm going to say go for\nit ...\n\nDoes anyone have any objections to my downloading the tar file (doing that\nnow), committing the changes and wrapping up a quick Beta4 just so that we\nhave a tar ball that is testable right away?\n\nSave Lamar and the other packagers a bit of work by avoiding beta3\npackages :)\n\n\n\n",
"msg_date": "Thu, 11 Jan 2001 18:35:05 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "\njust downloaded it and can't find any regression tests ... ?\n\nOn Thu, 11 Jan 2001, The Hermit Hacker wrote:\n\n> On Fri, 12 Jan 2001, Oleg Bartunov wrote:\n>\n> > On Thu, 11 Jan 2001, Tom Lane wrote:\n> >\n> > > Oleg Bartunov <[email protected]> writes:\n> > > > I understand Tom's arguments and respect his experience, so I think it's\n> > > > possible to put link to my page in 7.1 docs for people interested in\n> > > > GiST features.\n> > >\n> > > Bear in mind that I only have one core vote ;-). We've already had some\n> > > private core discussion about whether to accept this patch now, and so\n> > > far I think I'm outvoted.\n> > >\n> >\n> > There are several Tom Lane, judge by your activity. You probably need\n> > several votes.\n> >\n> > > Did I understand you to say that you'd added some regression tests for\n> > > GiST? That would lessen my unhappiness a little bit ...\n> >\n> > Yes, we did. Currently all files are available from my page\n> > http://www.sai.msu.su/~megera/postgres/gist/\n> > I could submit them to hackers list if CORE people got consensus\n>\n> Okay, if there are appropriate regression tests, I'm going to say go for\n> it ...\n>\n> Does anyone have any objections to my downloading the tar file (doing that\n> now), committing the changes and wrapping up a quick Beta4 just so that we\n> have a tar ball that is testable right away?\n>\n> Save Lamar and the other packagers a bit of work by avoiding beta3\n> packages :)\n>\n>\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Thu, 11 Jan 2001 18:40:46 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "The Hermit Hacker wrote:\n> Save Lamar and the other packagers a bit of work by avoiding beta3\n> packages :)\n\n:-)\n\nWell, working on those packages now. Fun stuff. I learn more about\nmore different parts of the code each new version, as the hang-ups\nchange from version to version. Although, thanks to Karl DeBisschop, I\nnow have a new secret weapon in my war Bwahaha... :-)\n\nThere have been more changes since 7.0 than in any previous version I\ncan remember as far as the build goes. Not a bad thing -- just a\nlearning experience.\n\nAt least the build won't change from beta3 to beta4.....\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 11 Jan 2001 17:41:56 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "On Thu, 11 Jan 2001, The Hermit Hacker wrote:\n\n>\n> just downloaded it and can't find any regression tests ... ?\n\nit's in the contrib-intarray.tar.gz\ngmake, gmake install, gmake installcheck\n\n\n\tOleg\n>\n> On Thu, 11 Jan 2001, The Hermit Hacker wrote:\n>\n> > On Fri, 12 Jan 2001, Oleg Bartunov wrote:\n> >\n> > > On Thu, 11 Jan 2001, Tom Lane wrote:\n> > >\n> > > > Oleg Bartunov <[email protected]> writes:\n> > > > > I understand Tom's arguments and respect his experience, so I think it's\n> > > > > possible to put link to my page in 7.1 docs for people interested in\n> > > > > GiST features.\n> > > >\n> > > > Bear in mind that I only have one core vote ;-). We've already had some\n> > > > private core discussion about whether to accept this patch now, and so\n> > > > far I think I'm outvoted.\n> > > >\n> > >\n> > > There are several Tom Lane, judge by your activity. You probably need\n> > > several votes.\n> > >\n> > > > Did I understand you to say that you'd added some regression tests for\n> > > > GiST? That would lessen my unhappiness a little bit ...\n> > >\n> > > Yes, we did. Currently all files are available from my page\n> > > http://www.sai.msu.su/~megera/postgres/gist/\n> > > I could submit them to hackers list if CORE people got consensus\n> >\n> > Okay, if there are appropriate regression tests, I'm going to say go for\n> > it ...\n> >\n> > Does anyone have any objections to my downloading the tar file (doing that\n> > now), committing the changes and wrapping up a quick Beta4 just so that we\n> > have a tar ball that is testable right away?\n> >\n> > Save Lamar and the other packagers a bit of work by avoiding beta3\n> > packages :)\n> >\n> >\n> >\n> >\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 12 Jan 2001 02:08:15 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "On Fri, 12 Jan 2001, Oleg Bartunov wrote:\n\n> On Thu, 11 Jan 2001, The Hermit Hacker wrote:\n>\n> >\n> > just downloaded it and can't find any regression tests ... ?\n>\n> it's in the contrib-intarray.tar.gz\n> gmake, gmake install, gmake installcheck\n\nerk, can we get this somehow done in such a way that its part of the\n*standard* regression tests? so when ppl do 'make test', the GiST stuff\nis checked also? My worry, as with others, isn't that GiST itself is\nbroken by the changes, its that *somehow* there is an interaction that is\nwith the rest of the system that isn't being tested ...\n\n\n\n >\n>\n> \tOleg\n> >\n> > On Thu, 11 Jan 2001, The Hermit Hacker wrote:\n> >\n> > > On Fri, 12 Jan 2001, Oleg Bartunov wrote:\n> > >\n> > > > On Thu, 11 Jan 2001, Tom Lane wrote:\n> > > >\n> > > > > Oleg Bartunov <[email protected]> writes:\n> > > > > > I understand Tom's arguments and respect his experience, so I think it's\n> > > > > > possible to put link to my page in 7.1 docs for people interested in\n> > > > > > GiST features.\n> > > > >\n> > > > > Bear in mind that I only have one core vote ;-). We've already had some\n> > > > > private core discussion about whether to accept this patch now, and so\n> > > > > far I think I'm outvoted.\n> > > > >\n> > > >\n> > > > There are several Tom Lane, judge by your activity. You probably need\n> > > > several votes.\n> > > >\n> > > > > Did I understand you to say that you'd added some regression tests for\n> > > > > GiST? That would lessen my unhappiness a little bit ...\n> > > >\n> > > > Yes, we did. Currently all files are available from my page\n> > > > http://www.sai.msu.su/~megera/postgres/gist/\n> > > > I could submit them to hackers list if CORE people got consensus\n> > >\n> > > Okay, if there are appropriate regression tests, I'm going to say go for\n> > > it ...\n> > >\n> > > Does anyone have any objections to my downloading the tar file (doing that\n> > > now), committing the changes and wrapping up a quick Beta4 just so that we\n> > > have a tar ball that is testable right away?\n> > >\n> > > Save Lamar and the other packagers a bit of work by avoiding beta3\n> > > packages :)\n> > >\n> > >\n> > >\n> > >\n> >\n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org\n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> >\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Thu, 11 Jan 2001 19:21:53 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "On Thu, 11 Jan 2001, The Hermit Hacker wrote:\n\n> On Fri, 12 Jan 2001, Oleg Bartunov wrote:\n>\n> > On Thu, 11 Jan 2001, The Hermit Hacker wrote:\n> >\n> > >\n> > > just downloaded it and can't find any regression tests ... ?\n> >\n> > it's in the contrib-intarray.tar.gz\n> > gmake, gmake install, gmake installcheck\n>\n> erk, can we get this somehow done in such a way that its part of the\n> *standard* regression tests? so when ppl do 'make test', the GiST stuff\n> is checked also? My worry, as with others, isn't that GiST itself is\n> broken by the changes, its that *somehow* there is an interaction that is\n> with the rest of the system that isn't being tested ...\n\nNo way, we need to load functions. there are several contributions\nwhich depends on loaded functions. If you suggest how to do this\nin general way, it would fine. To test GiST you need to define some\ndata structure ( in our case - RD-tree) and functions to access it\n\n>\n>\n>\n> >\n> >\n> > \tOleg\n> > >\n> > > On Thu, 11 Jan 2001, The Hermit Hacker wrote:\n> > >\n> > > > On Fri, 12 Jan 2001, Oleg Bartunov wrote:\n> > > >\n> > > > > On Thu, 11 Jan 2001, Tom Lane wrote:\n> > > > >\n> > > > > > Oleg Bartunov <[email protected]> writes:\n> > > > > > > I understand Tom's arguments and respect his experience, so I think it's\n> > > > > > > possible to put link to my page in 7.1 docs for people interested in\n> > > > > > > GiST features.\n> > > > > >\n> > > > > > Bear in mind that I only have one core vote ;-). We've already had some\n> > > > > > private core discussion about whether to accept this patch now, and so\n> > > > > > far I think I'm outvoted.\n> > > > > >\n> > > > >\n> > > > > There are several Tom Lane, judge by your activity. You probably need\n> > > > > several votes.\n> > > > >\n> > > > > > Did I understand you to say that you'd added some regression tests for\n> > > > > > GiST? That would lessen my unhappiness a little bit ...\n> > > > >\n> > > > > Yes, we did. Currently all files are available from my page\n> > > > > http://www.sai.msu.su/~megera/postgres/gist/\n> > > > > I could submit them to hackers list if CORE people got consensus\n> > > >\n> > > > Okay, if there are appropriate regression tests, I'm going to say go for\n> > > > it ...\n> > > >\n> > > > Does anyone have any objections to my downloading the tar file (doing that\n> > > > now), committing the changes and wrapping up a quick Beta4 just so that we\n> > > > have a tar ball that is testable right away?\n> > > >\n> > > > Save Lamar and the other packagers a bit of work by avoiding beta3\n> > > > packages :)\n> > > >\n> > > >\n> > > >\n> > > >\n> > >\n> > > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > > Systems Administrator @ hub.org\n> > > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> > >\n> >\n> > \tRegards,\n> > \t\tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: [email protected], http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n>\n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 12 Jan 2001 02:36:49 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Does anyone have any objections to my downloading the tar file (doing that\n> now), committing the changes and wrapping up a quick Beta4 just so that we\n> have a tar ball that is testable right away?\n\nI think we oughta review the changes at least a little bit before\npushing out a beta4... go ahead and commit though...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2001 18:41:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> There have been more changes since 7.0 than in any previous version I\n> can remember as far as the build goes.\n\nNo surprise, considering the amount of work Peter E. has done on\ncleaning up the configure, build, and install process --- exactly\nthe stuff that would affect RPM building. I trust you're finding\nthat the changes are for the better.\n\nUnless Peter has plans he hasn't mentioned, future revs shouldn't\nsee so much activity there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2001 18:52:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
}
]
|
[
{
"msg_contents": "Can PostgreSQL 7.1 store java classes or objects?\n",
"msg_date": "Thu, 11 Jan 2001 13:04:27 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Java Classes"
},
{
"msg_contents": "[email protected] wrote:\n> \n> Can PostgreSQL 7.1 store java classes or objects?\n\nSure it can, but not automatically. just serialize your object to a\nbyte array and store that String.\n\n\n-- \nJoseph Shraibman\[email protected]\nIncrease signal to noise ratio. http://www.targabot.com\n",
"msg_date": "Thu, 11 Jan 2001 23:01:31 -0500",
"msg_from": "Joseph Shraibman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Java Classes"
}
]
|
[
{
"msg_contents": "\nI read the transcript of the alter table drop column discussion (old\ndiscussion) at http://www.postgresql.org/docs/pgsql/doc/TODO.detail/drop,\nand I have something to add:\n\nPeople mentioned such ideas as a hidden column and a really deleted column,\nand it occurred to me that perhaps \"vacuum\" would be a good option to use.\nWhen a delete was issued, the column would be hidden (by a negative/invalid\nlogical column number, it appears was the consensus). Upon issuing a\nvacuum, it could perform a complete deletion. This method would allow users\nto know that the process may take a while (I think the agreed method for a\ncomplete delete was to \"select into...\" the right columns and leave out the\ndeleted ones, then delete the old table).\n\nFurthermore, I liked the idea of some kind of \"undelete\", as long as it was\njust hidden. This could apply to anything that is cleaned out with a vacuum\n(before it is cleaned out), although I am not sure how feasible this is,\nand it isn't particularly important to me.\n\nRegards,\n\tJeff\n\n-- \nJeff Davis\nDynamic Works\[email protected]\nhttp://dynworks.com\n\n",
"msg_date": "Thu, 11 Jan 2001 18:48:36 PST",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "alter table drop column"
},
{
"msg_contents": "\nAdded to TODO.detail/drop.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> I read the transcript of the alter table drop column discussion (old\n> discussion) at http://www.postgresql.org/docs/pgsql/doc/TODO.detail/drop,\n> and I have something to add:\n> \n> People mentioned such ideas as a hidden column and a really deleted column,\n> and it occurred to me that perhaps \"vacuum\" would be a good option to use.\n> When a delete was issued, the column would be hidden (by a negative/invalid\n> logical column number, it appears was the consensus). Upon issuing a\n> vacuum, it could perform a complete deletion. This method would allow users\n> to know that the process may take a while (I think the agreed method for a\n> complete delete was to \"select into...\" the right columns and leave out the\n> deleted ones, then delete the old table).\n> \n> Furthermore, I liked the idea of some kind of \"undelete\", as long as it was\n> just hidden. This could apply to anything that is cleaned out with a vacuum\n> (before it is cleaned out), although I am not sure how feasible this is,\n> and it isn't particularly important to me.\n> \n> Regards,\n> \tJeff\n> \n> -- \n> Jeff Davis\n> Dynamic Works\n> [email protected]\n> http://dynworks.com\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Jan 2001 00:37:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: alter table drop column"
}
]
|
[
{
"msg_contents": "> This is just one instance of the generic problem that we don't enforce\n> referential integrity across system catalogs. Since this issue has\n\nWouldn't be easy to do for views (rules) anyway - table oids are somewhere\nin the body of rule, they are not just keys in column. Also, triggers are\nhandled by Executor and we don't use it for DDL statements. I think it's ok,\nwe have just add \"isdurty\" column to some tables (to be setted when some of\nrefferenced objects deleted/altered and to be used as flag that\n\"re-compiling\"\nis required) and new table to remember object relationships.\n\nGuys here, in Sectorbase, blames PostgreSQL a much for this thing -:)\nThey are Oracle developers and development under PostgreSQL makes\nthem quite unhappy. Probably, work in this area will be sponsored\nby my employer (with me as superviser and some guys in Russia as\ndevelopers), we'll see.\n\nVadim\n",
"msg_date": "Thu, 11 Jan 2001 11:16:11 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: drop table and pg_proc "
}
]
|
[
{
"msg_contents": "I was just shown the following example:\n\nCREATE TABLE profile (haushaltseinkommen_pm numeric(22,2));\nCREATE VIEW profile_view AS\nSELECT *, haushaltseinkommen_pm*12 AS haushaltseinkommen_pa FROM profile;\n\n7.0.* pg_dump produces the following for the view:\n\nCREATE TABLE \"profile_view\" (\n \"haushaltseinkommen_pm\" numeric(22,2),\n \"haushaltseinkommen_pa\" numeric\n);\nCREATE RULE \"_RETprofile_view\" AS ON SELECT TO profile_view DO INSTEAD SELECT profile.haushaltseinkommen_pm, (profile.haushaltseinkommen_pm * '12'::\"numeric\") AS haushaltseinkommen_pa FROM profile;\n\nAFAICS this is perfectly legitimate, but both 7.0.* and current backends\nwill reject the CREATE RULE with\n\nERROR: select rule's target entry 2 has different size from attribute haushaltseinkommen_pa\n\nThe problem here is that DefineQueryRewrite checks\n\n if (attr->atttypmod != resdom->restypmod)\n elog(ERROR, \"select rule's target entry %d has different size from attribute %s\", i, attname);\n\nwhere attr will have the default precision/scale for NUMERIC, as set up\nby the CREATE TABLE, but resdom will have -1 because that's what you're\ngoing to get from a numeric expression. (In the CREATE VIEW case, they\nboth have -1, evidently because CREATE VIEW doesn't force a default\nNUMERIC precision to be inserted in the table definition. Not sure if\nthat's OK or not.)\n\nI think we'd better fix this, else we will have problems reading 7.0\ndump files. I can see two possible answers:\n\n1. Remove this check entirely.\n\n2. Allow the typmods to be different if one of them is -1.\n\nI'm not entirely sure which way to jump. The former seems simpler but\nmight perhaps allow creation of bogus views --- any opinions?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2001 15:30:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Overprotectiveness in DefineQueryRewrite?"
}
]
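For concreteness, option 2 above could amount to something like the following in DefineQueryRewrite (a rough sketch against the check quoted in the message, not a tested patch): the typmod mismatch is only an error when both sides carry a real typmod.

    if (attr->atttypmod != resdom->restypmod &&
        attr->atttypmod != -1 &&
        resdom->restypmod != -1)
        elog(ERROR, "select rule's target entry %d has different size from attribute %s",
             i, attname);

That keeps the size check for genuinely conflicting declarations while letting the -1 produced by expressions such as haushaltseinkommen_pm*12 through.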
|
[
{
"msg_contents": "I've just been looking through the options which can and cannot be set in\npostgresql.conf and have a few points to raise.\n\n1. There are some undocumented options which appear to relate to WAL:\n\n Name\t\t\tVariable\t\tDefault\t\tSet by\n\n\tcheckpoint_timeout\tCheckPointTimeout\t300\t\tStartup\n\twal_buffers\t\tXLOGbuffers\t\t8\t\tStartup\n\twal_files\t\tXLOGfiles\t\t0\t\tStartup\n\twal_debug\t\t&XLOG_DEBU\t\t0\t\tSuperuser\n\tcommit_delay\t\t&CommitDelay\t\t5\t\tUser\n\n Is there any text anywhere to explain what these do? (Point me to that or\n some commented code, and I'll write a documentation patch.)\n\n2. The following command line options to postgres don't have an equivalent in\n postgresql.conf. Is that intentional? (I suppose it is in several cases,\n and I have left out some where it is obviously intentional.) I can't see\n why these items can't be put in the configuration file:\n\n\n\tOption\t\tAction\n\n\t-C\t\tNoversion = true [not documented in postgres man page]\n\t-D\t\tpotential_Datadir = arg [set PGDATA]\n\t-E\t\tEchoQuery = true [echo queries to log]\n\t-e\t\tEuroDates = true [use European format for dates]\n\t-N\t\tUseNewLine = 0 [newline is not a query separator]\n\t-o\t\t[set stdout, stderr to file arg]\n\n3. I see the -E is documented as being for stand-alone mode only; in fact it\n is useful for getting the query into the backend log in normal operation.\n\n4. The documentation for -o is confusing:\n\n\t-o file-name\n\n\tSends all debugging and error output to OutputFile. If the backend\n\tis running under the postmaster, error messages are still sent to\n\tthe frontend process as well as to OutputFile, but debugging output\n\tis sent to the controlling tty of the postmaster (since only one\t\n\tfile descriptor can be sent to an actual file).\n\n I think this is saying that, under the postmaster, debugging output does\n not get sent to OutputFile, and error messages are sent both to OutputFile\n and to the frontend. Is that correct?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Enter into his gates with thanksgiving, and into his \n courts with praise. Be thankful unto him, and bless \n his name.\" Psalms 100:4 \n\n\n",
"msg_date": "Thu, 11 Jan 2001 20:37:57 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql.conf and postgres options"
},
{
"msg_contents": "Oliver Elphick writes:\n\n> 2. The following command line options to postgres don't have an equivalent in\n> postgresql.conf. Is that intentional? (I suppose it is in several cases,\n> and I have left out some where it is obviously intentional.) I can't see\n> why these items can't be put in the configuration file:\n\n> \t-C\t\tNoversion = true [not documented in postgres man page]\n\nThis option doesn't do anything.\n\n> \t-D\t\tpotential_Datadir = arg [set PGDATA]\n\nThis option can't be in the config file because it is used to *find* the\nconfig file.\n\n> \t-E\t\tEchoQuery = true [echo queries to log]\n\nHmm, there's debug_print_query. This will probably be consolidated in the\nfuture.\n\n> \t-e\t\tEuroDates = true [use European format for dates]\n\nThis should be a config file option, but Thomas Lockhart couldn't make up\nhis mind what to call it. ;-)\n\n> \t-N\t\tUseNewLine = 0 [newline is not a query separator]\n\nI don't think this is useful.\n\n> \t-o\t\t[set stdout, stderr to file arg]\n\nI think this is broken or not well maintained. Will be cleaned up in some\nlater release.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 12 Jan 2001 00:20:34 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf and postgres options"
},
{
"msg_contents": "\nI think all valid options should appear in the options file. Right now,\nonly a few are in there, I think.\n\n> I've just been looking through the options which can and cannot be set in\n> postgresql.conf and have a few points to raise.\n> \n> 1. There are some undocumented options which appear to relate to WAL:\n> \n> Name\t\t\tVariable\t\tDefault\t\tSet by\n> \n> \tcheckpoint_timeout\tCheckPointTimeout\t300\t\tStartup\n> \twal_buffers\t\tXLOGbuffers\t\t8\t\tStartup\n> \twal_files\t\tXLOGfiles\t\t0\t\tStartup\n> \twal_debug\t\t&XLOG_DEBU\t\t0\t\tSuperuser\n> \tcommit_delay\t\t&CommitDelay\t\t5\t\tUser\n> \n> Is there any text anywhere to explain what these do? (Point me to that or\n> some commented code, and I'll write a documentation patch.)\n> \n> 2. The following command line options to postgres don't have an equivalent in\n> postgresql.conf. Is that intentional? (I suppose it is in several cases,\n> and I have left out some where it is obviously intentional.) I can't see\n> why these items can't be put in the configuration file:\n> \n> \n> \tOption\t\tAction\n> \n> \t-C\t\tNoversion = true [not documented in postgres man page]\n> \t-D\t\tpotential_Datadir = arg [set PGDATA]\n> \t-E\t\tEchoQuery = true [echo queries to log]\n> \t-e\t\tEuroDates = true [use European format for dates]\n> \t-N\t\tUseNewLine = 0 [newline is not a query separator]\n> \t-o\t\t[set stdout, stderr to file arg]\n> \n> 3. I see the -E is documented as being for stand-alone mode only; in fact it\n> is useful for getting the query into the backend log in normal operation.\n> \n> 4. The documentation for -o is confusing:\n> \n> \t-o file-name\n> \n> \tSends all debugging and error output to OutputFile. If the backend\n> \tis running under the postmaster, error messages are still sent to\n> \tthe frontend process as well as to OutputFile, but debugging output\n> \tis sent to the controlling tty of the postmaster (since only one\t\n> \tfile descriptor can be sent to an actual file).\n> \n> I think this is saying that, under the postmaster, debugging output does\n> not get sent to OutputFile, and error messages are sent both to OutputFile\n> and to the frontend. Is that correct?\n> \n> -- \n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"Enter into his gates with thanksgiving, and into his \n> courts with praise. Be thankful unto him, and bless \n> his name.\" Psalms 100:4 \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Jan 2001 00:24:25 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf and postgres options"
},
{
"msg_contents": "I have added all possible config options to postgresql.conf.sample.\n\nI have attached the new version of the file. I think you will be amazed\nat how GUC gives us such powerful control over PostgreSQL.\n\nThanks, Peter.\n\n> Oliver Elphick writes:\n> \n> > 2. The following command line options to postgres don't have an equivalent in\n> > postgresql.conf. Is that intentional? (I suppose it is in several cases,\n> > and I have left out some where it is obviously intentional.) I can't see\n> > why these items can't be put in the configuration file:\n> \n> > \t-C\t\tNoversion = true [not documented in postgres man page]\n> \n> This option doesn't do anything.\n> \n> > \t-D\t\tpotential_Datadir = arg [set PGDATA]\n> \n> This option can't be in the config file because it is used to *find* the\n> config file.\n> \n> > \t-E\t\tEchoQuery = true [echo queries to log]\n> \n> Hmm, there's debug_print_query. This will probably be consolidated in the\n> future.\n> \n> > \t-e\t\tEuroDates = true [use European format for dates]\n> \n> This should be a config file option, but Thomas Lockhart couldn't make up\n> his mind what to call it. ;-)\n> \n> > \t-N\t\tUseNewLine = 0 [newline is not a query separator]\n> \n> I don't think this is useful.\n> \n> > \t-o\t\t[set stdout, stderr to file arg]\n> \n> I think this is broken or not well maintained. Will be cleaned up in some\n> later release.\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n#\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form\n#\n# name = value\n#\n# (The `=' is optional.) White space is collapsed, comments are\n# introduced by `#' anywhere on a line. The complete list of option\n# names and allowed values can be found in the PostgreSQL\n# documentation. Examples are:\n\n#log_connections = on\n#fsync = off\n#max_connections = 64\n\n# Any option can also be given as a command line switch to the\n# postmaster, e.g., 'postmaster -c log_connections=on'. 
Some options\n# can be set at run-time with the 'SET' SQL command.\n\n\n#========================================================================\n\n\n#\n#\tConnection Parameters\n#\n#tcpip_socket = false\n#ssl = false\n\n#max_connections = 32 # 1-1024\n\n#port = 5432 \n#hostname_lookup = false\n#show_source_port = false\n\n#unix_socket_directory = \"\"\n#unix_socket_group = \"\"\n#unix_socket_permissions = 0777\n\n#virtual_host = \"\"\n\n#krb_server_keyfile = \"\"\n\n\n#\n#\tPerformance\n#\n#sort_mem = 512\n#shared_buffers = 2*max_connections # min 16\n#fsync = true\n\n\n#\n#\tOptimizer Parameters\n#\n#enable_seqscan = true\n#enable_indexscan = true\n#enable_tidscan = true\n#enable_sort = true\n#enable_nestloop = true\n#enable_mergejoin = true\n#enable_hashjoin = true\n\n#ksqo = false\n#geqo = true\n\n#effective_cache_size = 1000 # default in 8k pages\n#random_page_cost = 4\n#cpu_tuple_cost = 0.01\n#cpu_index_tuple_cost = 0.001\n#cpu_operator_cost = 0.0025\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n\n#\n#\tGEQO Optimizer Parameters\n#\n#geqo_threshold = 11\n#geqo_pool_size = 0 #default based in tables, range 128-1024\n#geqo_effort = 1\n#geqo_generations = 0\n#geqo_random_seed = -1 # auto-compute seed\n\n\n#\n#\tInheritance\n#\n#sql_inheritance = true\n\n\n#\n#\tDeadlock\n#\n#deadlock_timeout = 1000\n\n\n#\n#\tExpression Depth Limitation\n#\n#max_expr_depth = 10000 # min 10\n\n\n#\n#\tWrite-ahead log (WAL)\n#\n#wal_buffers = 8 # min 4\n#wal_files = 0 # range 0-64\n#wal_debug = 0 # range 0-16\n#commit_delay = 5 # range 0-1000\n#checkpoint_timeout = 300 # range 30-1800\n\n\n#\n#\tDebug display\n#\n#silent_mode = false\n\n#log_connections = false\n#log_timestamp = false\n#log_pid = false\n\n#debug_level = 0 # range 0-16\n\n#debug_print_query = false\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n\n#ifdef USE_ASSERT_CHECKING\n#debug_assertions = true\n#endif\n\n\n#\n#\tSyslog\n#\n#ifdef ENABLE_SYSLOG\n#syslog = 0 # range 0-2\n#syslog_facility = \"LOCAL0\"\n#syslog_ident = \"postgres\"\n#endif\n\n\n#\n#\tStatistics\n#\n#show_parser_stats = false\n#show_planner_stats = false\n#show_executor_stats = false\n#show_query_stats = false\n#ifdef BTREE_BUILD_STATS\n#show_btree_build_stats = false\n#endif\n\n\n#\n#\tLock Tracing\n#\n#trace_notify = false\n#ifdef LOCK_DEBUG\n#trace_locks = false\n#trace_userlocks = false\n#trace_spinlocks = false\n#debug_deadlocks = false\n#trace_lock_oidmin = 16384\n#trace_lock_table = 0\n#endif",
"msg_date": "Wed, 24 Jan 2001 13:41:16 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf and postgres options"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> I have added all possible config options to postgresql.conf.sample.\n\nIt was actually fully intentional that there was *no* list of all possible\nconfig options in the sample file, because\n\n1) Who's going to maintain this?\n\n2) People should read the documentation before messing with options.\n\n(\" is not the correct string delimiter either.)\n\nI have bad experiences with sample config files. The first thing I\nusually do is delete them and dig up the documentation.\n\nDo other people have comments on this issue?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 24 Jan 2001 20:02:10 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf and postgres options"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > I have added all possible config options to postgresql.conf.sample.\n> \n> It was actually fully intentional that there was *no* list of all possible\n> config options in the sample file, because\n> \n> 1) Who's going to maintain this?\n> \n> 2) People should read the documentation before messing with options.\n> \n> (\" is not the correct string delimiter either.)\n\nChanged to ''. Thanks.\n\n> \n> I have bad experiences with sample config files. The first thing I\n> usually do is delete them and dig up the documentation.\n> \n> Do other people have comments on this issue?\n\nI have marked all places where these defaults are set in the C code,\npointing them to update postgresql.conf.sample. \n\nI found it is nice to see a nice list of all options for quick review.\nIt makes the file much more useful, I think.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 14:03:25 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf and postgres options"
},
{
"msg_contents": "Bruce Momjian wrote:\n> I have added all possible config options to postgresql.conf.sample.\n\n> I have attached the new version of the file. I think you will be amazed\n> at how GUC gives us such powerful control over PostgreSQL.\n\nGood. As a sysadmin I _like_ sample configs (which I usually rename to\nsomething else, and write my own, of course), as it gives a single\nconcise quick-reference to the syntax.\n\nLots of packages do this -- squid is the biggest one I can think of\nright now (biggest in terms of config file size and power). Samba is\nanother example. There are a plethora of others, of course.\n\nBut, Peter's point does hold -- someone will have to maintain this.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 24 Jan 2001 16:06:38 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf and postgres options"
},
{
"msg_contents": "I think the list is great, show what can be configured rather than \nguessing/digging to find it, where it belongs, in what order (if any), etc. \netc. etc.\nThe only addition I could think would be to label (default value).\n\nNeedless, I like it.. :)\n\n\nAt 1/24/2001 01:03 PM, Bruce Momjian wrote:\n> > Bruce Momjian writes:\n> >\n> > > I have added all possible config options to postgresql.conf.sample.\n> >\n> > It was actually fully intentional that there was *no* list of all possible\n> > config options in the sample file, because\n> >\n> > 1) Who's going to maintain this?\n> >\n> > 2) People should read the documentation before messing with options.\n> >\n> > (\" is not the correct string delimiter either.)\n>\n>Changed to ''. Thanks.\n>\n> >\n> > I have bad experiences with sample config files. The first thing I\n> > usually do is delete them and dig up the documentation.\n> >\n> > Do other people have comments on this issue?\n>\n>I have marked all places where these defaults are set in the C code,\n>pointing them to update postgresql.conf.sample.\n>\n>I found it is nice to see a nice list of all options for quick review.\n>It makes the file much more useful, I think.\n\n\n",
"msg_date": "Wed, 24 Jan 2001 16:09:20 -0600",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf and postgres options"
},
{
"msg_contents": "\nDefaults are listed as the assignment value, and of course, the are all\ncommented out.\n\n> I think the list is great, show what can be configured rather than \n> guessing/digging to find it, where it belongs, in what order (if any), etc. \n> etc. etc.\n> The only addition I could think would be to label (default value).\n> \n> Needless, I like it.. :)\n> \n> \n> At 1/24/2001 01:03 PM, Bruce Momjian wrote:\n> > > Bruce Momjian writes:\n> > >\n> > > > I have added all possible config options to postgresql.conf.sample.\n> > >\n> > > It was actually fully intentional that there was *no* list of all possible\n> > > config options in the sample file, because\n> > >\n> > > 1) Who's going to maintain this?\n> > >\n> > > 2) People should read the documentation before messing with options.\n> > >\n> > > (\" is not the correct string delimiter either.)\n> >\n> >Changed to ''. Thanks.\n> >\n> > >\n> > > I have bad experiences with sample config files. The first thing I\n> > > usually do is delete them and dig up the documentation.\n> > >\n> > > Do other people have comments on this issue?\n> >\n> >I have marked all places where these defaults are set in the C code,\n> >pointing them to update postgresql.conf.sample.\n> >\n> >I found it is nice to see a nice list of all options for quick review.\n> >It makes the file much more useful, I think.\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 19:05:13 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: postgresql.conf and postgres options"
}
]
|
[
{
"msg_contents": "> 1. There are some undocumented options which appear to relate to WAL:\n...\n> Is there any text anywhere to explain what these do? \n> (Point me to that or some commented code, and I'll write\n> a documentation patch.)\n\nI'll send description to you soon, thanks.\n\nVadim\n",
"msg_date": "Thu, 11 Jan 2001 12:43:55 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: postgresql.conf and postgres options"
}
]
|
[
{
"msg_contents": "> > erk, can we get this somehow done in such a way that its part of the\n> > *standard* regression tests? so when ppl do 'make test',\n> > the GiST stuff is checked also? My worry, as with others, isn't that\n> > GiST itself is broken by the changes, its that *somehow* there is an\n> > interaction that is with the rest of the system that isn't being tested\n...\n> \n> No way, we need to load functions. there are several contributions\n> which depends on loaded functions. If you suggest how to do this\n> in general way, it would fine. To test GiST you need to define some\n> data structure ( in our case - RD-tree) and functions to access it\n\nLook at regress/input/create_function_1.source for hints from\nSPI tests...\n\nVadim\n",
"msg_date": "Thu, 11 Jan 2001 15:59:42 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": " On Thu, 11 Jan 2001, Mikheev, Vadim wrote:\n\n> > > erk, can we get this somehow done in such a way that its part of the\n> > > *standard* regression tests? so when ppl do 'make test',\n> > > the GiST stuff is checked also? My worry, as with others, isn't that\n> > > GiST itself is broken by the changes, its that *somehow* there is an\n> > > interaction that is with the rest of the system that isn't being tested\n> ...\n> >\n> > No way, we need to load functions. there are several contributions\n> > which depends on loaded functions. If you suggest how to do this\n> > in general way, it would fine. To test GiST you need to define some\n> > data structure ( in our case - RD-tree) and functions to access it\n>\n> Look at regress/input/create_function_1.source for hints from\n> SPI tests...\n>\nThanks Vadim for tips. Will do this way, but tommorow. It's\n3:19 am already and I have to sleep :-)\n\n\n\n> Vadim\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 12 Jan 2001 03:18:08 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n>>>> No way, we need to load functions. there are several contributions\n>>>> which depends on loaded functions. If you suggest how to do this\n>>>> in general way, it would fine. To test GiST you need to define some\n>>>> data structure ( in our case - RD-tree) and functions to access it\n>> \n>> Look at regress/input/create_function_1.source for hints from\n>> SPI tests...\n\nUm, you do realize that a contrib module that gets used as part of the\nregress tests may as well be mainstream? At least in terms of the\nportability requirements it will have to meet?\n\nI'm unhappy again. Bad enough we accepted a new feature during beta;\nnow we're going to expect an absolutely virgin contrib module to work\neverywhere in order to pass regress tests?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2001 19:51:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "Tom Lane wrote:\n> Um, you do realize that a contrib module that gets used as part of the\n> regress tests may as well be mainstream? At least in terms of the\n> portability requirements it will have to meet?\n \n> I'm unhappy again. Bad enough we accepted a new feature during beta;\n> now we're going to expect an absolutely virgin contrib module to work\n> everywhere in order to pass regress tests?\n\nLast I checked, two contrib modules had to be built for regression\ntesting. But that was 7.0. (autoinc and refint.....).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 11 Jan 2001 21:59:58 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n>> I'm unhappy again. Bad enough we accepted a new feature during beta;\n>> now we're going to expect an absolutely virgin contrib module to work\n>> everywhere in order to pass regress tests?\n\n> Last I checked, two contrib modules had to be built for regression\n> testing.\n\nSure, but they've been there awhile. All of my concerns here are\nschedule-driven: do we really want to be wringing out a new contrib\nmodule, to the point where it will run everywhere, before we can\nrelease 7.1?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2001 22:08:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> >> I'm unhappy again. Bad enough we accepted a new feature during beta;\n> >> now we're going to expect an absolutely virgin contrib module to work\n> >> everywhere in order to pass regress tests?\n \n> > Last I checked, two contrib modules had to be built for regression\n> > testing.\n \n> Sure, but they've been there awhile. All of my concerns here are\n> schedule-driven: do we really want to be wringing out a new contrib\n> module, to the point where it will run everywhere, before we can\n> release 7.1?\n\nAre the benefits worth the effort? Can the current GiST developers pull\nit off in time?\n\nIf the answer to either question is not a resounding YES then we really\ndon't need to go down this road. Either leave it in contrib and\nregression testless (with a test script in the contrib), or make it a\nfeature patch.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 11 Jan 2001 22:15:10 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "On Thu, 11 Jan 2001, Tom Lane wrote:\n\n> Lamar Owen <[email protected]> writes:\n> >> I'm unhappy again. Bad enough we accepted a new feature during beta;\n> >> now we're going to expect an absolutely virgin contrib module to work\n> >> everywhere in order to pass regress tests?\n>\n> > Last I checked, two contrib modules had to be built for regression\n> > testing.\n>\n> Sure, but they've been there awhile. All of my concerns here are\n> schedule-driven: do we really want to be wringing out a new contrib\n> module, to the point where it will run everywhere, before we can\n> release 7.1?\n\nHrmmm ... just a thought here, but how about a potential 'interactive'\nregression test, where it asks if you want to run regress on GiST? If so,\ndo it, if not, ignore it ... ?\n\n\n",
"msg_date": "Thu, 11 Jan 2001 23:29:47 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "> IMHO, giving out real test results, even negative, instead of leaving\n> things untested would be a honest thing to do.\n\nafaict there are several concerns or decisions, and we've made a few\nalready:\n\nRe: gist.c patches...\n\n1) Oleg and Hannu are committed to testing the repaired GiST as soon as\nit is in the main tree. They are both testing already with the patched\nversion.\n\n2) They will try to contact Gene to encourage testing with Gene's\napplication, though they have no reason to suspect from their own\ntesting that Gene's stuff will break.\n\n3) There is a consensus that the gist.c patches should appear in the 7.1\nrelease, to allow useful work with GiST and to enable further\ndevelopment. So it is OK to commit the gist.c patches based on Oleg's\nand Hannu's existing and future test plan.\n\nRe: regression tests...\n\n4) We all would like to see some regression tests of GiST. Tom Lane has\n(rightly) expressed concern over unforeseen breakage in the regression\nflow when done on other platforms.\n\n5) Oleg has some regression-capable test code available for contrib, but\nhas indicated that fully (re)writing the regression tests will take too\nmuch time.\n\n6) We have at least two committed testers for the 7.1 release for the\nGiST features. That is two more than we've ever had before (afacr Gene\ndidn't participate in the end-stage beta cycle, but I may not be\nremembering correctly) so the risks that something is not right are\ngreatly reduced, to below the risks of same on the day of release for\nprevious versions.\n\n8) Additional regression testing is required asap, but may not be\nallowed into the default 7.1 test sequence.\n\nHow about adding an optional test a la \"bigtest\" for GiST for this\nrelease? It could go mainstream for 7.1.x or for 7.2 as we get more\nexperience with it. This is just a suggestion and I'm sure there are\nother possibilities. I'm pretty sure we agree on most of points 1-8, and\nthat 1-3 are resolved. Comments?\n\n - Thomas\n",
"msg_date": "Fri, 12 Jan 2001 15:10:38 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "On Fri, 12 Jan 2001, Thomas Lockhart wrote:\n\n> How about adding an optional test a la \"bigtest\" for GiST for this\n> release? It could go mainstream for 7.1.x or for 7.2 as we get more\n> experience with it. This is just a suggestion and I'm sure there are\n> other possibilities. I'm pretty sure we agree on most of points 1-8, and\n> that 1-3 are resolved. Comments?\n\nmake GIST_TEST=yes\n\nto include GiST testing would be cool, if it can be done ... this way\nTom's worry about non-GiST users having bad regress tests is appeased ...\nbut I do agree with Tom that mainstreaming the GiST testing would be a bad\nidea ... if we could somehow include it as an optional test (as you say,\nala bigtest), then, if nothing else, it saves having to cd to the contrib\ndirectory and run it there ... ala one stop shopping ...\n\n*But*, for the 3 ppl we've pointed out as users of GiST, this is\ndefinitely not a priority issue ... if we can do it, great, if not, no\nsweat either ...\n\n\n",
"msg_date": "Fri, 12 Jan 2001 11:21:13 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> How about adding an optional test a la \"bigtest\" for GiST for this\n> release?\n\nWe could do that, but it seems like rather pointless effort, compared\nto just telling people \"go run the tests in these contrib modules if\nyou want to test GIST\".\n\nI have no objection to fully integrating some GIST test(s) for 7.2.\nI just don't want to deal with it at this late stage of the 7.1 cycle.\nWe have a long list of considerably more mainstream to-do items yet\nto deal with ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 10:27:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Um, you do realize that a contrib module that gets used as part of the\n> regress tests may as well be mainstream? At least in terms of the\n> portability requirements it will have to meet?\n\n_If_ we want to have a tested GiST (and not the \"probably works but \nreally has some nasty known bugs\" one) we need to write _tests_.\n\nTo test it we need something that makes use of it.\n\nAs the only things that make use of it are extensions we need to \nmake use of them in tests.\n\nSo I propose the following : \n1. Keep the fixed (new) gist.c in the main codebase\n2. make use of the RD-index and/or Gene's tests in contrib in regression\ntests\n3. Tellpeople beforehand that it is not the end of the world\n if GiST _tests_ fail on their platform\n\n> I'm unhappy again. Bad enough we accepted a new feature during beta;\n> now we're going to expect an absolutely virgin contrib module to work\n> everywhere in order to pass regress tests?\n\nThere can be always \"expected\" discrepancies in regress tests, and \nfailing GiST test just tells people that if they want to use GiST on \ntheir platform they must probably fix things in core code as well.\nCurrently they have to find it out the hard way - first lot of work \ntrying to \"fix\" their own code and only then the bright idea that \nmaybe it is actually broken in the core.\n\nIMHO, giving out real test results, even negative, instead of leaving \nthings untested would be a honest thing to do.\n\n-----------------\nHannu\n",
"msg_date": "Fri, 12 Jan 2001 16:01:55 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> How about adding an optional test a la \"bigtest\" for GiST for this\n> release?\n\nAn optional test is like no test at all. No one runs optional tests. If\nthe test is supposed to work then it should be mainstream. If the test\nmight not work then you better go back and figure out what you're testing.\nIf the test might not *compile* (which is probably the more severe problem\nthat people are concerned about) then this idea won't help that at all\nunless you want to rework the regression test driver framework as well.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 12 Jan 2001 17:13:46 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "> An optional test is like no test at all. No one runs optional tests. If\n> the test is supposed to work then it should be mainstream. If the test\n> might not work then you better go back and figure out what you're testing.\n> If the test might not *compile* (which is probably the more severe problem\n> that people are concerned about) then this idea won't help that at all\n> unless you want to rework the regression test driver framework as well.\n\nI agree completely. This is just a transition phase to get GiST into the\nmainstream.\n\n - Thomas\n",
"msg_date": "Fri, 12 Jan 2001 16:29:29 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "On Fri, 12 Jan 2001, Tom Lane wrote:\n\n> Thomas Lockhart <[email protected]> writes:\n> > How about adding an optional test a la \"bigtest\" for GiST for this\n> > release?\n>\n> We could do that, but it seems like rather pointless effort, compared\n> to just telling people \"go run the tests in these contrib modules if\n> you want to test GIST\".\n>\n> I have no objection to fully integrating some GIST test(s) for 7.2.\n> I just don't want to deal with it at this late stage of the 7.1 cycle.\n> We have a long list of considerably more mainstream to-do items yet\n> to deal with ...\n\nNot up to us to deal with, its up to Oleg ...\n\nOleg, if you could work on and submit patches for this before the release,\nthat would be appreciated ... it might also serve to increase visibility\nof GiST if ppl know there is a regress test for it ...\n\n\n",
"msg_date": "Fri, 12 Jan 2001 12:55:06 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "I am sorry I wasn't listening -- I may have helped by at least\nanswering the direct questions and by testing. I have, in fact,\npositively tested both my and Oleg's code in the today's snapshot on a\nnumber of linux and FreeBSD systems. I failed on this one:\n\nSunOS typhoon 5.7 Generic_106541-10 sun4u sparc SUNW,Ultra-1\n\non which configure didn't detect the absence of libz.so\n\nI don't think my applications are affected by Oleg's changes. But I\nunderstand the tension that occurred during the past few days and even\nthough I am now satisfied with the agreement you seem to have\nachieved, I could have hardly influenced it in any reasonable way. I\nam as sympathetic with the need for a smooth an solid code control as\nI am with promoting great features (or, in this case, just keeping a\nfeature alive). So, if I were around at the time I was asked to vote,\nI wouldn't know how. I usually find it difficult to take sides in\n\"Motherhood vs. Clean Air\" debates. It is true that throwing a core\nduring a regression test does gives one a black eye. It is also true\nthat there are probably hundreds of possible users, ignorant of the\nGiST, trying to invent surrogate solutions. As far as I am concerned,\nI will be satisfied with whatever solution you arrive at. I am pleased\nthat in this neighborhood, reason prevails over faith.\n\n--Gene\n",
"msg_date": "Sat, 13 Jan 2001 17:31:25 -0600",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "[email protected] writes:\n> I am sorry I wasn't listening -- I may have helped by at least\n> answering the direct questions and by testing. I have, in fact,\n> positively tested both my and Oleg's code in the today's snapshot on a\n> number of linux and FreeBSD systems. I failed on this one:\n\n> SunOS typhoon 5.7 Generic_106541-10 sun4u sparc SUNW,Ultra-1\n\n> on which configure didn't detect the absence of libz.so\n\nReally? Details please. It's hard to see how it could have messed\nup on that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 13 Jan 2001 22:13:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "Hi,\n\nI've put R-Tree realization using GiST (yet another test of our changes in\ngist code )on my gist page http://www.sai.msu.su/~megera/postgres/gist/\nAlso, I've put some GiST related papers for interested readers.\nThe package( contrib-rtree_box_gist.tar.gz ) is built for 7.1.\nIf you find it's interesting you may include it into contrib area for 7.1\n\nfrom README.rtree_box_gist:\n\n\n1. One interesting thing is that insertion time for built-in R-Tree is\n about 8 times more than ones for GiST implementation of R-Tree !!!\n2. Postmaster requires much more memory for built-in R-Tree\n3. Search time depends on dataset. In our case we got:\n +------------+-----------+--------------+\n |Number boxes|R-tree, sec|R-tree using |\n | | | GiST, sec |\n +------------+-----------+--------------+\n | 10| 0.002| 0.002|\n +------------+-----------+--------------+\n | 100| 0.002| 0.002|\n +------------+-----------+--------------+\n | 1000| 0.002| 0.002|\n +------------+-----------+--------------+\n | 10000| 0.015| 0.025|\n +------------+-----------+--------------+\n | 20000| 0.029| 0.048|\n +------------+-----------+--------------+\n | 40000| 0.055| 0.092|\n +------------+-----------+--------------+\n | 80000| 0.113| 0.178|\n +------------+-----------+--------------+\n | 160000| 0.338| 0.337|\n +------------+-----------+--------------+\n | 320000| 0.674| 0.673|\n +------------+-----------+--------------+\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sun, 14 Jan 2001 18:41:10 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "R-Tree implementation using GiST "
},
{
"msg_contents": "[email protected] writes:\n>>>> on which configure didn't detect the absence of libz.so\n>> \n>> Really? Details please. It's hard to see how it could have messed\n>> up on that.\n\n> I didn't look well enough -- I apologize. The library is there, but\n> ld.so believes it is not:\n\n> typhoon> postmaster \n> ld.so.1: postmaster: fatal: libz.so: open failed: No such file or directory\n> Killed\n\nOdd. Can you show us the part of config.log that relates to zlib?\nIt's strange that configure's check to see if zlib is linkable should\nsucceed, only to have the live startup fail. Is it possible that\nyou ran configure with a different library search path (LD_LIBRARY_PATH\nor local equivalent) than you are using now?\n\nIt's suspicious that the error message mentions libz.so when the actual\nfile name is libz.so.1, but I still don't see how that could result in\nconfigure's link test succeeding but the executable not running.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Jan 2001 00:58:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "> Tom Lane writes:\n> > [email protected] writes:\n...\n> > SunOS typhoon 5.7 Generic_106541-10 sun4u sparc SUNW,Ultra-1\n> \n> > on which configure didn't detect the absence of libz.so\n> \n> Really? Details please. It's hard to see how it could have messed\n> up on that.\n\nTom, \n\nI didn't look well enough -- I apologize. The library is there, but\nld.so believes it is not:\n\ntyphoon> postmaster \nld.so.1: postmaster: fatal: libz.so: open failed: No such file or directory\nKilled\n\nThis may very well be just my ISP's problem.\n\nAnyway, the details are:\n\n1. My (relevant) environment:\n\nLD_LIBRARY_PATH=/usr/openwin/lib:/usr/lib:/usr/ucblib:/usr/ccs/lib\nPGLIB=/home/customer/selkovjr/pgsql/lib\nPGDATA=/home/customer/selkovjr/pgsql/data\nPATH=/usr/local/vendor/SUNWspro/bin:/usr/local/bin:/usr/local/gnu/bin:/usr/local/GNU/bin:/usr/sbin:/usr/bin:/usr/ccs/bin:/usr/ucb:/etc:/usr/etc:/usr/openwin/bin:/home/customer/selkovjr/bin:./usr/local/bin::/home/customer/selkovjr/pgsql/bin\n\n2. I built postgres (from the snapshot of Jan 13) with:\n\n./configure --prefix=/home/customer/selkovjr/pgsql\nmake\nmake install\n\n3. initdb worked.\n\n4. The library in question is in /usr/openwin/lib:\n\ntyphoon> ls -l /usr/openwin/lib | grep libz\n-rwxr-xr-x 1 root bin 97836 Sep 23 1999 libz.a\n-rwxr-xr-x 1 root bin 70452 Sep 23 1999 libz.so.1\n\nI can't think of anything else. Is there a one-liner to test libz? I\nbelieve I have successfully tested and run 6.5.3 in the same\nenvironment.\n\n--Gene\n",
"msg_date": "Mon, 15 Jan 2001 00:39:01 -0600",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "Tom Lane writes:\n> [email protected] writes:\n> >>>> on which configure didn't detect the absence of libz.so\n> >> \n> >> Really? Details please. It's hard to see how it could have messed\n> >> up on that.\n> \n> > I didn't look well enough -- I apologize. The library is there, but\n> > ld.so believes it is not:\n> \n> > typhoon> postmaster \n> > ld.so.1: postmaster: fatal: libz.so: open failed: No such file or directory\n> > Killed\n> \n> Odd. Can you show us the part of config.log that relates to zlib?\n\nconfigure:4179: checking for zlib.h\nconfigure:4189: gcc -E conftest.c >/dev/null 2>conftest.out\nconfigure:4207: checking for inflate in -lz\nconfigure:4226: gcc -o conftest conftest.c -lz -lgen -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses 1>&5\nconfigure:4660: checking for crypt.h\n\nThis doesn't tell me much. But I modified configure to exit right\nafter this, without removing conftest*, and when I ran conftest it came\nback with the same message:\n\ntyphoon> ./conftest\nld.so.1: ./conftest: fatal: libz.so: open failed: No such file or directory\nKilled\n\n> It's strange that configure's check to see if zlib is linkable should\n> succeed, only to have the live startup fail. \n\nIt is. In this line:\n\nif { (eval echo configure:4226: \\\"$ac_link\\\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then\n\nwhy is conftest tested for size instead of being executed?\n\n> Is it possible that\n> you ran configure with a different library search path (LD_LIBRARY_PATH\n> or local equivalent) than you are using now?\n\nNo, I didn't alter it. I am using the system-wide settings.\n\n> It's suspicious that the error message mentions libz.so when the actual\n> file name is libz.so.1, but I still don't see how that could result in\n> configure's link test succeeding but the executable not running.\n\nThat puzzles me as well. It seems to be because there is no libz.so on\nthe system. For if I do this:\n\nexport LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/customer/selkovjr/lib \nln -s /usr/openwin/lib/libz.so.1 ~/lib/libz.so\n\nthe libz problem is gone, only to be followed by the next one:\n\ntyphoon> ./conftest\nld.so.1: ./conftest: fatal: libreadline.so: open failed: No such file or directory\n\nThe odd thing is, there is no libreadline.so* on this system. Here's the corresponding part of config.log:\n\nconfigure:3287: checking for library containing readline\nconfigure:3305: gcc -o conftest conftest.c -ltermcap -lcurses 1>&5\nUndefined first referenced\n symbol in file\nreadline /var/tmp/ccxxiW3R.o\nld: fatal: Symbol referencing errors. No output written to conftest\ncollect2: ld returned 1 exit status\nconfigure: failed program was:\n#line 3294 \"configure\"\n#include \"confdefs.h\"\n/* Override any gcc2 internal prototype to avoid an error. */\n/* We use char because int might match the return type of a gcc2\n builtin and then its argument prototype would still apply. */\nchar readline();\n\nint main() {\nreadline()\n; return 0; }\nconfigure:3327: gcc -o conftest conftest.c -lreadline -ltermcap -lcurses \n1>&5\n\nThis system is probaly badly misconfigured, but it would be great if\nconfigure could see that. By the way, would you mind if I asked you to\nlog in and take a look? Is there a phone number where I can get you\nwith the password? I am not sure whether such tests could be of any\nvalue, but it's the only Sun machine available to me for testing.\n\nThank you,\n\n--Gene\n\n",
"msg_date": "Tue, 16 Jan 2001 03:07:20 -0600",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: GiST for 7.1 !! "
},
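For readers puzzling over the config.log excerpt above: the probe that configure builds for "checking for inflate in -lz" is essentially the stub below (reconstructed for illustration; the generated conftest.c differs in detail). configure only links it and checks that an executable file was produced -- it never runs it -- which is why a library the compile-time linker can find but the run-time loader cannot still passes the test.

/* conftest-style link probe, never meant to be executed. */
char inflate();		/* fake prototype, just to force an external reference */

int
main(void)
{
	inflate();
	return 0;
}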
{
"msg_contents": "[email protected] writes:\n> configure:4207: checking for inflate in -lz\n> configure:4226: gcc -o conftest conftest.c -lz -lgen -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses 1>&5\n> configure:4660: checking for crypt.h\n\n> This doesn't tell me much. But I modified configure to exit right\n> after this, without removing conftest*, and when I ran conftest it came\n> back with the same message:\n\n> typhoon> ./conftest\n> ld.so.1: ./conftest: fatal: libz.so: open failed: No such file or directory\n> Killed\n\n>> It's strange that configure's check to see if zlib is linkable should\n>> succeed, only to have the live startup fail. \n\n> This system is probaly badly misconfigured, but it would be great if\n> configure could see that.\n\nGene and I looked into this, and the cause of the misbehavior is this:\ngcc on this installation is set to search /usr/local/lib (along with the\nusual system library directories). libz.so and libreadline.so are\nindeed in /usr/local/lib, so configure's tests to see if they can be\nlinked against will succeed. But he had LD_LIBRARY_PATH set to a list\nthat did *not* include /usr/local/lib, so actually firing up the\nexecutable would fail.\n\nAs he says, it'd be nice if configure could either prevent this or at\nleast detect it. Not sure about a good way to do that --- any ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 17 Jan 2001 20:02:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Getting configure to notice link-time vs run-time failures"
},
{
"msg_contents": "Tom Lane writes:\n\n> Gene and I looked into this, and the cause of the misbehavior is this:\n> gcc on this installation is set to search /usr/local/lib (along with the\n> usual system library directories). libz.so and libreadline.so are\n> indeed in /usr/local/lib, so configure's tests to see if they can be\n> linked against will succeed. But he had LD_LIBRARY_PATH set to a list\n> that did *not* include /usr/local/lib, so actually firing up the\n> executable would fail.\n\nYou get what you pay for. If you're running executables from configure\nyou're asking for it.\n\nThis setup is a poor man's cross-compilation situation because the system\nyou're compiling on is not identically configured to the system you're\ngoing to run on. (Strictly speaking, the behaviour of a test program\nmight even vary with different LD_LIBRARY_PATH settings.)\n\nSo\n\na) PostgreSQL does not support cross-compilation (yet). Too bad.\n\nb) We could get rid of all executition time checks in configure (to\n remedy (a)). This is one of my plans for the future.\n\nc) You could move the execution time checks up before the suspicious\n library checks, but I'm afraid that this will only cure a particular\n symptom and might introduce other problems.\n\nI'd say, you're stuck.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 19 Jan 2001 00:46:53 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Getting configure to notice link-time vs run-time failures"
},
{
"msg_contents": "On Fri, Jan 19, 2001 at 12:46:53AM +0100, Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > Gene and I looked into this, and the cause of the misbehavior is this:\n> > gcc on this installation is set to search /usr/local/lib (along with the\n> > usual system library directories). libz.so and libreadline.so are\n> > indeed in /usr/local/lib, so configure's tests to see if they can be\n> > linked against will succeed. But he had LD_LIBRARY_PATH set to a list\n> > that did *not* include /usr/local/lib, so actually firing up the\n> > executable would fail.\n> \n> You get what you pay for. If you're running executables from configure\n> you're asking for it.\n> \n> This setup is a poor man's cross-compilation situation because the system\n> you're compiling on is not identically configured to the system you're\n> going to run on. (Strictly speaking, the behaviour of a test program\n> might even vary with different LD_LIBRARY_PATH settings.)\n> \n> So\n> \n> a) PostgreSQL does not support cross-compilation (yet). Too bad.\n> \n> b) We could get rid of all executition time checks in configure (to\n> remedy (a)). This is one of my plans for the future.\n> \n> c) You could move the execution time checks up before the suspicious\n> library checks, but I'm afraid that this will only cure a particular\n> symptom and might introduce other problems.\n> \n> I'd say, you're stuck.\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n\nWouldn't a -Wl,-R/usr/local/lib have helped?\n\nCheers,\n\nPatrick\n",
"msg_date": "Fri, 19 Jan 2001 15:09:59 +0000",
"msg_from": "Patrick Welche <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Getting configure to notice link-time vs run-time failures"
},
{
"msg_contents": "Patrick Welche <[email protected]> writes:\n> Wouldn't a -Wl,-R/usr/local/lib have helped?\n\nWell, yeah, but how would we know to do that? The fact that gcc is\nsearching /usr/local/lib is completely unknown to configure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Jan 2001 10:17:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Getting configure to notice link-time vs run-time failures "
},
{
"msg_contents": "I have added the URL to the GIST SGML docs.\n\n> Hi,\n> \n> I've put R-Tree realization using GiST (yet another test of our changes in\n\n> gist code )on my gist page http://www.sai.msu.su/~megera/postgres/gist/\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n> Also, I've put some GiST related papers for interested readers.\n> The package( contrib-rtree_box_gist.tar.gz ) is built for 7.1.\n> If you find it's interesting you may include it into contrib area for 7.1\n> \n> from README.rtree_box_gist:\n> \n> \n> 1. One interesting thing is that insertion time for built-in R-Tree is\n> about 8 times more than ones for GiST implementation of R-Tree !!!\n> 2. Postmaster requires much more memory for built-in R-Tree\n> 3. Search time depends on dataset. In our case we got:\n> +------------+-----------+--------------+\n> |Number boxes|R-tree, sec|R-tree using |\n> | | | GiST, sec |\n> +------------+-----------+--------------+\n> | 10| 0.002| 0.002|\n> +------------+-----------+--------------+\n> | 100| 0.002| 0.002|\n> +------------+-----------+--------------+\n> | 1000| 0.002| 0.002|\n> +------------+-----------+--------------+\n> | 10000| 0.015| 0.025|\n> +------------+-----------+--------------+\n> | 20000| 0.029| 0.048|\n> +------------+-----------+--------------+\n> | 40000| 0.055| 0.092|\n> +------------+-----------+--------------+\n> | 80000| 0.113| 0.178|\n> +------------+-----------+--------------+\n> | 160000| 0.338| 0.337|\n> +------------+-----------+--------------+\n> | 320000| 0.674| 0.673|\n> +------------+-----------+--------------+\n> \n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Jan 2001 22:32:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: R-Tree implementation using GiST"
}
]
|
[
{
"msg_contents": "> >> This is OK for table files, unless someone's broken the \n> >> code that will auto-initialize a zero page when it comes across one.\n> \n> > Hmmm, I don't see anything like auto-initialization in code -:(\n> > Where did you put these changes?\n> \n> I didn't put 'em in, it looked like your work to me: see vacuum.c,\n> lines 618-622 in current sources.\n\nOh, this code was there from 6.0 days.\n\n> Awhile back I did fix PageGetFreeSpace and some related macros to\n> deliver sane results when looking at an all-zero page header, so that\n> scans and inserts would ignore the page until vacuum fixes it.\n\nI see now - PageGetMaxOffsetNumber... Ok.\n \n> Perhaps WAL redo needs to be prepared to do PageInit as well?\n\nIt calls PageIsNew and uses flag in record to know when a page could\nbe uninitialized.\n \n> Actually, I'd expect the CRC check to catch an all-zeroes page (if\n> it fails to complain, then you misimplemented the CRC), so that would\n> be the place to deal with it now.\n\nI've used standard CRC32 implementation you pointed me to -:)\nBut CRC is used in WAL records only.\n\nVadim\n",
"msg_date": "Thu, 11 Jan 2001 17:19:29 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Re: Loading optimization "
},
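A sketch of the zero-page convention discussed above, with an invented header size and helper name rather than the real page-header layout: a block that was allocated but never successfully written back reads as all zeroes, and readers can report it as "new" so scans and inserts skip it until VACUUM (or WAL redo) initializes it.

#include <stdbool.h>
#include <stddef.h>

#define ASSUMED_PAGE_HEADER_SIZE 20	/* invented for the example */

/* Illustrative only -- not the real PageIsNew() test. */
static bool
page_looks_uninitialized(const unsigned char *page)
{
	size_t	i;

	for (i = 0; i < ASSUMED_PAGE_HEADER_SIZE; i++)
		if (page[i] != 0)
			return false;		/* header has real content */
	return true;				/* all zeroes: treat as empty/new */
}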
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>> Actually, I'd expect the CRC check to catch an all-zeroes page (if\n>> it fails to complain, then you misimplemented the CRC), so that would\n>> be the place to deal with it now.\n\n> I've used standard CRC32 implementation you pointed me to -:)\n> But CRC is used in WAL records only.\n\nOh. I thought we'd agreed that a CRC on each stored disk block would\nbe a good idea as well. I take it you didn't do that.\n\nDo we want to consider doing this (and forcing another initdb)?\nOr shall we say \"too late for 7.1\"?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2001 21:55:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "CRCs (was Re: [GENERAL] Re: Loading optimization)"
},
{
"msg_contents": "> \"Mikheev, Vadim\" <[email protected]> writes:\n> >> Actually, I'd expect the CRC check to catch an all-zeroes page (if\n> >> it fails to complain, then you misimplemented the CRC), so that would\n> >> be the place to deal with it now.\n> \n> > I've used standard CRC32 implementation you pointed me to -:)\n> > But CRC is used in WAL records only.\n> \n> Oh. I thought we'd agreed that a CRC on each stored disk block would\n> be a good idea as well. I take it you didn't do that.\n\n\nNo, I thought we agreed disk block CRC was way overkill. If the CRC on\nthe WAL log checks for errors that are not checked anywhere else, then\nfine, but I thought disk CRC would just duplicate the I/O subsystem/disk\nchecks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Jan 2001 00:39:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs (was Re: [GENERAL] Re: Loading optimization)"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Oh. I thought we'd agreed that a CRC on each stored disk block would\n>> be a good idea as well. I take it you didn't do that.\n\n> No, I thought we agreed disk block CRC was way overkill. If the CRC on\n> the WAL log checks for errors that are not checked anywhere else, then\n> fine, but I thought disk CRC would just duplicate the I/O subsystem/disk\n> checks.\n\nA disk-block CRC would detect partially written blocks (ie, power drops\nafter disk has written M of the N sectors in a block). The disk's own\nchecks will NOT consider this condition a failure. I'm not convinced\nthat WAL will reliably detect it either (Vadim?). Certainly WAL will\nnot help for corruption caused by external agents, away from any updates\nthat are actually being performed/logged.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 01:16:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs (was Re: [GENERAL] Re: Loading optimization) "
},
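To make the trade-off concrete, here is a self-contained sketch of what a per-page CRC check could look like; the 4-byte CRC location and the helper names are assumptions for the example, not an actual on-disk format. A torn page (only M of N sectors written) fails the comparison, and so does an all-zeroes page, which is why never-initialized pages would still need to be special-cased as "new".

#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define PAGE_SIZE 8192
#define CRC_OFFSET 0			/* assume the CRC lives in the first 4 bytes */

/* Plain bitwise CRC-32 (IEEE polynomial, reflected form). */
static uint32_t
crc32_buf(const unsigned char *buf, size_t len)
{
	uint32_t	crc = 0xFFFFFFFFu;
	size_t		i;
	int			bit;

	for (i = 0; i < len; i++)
	{
		crc ^= buf[i];
		for (bit = 0; bit < 8; bit++)
			crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
	}
	return crc ^ 0xFFFFFFFFu;
}

/* Recompute the page CRC with the stored CRC field zeroed and compare. */
static bool
page_crc_ok(const unsigned char page[PAGE_SIZE])
{
	unsigned char copy[PAGE_SIZE];
	uint32_t	stored;

	memcpy(&stored, page + CRC_OFFSET, sizeof(stored));
	memcpy(copy, page, PAGE_SIZE);
	memset(copy + CRC_OFFSET, 0, sizeof(stored));
	return crc32_buf(copy, PAGE_SIZE) == stored;
}

The cost is one extra CRC computation per page read and write, which is the overhead being weighed in this thread.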
{
"msg_contents": "At 21:55 11/01/01 -0500, Tom Lane wrote:\n>\n>Oh. I thought we'd agreed that a CRC on each stored disk block would\n>be a good idea as well. I take it you didn't do that.\n>\n>Do we want to consider doing this (and forcing another initdb)?\n>Or shall we say \"too late for 7.1\"?\n>\n\nI thought it was coming too. I'd like to see it - if it's not too hard in\nthis release.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 12 Jan 2001 17:20:22 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs (was Re: [GENERAL] Re: Loading optimization)"
},
{
"msg_contents": "> > But CRC is used in WAL records only.\n> \n> Oh. I thought we'd agreed that a CRC on each stored disk block would\n> be a good idea as well. I take it you didn't do that.\n> \n> Do we want to consider doing this (and forcing another initdb)?\n> Or shall we say \"too late for 7.1\"?\n\nI personally was never agreed to this. Reasons?\n\nVadim\n\n\n",
"msg_date": "Thu, 11 Jan 2001 23:07:16 -0800",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs (was Re: [GENERAL] Re: Loading optimization)"
},
{
"msg_contents": "> > No, I thought we agreed disk block CRC was way overkill. If the CRC on\n> > the WAL log checks for errors that are not checked anywhere else, then\n> > fine, but I thought disk CRC would just duplicate the I/O subsystem/disk\n> > checks.\n> \n> A disk-block CRC would detect partially written blocks (ie, power drops\n> after disk has written M of the N sectors in a block). The disk's own\n> checks will NOT consider this condition a failure. I'm not convinced\n> that WAL will reliably detect it either (Vadim?). Certainly WAL will\n\nIdea proposed by Andreas about \"physical log\" is implemented!\nNow WAL saves whole data blocks on first after checkpoint\nmodification. This way on recovery modified data blocks will be\nfirst restored *as a whole*. Isn't it much better than just\ndetection of partially writes?\n\nOnly one type of modification isn't covered at the moment -\nupdated t_infomask of heap tuples.\n\n> not help for corruption caused by external agents, away from any updates\n> that are actually being performed/logged.\n\nWhat do you mean by \"external agents\"?\n\nVadim\n\n\n",
"msg_date": "Thu, 11 Jan 2001 23:32:12 -0800",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs (was Re: [GENERAL] Re: Loading optimization) "
}
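The "physical log" behaviour described above boils down to one rule -- if a page has not been modified since the latest checkpoint's redo point, put a complete image of it into the WAL record, so redo can restore the whole block before replaying finer-grained changes. The sketch below uses an invented LSN type and record layout purely for illustration; it is not the real xlog code.

#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define PAGE_SIZE 8192

typedef uint64_t Lsn;			/* WAL position, assumed 64-bit for the sketch */

typedef struct
{
	bool		has_backup_block;
	unsigned char backup_block[PAGE_SIZE];
} WalRecordSketch;

/* First modification of this page since the latest checkpoint? */
static bool
needs_full_page_image(Lsn page_lsn, Lsn checkpoint_redo)
{
	return page_lsn <= checkpoint_redo;
}

/* Attach a whole-block image when needed; a partial write of this page is
 * then harmless, because recovery rewrites the block from the image. */
static void
log_page_modification(WalRecordSketch *rec, const unsigned char *page,
					  Lsn page_lsn, Lsn checkpoint_redo)
{
	rec->has_backup_block = needs_full_page_image(page_lsn, checkpoint_redo);
	if (rec->has_backup_block)
		memcpy(rec->backup_block, page, PAGE_SIZE);
}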
]
|
[
{
"msg_contents": "> Um, you do realize that a contrib module that gets used as part of the\n> regress tests may as well be mainstream? At least in terms of the\n> portability requirements it will have to meet?\n> \n> I'm unhappy again. Bad enough we accepted a new feature during beta;\n> now we're going to expect an absolutely virgin contrib module to work\n> everywhere in order to pass regress tests?\n\nOps, agreed.\nAnd I fear that in current code there is no one GiST index\nimplementation -:( Should we worry about regress tests? -:)\n\nVadim\n",
"msg_date": "Thu, 11 Jan 2001 17:26:03 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "On Thu, 11 Jan 2001, Mikheev, Vadim wrote:\n\n> > Um, you do realize that a contrib module that gets used as part of the\n> > regress tests may as well be mainstream? At least in terms of the\n> > portability requirements it will have to meet?\n> >\n> > I'm unhappy again. Bad enough we accepted a new feature during beta;\n> > now we're going to expect an absolutely virgin contrib module to work\n> > everywhere in order to pass regress tests?\n>\n> Ops, agreed.\n> And I fear that in current code there is no one GiST index\n> implementation -:( Should we worry about regress tests? -:)\n\n\nYes, we had to write contrib module even to test GiST. People,\nI'm really confused after reading all of messages.\nGiST is just an interface and to test any interface you need 2 sides.\nIn current code there is only one side. old GiST code live\nuntested for years. What's the problem ? It's the problem of\ncurrent regression test, mostly.\nOk. We could rewrite R-Tree to use GiST and make regression test which\nwill not make people nervous. But this certainly not for 7.1 and most\nprobable without us. Author of R-Tree could write this easily.\nI read Bruce's interview and was really relaxed -\nhow everything is going well. Bruce, we need your opinion.\n\n\n\tOleg\n>\n> Vadim\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 12 Jan 2001 12:21:51 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "OK. We found an old implementation of R-Tre using GiST (Pg95)\nand we'll try to implement regression test using R-Tree\nit's anyway will be a good test.\n\n\tRegards,\n\n\t\tOleg\n\nOn Fri, 12 Jan 2001, Oleg Bartunov wrote:\n\n> On Thu, 11 Jan 2001, Mikheev, Vadim wrote:\n>\n> > > Um, you do realize that a contrib module that gets used as part of the\n> > > regress tests may as well be mainstream? At least in terms of the\n> > > portability requirements it will have to meet?\n> > >\n> > > I'm unhappy again. Bad enough we accepted a new feature during beta;\n> > > now we're going to expect an absolutely virgin contrib module to work\n> > > everywhere in order to pass regress tests?\n> >\n> > Ops, agreed.\n> > And I fear that in current code there is no one GiST index\n> > implementation -:( Should we worry about regress tests? -:)\n>\n>\n> Yes, we had to write contrib module even to test GiST. People,\n> I'm really confused after reading all of messages.\n> GiST is just an interface and to test any interface you need 2 sides.\n> In current code there is only one side. old GiST code live\n> untested for years. What's the problem ? It's the problem of\n> current regression test, mostly.\n> Ok. We could rewrite R-Tree to use GiST and make regression test which\n> will not make people nervous. But this certainly not for 7.1 and most\n> probable without us. Author of R-Tree could write this easily.\n> I read Bruce's interview and was really relaxed -\n> how everything is going well. Bruce, we need your opinion.\n>\n>\n> \tOleg\n> >\n> > Vadim\n> >\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 12 Jan 2001 14:33:38 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "On Fri, 12 Jan 2001, Oleg Bartunov wrote:\n\n> On Thu, 11 Jan 2001, Mikheev, Vadim wrote:\n>\n> > > Um, you do realize that a contrib module that gets used as part of the\n> > > regress tests may as well be mainstream? At least in terms of the\n> > > portability requirements it will have to meet?\n> > >\n> > > I'm unhappy again. Bad enough we accepted a new feature during beta;\n> > > now we're going to expect an absolutely virgin contrib module to work\n> > > everywhere in order to pass regress tests?\n> >\n> > Ops, agreed.\n> > And I fear that in current code there is no one GiST index\n> > implementation -:( Should we worry about regress tests? -:)\n>\n>\n> Yes, we had to write contrib module even to test GiST. People,\n> I'm really confused after reading all of messages.\n> GiST is just an interface and to test any interface you need 2 sides.\n> In current code there is only one side. old GiST code live\n> untested for years. What's the problem ? It's the problem of\n> current regression test, mostly.\n> Ok. We could rewrite R-Tree to use GiST and make regression test which\n> will not make people nervous. But this certainly not for 7.1 and most\n> probable without us. Author of R-Tree could write this easily.\n\nThe problem is that there is inadequate amount of time to fully test the\nthe regression tests among all of the platforms that we support ...\ntherefore, we can't test GiST among all of the platforms we support ...\n\nfor instance, can you guarantee that _int.c will compile on *every*\nplatform that PostgreSQL supports? That it will operate properly? Have\nyou tested that?\n\n\n",
"msg_date": "Fri, 12 Jan 2001 09:07:03 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "On Fri, 12 Jan 2001, Hannu Krosing wrote:\n\n> Oleg Bartunov wrote:\n> >\n> > OK. We found an old implementation of R-Tre using GiST (Pg95)\n> > and we'll try to implement regression test using R-Tree\n> > it's anyway will be a good test.\n>\n> How is it different than using RD-tree for tests ?\n>\n\nNo difference at all ! It's just another implemetation of R-Tree.\n\n> Can you do it usin already compiled-in functions and modifying\n> things only at SQL level ?\n>\n\nunfortunately not ! Current postgres code has nothing connected with\nGiST and this is a problem ! How to test interface code without\nhaving two sides ? I understand we don't want to have another reason\nfor complaints about non-working regression test. I never got\nregression test passed 100% on my Linux box with almost all versions\nof PostgreSQL but I could live with that. What's wrong with\nwarning message if GiST test not passed ?\n\n> Or is it just much simpler ?\n>\n\nI'm interesting to test performance of built-in R-Tree and R-Tree + GiST.\n\n\tOleg\n\n> ---------------------\n> Hannu\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 12 Jan 2001 17:31:15 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> What's wrong with\n> warning message if GiST test not passed ?\n\nYou're being *way* too optimistic. An output discrepancy in a test of\nGIST we could live with. But think about other scenarios:\n\n1. GIST test coredumps on some platforms. This corrupts other tests\n(at least through the \"system is starting up\" failure mode), thus\nmasking problems that we actually care about.\n\n2. GIST test code does not compile on some platforms, causing \"make check\"\nto fail completely.\n\nAt this point my vote is to leave the GIST test in contrib for 7.1.\nAnyone who actually cares about GIST (to be blunt: all three of you)\ncan run it as a separate step. I don't want it in the standard regress\ntests until 7.2, when we will have a reasonable amount of time to test\nand debug the test.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 10:02:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !! "
},
{
"msg_contents": "On Fri, 12 Jan 2001, Oleg Bartunov wrote:\n\n> On Fri, 12 Jan 2001, Hannu Krosing wrote:\n>\n> > Oleg Bartunov wrote:\n> > >\n> > > OK. We found an old implementation of R-Tre using GiST (Pg95)\n> > > and we'll try to implement regression test using R-Tree\n> > > it's anyway will be a good test.\n> >\n> > How is it different than using RD-tree for tests ?\n> >\n>\n> No difference at all ! It's just another implemetation of R-Tree.\n>\n> > Can you do it usin already compiled-in functions and modifying\n> > things only at SQL level ?\n> >\n>\n> unfortunately not ! Current postgres code has nothing connected with\n> GiST and this is a problem ! How to test interface code without\n> having two sides ? I understand we don't want to have another reason\n> for complaints about non-working regression test. I never got\n> regression test passed 100% on my Linux box with almost all versions\n> of PostgreSQL but I could live with that. What's wrong with\n> warning message if GiST test not passed ?\n\nIt has *nothing* to do with passing or not, it has to do with timing of\nhte patches ... had they come in before we went beta, this would all have\nbeen a no-brainer ... because they didn't, the problem arises ...\n\nGiST changes are included ... testing of GiST changes aren't integrated\n... can we *please* drop this whole thing already, as its really\ndetracting from getting *real* work done with very little, to no, benefit\n...\n\n\n",
"msg_date": "Fri, 12 Jan 2001 11:15:47 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "On Fri, 12 Jan 2001, Tom Lane wrote:\n\n> Oleg Bartunov <[email protected]> writes:\n> > What's wrong with\n> > warning message if GiST test not passed ?\n>\n> You're being *way* too optimistic. An output discrepancy in a test of\n> GIST we could live with. But think about other scenarios:\n>\n> 1. GIST test coredumps on some platforms. This corrupts other tests\n> (at least through the \"system is starting up\" failure mode), thus\n> masking problems that we actually care about.\n>\n> 2. GIST test code does not compile on some platforms, causing \"make check\"\n> to fail completely.\n>\n> At this point my vote is to leave the GIST test in contrib for 7.1.\n> Anyone who actually cares about GIST (to be blunt: all three of you)\n> can run it as a separate step. I don't want it in the standard regress\n> tests until 7.2, when we will have a reasonable amount of time to test\n> and debug the test.\n\nAgreed ... now let's move onto more important things, cause we've spent\nmuch too long on this as it is ...\n\nNamely, should we bundle up a beta4 this weeekend, so that the GiST\nchanges are in place for further testing, or hold off for ... ?\n\n\n",
"msg_date": "Fri, 12 Jan 2001 11:17:25 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Beta4 for GiST? (Was: Re: AW: Re: GiST for 7.1 !! )"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Namely, should we bundle up a beta4 this weeekend, so that the GiST\n> changes are in place for further testing, or hold off for ... ?\n\nFirst I'd like to finish a couple of open items I have, like fixing\nthe CRIT_SECTION code so that SIGTERM response will not occur when\nwe are holding a spinlock. Should be able to get this stuff done in\na day or two, if I quit arguing about GIST and get back to work...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 10:29:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beta4 for GiST? (Was: Re: AW: Re: GiST for 7.1 !! ) "
},
{
"msg_contents": "Oleg Bartunov wrote:\n> \n> OK. We found an old implementation of R-Tre using GiST (Pg95)\n> and we'll try to implement regression test using R-Tree\n> it's anyway will be a good test.\n\nHow is it different than using RD-tree for tests ?\n\nCan you do it usin already compiled-in functions and modifying \n things only at SQL level ?\n\nOr is it just much simpler ?\n\n---------------------\nHannu\n",
"msg_date": "Fri, 12 Jan 2001 16:05:25 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "> At this point my vote is to leave the GIST test in contrib for 7.1.\n> Anyone who actually cares about GIST (to be blunt: all three of you)\n> can run it as a separate step. I don't want it in the standard regress\n> tests until 7.2, when we will have a reasonable amount of time to test\n> and debug the test.\n\nAgreed. I want the GIST fixes in 7.1, but adding a new test at this\npoint is too risky.\n\nThe issue is that only the GIST people will be using the GIST fixes,\nwhile adding it to the regression test will affect all users, which is\ntoo risky at this point.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Jan 2001 11:33:03 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: GiST for 7.1 !!"
},
{
"msg_contents": "> Agreed ... now let's move onto more important things, cause we've spent\n> much too long on this as it is ...\n> \n> Namely, should we bundle up a beta4 this weeekend, so that the GiST\n> changes are in place for further testing, or hold off for ... ?\n\nI would hold off. GIST people can download the snapshot. Others aren't\ninterested in GIST.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Jan 2001 11:39:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beta4 for GiST? (Was: Re: AW: Re: GiST for 7.1 !! )"
},
{
"msg_contents": "On Fri, 12 Jan 2001, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Namely, should we bundle up a beta4 this weeekend, so that the GiST\n> > changes are in place for further testing, or hold off for ... ?\n>\n> First I'd like to finish a couple of open items I have, like fixing\n> the CRIT_SECTION code so that SIGTERM response will not occur when\n> we are holding a spinlock. Should be able to get this stuff done in\n> a day or two, if I quit arguing about GIST and get back to work...\n\nOkay, let's scheduale for Monday then if we can ... unless someone comes\nacross something major like we did with the whole beta2/beta3 release :)\n\n\n",
"msg_date": "Fri, 12 Jan 2001 12:57:29 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beta4 for GiST? (Was: Re: AW: Re: GiST for 7.1 !! ) "
}
]
|
[
{
"msg_contents": "At 09:20 AM 11-01-2001 -0800, Mikheev, Vadim wrote:\n>> In contrast the current alternatives appear to be either LOCK \n>> the entire table (preventing ALL inserts and selects),\n>\n>SHARE ROW EXCLUSIVE mode doesn't prevent selects...\n\nSorry, I meant all inserts and selects on the locked table. At least so far\nit seems to block those selects in 7.0.3 (I hope it does in all cases! If\nnot uhoh!).\n\n>> or to create a UNIQUE constraint (forcing complete rollbacks\n>> and restarts in event of a collision :( ).\n>\n>Hopefully, savepoints will be in 7.2\n\nYep that'll solve some things. Still think the getlock feature will be very\nhandy in many other cases.\n\nBTW would there be a significant performance/resource hit with savepoints?\n\n>> Any comments, suggestions or tips would be welcome. It looks \n>> like quite a complex thing to do - I've only just started\n>> looking at the postgresql internals and the lock manager.\n>\n>It's very easy to do (from my PoV -:)) We need in yet another\n>pseudo table like one we use in XactLockTableInsert/XactLockTableWait\n>- try to look there...\n\nThanks!\n\nI think by the time I succeed Postgresql will be version 7.2 or even 8 :).\n\nCheerio,\nLink.\n\n",
"msg_date": "Fri, 12 Jan 2001 09:43:16 +0800",
"msg_from": "Lincoln Yeoh <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Lock on arbitrary string feature"
}
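For readers curious what the suggested pseudo lock table amounts to: the key step is reducing an arbitrary application string to a fixed-size lock tag that the lock manager can store, which a user-level lock call would then acquire and wait on, much as XactLockTableWait waits on a transaction id. The tag layout and the FNV-1a hash below are illustrative assumptions; no real lock-manager calls are shown.

#include <stdint.h>

/* Invented tag shape for a would-be "user string lock". */
typedef struct
{
	uint32_t	classid;	/* constant marking the pseudo lock table */
	uint32_t	objid;		/* hash of the caller's string */
} UserLockTag;

#define USER_STRING_LOCK_CLASS 1	/* arbitrary constant for the sketch */

/* FNV-1a, 32-bit: a small, well-known string hash. */
static uint32_t
hash_string(const char *s)
{
	uint32_t	h = 2166136261u;

	for (; *s != '\0'; s++)
	{
		h ^= (unsigned char) *s;
		h *= 16777619u;
	}
	return h;
}

static UserLockTag
make_user_lock_tag(const char *key)
{
	UserLockTag tag;

	tag.classid = USER_STRING_LOCK_CLASS;
	tag.objid = hash_string(key);	/* e.g. "order:1234" */
	return tag;
}

Two backends building the tag from the same string would then contend on the same lock, which is what gives the "lock on an arbitrary string" behaviour without a UNIQUE-constraint rollback.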
]
|
[
{
"msg_contents": "what happens?\n\nvalter.\n\n[i've done vacuum analyze while query are running... vacuum stopped at some \npoint, then i've decided to ctrl-c, then killed postmaster]\n\n-------------------------------------\npostgres@lora:~$ /usr/pg71/bin/pg_ctl -D /usr/pg71/data/ stop\n\nSmart Shutdown request at Fri Jan 12 05:46:11 2001\npostmaster successfully shut down.\n\npostgres@lora:~$ /usr/pg71/bin/pg_ctl -D /usr/pg71/data/ start\n\npg_ctl: It seems another postmaster is running. Trying to start postmaster \nanyway.\nLock file \"/usr/pg71/data//postmaster.pid\" already exists.\nIs another postmaster (pid 11320) running in \"/usr/pg71/data/\"?\npg_ctl: Cannot start postmaster. Is another postmaster is running?\n\npostgres@lora:~$ /usr/pg71/bin/pg_ctl -D /usr/pg71/data/ restart\n\nWaiting for postmaster to shut down.........................\nFATAL: s_lock(0x401f7010) at spin.c:147, stuck spinlock. Aborting.\n\nFATAL: s_lock(0x401f7010) at spin.c:147, stuck spinlock. Aborting.\nStartup failed - abort\ndone.\npostmaster successfully shut down.\npostmaster successfully started up\n\npostgres@lora:~$ /usr/pg71/bin/postmaster: invalid argument -- '-D'\n\nTry '/usr/pg71/bin/postmaster --help' for more information.\n\npostgres@lora:~$ /usr/pg71/bin/pg_ctl -D /usr/pg71/data/ start\npostmaster successfully started up\npostgres@lora:~$ DEBUG: starting up\n\nDEBUG: database system was interrupted being in recovery at 2001-01-12 \n05:45:33\n\tThis propably means that some data blocks are corrupted\n\tand you will have to use last backup for recovery.\nDEBUG: CheckPoint record at (0, 107606368)\nDEBUG: Redo record at (0, 107467584); Undo record at (0, 107484188); \nShutdown FALSE\nDEBUG: NextTransactionId: 46738; NextOid: 59680\nDEBUG: database system was not properly shut down; automatic recovery in \nprogress...\nDEBUG: redo starts at (0, 107467584)\nFATAL 2: out of free buffers: time to abort !\n\n\npostgres@lora:~$\npostgres@lora:~$\nFATAL: s_lock(0x401f7010) at spin.c:147, stuck spinlock. Aborting.\n\nFATAL: s_lock(0x401f7010) at spin.c:147, stuck spinlock. Aborting.\nStartup failed - abort\n\n\n\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n",
"msg_date": "Fri, 12 Jan 2001 04:54:55 +0100",
"msg_from": "\"Valter Mazzola\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pg7.1beta3: FATAL: s_lock(0x401f7010) at spin.c:147, stuck spinlock."
},
{
"msg_contents": "Some of the noise here is coming from the fact that you didn't wait for\nthe old postmaster to quit before you tried to start another. (\"pg_ctl\nstop\" doesn't wait unless you say -w ... there's been some talk of\nreversing that default ...)\n\nHowever, it still looks like you had other problems. What sort of\nplatform is this on? Do the regression tests pass for you?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2001 23:08:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pg7.1beta3: FATAL: s_lock(0x401f7010) at spin.c:147,\n\tstuck spinlock."
}
]
|
[
{
"msg_contents": "In Linux Weekly News, an Interview with Bruce (from Nov 30):\nhttp://lwn.net/2001/features/Momjian/\n\n:-)\n\nGo get'em, Bruce....\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 11 Jan 2001 23:10:21 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bruce Momjian's interview in LWN."
},
{
"msg_contents": "I announced this on Announce/General a few hours ago. \n\nI wanted to mention that all general PostgreSQL news goes to those two\nlists, on the assumption that all people are subscribed to either of\nthose two lists.\n\nI don't post to hackers by default because I don't want to duplicate\nthese postings.\n\n> In Linux Weekly News, an Interview with Bruce (from Nov 30):\n> http://lwn.net/2001/features/Momjian/\n> \n> :-)\n> \n> Go get'em, Bruce....\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Jan 2001 00:42:39 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bruce Momjian's interview in LWN."
},
{
"msg_contents": "Bruce Momjian wrote:\n> I announced this on Announce/General a few hours ago.\n \n> I wanted to mention that all general PostgreSQL news goes to those two\n> lists, on the assumption that all people are subscribed to either of\n> those two lists.\n \n> I don't post to hackers by default because I don't want to duplicate\n> these postings.\n\nSorry to duplicate, but I had not received the post to general or\nannounce (to both of which I am subscribed) before posting. But I will\nkeep in mind the new postings. I also put it on the 'In The News' page\non the website (thanks Vince).\n\nNow that I look through my inbox, I don't see the post anywhere. \nHmmm.... Not in trash either, which I didn't empty yesterday.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 12 Jan 2001 10:20:41 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bruce Momjian's interview in LWN."
},
{
"msg_contents": "Bruce Momjian wrote:\n> \nAnd here I was thinking it was \n\npost''-gre-see'-quel\n\n:-)\n",
"msg_date": "Fri, 12 Jan 2001 10:00:40 -0600",
"msg_from": "\"Keith G. Murphy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bruce Momjian's interview in LWN."
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > I announced this on Announce/General a few hours ago.\n> \n> > I wanted to mention that all general PostgreSQL news goes to those two\n> > lists, on the assumption that all people are subscribed to either of\n> > those two lists.\n> \n> > I don't post to hackers by default because I don't want to duplicate\n> > these postings.\n> \n> Sorry to duplicate, but I had not received the post to general or\n> announce (to both of which I am subscribed) before posting. But I will\n> keep in mind the new postings. I also put it on the 'In The News' page\n> on the website (thanks Vince).\n> \n> Now that I look through my inbox, I don't see the post anywhere. \n> Hmmm.... Not in trash either, which I didn't empty yesterday.\n\nThat is strange. I saw it on those lists.\n\nI want to say I didn't phrase the email correctly. Sorry.\n\nFirst, I wanted to say thanks for saying you liked it. Second, I \nwanted to make sure the people on hackers who want to see general\ninformation are subscribed to either the 'general' list or the\n'announce' list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Jan 2001 11:19:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bruce Momjian's interview in LWN."
},
{
"msg_contents": "On Fri, 12 Jan 2001, Keith G. Murphy wrote:\n\n> Bruce Momjian wrote:\n> >\n> And here I was thinking it was\n>\n> post''-gre-see'-quel\n>\n> :-)\n>\n\nIt's on the website in both WAV and MP3.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 12 Jan 2001 11:30:30 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bruce Momjian's interview in LWN."
},
{
"msg_contents": "On Fri, 12 Jan 2001, Bruce Momjian wrote:\n\n> > Bruce Momjian wrote:\n> > > I announced this on Announce/General a few hours ago.\n> >\n> > > I wanted to mention that all general PostgreSQL news goes to those two\n> > > lists, on the assumption that all people are subscribed to either of\n> > > those two lists.\n> >\n> > > I don't post to hackers by default because I don't want to duplicate\n> > > these postings.\n> >\n> > Sorry to duplicate, but I had not received the post to general or\n> > announce (to both of which I am subscribed) before posting. But I will\n> > keep in mind the new postings. I also put it on the 'In The News' page\n> > on the website (thanks Vince).\n> >\n> > Now that I look through my inbox, I don't see the post anywhere.\n> > Hmmm.... Not in trash either, which I didn't empty yesterday.\n>\n> That is strange. I saw it on those lists.\n\nSo did I. But lately I've noticed that mail sent to both hackers and\ngeneral or announcements and general, one will show up a long while\nafter the first. One took two days to show up. It was sent on the\n9th (and I saw it on hackers that day) then on the 11th it showed up\nas posted to general. I didn't bother looking at the headers tho to\nsee where it was hung.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 12 Jan 2001 11:54:42 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bruce Momjian's interview in LWN."
},
{
"msg_contents": "\n-announce is moderated, which means it has to be approved by myself ... I\n*try* to get in nightly and approve what is in the queue, targetting\n-announce items first, but sometimes time doesn't permit :(\n\n\nOn Fri, 12 Jan 2001, Vince Vielhaber wrote:\n\n> On Fri, 12 Jan 2001, Bruce Momjian wrote:\n>\n> > > Bruce Momjian wrote:\n> > > > I announced this on Announce/General a few hours ago.\n> > >\n> > > > I wanted to mention that all general PostgreSQL news goes to those two\n> > > > lists, on the assumption that all people are subscribed to either of\n> > > > those two lists.\n> > >\n> > > > I don't post to hackers by default because I don't want to duplicate\n> > > > these postings.\n> > >\n> > > Sorry to duplicate, but I had not received the post to general or\n> > > announce (to both of which I am subscribed) before posting. But I will\n> > > keep in mind the new postings. I also put it on the 'In The News' page\n> > > on the website (thanks Vince).\n> > >\n> > > Now that I look through my inbox, I don't see the post anywhere.\n> > > Hmmm.... Not in trash either, which I didn't empty yesterday.\n> >\n> > That is strange. I saw it on those lists.\n>\n> So did I. But lately I've noticed that mail sent to both hackers and\n> general or announcements and general, one will show up a long while\n> after the first. One took two days to show up. It was sent on the\n> 9th (and I saw it on hackers that day) then on the 11th it showed up\n> as posted to general. I didn't bother looking at the headers tho to\n> see where it was hung.\n>\n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n> 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Fri, 12 Jan 2001 13:11:27 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bruce Momjian's interview in LWN."
},
{
"msg_contents": "On Fri, Jan 12, 2001 at 10:00:40AM -0600, Keith G. Murphy wrote:\n> And here I was thinking it was \n> \n> post''-gre-see'-quel\n\nI pronounce it \"postgres\". (I suspect that everybody else does\ntoo, whenever possible.) In English there's no problem with the \nspelling differing from the pronunciation. Making the name hard \nto pronounce doesn't help anybody but Oracle.\n\nNathan Myers\[email protected]\n",
"msg_date": "Fri, 12 Jan 2001 11:45:17 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Bruce Momjian's interview in LWN."
},
{
"msg_contents": "Bruce Momjian wrote:\n>Lamar Owen wrote:\n> > Now that I look through my inbox, I don't see the post anywhere.\n> > Hmmm.... Not in trash either, which I didn't empty yesterday.\n \n> That is strange. I saw it on those lists.\n\nI've been seeing some odd e-mail propagation lately.....\n \n> I want to say I didn't phrase the email correctly. Sorry.\n\nOh, not a problem. You're famous for, er, non-verbosity.\n \n> First, I wanted to say thanks for saying you liked it. Second, I\n> wanted to make sure the people on hackers who want to see general\n> information are subscribed to either the 'general' list or the\n> 'announce' list.\n\nFirst, you're welcome. Second, understood and agreed.\n\n:-)\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 12 Jan 2001 15:02:55 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bruce Momjian's interview in LWN."
},
{
"msg_contents": "> Bruce Momjian wrote:\n> >Lamar Owen wrote:\n> > > Now that I look through my inbox, I don't see the post anywhere.\n> > > Hmmm.... Not in trash either, which I didn't empty yesterday.\n> \n> > That is strange. I saw it on those lists.\n> \n> I've been seeing some odd e-mail propagation lately.....\n> \n> > I want to say I didn't phrase the email correctly. Sorry.\n> \n> Oh, not a problem. You're famous for, er, non-verbosity.\n\nI am. Hmm...\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Jan 2001 15:05:13 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bruce Momjian's interview in LWN."
},
{
"msg_contents": "> > Oh, not a problem. You're famous for, er, non-verbosity.\n> I am. Hmm...\n\n*rofl* \n\nNo need to take that as a personal challenge to remove the \"non-\" from\nLamar's opinion... ;)\n\n - Thomas\n",
"msg_date": "Sat, 13 Jan 2001 16:02:05 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bruce Momjian's interview in LWN."
}
]
|
[
{
"msg_contents": "Architecture and regression.diffs:\nvalter.\n\n-------------\nvalter@lora:$ uname -a:\n\nLinux lorax 2.2.17 #3 Mon Oct 2 23:11:04 UTC 2000 i686 unknown\n-------------\n\nvalter@lora:$ less\n./src/test/regress/regression.diffs\n\n\n\n\n*** ./expected/random.out Thu Jan 6 06:40:54 2000\n--- ./results/random.out Fri Jan 12 06:18:18 2001\n***************\n*** 25,31 ****\n GROUP BY random HAVING count(random) > 1;\n random | count\n --------+-------\n! (0 rows)\n\n SELECT random FROM RANDOM_TBL\n WHERE random NOT BETWEEN 80 AND 120;\n--- 25,32 ----\n GROUP BY random HAVING count(random) > 1;\n random | count\n --------+-------\n! 103 | 2\n! (1 row)\n\n SELECT random FROM RANDOM_TBL\n WHERE random NOT BETWEEN 80 AND 120;\n\n======================================================================\n\n\n\n>From: Tom Lane <[email protected]>\n>To: \"Valter Mazzola\" <[email protected]>\n>CC: [email protected]\n>Subject: Re: [HACKERS] Pg7.1beta3: FATAL: s_lock(0x401f7010) at spin.c:147, \n>stuck spinlock.\n>Date: Thu, 11 Jan 2001 23:08:14 -0500\n>\n>Some of the noise here is coming from the fact that you didn't wait for\n>the old postmaster to quit before you tried to start another. (\"pg_ctl\n>stop\" doesn't wait unless you say -w ... there's been some talk of\n>reversing that default ...)\n>\n>However, it still looks like you had other problems. What sort of\n>platform is this on? Do the regression tests pass for you?\n>\n>\t\t\tregards, tom lane\n\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n",
"msg_date": "Fri, 12 Jan 2001 05:21:03 +0100",
"msg_from": "\"Valter Mazzola\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pg7.1beta3: FATAL: s_lock(0x401f7010) at spin.c:147,\n\tstuck spinlock."
}
]
|
[
{
"msg_contents": "With Apache Mod Perl, Apache::DBI, stress test with apache bench (ab -n \n100000 -c 4) in apache error_log i've got:\n\n[Pg7.1beta3 with standard conf files.]\n..........\n[Fri Jan 12 07:48:58 2001] [error] DBI->connect(dbname=mydb) failed: The \nData Base System is starting up\n............\n\nArchitecture:\nLinux 2.2.17 #3 Mon Oct 2 23:11:04 UTC 2000 i686 unknown\n\nAlso messages: \"DB in recovery ...\".\n\nWhat is the problem?\n\nIn pg7.0.2 it's all ok.\n\nvalter\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n",
"msg_date": "Fri, 12 Jan 2001 06:53:56 +0100",
"msg_from": "\"Valter Mazzola\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pg7.1beta3: connect failed: The DB System is starting up. "
},
{
"msg_contents": "> With Apache Mod Perl, Apache::DBI, stress test with apache bench (ab -n \n> 100000 -c 4) in apache error_log i've got:\n> \n> [Pg7.1beta3 with standard conf files.]\n\nAnd how many simult connections you did?\n\n> ..........\n> [Fri Jan 12 07:48:58 2001] [error] DBI->connect(dbname=mydb) failed: The \n> Data Base System is starting up\n> ............\n> \n> Also messages: \"DB in recovery ...\".\n\nLooks like server was crashed and now is in recovery.\n\nVadim\n\n\n",
"msg_date": "Fri, 12 Jan 2001 00:36:55 -0800",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pg7.1beta3: connect failed: The DB System is starting up. "
},
{
"msg_contents": "On Fri, 12 Jan 2001, Vadim Mikheev wrote:\n\n> > With Apache Mod Perl, Apache::DBI, stress test with apache bench (ab -n\n> > 100000 -c 4) in apache error_log i've got:\n> >\n> > [Pg7.1beta3 with standard conf files.]\n>\n> And how many simult connections you did?\n>\n> > ..........\n> > [Fri Jan 12 07:48:58 2001] [error] DBI->connect(dbname=mydb) failed: The\n> > Data Base System is starting up\n> > ............\n> >\n> > Also messages: \"DB in recovery ...\".\n>\n> Looks like server was crashed and now is in recovery.\n\nI'm confused about this \"recovery\" thing ... is it supposed to eventually\nstart working again? I've had it happen in the past, and the only way to\nget things started again is to kill off the postmaster and restart it :(\n\n\n",
"msg_date": "Fri, 12 Jan 2001 09:12:45 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pg7.1beta3: connect failed: The DB System is starting\n up."
}
]
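The "starting up" / "in recovery" refusals last only until redo finishes, so a client that hits them can retry instead of failing the request outright (this does nothing, of course, about whatever made the server crash in the first place). Below is a minimal libpq sketch of such a retry loop; the connection string, retry count, and delay are illustrative assumptions, not anything taken from the thread.

    #include <stdio.h>
    #include <unistd.h>
    #include "libpq-fe.h"

    /* Retry a connection while the server is still starting up or recovering. */
    static PGconn *
    connect_with_retry(const char *conninfo)
    {
        for (int attempt = 0; attempt < 30; attempt++)
        {
            PGconn *conn = PQconnectdb(conninfo);

            if (PQstatus(conn) == CONNECTION_OK)
                return conn;

            fprintf(stderr, "attempt %d: %s", attempt + 1, PQerrorMessage(conn));
            PQfinish(conn);
            sleep(2);           /* give recovery a chance to finish */
        }
        return NULL;            /* still refusing after ~a minute: give up */
    }

    int
    main(void)
    {
        PGconn *conn = connect_with_retry("dbname=mydb");

        if (conn == NULL)
        {
            fprintf(stderr, "could not connect\n");
            return 1;
        }
        PQfinish(conn);
        return 0;
    }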
|
[
{
"msg_contents": "\n> > > How about having some #if BROKEN_TIMEZONE_DATABASE code which uses both\n> > > mktime() and localtime() to derive the correct time zone? That is, call\n> > > mktime to get a time_t, then call localtime() to get the time zone info,\n> > > but only on platforms which do not get a complete result from just the\n> > > call to mktime(). In fact, we *could* check for tm->tm_isdst coming back\n> > > \"-1\" for every platform, then call localtime() to make a last stab at\n> > > getting a good value.\n> > How would we construct a valid time_t from the struct tm \n> without mktime?\n> \n> If I understand the info you have given previously, it should be\n> possible to get a valid tm->tm_isdst by the following \n> sequence of calls:\n> \n> // call mktime() which might return a \"-1\" for DST\n> time = mktime(tm);\n> // time is now a correct GMT time\n\nUnfortunately the returned time is -1 for all dates before 1970 \n(on AIX and, as I understood, IRIX) :-( \n(can you send someone from IBM, that I can shout at, to releive my anger)\n\nAndreas\n",
"msg_date": "Fri, 12 Jan 2001 09:40:20 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: AW: Re: tinterval - operator problems on AIX"
}
]
|
[
{
"msg_contents": "\n\n\n>From: \"Vadim Mikheev\" To: \"Valter Mazzola\" , Subject: Re: [HACKERS] \n>Pg7.1beta3: connect failed: The DB System is starting up. Date: Fri, 12 Jan \n>2001 00:36:55 -0800\n>\n> > With Apache Mod Perl, Apache::DBI, stress test with apache bench (ab -n \n> > 100000 -c 4) in apache error_log i've got: > > [Pg7.1beta3 with standard \n>conf files.]\n>\n>And how many simult connections you did?\n\nabout 10 connections\nvalter\n>\n> > .......... > [Fri Jan 12 07:48:58 2001] [error] \n>DBI->connect(dbname=mydb) failed: The > Data Base System is starting up > \n>............ > > Also messages: \"DB in recovery ...\".\n>\n>Looks like server was crashed and now is in recovery.\n>\n>Vadim\n>\n>\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n",
"msg_date": "Fri, 12 Jan 2001 09:48:14 +0100",
"msg_from": "\"Valter Mazzola\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pg7.1beta3: connect failed: The DB System is starting up."
}
]
|
[
{
"msg_contents": "\n> A disk-block CRC would detect partially written blocks (ie, power drops\n> after disk has written M of the N sectors in a block). The disk's own\n> checks will NOT consider this condition a failure.\n\nBut physical log recovery will rewrite every page that was changed\nafter last checkpoint, thus this is not an issue anymore.\n\n> I'm not convinced\n> that WAL will reliably detect it either (Vadim?). Certainly WAL will\n> not help for corruption caused by external agents, away from any updates\n> that are actually being performed/logged.\n\nThe external agent (if malvolent) could write a correct CRC anyway.\nIf on the other hand the agent writes complete garbage, vacuum will notice.\n\nAndreas\n",
"msg_date": "Fri, 12 Jan 2001 10:29:20 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: CRCs (was Re: [GENERAL] Re: Loading optimization) "
}
]
|
[
{
"msg_contents": "Two questions about pgaccess. I use tkl/Tk 8.2, Postgresql 7.1,\nFreeBSD 4.0 .\n\n\n1. I cannot view russian text in russian when I use pgaccess. I set all\nthe fonts in 'Preferences' to\n-cronyx-helvetica-*-*-*-*-17-*-*-*-*-*-koi8-* , but don't see russian\nletters in 'tables' and others windows. The texts are really in\nrussian, DBENCODING is KOI8.\n\n2. How can I manage with 'SQL window'? I can type text, but no\nmore. How can I send my queries to server?\n\nThanks to any help.\n\n\n-- \nAnatoly K. Lasareff Email: [email protected] \nhttp://tolikus.hq.aaanet.ru:8080 Phone: (8632)-710071\n",
"msg_date": "12 Jan 2001 15:40:34 +0300",
"msg_from": "[email protected] (Anatoly K. Lasareff)",
"msg_from_op": true,
"msg_subject": "pgaccess: russian fonts && SQL window???"
},
{
"msg_contents": "[email protected] (Anatoly K. Lasareff) writes:\n> Two questions about pgaccess. I use tkl/Tk 8.2, Postgresql 7.1,\n> FreeBSD 4.0 .\n\n> 1. I cannot view russian text in russian when I use pgaccess. I set all\n> the fonts in 'Preferences' to\n> -cronyx-helvetica-*-*-*-*-17-*-*-*-*-*-koi8-* , but don't see russian\n> letters in 'tables' and others windows. The texts are really in\n> russian, DBENCODING is KOI8.\n\nHm. We've had a couple of reports recently that suggest that there's\na character-set-translation problem between Postgres and recent Tcl/Tk\nreleases --- which seems to describe your problem as well. I have an\nunproven suspicion that this is related to Tcl's changeover to UTF-8\ninternal representation, because the reports have all come from people\nrunning Tcl 8.2 or later. But no one's done the work yet to understand\nthe problem in detail or propose a fix. Want to dig into it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 13 Jan 2001 01:37:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgaccess: russian fonts && SQL window??? "
}
]
|
[
{
"msg_contents": "\n> Pete Forman <[email protected]> writes:\n> > Thinking about that a bit more, I think that tm_isdst should not be\n> > written into.\n> \n> IIRC, setting isdst to -1 was necessary to get the right \n> behavior across\n> DST boundaries on more-mainstream systems. I do not think it's\n> acceptable to do worse on systems with good time libraries in order to\n> improve behavior on fundamentally broken ones.\n\nYes, the annoyance is, that localtime works for dates before 1970\nbut mktime doesn't. Best would probably be to assume no DST before\n1970 on AIX and IRIX.\n\nAndreas\n",
"msg_date": "Fri, 12 Jan 2001 16:51:51 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: Re: tinterval - operator problems on AIX "
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> Yes, the annoyance is, that localtime works for dates before 1970\n> but mktime doesn't. Best would probably be to assume no DST before\n> 1970 on AIX and IRIX.\n\nThat seems like a reasonable answer to me, especially since we have\nother platforms that behave that way. How can we do this --- just\ntest for isdst = -1 after the call, and assume that means failure?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 10:55:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Re: tinterval - operator problems on AIX "
}
]
|
[
{
"msg_contents": "\n> > Yes, the annoyance is, that localtime works for dates before 1970\n> > but mktime doesn't. Best would probably be to assume no DST before\n> > 1970 on AIX and IRIX.\n> \n> That seems like a reasonable answer to me, especially since we have\n> other platforms that behave that way. How can we do this --- just\n> test for isdst = -1 after the call, and assume that means failure?\n\nSince we have tests of the form (tm_isdst > 0) ? ... in other parts of the code\nthis would imho be sufficient.\nBTW, I would use the above check for DST in all parts of the code.\nCurrently we eighter have (tm_isdst ? ....) or the above form.\n\nAndreas \n",
"msg_date": "Fri, 12 Jan 2001 17:07:26 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: Re: tinterval - operator problems on AIX "
}
]
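A minimal C sketch, using only the standard C library, of the check being settled on here: let mktime() try to resolve DST, and if the platform gives up (pre-1970 dates on AIX and IRIX return -1 and leave tm_isdst unresolved), fall back to assuming standard time. This illustrates the idea only; it is not the actual PostgreSQL datetime code.

    #include <time.h>

    /*
     * Return 1 if the given broken-down local time falls in DST, 0 otherwise.
     * When mktime() cannot answer (as on AIX/IRIX for pre-1970 dates), assume
     * no DST rather than guessing.
     */
    static int
    tm_dst_or_zero(struct tm *tm)
    {
        tm->tm_isdst = -1;                  /* ask the library to work out DST */
        if (mktime(tm) == (time_t) -1 || tm->tm_isdst < 0)
            return 0;                       /* library gave up: assume no DST */
        return (tm->tm_isdst > 0) ? 1 : 0;  /* the (tm_isdst > 0) form discussed above */
    }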
|
[
{
"msg_contents": "First I tried to dump out a database like:\n\nfrank@limedes:~ > pg_dump mpi > dump.mpi\ngetTables(): relation 'institute': 6 Triggers were expected, but got 0\n\nThe database mpi does contain a table 'institute' and a few foreign key constraints. Then\nI tried to dump another database, as in:\n\npostgres@limedes:~ > pg_dump intranet > dumptest\ngetTables(): relation 'institute': 6 Triggers were expected, but got 0\n\nThe database intranet has _no_ table 'institute' and no foreign key constraints.\n\nThen I had a look via psql at intranet and it turns out that it shows up as the database\nmpi mangled into the database intranet, contentwise; i.e. it doesn't only show the tables\nthat are in intranet but also those that belong to mpi?! Then I look at _any_ of the\ndatabases in this Postgres installation, they show up as mangled together with mpi?! When\nI try to vacuum any of those databases, I always get:\n\n[ . . . stuff that looks normal . . . ]\n Index pg_class_oid_index: Pages 2; Tuples 138: Deleted 45. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_class_oid_index: NUMBER OF INDEX' TUPLES (138) IS NOT THE SAME AS HEAP'\n(205).\n Recreate the index.\nNOTICE: Index pg_class_relname_index: Pages 4; Tuples 138: Deleted 44. CPU 0.00s/0.00u\nsec.\nNOTICE: Index pg_class_relname_index: NUMBER OF INDEX' TUPLES (138) IS NOT THE SAME AS\nHEAP' (205).\n Recreate the index.\nERROR: Cannot insert a duplicate key into unique index pg_class_relname_index\n\nHowever, if I use another client, i.e. not psql, but a web app, then I do still have\naccess to the contents of, for instance, the intranet database.\n\nRestarting the server didn't make a difference.\n\nDoes this make any sense to anyone?\n\nRegards, Frank\n",
"msg_date": "Fri, 12 Jan 2001 18:05:09 +0100",
"msg_from": "Frank Joerdens <[email protected]>",
"msg_from_op": true,
"msg_subject": "Beta2 Vacuum and pg_dump failures and mangled databases"
},
{
"msg_contents": "Frank Joerdens wrote:\n[ . . . ]\n> Restarting the server didn't make a difference.\n\nI upgraded to beta3 just now and the problem persists. I didn't do an initdb obviously cuz\nI cannot save the data via pg_dump. Beta3 will read beta2 data OK (I guess this means that\nan initdb is not required when going from beta2 to beta3?!) but I can't vacuum or dump on\nany database.\n\n- Frank\n",
"msg_date": "Fri, 12 Jan 2001 20:13:16 +0100",
"msg_from": "Frank Joerdens <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Beta2 Vacuum and pg_dump failures and mangled databases"
},
{
"msg_contents": "On Fri, Jan 12, 2001 at 06:05:09PM +0100, Frank Joerdens wrote:\n[ . . . ]\n> Does this make any sense to anyone?\n\nAre questions related to 7.1 beta versions best directed to hackers or to\ngeneral?\n\n- Frank\n",
"msg_date": "Fri, 12 Jan 2001 20:28:00 +0100",
"msg_from": "Frank Joerdens <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Beta2 Vacuum and pg_dump failures and mangled databases"
},
{
"msg_contents": "Frank Joerdens <[email protected]> writes:\n> Then I had a look via psql at intranet and it turns out that it shows\n> up as the database mpi mangled into the database intranet,\n> contentwise; i.e. it doesn't only show the tables that are in intranet\n> but also those that belong to mpi?\n\nI think you've been bit by the RelFileNodeEquals bug we found on Monday.\nOne of the known possible effects of that bug is that a VACUUM can write\nblocks of system-catalog tables into the same catalog of a different\ndatabase.\n\nYou should probably write off your databases as toast ... update to\nbeta3 and do an initdb. Sorry about that ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 14:35:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 Vacuum and pg_dump failures and mangled databases "
},
{
"msg_contents": "Frank Joerdens <[email protected]> writes:\n> Are questions related to 7.1 beta versions best directed to hackers or to\n> general?\n\nhackers is the proper place for discussing any unreleased version, I'd\nsay. Or you can file a bug report on pgsql-bugs, if that seems more\nappropriate.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 14:37:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 Vacuum and pg_dump failures and mangled databases "
}
]
|
[
{
"msg_contents": "> Postgresql subtracts one minute from any times I enter into a database:\n> mydb=# create table test (timeval time);\n> mydb=# insert into test values ('08:30');\n> mydb=# select * from test;\n> ----------\n> 08:29:00\n...\n> In a later message he says he's running 7.0.2 on \"Trustix Secure Linux\n> 1.2 (RedHat based)\", whatever that is.\n> Thomas, did you see this thread on pg-novices? You ever seen behavior\n> like this? I'm baffled.\n\nNot sure about the distro, but it is hard to imagine that they got it\nmore wrong wrt compiler options than has Mandrake (the other platform\nwith rounding trouble in their default packages).\n\nBarry, can you give more details? How have you build PostgreSQL? We'll\nneed some help tracking this down, but afaik it will point back to a\nbuild problem on your platform.\n\n - Thomas\n",
"msg_date": "Fri, 12 Jan 2001 17:13:57 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FWD: bizarre behavior of 'time' data entry"
},
{
"msg_contents": "Check out Trustix at www.trustix.net. I'm using the RPM that was installed\nwith the distro.\n\nps shows postgresql running:\n\n/usr/bin/pg_ctl -D /var/lib/pgsql/data -p /usr/bin/postmaster start\n/usr/bin/postmaster -i\n\nI can poke a hole in my firewall and let you connect to the database if you\nwould like troubleshoot my sytem. But I'll need some help setting up\npermissions to allow external connections. Let me know and I'll send my IP\naddress to your private email.\n\nAlso, let me know what particular build problems might cause this, and I'll\npost to Trustix mail list.\n\nThanks for you help,\n\nBarry\n\nOn 2001.01.12 12:13:57 -0500 Thomas Lockhart wrote:\n> > Postgresql subtracts one minute from any times I enter into a database:\n> > mydb=# create table test (timeval time);\n> > mydb=# insert into test values ('08:30');\n> > mydb=# select * from test;\n> > ----------\n> > 08:29:00\n> ...\n> > In a later message he says he's running 7.0.2 on \"Trustix Secure Linux\n> > 1.2 (RedHat based)\", whatever that is.\n> > Thomas, did you see this thread on pg-novices? You ever seen behavior\n> > like this? I'm baffled.\n> \n> Not sure about the distro, but it is hard to imagine that they got it\n> more wrong wrt compiler options than has Mandrake (the other platform\n> with rounding trouble in their default packages).\n> \n> Barry, can you give more details? How have you build PostgreSQL? We'll\n> need some help tracking this down, but afaik it will point back to a\n> build problem on your platform.\n> \n> - Thomas\n> \n\n",
"msg_date": "Fri, 12 Jan 2001 12:57:46 -0500",
"msg_from": "Barry Stewart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FWD: bizarre behavior of 'time' data entry"
},
{
"msg_contents": "> ps shows postgresql running:\n> /usr/bin/pg_ctl -D /var/lib/pgsql/data -p /usr/bin/postmaster start\n> /usr/bin/postmaster -i\n> I can poke a hole in my firewall and let you connect to the database if you\n> would like troubleshoot my sytem. But I'll need some help setting up\n> permissions to allow external connections. Let me know and I'll send my IP\n> address to your private email.\n\nI won't have time to do this in the next few weeks, since I'll be\ntraveling most of the time. You might find another volunteer from this\nlist, but if not I would suggest the following steps:\n\n1) send us your default rpm compiler and build options. Use\n\n rpm --showrc > tempfiletomail.txt\n gcc --version >>&! tempfiletomail.txt\n\n2) try building postgresql from sources. See if the problem persists (it\nwon't).\n\n3) try building the postgresql rpm from sources. The steps are\n\n a) rpm -ivv postgresql-xxx.src.rpm\n b) cd /usr/src/RPM/SPEC (for Mandrake, RedHat uses /usr/src/RedHat...)\n c) rpm -ba postgresql.spec (verify the spec file name)\n d) cd /usr/src/RPM/RPMS/ixxx\n e) rpm -Uvh --force postgresql*.ixxx.rpm\n\nI'm doing these steps from memory, and you had better save your database\ncontents somewhere just in case it gets trashed ;)\n\n> Also, let me know what particular build problems might cause this, and I'll\n> post to Trustix mail list.\n\nNot sure, since I've never seen this before for this data type :(\n\n - Thomas\n",
"msg_date": "Sat, 13 Jan 2001 15:34:55 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FWD: bizarre behavior of 'time' data entry"
},
{
"msg_contents": "See attached tmpfile.txt. I'm going to try rebuilding the 7.0.2 SRPM\nfirst. If that doesn't work, I'll try building the 7.0.3 tar source file. \nWish me luck!\n\nBarry\n\n\nOn 2001.01.13 10:34:55 -0500 Thomas Lockhart wrote:\n> > ps shows postgresql running:\n> > /usr/bin/pg_ctl -D /var/lib/pgsql/data -p /usr/bin/postmaster start\n> > /usr/bin/postmaster -i\n> > I can poke a hole in my firewall and let you connect to the database if\n> you\n> > would like troubleshoot my sytem. But I'll need some help setting up\n> > permissions to allow external connections. Let me know and I'll send\n> my IP\n> > address to your private email.\n> \n> I won't have time to do this in the next few weeks, since I'll be\n> traveling most of the time. You might find another volunteer from this\n> list, but if not I would suggest the following steps:\n> \n> 1) send us your default rpm compiler and build options. Use\n> \n> rpm --showrc > tempfiletomail.txt\n> gcc --version >>&! tempfiletomail.txt\n> \n> 2) try building postgresql from sources. See if the problem persists (it\n> won't).\n> \n> 3) try building the postgresql rpm from sources. The steps are\n> \n> a) rpm -ivv postgresql-xxx.src.rpm\n> b) cd /usr/src/RPM/SPEC (for Mandrake, RedHat uses /usr/src/RedHat...)\n> c) rpm -ba postgresql.spec (verify the spec file name)\n> d) cd /usr/src/RPM/RPMS/ixxx\n> e) rpm -Uvh --force postgresql*.ixxx.rpm\n> \n> I'm doing these steps from memory, and you had better save your database\n> contents somewhere just in case it gets trashed ;)\n> \n> > Also, let me know what particular build problems might cause this, and\n> I'll\n> > post to Trustix mail list.\n> \n> Not sure, since I've never seen this before for this data type :(\n> \n> - Thomas\n>",
"msg_date": "Sat, 13 Jan 2001 12:08:05 -0500",
"msg_from": "Barry Stewart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FWD: bizarre behavior of 'time' data entry"
},
{
"msg_contents": "> See attached tmpfile.txt...\n\nThis distro uses the same or similar compiler flags as does Mandrake,\n*and it is the wrong thing to do*!!!! The gcc folks recommend against\never using \"-O3\" with \"-fast-math\", but both of these distros do it\nanyway.\n\nAnd you see the results :(\n\nPick up the .rpmrc I've posted at ftp.postgresql.org for Mandrake (look\nsomewhere under /pub/binaries/...) and put it in your root account's\nhome directory. Then try rebuilding from your .src.rpm and see what\nhappens...\n\n - Thomas\n",
"msg_date": "Sat, 13 Jan 2001 17:17:31 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FWD: bizarre behavior of 'time' data entry"
},
{
"msg_contents": "FIXED! I had already started to rebuild the SRPM when your latest email\narrived. After installing the resulting RPMS, the problem still existed. \nI downloaded the Mandrake .rpmrc file, rebuilt, and the problem is gone\n(yea!). Interestingly enough, the existing time entries in my test\ndatabase magically added back the missing minute. I guess the values were\nstored correctly in the database, but were mangled in the display somehow. \nOh well, that's academic; I'm off to be a productive PostgreSQL user...\n\nThanks for lending me your valuable time.\n\nBarry\n\nTODO: Advise Trustix developers of conflicting gcc flags.\n\n\nOn 2001.01.13 12:17:31 -0500 Thomas Lockhart wrote:\n> > See attached tmpfile.txt...\n> \n> This distro uses the same or similar compiler flags as does Mandrake,\n> *and it is the wrong thing to do*!!!! The gcc folks recommend against\n> ever using \"-O3\" with \"-fast-math\", but both of these distros do it\n> anyway.\n> \n> And you see the results :(\n> \n> Pick up the .rpmrc I've posted at ftp.postgresql.org for Mandrake (look\n> somewhere under /pub/binaries/...) and put it in your root account's\n> home directory. Then try rebuilding from your .src.rpm and see what\n> happens...\n> \n> - Thomas\n> \n\n",
"msg_date": "Sat, 13 Jan 2001 13:26:02 -0500",
"msg_from": "Barry Stewart <[email protected]>",
"msg_from_op": false,
"msg_subject": "SUCCESS!!: bizarre behavior of 'time' data entry"
}
]
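The symptom reported above was a minute lost on output, and the fix turned out to be the build flags rather than the PostgreSQL source; still, it is easy to show why time-of-day arithmetic on doubles is sensitive to the rounding liberties that -O3 with -ffast-math takes. The sketch below is not the PostgreSQL output code; the slightly perturbed input stands in for the kind of value a miscompiled math library can hand back for '08:30'.

    #include <stdio.h>
    #include <math.h>

    static void
    print_time(double t)
    {
        int hour, min, sec;

        /* naive truncation: loses a unit whenever t is a hair below an integer */
        hour = (int) (t / 3600.0);
        min  = (int) ((t - hour * 3600.0) / 60.0);
        sec  = (int) (t - hour * 3600.0 - min * 60.0);
        printf("truncated: %02d:%02d:%02d\n", hour, min, sec);

        /* rounding to the nearest second first avoids the off-by-one */
        t = rint(t);
        hour = (int) (t / 3600.0);
        min  = (int) ((t - hour * 3600.0) / 60.0);
        sec  = (int) (t - hour * 3600.0 - min * 60.0);
        printf("rounded:   %02d:%02d:%02d\n", hour, min, sec);
    }

    int
    main(void)
    {
        print_time(30600.0);            /* exact value for '08:30': fine either way */
        print_time(30600.0 - 1e-9);     /* perturbed value: 08:29:59 vs 08:30:00 */
        return 0;
    }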
|
[
{
"msg_contents": "bpalmer writes:\n\n> This traffic does not seem necessary for the list, but here are my\n> thoughts.\n\nI think it is.\n\n> I don't begin to disagree with this for a second. I know that there are a\n> lot of RPM users out there that would like the RPM. I'm aware that there\n> would be a lesser demand for the OBSD packages, but it's still worth\n> putting up there.\n\nDefinitely.\n\n> I have talked to the maintainer and am working with him on this. With\n> luck, if I/we keep up on the betas, when 7.1 comes out for real, we\n> will be able to make the changed then too.\n\nThat's even better. Maintaining separate tracks of packages would be a\nsource of confusion at best.\n\n> What I am gathering from all this conversation is that there is no\n> repository for packages. I would love to test the FBSD package too, but\n> I don't know where it is, nor if it's being worked on. If it's not, I\n> may be interested in working on that too!\n\nWell, in the light of the openpackages.org effort it seems you have just\nsigned yourself up to create a BSD-independent package. ;-) Asking the\nrelevant maintainer might be a first step, though.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 12 Jan 2001 20:02:02 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: Beta2 ... ?"
},
{
"msg_contents": "> > What I am gathering from all this conversation is that there is no\n> > repository for packages.\n\nWhoops. There is a repository for packages on ftp.postgresql.org, and\nyou are welcome to contribute packages to there. As Peter points out, we\nprobably aren't helping folks if we have some independent track of\npackage development, so we would do better to also coordinate with the\ndistro package maintainers at the same time. And we would all really\nprefer if the packages posted on ftp.postgresql.org are traceable to the\n\"official\" builds of packages elsewhere.\n\nFor most folks running a particular OS and distro, there are certain\nplaces they would look for packages, and it would be great if those\nusual places have the benefit of your contributions too.\n\nFor cases where more coordination is required, such as with the RPM\npackaging used for a bunch of distros, having them posted on\nftp.postgresql.org has helped us keep the RPM package itself consistant\nwith the various packagers. Not sure if you will find the same\ncoordination problem with your platform.\n\n> Well, in the light of the openpackages.org effort it seems you have just\n> signed yourself up to create a BSD-independent package. ;-) Asking the\n> relevant maintainer might be a first step, though.\n\n:)\n\n - Thomas\n",
"msg_date": "Sat, 13 Jan 2001 15:45:33 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Beta2 ... ?"
}
]
|
[
{
"msg_contents": "> [ . . . ]\n> > Restarting the server didn't make a difference.\n> \n> I upgraded to beta3 just now and the problem persists. I \n> didn't do an initdb obviously cuz\n> I cannot save the data via pg_dump. Beta3 will read beta2 \n> data OK (I guess this means that\n> an initdb is not required when going from beta2 to beta3?!) \n> but I can't vacuum or dump on\n> any database.\n\nSo, server doesn't restart?\nCould add\n\nwal_debug = 1\n\nto postgresql.conf, start postmaster and send me stderr output?\n\nVadim\n\n",
"msg_date": "Fri, 12 Jan 2001 11:19:44 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Beta2 Vacuum and pg_dump failures and mangled datab\n\tases"
},
{
"msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > [ . . . ]\n> > > Restarting the server didn't make a difference.\n> >\n> > I upgraded to beta3 just now and the problem persists. I\n> > didn't do an initdb obviously cuz\n> > I cannot save the data via pg_dump. Beta3 will read beta2\n> > data OK (I guess this means that\n> > an initdb is not required when going from beta2 to beta3?!)\n> > but I can't vacuum or dump on\n> > any database.\n> \n> So, server doesn't restart?\n\nYes, it does restart, that is not the problem (did I explain that properly?).\n\n> Could add\n> \n> wal_debug = 1\n> \n> to postgresql.conf, start postmaster and send me stderr output?\n\nI did add wal_debug = 1 to postgresql.conf. Starting up is OK, when I then try a vacuum\nverbose on a database, it goes:\n\n-------------------------------------- start log --------------------------------------\nDEBUG: database system is shut down\nDEBUG: starting up\nDEBUG: database system was shut down at 2001-01-12 20:11:37\nDEBUG: CheckPoint record at (0, 11629776)\nDEBUG: Redo record at (0, 11629776); Undo record at (0, 0); Shutdown TRUE\nDEBUG: NextTransactionId: 8284; NextOid: 98635\nDEBUG: database system is in production state\nNOTICE: --Relation pg_type--\nNOTICE: Pages 2: Changed 0, reaped 1, Empty 0, New 0; Tup 131: Vac 0, Keep/VTL 0/0, Crash\n0, UnUsed 2, MinLen 106, MaxLen 109; Re-using: Free/Avail. Space 1428/0; EndEmpty/Avail.\nPages 0/0. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_type_oid_index: Pages 2; Tuples 131: Deleted 0. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_type_typname_index: Pages 2; Tuples 131: Deleted 0. CPU 0.00s/0.00u sec.\nINSERT @ 0/11629840: prev 0/11629776; xprev 0/0; xid 8291; bkpb 1: Heap - clean: node\n95464/1247; blk 1\nXLogFlush: rqst 0/11638108; wrt 0/0; flsh 0/0\nINSERT @ 0/11638108: prev 0/11629840; xprev 0/11629840; xid 8291: Transaction - commit:\n2001-01-12 20:12:51\nXLogFlush: rqst 0/11638144; wrt 0/11638108; flsh 0/11638108\nNOTICE: --Relation pg_attribute--\nNOTICE: Pages 9: Changed 0, reaped 1, Empty 0, New 0; Tup 649: Vac 0, Keep/VTL 0/0, Crash\n0, UnUsed 18, MinLen 98, MaxLen 98; Re-using: Free/Avail. Space 5500/0; EndEmpty/Avail.\nPages 0/0. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_attribute_relid_attnam_index: Pages 10; Tuples 649: Deleted 0. CPU\n0.01s/0.00u sec.\nNOTICE: Index pg_attribute_relid_attnum_index: Pages 5; Tuples 649: Deleted 0. CPU\n0.00s/0.00u sec.\nINSERT @ 0/11638144: prev 0/11638108; xprev 0/0; xid 8292; bkpb 1: Heap - clean: node\n95464/1249; blk 8\nXLogFlush: rqst 0/11646412; wrt 0/11638144; flsh 0/11638144\nINSERT @ 0/11646412: prev 0/11638144; xprev 0/11638144; xid 8292: Transaction - commit:\n2001-01-12 20:12:52\nXLogFlush: rqst 0/11646448; wrt 0/11646412; flsh 0/11646412\nNOTICE: --Relation pg_class--\nNOTICE: Pages 7: Changed 0, reaped 6, Empty 2, New 0; Tup 155: Vac 29, Keep/VTL 0/0,\nCrash 0, UnUsed 81, MinLen 115, MaxLen 160; Re-using: Free/Avail. Space 37060/30444;\nEndEmpty/Avail. Pages 0/5. 
CPU 0.00s/0.00u sec.\nINSERT @ 0/11646448: prev 0/11646412; xprev 0/0; xid 8293; bkpb 1: Btree - delete: node\n95464/17118; tid 1/81\nINSERT @ 0/11654720: prev 0/11646448; xprev 0/11646448; xid 8293: Btree - delete: node\n95464/17118; tid 1/81\nINSERT @ 0/11654768: prev 0/11654720; xprev 0/11654720; xid 8293: Btree - delete: node\n95464/17118; tid 1/81\nINSERT @ 0/11654816: prev 0/11654768; xprev 0/11654768; xid 8293: Btree - delete: node\n95464/17118; tid 1/82\nINSERT @ 0/11654864: prev 0/11654816; xprev 0/11654816; xid 8293: Btree - delete: node\n95464/17118; tid 1/83\nINSERT @ 0/11654912: prev 0/11654864; xprev 0/11654864; xid 8293: Btree - delete: node\n95464/17118; tid 1/84\nINSERT @ 0/11654960: prev 0/11654912; xprev 0/11654912; xid 8293: Btree - delete: node\n95464/17118; tid 1/85\nINSERT @ 0/11655008: prev 0/11654960; xprev 0/11654960; xid 8293: Btree - delete: node\n95464/17118; tid 1/86\nINSERT @ 0/11655056: prev 0/11655008; xprev 0/11655008; xid 8293: Btree - delete: node\n95464/17118; tid 1/87\nINSERT @ 0/11655104: prev 0/11655056; xprev 0/11655056; xid 8293: Btree - delete: node\n95464/17118; tid 1/88\nNOTICE: Index pg_class_oid_index: Pages 2; Tuples 88: Deleted 10. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_class_oid_index: NUMBER OF INDEX' TUPLES (88) IS NOT THE SAME AS HEAP'\n(155).\n\tRecreate the index.\nINSERT @ 0/11655152: prev 0/11655104; xprev 0/11655104; xid 8293; bkpb 1: Btree - delete:\nnode 95464/17121; tid 1/1\nINSERT @ 0/11663424: prev 0/11655152; xprev 0/11655152; xid 8293: Btree - delete: node\n95464/17121; tid 1/1\nINSERT @ 0/11663472: prev 0/11663424; xprev 0/11663424; xid 8293: Btree - delete: node\n95464/17121; tid 1/1\nINSERT @ 0/11663520: prev 0/11663472; xprev 0/11663472; xid 8293: Btree - delete: node\n95464/17121; tid 1/71\nINSERT @ 0/11663568: prev 0/11663520; xprev 0/11663520; xid 8293: Btree - delete: node\n95464/17121; tid 1/72\nINSERT @ 0/11663616: prev 0/11663568; xprev 0/11663568; xid 8293: Btree - delete: node\n95464/17121; tid 1/73\nINSERT @ 0/11663664: prev 0/11663616; xprev 0/11663616; xid 8293: Btree - delete: node\n95464/17121; tid 1/74\nINSERT @ 0/11663712: prev 0/11663664; xprev 0/11663664; xid 8293: Btree - delete: node\n95464/17121; tid 1/86\nINSERT @ 0/11663760: prev 0/11663712; xprev 0/11663712; xid 8293: Btree - delete: node\n95464/17121; tid 1/87\nINSERT @ 0/11663808: prev 0/11663760; xprev 0/11663760; xid 8293: Btree - delete: node\n95464/17121; tid 1/88\nNOTICE: Index pg_class_relname_index: Pages 2; Tuples 88: Deleted 10. 
CPU 0.00s/0.00u\nsec.\nNOTICE: Index pg_class_relname_index: NUMBER OF INDEX' TUPLES (88) IS NOT THE SAME AS\nHEAP' (155).\n\tRecreate the index.\nINSERT @ 0/11663856: prev 0/11663808; xprev 0/11663808; xid 8293; bkpb 1: Heap - clean:\nnode 95464/1259; blk 6\nINSERT @ 0/11672124: prev 0/11663856; xprev 0/11663856; xid 8293; bkpb 1: Heap - clean:\nnode 95464/1259; blk 1\nINSERT @ 0/11680392: prev 0/11672124; xprev 0/11672124; xid 8293: Heap - move: node\n95464/1259; tid 6/3; new 1/1\nINSERT @ 0/11680592: prev 0/11680392; xprev 0/11680392; xid 8293: Btree - insert: node\n95464/17118; tid 1/81\nINSERT @ 0/11680652: prev 0/11680592; xprev 0/11680592; xid 8293: Btree - insert: node\n95464/17121; tid 1/1\nINSERT @ 0/11680740: prev 0/11680652; xprev 0/11680652; xid 8293: Heap - move: node\n95464/1259; tid 6/5; new 1/2\nINSERT @ 0/11680940: prev 0/11680740; xprev 0/11680740; xid 8293: Btree - insert: node\n95464/17118; tid 1/81\nINSERT @ 0/11681000: prev 0/11680940; xprev 0/11680940; xid 8293: Btree - insert: node\n95464/17121; tid 1/1\nINSERT @ 0/11681088: prev 0/11681000; xprev 0/11681000; xid 8293: Heap - move: node\n95464/1259; tid 6/11; new 1/3\nINSERT @ 0/11681288: prev 0/11681088; xprev 0/11681088; xid 8293: Btree - insert: node\n95464/17118; tid 1/83\nINSERT @ 0/11681348: prev 0/11681288; xprev 0/11681288; xid 8293: Btree - insert: node\n95464/17121; tid 1/88\nINSERT @ 0/11681436: prev 0/11681348; xprev 0/11681348; xid 8293: Heap - move: node\n95464/1259; tid 6/17; new 1/4\nINSERT @ 0/11681596: prev 0/11681436; xprev 0/11681436; xid 8293: Btree - insert: node\n95464/17118; tid 1/87\nINSERT @ 0/11681656: prev 0/11681596; xprev 0/11681596; xid 8293: Btree - insert: node\n95464/17121; tid 1/74\nINSERT @ 0/11681744: prev 0/11681656; xprev 0/11681656; xid 8293: Heap - move: node\n95464/1259; tid 6/18; new 1/5\nINSERT @ 0/11681916: prev 0/11681744; xprev 0/11681744; xid 8293: Btree - insert: node\n95464/17118; tid 1/86\nINSERT @ 0/11681976: prev 0/11681916; xprev 0/11681916; xid 8293: Btree - insert: node\n95464/17121; tid 1/73\nINSERT @ 0/11682064: prev 0/11681976; xprev 0/11681976; xid 8293: Heap - move: node\n95464/1259; tid 6/28; new 1/6\nINSERT @ 0/11682224: prev 0/11682064; xprev 0/11682064; xid 8293: Btree - insert: node\n95464/17118; tid 1/92\nINSERT @ 0/11682284: prev 0/11682224; xprev 0/11682224; xid 8293: Btree - insert: node\n95464/17121; tid 1/78\nINSERT @ 0/11682372: prev 0/11682284; xprev 0/11682284; xid 8293: Heap - move: node\n95464/1259; tid 6/29; new 1/7\nINSERT @ 0/11682532: prev 0/11682372; xprev 0/11682372; xid 8293: Btree - insert: node\n95464/17118; tid 1/91\nINSERT @ 0/11682592: prev 0/11682532; xprev 0/11682532; xid 8293: Btree - insert: node\n95464/17121; tid 1/77\nINSERT @ 0/11682680: prev 0/11682592; xprev 0/11682592; xid 8293: Heap - move: node\n95464/1259; tid 6/33; new 1/8\nINSERT @ 0/11682880: prev 0/11682680; xprev 0/11682680; xid 8293: Btree - insert: node\n95464/17118; tid 1/90\nINSERT @ 0/11682940: prev 0/11682880; xprev 0/11682880; xid 8293: Btree - insert: node\n95464/17121; tid 1/3\nINSERT @ 0/11683028: prev 0/11682940; xprev 0/11682940; xid 8293: Heap - move: node\n95464/1259; tid 6/36; new 1/9\nINSERT @ 0/11683188: prev 0/11683028; xprev 0/11683028; xid 8293: Btree - insert: node\n95464/17118; tid 1/96\nINSERT @ 0/11683248: prev 0/11683188; xprev 0/11683188; xid 8293: Btree - insert: node\n95464/17121; tid 1/96\nINSERT @ 0/11683336: prev 0/11683248; xprev 0/11683248; xid 8293: Heap - move: node\n95464/1259; tid 6/37; new 1/10\nINSERT @ 0/11683536: prev 
0/11683336; xprev 0/11683336; xid 8293: Btree - insert: node\n95464/17118; tid 1/85\nINSERT @ 0/11683596: prev 0/11683536; xprev 0/11683536; xid 8293: Btree - insert: node\n95464/17121; tid 1/95\nINSERT @ 0/11683684: prev 0/11683596; xprev 0/11683596; xid 8293; bkpb 1: Heap - clean:\nnode 95464/1259; blk 5\nINSERT @ 0/11691952: prev 0/11683684; xprev 0/11683684; xid 8293: Heap - move: node\n95464/1259; tid 5/1; new 1/11\nERROR: Cannot insert a duplicate key into unique index pg_class_oid_index\n-------------------------------------- start log --------------------------------------\n\nWhich makes me pause . . . are OIDs unique per database or per PostgreSQL installation? I\nthink per database. Therefore if databases are mangled togeher, then things would be\npretty messed up, oid-wise. Maybe I did something really stupid on importing mpi into\nthis new installation, but I don't think so. Basically what I did was \n\n>createdb mpi \n\nand then \n\n>psql -e mpi < whatevernameIassignedtothefilewhichIdumpedtheorignaldatabaseinto\n\nI managed to rescue my data via COPY but if this is a 7.1-related error and not\nFrank-confusedness, then it looks like an evil issue indeed.\n\nRegards, Frank\n",
"msg_date": "Fri, 12 Jan 2001 20:47:55 +0100",
"msg_from": "Frank Joerdens <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 Vacuum and pg_dump failures and mangled databases"
},
{
"msg_contents": "Frank Joerdens <[email protected]> writes:\n> I managed to rescue my data via COPY\n\nOh, good.\n\n> but if this is a 7.1-related error and not\n> Frank-confusedness, then it looks like an evil issue indeed.\n\nEvil it was. The haste with which beta3 appeared should've tipped you\noff that beta2 was badly broken :-(. What's puzzling us, though, is\nthat this bug was in the WAL code from day one, and no one noticed it\ntill this week. Seems like someone should have reported trouble with\nbeta1, if not before.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 15:19:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beta2 Vacuum and pg_dump failures and mangled databases "
}
]
|
[
{
"msg_contents": "> You should probably write off your databases as toast ... update to\n> beta3 and do an initdb. Sorry about that ...\n\nAnd try to reproduce bug.\nSorry.\n\nVadim\n",
"msg_date": "Fri, 12 Jan 2001 11:42:46 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Beta2 Vacuum and pg_dump failures and mangled datab\n\tases"
}
]
|
[
{
"msg_contents": "> ERROR: Cannot insert a duplicate key into unique index \n> pg_class_oid_index\n> -------------------------------------- start log \n> --------------------------------------\n> \n> Which makes me pause . . . are OIDs unique per database or \n> per PostgreSQL installation? I think per database. Therefore\n\nper installation\n\nVadim\n",
"msg_date": "Fri, 12 Jan 2001 11:52:36 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Beta2 Vacuum and pg_dump failures and mangled datab\n\tases"
}
]
|
[
{
"msg_contents": "> Evil it was. The haste with which beta3 appeared should've tipped you\n> off that beta2 was badly broken :-(. What's puzzling us, though, is\n> that this bug was in the WAL code from day one, and no one noticed it\n\nJust for accuracy - this bug is not related to WAL anyhow.\nThis bug was in new file naming code, which was committed in Oct.\n\nVadim\n",
"msg_date": "Fri, 12 Jan 2001 12:24:37 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Beta2 Vacuum and pg_dump failures and mangled datab\n\tases"
}
]
|
[
{
"msg_contents": "Vadim wrote:\n> Tom wrote:\n> > Bruce wrote:\n> > > ... If the CRC on\n> > > the WAL log checks for errors that are not checked anywhere else,\n> > > then fine, but I thought disk CRC would just duplicate the I/O\n> > > subsystem/disk.\n> >\n> > A disk-block CRC would detect partially written blocks (ie,\n> > power drops after disk has written M of the N sectors in a\n> > block). The disk's own checks will NOT consider this condition a\n> > failure. I'm not convinced that WAL will reliably detect it either\n> > (Vadim?).\n>\n> Idea proposed by Andreas about \"physical log\" is implemented! Now WAL\n> saves whole data blocks on first after checkpoint modification. This\n> way on recovery modified data blocks will be first restored *as a\n> whole*. Isn't it much better than just detection of partially writes?\n\nThis seems to protect against some partial writes, but see below.\n\n> > Certainly WAL will not help for corruption caused by external agents, \n> > away from any updates that are actually being performed/logged.\n>\n> What do you mean by \"external agents\"?\n\nExternal agents include RAM bit drops and noise on cables when\nblocks are (read and re-) written. Every time data is moved, \nthere is a chance of an undetected error being introduced. The \ndisk only promises (within limits) to deliver the sector that \nwas written; it doesn't promise that what was written is what \nyou meant to write. Errors of this sort accumulate unless \ncaught by end-to-end checks.\n\nExternal agents include bugs in database code, bugs in OS code,\nbugs in disk controller firmware, and bugs in disk firmware.\nEach can result in clobbered data, blocks being written in the\nwrong place, blocks said to be written but not, and any number\nof other variations. All this code is written by humans, and\neven the most thorough testing cannot cover even the majority\nof code paths.\n\nExternal agents include sector errors not caught by the disk CRC: \nthe disk only promises to keep the number of errors delivered to a\nreasonably low (and documented) level. It's up to the user to \nnotice the errors that slip through.\n\nand Andreas wrote:\n> > A disk-block CRC would detect partially written blocks (ie, power\n> > drops after disk has written M of the N sectors in a block). The\n> > disk's own checks will NOT consider this condition a failure.\n>\n> But physical log recovery will rewrite every page that was changed\n> after last checkpoint, thus this is not an issue anymore.\n\nNo. That assumes that when the drive _says_ the block is written, \nit is really on the disk. That is not true for IDE drives. It is \ntrue for SCSI drives only when the SCSI spec is implemented correctly,\nbut implementing the spec correctly interferes with favorable benchmark \nresults.\n\n> > I'm not convinced that WAL will reliably detect it either\n> > (Vadim?). Certainly WAL will not help for corruption caused by\n> > external agents, away from any updates that are actually being\n> > performed/logged.\n>\n> The external agent (if malvolent) could write a correct CRC anyway\n> If on the other hand the agent writes complete garbage, vacuum will\n> notice.\n\nVacuum does not check most of the bits in the blocks it reads. \n(Bad bits in metadata will cause a crash only if you're lucky.\nIf not, they result in more corruption.)\n\nA database is unusual among computer applications in that an error\nintroduced today can sit unnoticed on the disk, and then result in \nan unnoticed wrong answer six months later. 
We need to be able to\ndetect bad bits as soon as possible, before the backups have been\noverwritten. CRCs are how we can detect cumulative corruption from \nall sources.\n\nNathan Myers\[email protected]\n\n",
"msg_date": "Fri, 12 Jan 2001 12:35:14 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": true,
"msg_subject": "CRCs "
},
{
"msg_contents": "On Fri, Jan 12, 2001 at 12:35:14PM -0800, Nathan Myers wrote:\n> Vadim wrote:\n> > What do you mean by \"external agents\"?\n> \n> External agents include RAM bit drops and noise on cables when\n> blocks are (read and re-) written. Every time data is moved, \n> there is a chance of an undetected error being introduced. The \n> disk only promises (within limits) to deliver the sector that \n> was written; it doesn't promise that what was written is what \n> you meant to write. Errors of this sort accumulate unless \n> caught by end-to-end checks.\n> \n> External agents include bugs in database code, bugs in OS code,\n> bugs in disk controller firmware, and bugs in disk firmware.\n> Each can result in clobbered data, blocks being written in the\n> wrong place, blocks said to be written but not, and any number\n> of other variations. All this code is written by humans, and\n> even the most thorough testing cannot cover even the majority\n> of code paths.\n> \n> External agents include sector errors not caught by the disk CRC: \n> the disk only promises to keep the number of errors delivered to a\n> reasonably low (and documented) level. It's up to the user to \n> notice the errors that slip through.\n\nInterestingly, right after I posted this I noticed that cron \nnoticed a corrupt inode in /dev on my machine. The disk is \nhappy with it, but I'm not...\n\nNathan Myers\[email protected]\n",
"msg_date": "Fri, 12 Jan 2001 14:01:16 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": true,
"msg_subject": "Re: CRCs"
}
]
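
The end-to-end argument in the thread above is easier to see with a concrete checksum routine. The sketch below is plain C, not PostgreSQL source: the function name, the 8192-byte block size, and the choice of the common reflected CRC-32 polynomial 0xEDB88320 are all assumptions made for illustration. It computes a CRC over a whole data block and shows that a single flipped bit anywhere in the block changes the result, which is exactly the class of silent damage that the drive's own sector-level checks do not promise to catch.

==================================================
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLCKSZ 8192                     /* illustrative page size */

/* Bitwise CRC-32 (reflected 0xEDB88320 polynomial) over an arbitrary buffer. */
static uint32_t
crc32_block(const void *data, size_t len)
{
    const unsigned char *p = (const unsigned char *) data;
    uint32_t crc = 0xFFFFFFFFu;

    while (len-- > 0)
    {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

int
main(void)
{
    unsigned char page[BLCKSZ];

    memset(page, 0xAB, sizeof(page));
    uint32_t before = crc32_block(page, sizeof(page));

    page[4000] ^= 0x01;                 /* simulate one dropped bit "in flight" */
    uint32_t after = crc32_block(page, sizeof(page));

    printf("crc before %08x, after a flipped bit %08x: %s\n",
           before, after, before == after ? "NOT detected" : "detected");
    return 0;
}
==================================================

A production implementation would use a table-driven CRC for speed, but the end-to-end property is the same: the check is computed once, close to where the data is produced, and verified close to where it is consumed, so corruption introduced anywhere in between becomes visible.
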
|
[
{
"msg_contents": "I'm trying to retrieve a big query (bah, not so big, but with a dozen of \njoins) and all I get is this:\n\n010112.17:22:06.050 [5387] DEBUG: Rel trabajos_docentes: Pages: 3 --> 2.\n010112.17:30:48.458 [5412] DEBUG: geqo_params: ga parameter file\n'/var/lib/pgsql/data/pg_geqo'\ndoes not exist or permissions are not setup correctly\n010112.17:30:48.475 [5412] DEBUG: geqo_params: no pool size specified;\nusing computed value of 1024\n010112.17:30:48.475 [5412] DEBUG: geqo_params: no optimization effort \nspecified;\nusing value of 40\n010112.17:30:48.475 [5412] DEBUG: geqo_params: no number of trials \nspecified;\nusing computed value of 400\n010112.17:30:48.475 [5412] DEBUG: geqo_params: no random seed specified;\nusing computed value of 979331448\n010112.17:30:48.475 [5412] DEBUG: geqo_params: no selection bias specified;\nusing default value of 2.000000\n010112.17:30:57.936 [5412] DEBUG: geqo_main: using edge recombination \ncrossover [ERX]\n\nNow, when I go to /var/lib/pgsql/data/ I see this:\n\ntotal 100\n-rw------- 1 postgres postgres 4 Dec 5 12:10 PG_VERSION\ndrwx------ 5 postgres postgres 1024 Dec 19 17:41 base/\n-rw------- 1 postgres postgres 8192 Jan 12 15:30 pg_control\n-rw------- 1 postgres postgres 8192 Jan 12 17:22 pg_database\n-r-------- 1 postgres postgres 3407 Dec 5 12:10 pg_geqo.sample\n-rw------- 1 postgres postgres 0 Dec 5 12:10 pg_group\n-rw------- 1 postgres postgres 16384 Dec 5 12:10 pg_group_name_index\n-rw------- 1 postgres postgres 16384 Dec 5 12:10 pg_group_sysid_index\n-r-------- 1 postgres postgres 5729 Dec 5 12:10 pg_hba.conf\n-rw------- 1 postgres postgres 16384 Jan 12 17:22 pg_log\n-rw------- 1 postgres postgres 58 Dec 16 17:03 pg_pwd\n-rw------- 1 postgres postgres 0 Dec 16 17:03 pg_pwd.reload\n-rw------- 1 postgres postgres 8192 Dec 16 17:03 pg_shadow\n-rw------- 1 postgres postgres 8192 Jan 12 17:22 pg_variable\ndrwx------ 2 postgres postgres 1024 Dec 5 12:10 pg_xlog/\n-rw------- 1 postgres postgres 87 Jan 12 15:30 postmaster.opts\n-r-------- 1 postgres postgres 4 Dec 5 12:10 postmaster.opts.default\n-rw------- 1 postgres postgres 4 Jan 12 15:30 postmaster.pid\n\nas you can see, there is a pg_geqo.sample, but not a pg_geqo. Should I rename \nit for it to work? I have been making lots of querys (inserts and selects) \nwith no problem at all.\n\nAny help will be appretiated.\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Fri, 12 Jan 2001 17:39:11 -0300",
"msg_from": "\"Martin A. Marques\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "problems with pg_geqo"
},
{
"msg_contents": "\"Martin A. Marques\" <[email protected]> writes:\n> as you can see, there is a pg_geqo.sample, but not a pg_geqo. Should I\n> rename it for it to work?\n\nOnly if you want to mess with the default GEQO parameters.\n\nThe debug messages you show don't seem to indicate that anything's\nwrong, so I'm not at all clear on what your complaint is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 17:09:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] problems with pg_geqo "
}
]
|
[
{
"msg_contents": "> > But physical log recovery will rewrite every page that was changed\n> > after last checkpoint, thus this is not an issue anymore.\n> \n> No. That assumes that when the drive _says_ the block is written, \n> it is really on the disk. That is not true for IDE drives. It is \n> true for SCSI drives only when the SCSI spec is implemented correctly,\n> but implementing the spec correctly interferes with favorable \n> benchmark results.\n\nYou know - this is *core* assumption. If drive lies about this then\n*nothing* will help you. Do you remember core rule of WAL?\n\"Changes must be logged *before* changed data pages written\".\nIf this rule will be broken then data files will be inconsistent\nafter crash recovery and you will not notice this, w/wo CRC in\ndata blocks.\n\nI agreed that CRCs could help to detect other errors but probably\nit's too late for 7.1\n\nVadim\n",
"msg_date": "Fri, 12 Jan 2001 13:07:56 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: CRCs "
},
{
"msg_contents": "On Fri, Jan 12, 2001 at 01:07:56PM -0800, Mikheev, Vadim wrote:\n> > > But physical log recovery will rewrite every page that was changed\n> > > after last checkpoint, thus this is not an issue anymore.\n> > \n> > No. That assumes that when the drive _says_ the block is written, \n> > it is really on the disk. That is not true for IDE drives. It is \n> > true for SCSI drives only when the SCSI spec is implemented correctly,\n> > but implementing the spec correctly interferes with favorable \n> > benchmark results.\n> \n> You know - this is *core* assumption. If drive lies about this then\n> *nothing* will help you. Do you remember core rule of WAL?\n> \"Changes must be logged *before* changed data pages written\".\n> If this rule will be broken then data files will be inconsistent\n> after crash recovery and you will not notice this, w/wo CRC in\n> data blocks.\n\nYou can include the data blocks' CRCs in the log entries.\n\n> I agreed that CRCs could help to detect other errors but probably\n> it's too late for 7.1.\n\n7.2 is not too far off. I'm hoping to see it then.\n\nNathan Myers\[email protected]\n",
"msg_date": "Fri, 12 Jan 2001 13:49:35 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRCs"
}
]
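
To make the suggestion above concrete (carrying the data blocks' CRCs inside the log entries), here is a toy sketch in C. It is not the actual WAL record format; the struct layout and the function names are invented for the example, and the CRC helper simply repeats the one from the sketch after the previous thread. The point is that recovery recomputes the checksum of the saved full-page image before restoring it, so a damaged or half-written copy of the page in the log is noticed rather than silently replayed.

==================================================
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLCKSZ 8192

/* Toy "WAL record" carrying a full page image plus its CRC (invented layout). */
typedef struct
{
    uint32_t crc;                       /* CRC of page_image, computed at log time */
    uint8_t  page_image[BLCKSZ];
} FullPageRecord;

/* Same bitwise CRC-32 helper as in the earlier sketch. */
static uint32_t
crc32_block(const void *data, size_t len)
{
    const unsigned char *p = (const unsigned char *) data;
    uint32_t crc = 0xFFFFFFFFu;

    while (len-- > 0)
    {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* At log time: capture the page image and remember its checksum. */
static void
log_full_page(FullPageRecord *rec, const uint8_t *page)
{
    memcpy(rec->page_image, page, BLCKSZ);
    rec->crc = crc32_block(rec->page_image, BLCKSZ);
}

/* At redo time: restore the image only if it still matches its checksum. */
static int
redo_full_page(const FullPageRecord *rec, uint8_t *page)
{
    if (crc32_block(rec->page_image, BLCKSZ) != rec->crc)
        return -1;                      /* the log copy itself is damaged */
    memcpy(page, rec->page_image, BLCKSZ);
    return 0;
}

int
main(void)
{
    static FullPageRecord rec;
    static uint8_t page[BLCKSZ], restored[BLCKSZ];

    memset(page, 0x5A, sizeof(page));
    log_full_page(&rec, page);
    printf("redo ok: %d\n", redo_full_page(&rec, restored) == 0);
    return 0;
}
==================================================
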
|
[
{
"msg_contents": "\nHas anyone ever thought of asking the FreeBSD folks for\ntheir CVS COmmit message generator? They generate ONE message\nwith more info in it for multi-directory commits than we\ndo with ours. \n\nThanks...\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 12 Jan 2001 15:58:36 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "CVS updates on committers list..."
},
{
"msg_contents": "Well there is cvs2cl and there is a utility I use:\n\t\n\tpgsql/src/tools/pgcvslog\n\n> \n> Has anyone ever thought of asking the FreeBSD folks for\n> their CVS COmmit message generator? They generate ONE message\n> with more info in it for multi-directory commits than we\n> do with ours. \n> \n> Thanks...\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Jan 2001 17:02:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CVS updates on committers list..."
},
{
"msg_contents": "I'm referring to the actual commit messages. \n\nIt would be in the CVS server config....\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: Friday, January 12, 2001 4:03 PM\nTo: Larry Rosenman\nCc: PostgreSQL Hackers List\nSubject: Re: [HACKERS] CVS updates on committers list...\n\n\nWell there is cvs2cl and there is a utility I use:\n\t\n\tpgsql/src/tools/pgcvslog\n\n> \n> Has anyone ever thought of asking the FreeBSD folks for\n> their CVS COmmit message generator? They generate ONE message\n> with more info in it for multi-directory commits than we\n> do with ours. \n> \n> Thanks...\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Jan 2001 16:06:16 -0600",
"msg_from": "\"Larry Rosenman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: CVS updates on committers list..."
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> I'm referring to the actual commit messages. \n> \n> It would be in the CVS server config....\n\nOh, yes, I understand now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 12 Jan 2001 17:06:58 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CVS updates on committers list..."
},
{
"msg_contents": "\"Larry Rosenman\" <[email protected]> writes:\n> I'm referring to the actual commit messages. \n\nIt *would* be awfully nice if the pgsql-committers traffic were one\nmessage per commit, instead of one per directory touched per commit.\nThis has been suggested before, but nothing got done ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 17:38:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CVS updates on committers list... "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> \"Larry Rosenman\" <[email protected]> writes:\n> > I'm referring to the actual commit messages. \n> \n> It *would* be awfully nice if the pgsql-committers traffic were one\n> message per commit, instead of one per directory touched per commit.\n> This has been suggested before, but nothing got done ...\n\nThis is easy to set up, for an appropriate definition of ``easy''.\n\nI have included two files below, commit_prep and log_accum. Do\n cvs co CVSROOT\nCopy the files into the CVSROOT directory. Check the definition of\n$MAILER near the top of log_accum. Everything else should be fine.\nDo this:\n cvs add commit_prep\n cvs add log_accum\n\nEdit the file checkoutlist, and add these lines:\n\ncommit_prep Won't be able to do mail logging.\nlog_accum Won't be able to do mail logging.\n\nEdit the file commitinfo, and add something like this line:\n\nDEFAULT\t /usr/bin/perl $CVSROOT/CVSROOT/commit_prep -r\n\nEdit the file loginfo, and add something like this line (replace\nMAILINGLIST with the mailing address to which you want log messages to\nbe sent):\n\nDEFAULT\t /usr/bin/perl $CVSROOT/CVSROOT/log_accum -m MAILINGLIST -s %s\n\nThen:\n cvs commit\n\nThen test it.\n\nGood luck. I didn't write these scripts, but I've used them in a\nnumber of places.\n\nIan\n\ncommit_prep:\n==================================================\n#!/usr/bin/perl\n# -*-Perl-*-\n#\n# $Id: commit_prep,v 1.1 1998/12/01 03:18:27 ian Exp $\n#\n# Perl filter to handle pre-commit checking of files. This program\n# records the last directory where commits will be taking place for\n# use by the log_accum.pl script. For new files, it forces the\n# existence of a RCS \"Id\" keyword in the first ten lines of the file.\n# For existing files, it checks version number in the \"Id\" line to\n# prevent losing changes because an old version of a file was copied\n# into the direcory.\n#\n# Possible future enhancements:\n#\n# Check for cruft left by unresolved conflicts. Search for\n# \"^<<<<<<<$\", \"^-------$\", and \"^>>>>>>>$\".\n#\n# Look for a copyright and automagically update it to the\n# current year. [[ bad idea! -- woods ]]\n#\n#\n# Contributed by David Hampton <[email protected]>\n#\n# Hacked on lots by Greg A. Woods <[email protected]>\n\n#\n#\tConfigurable options\n#\n\n# Constants (remember to protect strings from RCS keyword substitution)\n#\n$LAST_FILE = \"/tmp/#egcscvs.lastdir\"; # must match name in log_accum.pl\n$ENTRIES = \"CVS/Entries\";\n\n# Patterns to find $Log keywords in files\n#\n$LogString1 = \"\\\\\\$\\\\Log: .* \\\\\\$\";\n$LogString2 = \"\\\\\\$\\\\Log\\\\\\$\";\n$NoLog = \"%s - contains an RCS \\$Log keyword. It must not!\\n\";\n\n# pattern to match an RCS Id keyword line with an existing ID\n#\n$IDstring = \"\\\"@\\\\(#\\\\)[^:]*:.*\\\\\\$\\Id: .*\\\\\\$\\\"\";\n$NoId = \"\n%s - Does not contain a properly formatted line with the keyword \\\"Id:\\\".\n\tI.e. no lines match \\\"\" . $IDstring . \"\\\".\n\tPlease see the template files for an example.\\n\";\n\n# pattern to match an RCS Id keyword line for a new file (i.e. un-expanded)\n#\n$NewId = \"\\\"@(#)[^:]*:.*\\\\$\\Id\\\\$\\\"\";\n\n$NoName = \"\n%s - The ID line should contain only \\\"@(#)module/path:\\$Name\\$:\\$\\Id\\$\\\"\n\tfor a newly created file.\\n\";\n\n$BadName = \"\n%s - The file name '%s' in the ID line does not match\n\tthe actual filename.\\n\";\n\n$BadVersion = \"\n%s - How dare you!!! 
You replaced your copy of the file '%s',\n\twhich was based upon version %s, with an %s version based\n\tupon %s. Please move your '%s' out of the way, perform an\n\tupdate to get the current version, and them merge your changes\n\tinto that file, then try the commit again.\\n\";\n\n#\n#\tSubroutines\n#\n\nsub write_line {\n local($filename, $line) = @_;\n open(FILE, \">$filename\") || die(\"Cannot open $filename, stopped\");\n print(FILE $line, \"\\n\");\n close(FILE);\n}\n\nsub check_version {\n local($i, $id, $rname, $version);\n local($filename, $cvsversion) = @_;\n\n open(FILE, \"<$filename\") || return(0);\n\n @all_lines = ();\n $idpos = -1;\n $newidpos = -1;\n for ($i = 0; <FILE>; $i++) {\n\tchop;\n\tpush(@all_lines, $_);\n\tif ($_ =~ /$IDstring/) {\n\t $idpos = $i;\n\t}\n\tif ($_ =~ /$NewId/) {\n\t $newidpos = $i;\n\t}\n }\n\n if (grep(/$LogString1/, @all_lines) || grep(/$LogString2/, @all_lines)) {\n\tprint STDERR sprintf($NoLog, $filename);\n\treturn(1);\n }\n\n if ($debug != 0) {\n\tprint STDERR sprintf(\"file = %s, version = %d.\\n\", $filename, $cvsversion{$filename});\n }\n\n if ($cvsversion{$filename} == 0) {\n\tif ($newidpos != -1 && $all_lines[$newidpos] !~ /$NewId/) {\n\t print STDERR sprintf($NoName, $filename);\n\t return(1);\n\t}\n\treturn(0);\n }\n\n if ($idpos == -1) {\n\tprint STDERR sprintf($NoId, $filename);\n\treturn(1);\n }\n\n $line = $all_lines[$idpos];\n $pos = index($line, \"Id: \");\n if ($debug != 0) {\n\tprint STDERR sprintf(\"%d in '%s'.\\n\", $pos, $line);\n }\n ($id, $rname, $version) = split(' ', substr($line, $pos));\n if ($rname ne \"$filename,v\") {\n\tprint STDERR sprintf($BadName, $filename, substr($rname, 0, length($rname)-2));\n\treturn(1);\n }\n if ($cvsversion{$filename} < $version) {\n\tprint STDERR sprintf($BadVersion, $filename, $filename, $cvsversion{$filename},\n\t\t\t \"newer\", $version, $filename);\n\treturn(1);\n }\n if ($cvsversion{$filename} > $version) {\n\tprint STDERR sprintf($BadVersion, $filename, $filename, $cvsversion{$filename},\n\t\t\t \"older\", $version, $filename);\n\treturn(1);\n }\n return(0);\n}\n\n#\n#\tMain Body\t\n#\n\n$id = getpgrp();\t\t# You *must* use a shell that does setpgrp()!\n\n# Check each file (except dot files) for an RCS \"Id\" keyword.\n#\n$check_id = 0;\n\n# Record the directory for later use by the log_accumulate stript.\n#\n$record_directory = 0;\n\n# parse command line arguments\n#\nwhile (@ARGV) {\n $arg = shift @ARGV;\n\n if ($arg eq '-d') {\n\t$debug = 1;\n\tprint STDERR \"Debug turned on...\\n\";\n } elsif ($arg eq '-c') {\n\t$check_id = 1;\n } elsif ($arg eq '-r') {\n\t$record_directory = 1;\n } else {\n\tpush(@files, $arg);\n }\n}\n\n$directory = shift @files;\n\nif ($debug != 0) {\n print STDERR \"dir - \", $directory, \"\\n\";\n print STDERR \"files - \", join(\":\", @files), \"\\n\";\n print STDERR \"id - \", $id, \"\\n\";\n}\n\n# Suck in the CVS/Entries file\n#\nopen(ENTRIES, $ENTRIES) || die(\"Cannot open $ENTRIES.\\n\");\nwhile (<ENTRIES>) {\n local($filename, $version) = split('/', substr($_, 1));\n $cvsversion{$filename} = $version;\n}\n\n# Now check each file name passed in, except for dot files. Dot files\n# are considered to be administrative files by this script.\n#\nif ($check_id != 0) {\n $failed = 0;\n foreach $arg (@files) {\n\tif (index($arg, \".\") == 0) {\n\t next;\n\t}\n\t$failed += &check_version($arg);\n }\n if ($failed) {\n\tprint STDERR \"\\n\";\n\texit(1);\n }\n}\n\n# Record this directory as the last one checked. 
This will be used\n# by the log_accumulate script to determine when it is processing\n# the final directory of a multi-directory commit.\n#\nif ($record_directory != 0) {\n &write_line(\"$LAST_FILE.$id\", $directory);\n}\nexit(0);\n==================================================\n\nlog_accum:\n==================================================\n#!/usr/bin/perl\n# -*-Perl-*-\n#\n# Perl filter to handle the log messages from the checkin of files in\n# a directory. This script will group the lists of files by log\n# message, and mail a single consolidated log message at the end of\n# the commit.\n#\n# This file assumes a pre-commit checking program that leaves the\n# names of the first and last commit directories in a temporary file.\n#\n# Contributed by David Hampton <[email protected]>\n#\n# hacked greatly by Greg A. Woods <[email protected]>\n\n# Usage: log_accum.pl [-d] [-s] [-M module] [[-m mailto] ...] [-f logfile]\n#\t-d\t\t- turn on debugging\n#\t-m mailto\t- send mail to \"mailto\" (multiple)\n#\t-M modulename\t- set module name to \"modulename\"\n#\t-f logfile\t- write commit messages to logfile too\n#\t-s\t\t- *don't* run \"cvs status -v\" for each file\n\n#\n#\tConfigurable options\n#\n\n# Set this to something that takes \"-s\"\n$MAILER\t = \"/bin/mail\";\n\n# Constants (don't change these!)\n#\n$STATE_NONE = 0;\n$STATE_CHANGED = 1;\n$STATE_ADDED = 2;\n$STATE_REMOVED = 3;\n$STATE_LOG = 4;\n\n$LAST_FILE = \"/tmp/#egcscvs.lastdir\";\n\n$CHANGED_FILE = \"/tmp/#egcscvs.files.changed\";\n$ADDED_FILE = \"/tmp/#egcscvs.files.added\";\n$REMOVED_FILE = \"/tmp/#egcscvs.files.removed\";\n$LOG_FILE = \"/tmp/#egcscvs.files.log\";\n\n$FILE_PREFIX = \"#egcscvs.files\";\n\n#\n#\tSubroutines\n#\n\nsub cleanup_tmpfiles {\n local($wd, @files);\n\n $wd = `pwd`;\n chdir(\"/tmp\") || die(\"Can't chdir('/tmp')\\n\");\n opendir(DIR, \".\");\n push(@files, grep(/^$FILE_PREFIX\\..*\\.$id$/, readdir(DIR)));\n closedir(DIR);\n foreach (@files) {\n\tunlink $_;\n }\n unlink $LAST_FILE . \".\" . $id;\n\n chdir($wd);\n}\n\nsub write_logfile {\n local($filename, @lines) = @_;\n\n open(FILE, \">$filename\") || die(\"Cannot open log file $filename.\\n\");\n print FILE join(\"\\n\", @lines), \"\\n\";\n close(FILE);\n}\n\nsub format_names {\n local($dir, @files) = @_;\n local(@lines);\n\n if ($dir =~ /^\\.\\//) {\n\t$dir = $';\n }\n if ($dir =~ /\\/$/) {\n\t$dir = $`;\n }\n if ($dir eq \"\") {\n\t$dir = \".\";\n }\n\n $format = \"\\t%-\" . sprintf(\"%d\", length($dir) > 15 ? length($dir) : 15) . \"s%s \";\n\n $lines[0] = sprintf($format, $dir, \":\");\n\n if ($debug) {\n\tprint STDERR \"format_names(): dir = \", $dir, \"; files = \", join(\":\", @files), \".\\n\";\n }\n foreach $file (@files) {\n\tif (length($lines[$#lines]) + length($file) > 65) {\n\t $lines[++$#lines] = sprintf($format, \" \", \" \");\n\t}\n\t$lines[$#lines] .= $file . 
\" \";\n }\n\n @lines;\n}\n\nsub format_lists {\n local(@lines) = @_;\n local(@text, @files, $lastdir);\n\n if ($debug) {\n\tprint STDERR \"format_lists(): \", join(\":\", @lines), \"\\n\";\n }\n @text = ();\n @files = ();\n $lastdir = shift @lines;\t# first thing is always a directory\n if ($lastdir !~ /.*\\/$/) {\n\tdie(\"Damn, $lastdir doesn't look like a directory!\\n\");\n }\n foreach $line (@lines) {\n\tif ($line =~ /.*\\/$/) {\n\t push(@text, &format_names($lastdir, @files));\n\t $lastdir = $line;\n\t @files = ();\n\t} else {\n\t push(@files, $line);\n\t}\n }\n push(@text, &format_names($lastdir, @files));\n\n @text;\n}\n\nsub accum_subject {\n local(@lines) = @_;\n local(@files, $lastdir);\n\n $lastdir = shift @lines;\t# first thing is always a directory\n @files = ($lastdir);\n if ($lastdir !~ /.*\\/$/) {\n\tdie(\"Damn, $lastdir doesn't look like a directory!\\n\");\n }\n foreach $line (@lines) {\n\tif ($line =~ /.*\\/$/) {\n\t $lastdir = $line;\n\t push(@files, $line);\n\t} else {\n\t push(@files, $lastdir . $line);\n\t}\n }\n\n @files;\n}\n\nsub compile_subject {\n local(@files) = @_;\n local($text, @a, @b, @c, $dir, $topdir);\n\n # find the highest common directory\n $dir = '-';\n do {\n\t$topdir = $dir;\n\tforeach $file (@files) {\n\t if ($file =~ /.*\\/$/) {\n\t\tif ($dir eq '-') {\n\t\t $dir = $file;\n\t\t} else {\n\t\t if (index($dir,$file) == 0) {\n\t\t\t$dir = $file;\n\t\t } elsif (index($file,$dir) != 0) {\n\t\t\t@a = split /\\//,$file;\n\t\t\t@b = split /\\//,$dir;\n\t\t\t@c = ();\n\t\t\tCMP: while ($#a > 0 && $#b > 0) {\n\t\t\t if ($a[0] eq $b[0]) {\n\t\t\t\tpush(@c, $a[0]);\n\t\t\t\tshift @a;\n\t\t\t\tshift @b;\n\t\t\t } else {\n\t\t\t\tlast CMP;\n\t\t\t }\n\t\t\t}\n\t\t\t$dir = join('/',@c) . '/';\n\t\t }\n\t\t}\n\t }\n\t}\n } until $dir eq $topdir;\n\n # strip out directories and the common prefix topdir.\n chop $topdir;\n @c = ($modulename . '/' . $topdir);\n foreach $file (@files) {\n\tif (!($file =~ /.*\\/$/)) {\n\t push(@c, substr($file, length($topdir)+1));\n\t}\n }\n\n # put it together and limit the length.\n $text = join(' ',@c);\n if (length($text) > 50) {\n\t$text = substr($text, 0, 46) . 
' ...';\n }\n\n $text;\n}\n\nsub append_names_to_file {\n local($filename, $dir, @files) = @_;\n\n if (@files) {\n\topen(FILE, \">>$filename\") || die(\"Cannot open file $filename.\\n\");\n\tprint FILE $dir, \"\\n\";\n\tprint FILE join(\"\\n\", @files), \"\\n\";\n\tclose(FILE);\n }\n}\n\nsub read_line {\n local($line);\n local($filename) = @_;\n\n open(FILE, \"<$filename\") || die(\"Cannot open file $filename.\\n\");\n $line = <FILE>;\n close(FILE);\n chop($line);\n $line;\n}\n\nsub read_logfile {\n local(@text);\n local($filename, $leader) = @_;\n\n open(FILE, \"<$filename\");\n while (<FILE>) {\n\tchop;\n\tpush(@text, $leader.$_);\n }\n close(FILE);\n @text;\n}\n\nsub build_header {\n local($header);\n local($sec,$min,$hour,$mday,$mon,$year) = localtime(time);\n $header = sprintf(\"CVSROOT:\\t%s\\nModule name:\\t%s\\n\",\n\t\t $cvsroot,\n\t\t $modulename);\n if (defined($branch)) {\n\t$header .= sprintf(\"Branch: \\t%s\\n\",\n\t\t $branch);\n }\n $header .= sprintf(\"Changes by:\\t%s@%s\\t%02d/%02d/%02d %02d:%02d:%02d\",\n\t\t $login, $hostdomain,\n\t\t $year%100, $mon+1, $mday,\n\t\t $hour, $min, $sec);\n}\n\nsub mail_notification {\n local($name, $subject, @text) = @_;\n open(MAIL, \"| $MAILER -s \\\"$subject\\\" $name\");\n print MAIL join(\"\\n\", @text), \"\\n\";\n close(MAIL);\n}\n\nsub write_commitlog {\n local($logfile, @text) = @_;\n\n open(FILE, \">>$logfile\");\n print FILE join(\"\\n\", @text), \"\\n\\n\";\n close(FILE);\n}\n\n#\n#\tMain Body\n#\n\n# Initialize basic variables\n#\n$debug = 0;\n$id = getpgrp();\t\t# note, you *must* use a shell which does setpgrp()\n$state = $STATE_NONE;\n$login = $ENV{'USER'} || (getpwuid($<))[0] || \"nobody\";\nchop($hostname = `hostname`);\nif ($hostname !~ /\\./) {\n chop($domainname = `domainname`);\n $hostdomain = $hostname . \".\" . $domainname;\n} else {\n $hostdomain = $hostname;\n}\n$cvsroot = $ENV{'CVSROOT'};\n$do_status = 1;\n$modulename = \"\";\n\n# parse command line arguments (file list is seen as one arg)\n#\nwhile (@ARGV) {\n $arg = shift @ARGV;\n\n if ($arg eq '-d') {\n\t$debug = 1;\n\tprint STDERR \"Debug turned on...\\n\";\n } elsif ($arg eq '-m') {\n\t$mailto = \"$mailto \" . shift @ARGV;\n } elsif ($arg eq '-M') {\n\t$modulename = shift @ARGV;\n } elsif ($arg eq '-s') {\n\t$do_status = 0;\n } elsif ($arg eq '-f') {\n\t($commitlog) && die(\"Too many '-f' args\\n\");\n\t$commitlog = shift @ARGV;\n } else {\n\t($donefiles) && die(\"Too many arguments! Check usage.\\n\");\n\t$donefiles = 1;\n\t@files = split(/ /, $arg);\n }\n}\n($mailto) || die(\"No -m mail recipient specified\\n\");\n\n# for now, the first \"file\" is the repository directory being committed,\n# relative to the $CVSROOT location\n#\n@path = split('/', $files[0]);\n\n# XXX there are some ugly assumptions in here about module names and\n# XXX directories relative to the $CVSROOT location -- really should\n# XXX read $CVSROOT/CVSROOT/modules, but that's not so easy to do, since\n# XXX we have to parse it backwards.\n#\nif ($modulename eq \"\") {\n $modulename = $path[0];\t# I.e. the module name == top-level dir\n}\nif ($commitlog ne \"\") {\n $commitlog = $cvsroot . \"/\" . $modulename . \"/\" . $commitlog unless ($commitlog =~ /^\\//);\n}\nif ($#path == 0) {\n $dir = \".\";\n} else {\n $dir = join('/', @path[1..$#path]);\n}\n$dir = $dir . 
\"/\";\n\nif ($debug) {\n print STDERR \"module - \", $modulename, \"\\n\";\n print STDERR \"dir - \", $dir, \"\\n\";\n print STDERR \"path - \", join(\":\", @path), \"\\n\";\n print STDERR \"files - \", join(\":\", @files), \"\\n\";\n print STDERR \"id - \", $id, \"\\n\";\n}\n\n# Check for a new directory first. This appears with files set as follows:\n#\n# files[0] - \"path/name/newdir\"\n# files[1] - \"-\"\n# files[2] - \"New\"\n# files[3] - \"directory\"\n#\nif ($files[2] =~ /New/ && $files[3] =~ /directory/) {\n local(@text);\n\n @text = ();\n push(@text, &build_header());\n push(@text, \"\");\n push(@text, $files[0]);\n push(@text, \"\");\n\n while (<STDIN>) {\n\tchop;\t\t\t# Drop the newline\n\tpush(@text, $_);\n }\n\n &mail_notification($mailto, $files[0], @text);\n\n if ($commitlog) {\n\t&write_commitlog($commitlog, @text);\n }\n\n exit 0;\n}\n\n# Iterate over the body of the message collecting information.\n#\nwhile (<STDIN>) {\n chop;\t\t\t# Drop the newline\n\n if (/^Modified Files/) { $state = $STATE_CHANGED; next; }\n if (/^Added Files/) { $state = $STATE_ADDED; next; }\n if (/^Removed Files/) { $state = $STATE_REMOVED; next; }\n if (/^Log Message/) { $state = $STATE_LOG; next; }\n if (/^Revision\\/Branch/) { /^[^:]+:\\s*(.*)/; $branch = $+; next; }\n\n s/^[ \\t\\n]+//;\t\t# delete leading whitespace\n s/[ \\t\\n]+$//;\t\t# delete trailing whitespace\n \n if ($state == $STATE_CHANGED) { push(@changed_files, split); }\n if ($state == $STATE_ADDED) { push(@added_files, split); }\n if ($state == $STATE_REMOVED) { push(@removed_files, split); }\n if ($state == $STATE_LOG) { push(@log_lines, $_); }\n}\n\n# Strip leading and trailing blank lines from the log message. Also\n# compress multiple blank lines in the body of the message down to a\n# single blank line.\n#\nwhile ($#log_lines > -1) {\n last if ($log_lines[0] ne \"\");\n shift(@log_lines);\n}\nwhile ($#log_lines > -1) {\n last if ($log_lines[$#log_lines] ne \"\");\n pop(@log_lines);\n}\nfor ($i = $#log_lines; $i > 0; $i--) {\n if (($log_lines[$i - 1] eq \"\") && ($log_lines[$i] eq \"\")) {\n\tsplice(@log_lines, $i, 1);\n }\n}\n\n# Check for an import command. This appears with files set as follows:\n#\n# files[0] - \"path/name\"\n# files[1] - \"-\"\n# files[2] - \"Imported\"\n# files[3] - \"sources\"\n#\nif ($files[2] =~ /Imported/ && $files[3] =~ /sources/) {\n local(@text);\n\n @text = ();\n push(@text, &build_header());\n push(@text, \"\");\n\n push(@text, \"Log message:\");\n while ($#log_lines > -1) {\n\tpush (@text, \" \" . $log_lines[0]);\n\tshift(@log_lines);\n }\n\n &mail_notification($mailto, \"Import $file[0]\", @text);\n\n if ($commitlog) {\n\t&write_commitlog($commitlog, @text);\n }\n\n exit 0;\n}\n\nif ($debug) {\n print STDERR \"Searching for log file index...\";\n}\n# Find an index to a log file that matches this log message\n#\nfor ($i = 0; ; $i++) {\n local(@text);\n\n last if (! 
-e \"$LOG_FILE.$i.$id\"); # the next available one\n @text = &read_logfile(\"$LOG_FILE.$i.$id\", \"\");\n last if ($#text == -1);\t# nothing in this file, use it\n last if (join(\" \", @log_lines) eq join(\" \", @text)); # it's the same log message as another\n}\nif ($debug) {\n print STDERR \" found log file at $i.$id, now writing tmp files.\\n\";\n}\n\n# Spit out the information gathered in this pass.\n#\n&append_names_to_file(\"$CHANGED_FILE.$i.$id\", $dir, @changed_files);\n&append_names_to_file(\"$ADDED_FILE.$i.$id\", $dir, @added_files);\n&append_names_to_file(\"$REMOVED_FILE.$i.$id\", $dir, @removed_files);\n&write_logfile(\"$LOG_FILE.$i.$id\", @log_lines);\n\n# Check whether this is the last directory. If not, quit.\n#\nif ($debug) {\n print STDERR \"Checking current dir against last dir.\\n\";\n}\n$_ = &read_line(\"$LAST_FILE.$id\");\n\nif ($_ ne $cvsroot . \"/\" . $files[0]) {\n if ($debug) {\n\tprint STDERR sprintf(\"Current directory %s is not last directory %s.\\n\", $cvsroot . \"/\" .$files[0], $_);\n }\n exit 0;\n}\nif ($debug) {\n print STDERR sprintf(\"Current directory %s is last directory %s -- all commits done.\\n\", $files[0], $_);\n}\n\n#\n#\tEnd Of Commits!\n#\n\n# This is it. The commits are all finished. Lump everything together\n# into a single message, fire a copy off to the mailing list, and drop\n# it on the end of the Changes file.\n#\n\n#\n# Produce the final compilation of the log messages\n#\n@text = ();\n@status_txt = ();\n@subject_files = ();\npush(@text, &build_header());\npush(@text, \"\");\n\nfor ($i = 0; ; $i++) {\n last if (! -e \"$LOG_FILE.$i.$id\"); # we're done them all!\n @lines = &read_logfile(\"$CHANGED_FILE.$i.$id\", \"\");\n if ($#lines >= 0) {\n\tpush(@text, \"Modified files:\");\n\tpush(@text, &format_lists(@lines));\n\tpush(@subject_files, &accum_subject(@lines));\n }\n @lines = &read_logfile(\"$ADDED_FILE.$i.$id\", \"\");\n if ($#lines >= 0) {\n\tpush(@text, \"Added files:\");\n\tpush(@text, &format_lists(@lines));\n\tpush(@subject_files, &accum_subject(@lines));\n }\n @lines = &read_logfile(\"$REMOVED_FILE.$i.$id\", \"\");\n if ($#lines >= 0) {\n\tpush(@text, \"Removed files:\");\n\tpush(@text, &format_lists(@lines));\n\tpush(@subject_files, &accum_subject(@lines));\n }\n if ($#text >= 0) {\n\tpush(@text, \"\");\n }\n @lines = &read_logfile(\"$LOG_FILE.$i.$id\", \"\\t\");\n if ($#lines >= 0) {\n\tpush(@text, \"Log message:\");\n\tpush(@text, @lines);\n\tpush(@text, \"\");\n }\n if ($do_status) {\n\tlocal(@changed_files);\n\n\t@changed_files = ();\n\tpush(@changed_files, &read_logfile(\"$CHANGED_FILE.$i.$id\", \"\"));\n\tpush(@changed_files, &read_logfile(\"$ADDED_FILE.$i.$id\", \"\"));\n\tpush(@changed_files, &read_logfile(\"$REMOVED_FILE.$i.$id\", \"\"));\n\n\tif ($debug) {\n\t print STDERR \"main: pre-sort changed_files = \", join(\":\", @changed_files), \".\\n\";\n\t}\n\tsort(@changed_files);\n\tif ($debug) {\n\t print STDERR \"main: post-sort changed_files = \", join(\":\", @changed_files), \".\\n\";\n\t}\n\n\tforeach $dofile (@changed_files) {\n\t if ($dofile =~ /\\/$/) {\n\t\tnext;\t\t# ignore the silly \"dir\" entries\n\t }\n\t if ($debug) {\n\t\tprint STDERR \"main(): doing status on $dofile\\n\";\n\t }\n\t open(STATUS, \"-|\") || exec 'cvs', '-n', 'status', '-Qqv', $dofile;\n\t while (<STATUS>) {\n\t\tchop;\n\t\tpush(@status_txt, $_);\n\t }\n\t}\n }\n}\n\n$subject_txt = &compile_subject(@subject_files);\n\n# Write to the commitlog file\n#\nif ($commitlog) {\n &write_commitlog($commitlog, @text);\n}\n\nif ($#status_txt >= 0) {\n 
push(@text, @status_txt);\n}\n\n# Mailout the notification.\n#\n&mail_notification($mailto, $subject_txt, @text);\n\n# cleanup\n#\nif (! $debug) {\n &cleanup_tmpfiles();\n}\n\nexit 0;\n==================================================\n",
"msg_date": "12 Jan 2001 15:08:49 -0800",
"msg_from": "Ian Lance Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CVS updates on committers list..."
},
{
"msg_contents": "\nI keep meaning to work with Alfred on this ... just keeps getting\nbackburnered :(\n\nLet me take a look at her this weekend ...\n\n\nOn Fri, 12 Jan 2001, Larry Rosenman wrote:\n\n>\n> Has anyone ever thought of asking the FreeBSD folks for\n> their CVS COmmit message generator? They generate ONE message\n> with more info in it for multi-directory commits than we\n> do with ours.\n>\n> Thanks...\n> --\n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Fri, 12 Jan 2001 20:43:14 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CVS updates on committers list..."
},
{
"msg_contents": "\nDone ... just tried adding white space to three files, in three\ndirectories, and it seems to go through as one commit ...\n\nlet's see what happens next time Tom makes a big change :)\n\n\nOn 12 Jan 2001, Ian Lance Taylor wrote:\n\n> Tom Lane <[email protected]> writes:\n>\n> > \"Larry Rosenman\" <[email protected]> writes:\n> > > I'm referring to the actual commit messages.\n> >\n> > It *would* be awfully nice if the pgsql-committers traffic were one\n> > message per commit, instead of one per directory touched per commit.\n> > This has been suggested before, but nothing got done ...\n>\n> This is easy to set up, for an appropriate definition of ``easy''.\n>\n> I have included two files below, commit_prep and log_accum. Do\n> cvs co CVSROOT\n> Copy the files into the CVSROOT directory. Check the definition of\n> $MAILER near the top of log_accum. Everything else should be fine.\n> Do this:\n> cvs add commit_prep\n> cvs add log_accum\n>\n> Edit the file checkoutlist, and add these lines:\n>\n> commit_prep Won't be able to do mail logging.\n> log_accum Won't be able to do mail logging.\n>\n> Edit the file commitinfo, and add something like this line:\n>\n> DEFAULT\t /usr/bin/perl $CVSROOT/CVSROOT/commit_prep -r\n>\n> Edit the file loginfo, and add something like this line (replace\n> MAILINGLIST with the mailing address to which you want log messages to\n> be sent):\n>\n> DEFAULT\t /usr/bin/perl $CVSROOT/CVSROOT/log_accum -m MAILINGLIST -s %s\n>\n> Then:\n> cvs commit\n>\n> Then test it.\n>\n> Good luck. I didn't write these scripts, but I've used them in a\n> number of places.\n>\n> Ian\n>\n> commit_prep:\n> ==================================================\n> #!/usr/bin/perl\n> # -*-Perl-*-\n> #\n> # $Id: commit_prep,v 1.1 1998/12/01 03:18:27 ian Exp $\n> #\n> # Perl filter to handle pre-commit checking of files. This program\n> # records the last directory where commits will be taking place for\n> # use by the log_accum.pl script. For new files, it forces the\n> # existence of a RCS \"Id\" keyword in the first ten lines of the file.\n> # For existing files, it checks version number in the \"Id\" line to\n> # prevent losing changes because an old version of a file was copied\n> # into the direcory.\n> #\n> # Possible future enhancements:\n> #\n> # Check for cruft left by unresolved conflicts. Search for\n> # \"^<<<<<<<$\", \"^-------$\", and \"^>>>>>>>$\".\n> #\n> # Look for a copyright and automagically update it to the\n> # current year. [[ bad idea! -- woods ]]\n> #\n> #\n> # Contributed by David Hampton <[email protected]>\n> #\n> # Hacked on lots by Greg A. Woods <[email protected]>\n>\n> #\n> #\tConfigurable options\n> #\n>\n> # Constants (remember to protect strings from RCS keyword substitution)\n> #\n> $LAST_FILE = \"/tmp/#egcscvs.lastdir\"; # must match name in log_accum.pl\n> $ENTRIES = \"CVS/Entries\";\n>\n> # Patterns to find $Log keywords in files\n> #\n> $LogString1 = \"\\\\\\$\\\\Log: .* \\\\\\$\";\n> $LogString2 = \"\\\\\\$\\\\Log\\\\\\$\";\n> $NoLog = \"%s - contains an RCS \\$Log keyword. It must not!\\n\";\n>\n> # pattern to match an RCS Id keyword line with an existing ID\n> #\n> $IDstring = \"\\\"@\\\\(#\\\\)[^:]*:.*\\\\\\$\\Id: .*\\\\\\$\\\"\";\n> $NoId = \"\n> %s - Does not contain a properly formatted line with the keyword \\\"Id:\\\".\n> \tI.e. no lines match \\\"\" . $IDstring . \"\\\".\n> \tPlease see the template files for an example.\\n\";\n>\n> # pattern to match an RCS Id keyword line for a new file (i.e. 
un-expanded)\n> #\n> $NewId = \"\\\"@(#)[^:]*:.*\\\\$\\Id\\\\$\\\"\";\n>\n> $NoName = \"\n> %s - The ID line should contain only \\\"@(#)module/path:\\$Name\\$:\\$\\Id\\$\\\"\n> \tfor a newly created file.\\n\";\n>\n> $BadName = \"\n> %s - The file name '%s' in the ID line does not match\n> \tthe actual filename.\\n\";\n>\n> $BadVersion = \"\n> %s - How dare you!!! You replaced your copy of the file '%s',\n> \twhich was based upon version %s, with an %s version based\n> \tupon %s. Please move your '%s' out of the way, perform an\n> \tupdate to get the current version, and them merge your changes\n> \tinto that file, then try the commit again.\\n\";\n>\n> #\n> #\tSubroutines\n> #\n>\n> sub write_line {\n> local($filename, $line) = @_;\n> open(FILE, \">$filename\") || die(\"Cannot open $filename, stopped\");\n> print(FILE $line, \"\\n\");\n> close(FILE);\n> }\n>\n> sub check_version {\n> local($i, $id, $rname, $version);\n> local($filename, $cvsversion) = @_;\n>\n> open(FILE, \"<$filename\") || return(0);\n>\n> @all_lines = ();\n> $idpos = -1;\n> $newidpos = -1;\n> for ($i = 0; <FILE>; $i++) {\n> \tchop;\n> \tpush(@all_lines, $_);\n> \tif ($_ =~ /$IDstring/) {\n> \t $idpos = $i;\n> \t}\n> \tif ($_ =~ /$NewId/) {\n> \t $newidpos = $i;\n> \t}\n> }\n>\n> if (grep(/$LogString1/, @all_lines) || grep(/$LogString2/, @all_lines)) {\n> \tprint STDERR sprintf($NoLog, $filename);\n> \treturn(1);\n> }\n>\n> if ($debug != 0) {\n> \tprint STDERR sprintf(\"file = %s, version = %d.\\n\", $filename, $cvsversion{$filename});\n> }\n>\n> if ($cvsversion{$filename} == 0) {\n> \tif ($newidpos != -1 && $all_lines[$newidpos] !~ /$NewId/) {\n> \t print STDERR sprintf($NoName, $filename);\n> \t return(1);\n> \t}\n> \treturn(0);\n> }\n>\n> if ($idpos == -1) {\n> \tprint STDERR sprintf($NoId, $filename);\n> \treturn(1);\n> }\n>\n> $line = $all_lines[$idpos];\n> $pos = index($line, \"Id: \");\n> if ($debug != 0) {\n> \tprint STDERR sprintf(\"%d in '%s'.\\n\", $pos, $line);\n> }\n> ($id, $rname, $version) = split(' ', substr($line, $pos));\n> if ($rname ne \"$filename,v\") {\n> \tprint STDERR sprintf($BadName, $filename, substr($rname, 0, length($rname)-2));\n> \treturn(1);\n> }\n> if ($cvsversion{$filename} < $version) {\n> \tprint STDERR sprintf($BadVersion, $filename, $filename, $cvsversion{$filename},\n> \t\t\t \"newer\", $version, $filename);\n> \treturn(1);\n> }\n> if ($cvsversion{$filename} > $version) {\n> \tprint STDERR sprintf($BadVersion, $filename, $filename, $cvsversion{$filename},\n> \t\t\t \"older\", $version, $filename);\n> \treturn(1);\n> }\n> return(0);\n> }\n>\n> #\n> #\tMain Body\n> #\n>\n> $id = getpgrp();\t\t# You *must* use a shell that does setpgrp()!\n>\n> # Check each file (except dot files) for an RCS \"Id\" keyword.\n> #\n> $check_id = 0;\n>\n> # Record the directory for later use by the log_accumulate stript.\n> #\n> $record_directory = 0;\n>\n> # parse command line arguments\n> #\n> while (@ARGV) {\n> $arg = shift @ARGV;\n>\n> if ($arg eq '-d') {\n> \t$debug = 1;\n> \tprint STDERR \"Debug turned on...\\n\";\n> } elsif ($arg eq '-c') {\n> \t$check_id = 1;\n> } elsif ($arg eq '-r') {\n> \t$record_directory = 1;\n> } else {\n> \tpush(@files, $arg);\n> }\n> }\n>\n> $directory = shift @files;\n>\n> if ($debug != 0) {\n> print STDERR \"dir - \", $directory, \"\\n\";\n> print STDERR \"files - \", join(\":\", @files), \"\\n\";\n> print STDERR \"id - \", $id, \"\\n\";\n> }\n>\n> # Suck in the CVS/Entries file\n> #\n> open(ENTRIES, $ENTRIES) || die(\"Cannot open $ENTRIES.\\n\");\n> while (<ENTRIES>) 
{\n> local($filename, $version) = split('/', substr($_, 1));\n> $cvsversion{$filename} = $version;\n> }\n>\n> # Now check each file name passed in, except for dot files. Dot files\n> # are considered to be administrative files by this script.\n> #\n> if ($check_id != 0) {\n> $failed = 0;\n> foreach $arg (@files) {\n> \tif (index($arg, \".\") == 0) {\n> \t next;\n> \t}\n> \t$failed += &check_version($arg);\n> }\n> if ($failed) {\n> \tprint STDERR \"\\n\";\n> \texit(1);\n> }\n> }\n>\n> # Record this directory as the last one checked. This will be used\n> # by the log_accumulate script to determine when it is processing\n> # the final directory of a multi-directory commit.\n> #\n> if ($record_directory != 0) {\n> &write_line(\"$LAST_FILE.$id\", $directory);\n> }\n> exit(0);\n> ==================================================\n>\n> log_accum:\n> ==================================================\n> #!/usr/bin/perl\n> # -*-Perl-*-\n> #\n> # Perl filter to handle the log messages from the checkin of files in\n> # a directory. This script will group the lists of files by log\n> # message, and mail a single consolidated log message at the end of\n> # the commit.\n> #\n> # This file assumes a pre-commit checking program that leaves the\n> # names of the first and last commit directories in a temporary file.\n> #\n> # Contributed by David Hampton <[email protected]>\n> #\n> # hacked greatly by Greg A. Woods <[email protected]>\n>\n> # Usage: log_accum.pl [-d] [-s] [-M module] [[-m mailto] ...] [-f logfile]\n> #\t-d\t\t- turn on debugging\n> #\t-m mailto\t- send mail to \"mailto\" (multiple)\n> #\t-M modulename\t- set module name to \"modulename\"\n> #\t-f logfile\t- write commit messages to logfile too\n> #\t-s\t\t- *don't* run \"cvs status -v\" for each file\n>\n> #\n> #\tConfigurable options\n> #\n>\n> # Set this to something that takes \"-s\"\n> $MAILER\t = \"/bin/mail\";\n>\n> # Constants (don't change these!)\n> #\n> $STATE_NONE = 0;\n> $STATE_CHANGED = 1;\n> $STATE_ADDED = 2;\n> $STATE_REMOVED = 3;\n> $STATE_LOG = 4;\n>\n> $LAST_FILE = \"/tmp/#egcscvs.lastdir\";\n>\n> $CHANGED_FILE = \"/tmp/#egcscvs.files.changed\";\n> $ADDED_FILE = \"/tmp/#egcscvs.files.added\";\n> $REMOVED_FILE = \"/tmp/#egcscvs.files.removed\";\n> $LOG_FILE = \"/tmp/#egcscvs.files.log\";\n>\n> $FILE_PREFIX = \"#egcscvs.files\";\n>\n> #\n> #\tSubroutines\n> #\n>\n> sub cleanup_tmpfiles {\n> local($wd, @files);\n>\n> $wd = `pwd`;\n> chdir(\"/tmp\") || die(\"Can't chdir('/tmp')\\n\");\n> opendir(DIR, \".\");\n> push(@files, grep(/^$FILE_PREFIX\\..*\\.$id$/, readdir(DIR)));\n> closedir(DIR);\n> foreach (@files) {\n> \tunlink $_;\n> }\n> unlink $LAST_FILE . \".\" . $id;\n>\n> chdir($wd);\n> }\n>\n> sub write_logfile {\n> local($filename, @lines) = @_;\n>\n> open(FILE, \">$filename\") || die(\"Cannot open log file $filename.\\n\");\n> print FILE join(\"\\n\", @lines), \"\\n\";\n> close(FILE);\n> }\n>\n> sub format_names {\n> local($dir, @files) = @_;\n> local(@lines);\n>\n> if ($dir =~ /^\\.\\//) {\n> \t$dir = $';\n> }\n> if ($dir =~ /\\/$/) {\n> \t$dir = $`;\n> }\n> if ($dir eq \"\") {\n> \t$dir = \".\";\n> }\n>\n> $format = \"\\t%-\" . sprintf(\"%d\", length($dir) > 15 ? length($dir) : 15) . 
\"s%s \";\n>\n> $lines[0] = sprintf($format, $dir, \":\");\n>\n> if ($debug) {\n> \tprint STDERR \"format_names(): dir = \", $dir, \"; files = \", join(\":\", @files), \".\\n\";\n> }\n> foreach $file (@files) {\n> \tif (length($lines[$#lines]) + length($file) > 65) {\n> \t $lines[++$#lines] = sprintf($format, \" \", \" \");\n> \t}\n> \t$lines[$#lines] .= $file . \" \";\n> }\n>\n> @lines;\n> }\n>\n> sub format_lists {\n> local(@lines) = @_;\n> local(@text, @files, $lastdir);\n>\n> if ($debug) {\n> \tprint STDERR \"format_lists(): \", join(\":\", @lines), \"\\n\";\n> }\n> @text = ();\n> @files = ();\n> $lastdir = shift @lines;\t# first thing is always a directory\n> if ($lastdir !~ /.*\\/$/) {\n> \tdie(\"Damn, $lastdir doesn't look like a directory!\\n\");\n> }\n> foreach $line (@lines) {\n> \tif ($line =~ /.*\\/$/) {\n> \t push(@text, &format_names($lastdir, @files));\n> \t $lastdir = $line;\n> \t @files = ();\n> \t} else {\n> \t push(@files, $line);\n> \t}\n> }\n> push(@text, &format_names($lastdir, @files));\n>\n> @text;\n> }\n>\n> sub accum_subject {\n> local(@lines) = @_;\n> local(@files, $lastdir);\n>\n> $lastdir = shift @lines;\t# first thing is always a directory\n> @files = ($lastdir);\n> if ($lastdir !~ /.*\\/$/) {\n> \tdie(\"Damn, $lastdir doesn't look like a directory!\\n\");\n> }\n> foreach $line (@lines) {\n> \tif ($line =~ /.*\\/$/) {\n> \t $lastdir = $line;\n> \t push(@files, $line);\n> \t} else {\n> \t push(@files, $lastdir . $line);\n> \t}\n> }\n>\n> @files;\n> }\n>\n> sub compile_subject {\n> local(@files) = @_;\n> local($text, @a, @b, @c, $dir, $topdir);\n>\n> # find the highest common directory\n> $dir = '-';\n> do {\n> \t$topdir = $dir;\n> \tforeach $file (@files) {\n> \t if ($file =~ /.*\\/$/) {\n> \t\tif ($dir eq '-') {\n> \t\t $dir = $file;\n> \t\t} else {\n> \t\t if (index($dir,$file) == 0) {\n> \t\t\t$dir = $file;\n> \t\t } elsif (index($file,$dir) != 0) {\n> \t\t\t@a = split /\\//,$file;\n> \t\t\t@b = split /\\//,$dir;\n> \t\t\t@c = ();\n> \t\t\tCMP: while ($#a > 0 && $#b > 0) {\n> \t\t\t if ($a[0] eq $b[0]) {\n> \t\t\t\tpush(@c, $a[0]);\n> \t\t\t\tshift @a;\n> \t\t\t\tshift @b;\n> \t\t\t } else {\n> \t\t\t\tlast CMP;\n> \t\t\t }\n> \t\t\t}\n> \t\t\t$dir = join('/',@c) . '/';\n> \t\t }\n> \t\t}\n> \t }\n> \t}\n> } until $dir eq $topdir;\n>\n> # strip out directories and the common prefix topdir.\n> chop $topdir;\n> @c = ($modulename . '/' . $topdir);\n> foreach $file (@files) {\n> \tif (!($file =~ /.*\\/$/)) {\n> \t push(@c, substr($file, length($topdir)+1));\n> \t}\n> }\n>\n> # put it together and limit the length.\n> $text = join(' ',@c);\n> if (length($text) > 50) {\n> \t$text = substr($text, 0, 46) . 
' ...';\n> }\n>\n> $text;\n> }\n>\n> sub append_names_to_file {\n> local($filename, $dir, @files) = @_;\n>\n> if (@files) {\n> \topen(FILE, \">>$filename\") || die(\"Cannot open file $filename.\\n\");\n> \tprint FILE $dir, \"\\n\";\n> \tprint FILE join(\"\\n\", @files), \"\\n\";\n> \tclose(FILE);\n> }\n> }\n>\n> sub read_line {\n> local($line);\n> local($filename) = @_;\n>\n> open(FILE, \"<$filename\") || die(\"Cannot open file $filename.\\n\");\n> $line = <FILE>;\n> close(FILE);\n> chop($line);\n> $line;\n> }\n>\n> sub read_logfile {\n> local(@text);\n> local($filename, $leader) = @_;\n>\n> open(FILE, \"<$filename\");\n> while (<FILE>) {\n> \tchop;\n> \tpush(@text, $leader.$_);\n> }\n> close(FILE);\n> @text;\n> }\n>\n> sub build_header {\n> local($header);\n> local($sec,$min,$hour,$mday,$mon,$year) = localtime(time);\n> $header = sprintf(\"CVSROOT:\\t%s\\nModule name:\\t%s\\n\",\n> \t\t $cvsroot,\n> \t\t $modulename);\n> if (defined($branch)) {\n> \t$header .= sprintf(\"Branch: \\t%s\\n\",\n> \t\t $branch);\n> }\n> $header .= sprintf(\"Changes by:\\t%s@%s\\t%02d/%02d/%02d %02d:%02d:%02d\",\n> \t\t $login, $hostdomain,\n> \t\t $year%100, $mon+1, $mday,\n> \t\t $hour, $min, $sec);\n> }\n>\n> sub mail_notification {\n> local($name, $subject, @text) = @_;\n> open(MAIL, \"| $MAILER -s \\\"$subject\\\" $name\");\n> print MAIL join(\"\\n\", @text), \"\\n\";\n> close(MAIL);\n> }\n>\n> sub write_commitlog {\n> local($logfile, @text) = @_;\n>\n> open(FILE, \">>$logfile\");\n> print FILE join(\"\\n\", @text), \"\\n\\n\";\n> close(FILE);\n> }\n>\n> #\n> #\tMain Body\n> #\n>\n> # Initialize basic variables\n> #\n> $debug = 0;\n> $id = getpgrp();\t\t# note, you *must* use a shell which does setpgrp()\n> $state = $STATE_NONE;\n> $login = $ENV{'USER'} || (getpwuid($<))[0] || \"nobody\";\n> chop($hostname = `hostname`);\n> if ($hostname !~ /\\./) {\n> chop($domainname = `domainname`);\n> $hostdomain = $hostname . \".\" . $domainname;\n> } else {\n> $hostdomain = $hostname;\n> }\n> $cvsroot = $ENV{'CVSROOT'};\n> $do_status = 1;\n> $modulename = \"\";\n>\n> # parse command line arguments (file list is seen as one arg)\n> #\n> while (@ARGV) {\n> $arg = shift @ARGV;\n>\n> if ($arg eq '-d') {\n> \t$debug = 1;\n> \tprint STDERR \"Debug turned on...\\n\";\n> } elsif ($arg eq '-m') {\n> \t$mailto = \"$mailto \" . shift @ARGV;\n> } elsif ($arg eq '-M') {\n> \t$modulename = shift @ARGV;\n> } elsif ($arg eq '-s') {\n> \t$do_status = 0;\n> } elsif ($arg eq '-f') {\n> \t($commitlog) && die(\"Too many '-f' args\\n\");\n> \t$commitlog = shift @ARGV;\n> } else {\n> \t($donefiles) && die(\"Too many arguments! Check usage.\\n\");\n> \t$donefiles = 1;\n> \t@files = split(/ /, $arg);\n> }\n> }\n> ($mailto) || die(\"No -m mail recipient specified\\n\");\n>\n> # for now, the first \"file\" is the repository directory being committed,\n> # relative to the $CVSROOT location\n> #\n> @path = split('/', $files[0]);\n>\n> # XXX there are some ugly assumptions in here about module names and\n> # XXX directories relative to the $CVSROOT location -- really should\n> # XXX read $CVSROOT/CVSROOT/modules, but that's not so easy to do, since\n> # XXX we have to parse it backwards.\n> #\n> if ($modulename eq \"\") {\n> $modulename = $path[0];\t# I.e. the module name == top-level dir\n> }\n> if ($commitlog ne \"\") {\n> $commitlog = $cvsroot . \"/\" . $modulename . \"/\" . $commitlog unless ($commitlog =~ /^\\//);\n> }\n> if ($#path == 0) {\n> $dir = \".\";\n> } else {\n> $dir = join('/', @path[1..$#path]);\n> }\n> $dir = $dir . 
\"/\";\n>\n> if ($debug) {\n> print STDERR \"module - \", $modulename, \"\\n\";\n> print STDERR \"dir - \", $dir, \"\\n\";\n> print STDERR \"path - \", join(\":\", @path), \"\\n\";\n> print STDERR \"files - \", join(\":\", @files), \"\\n\";\n> print STDERR \"id - \", $id, \"\\n\";\n> }\n>\n> # Check for a new directory first. This appears with files set as follows:\n> #\n> # files[0] - \"path/name/newdir\"\n> # files[1] - \"-\"\n> # files[2] - \"New\"\n> # files[3] - \"directory\"\n> #\n> if ($files[2] =~ /New/ && $files[3] =~ /directory/) {\n> local(@text);\n>\n> @text = ();\n> push(@text, &build_header());\n> push(@text, \"\");\n> push(@text, $files[0]);\n> push(@text, \"\");\n>\n> while (<STDIN>) {\n> \tchop;\t\t\t# Drop the newline\n> \tpush(@text, $_);\n> }\n>\n> &mail_notification($mailto, $files[0], @text);\n>\n> if ($commitlog) {\n> \t&write_commitlog($commitlog, @text);\n> }\n>\n> exit 0;\n> }\n>\n> # Iterate over the body of the message collecting information.\n> #\n> while (<STDIN>) {\n> chop;\t\t\t# Drop the newline\n>\n> if (/^Modified Files/) { $state = $STATE_CHANGED; next; }\n> if (/^Added Files/) { $state = $STATE_ADDED; next; }\n> if (/^Removed Files/) { $state = $STATE_REMOVED; next; }\n> if (/^Log Message/) { $state = $STATE_LOG; next; }\n> if (/^Revision\\/Branch/) { /^[^:]+:\\s*(.*)/; $branch = $+; next; }\n>\n> s/^[ \\t\\n]+//;\t\t# delete leading whitespace\n> s/[ \\t\\n]+$//;\t\t# delete trailing whitespace\n>\n> if ($state == $STATE_CHANGED) { push(@changed_files, split); }\n> if ($state == $STATE_ADDED) { push(@added_files, split); }\n> if ($state == $STATE_REMOVED) { push(@removed_files, split); }\n> if ($state == $STATE_LOG) { push(@log_lines, $_); }\n> }\n>\n> # Strip leading and trailing blank lines from the log message. Also\n> # compress multiple blank lines in the body of the message down to a\n> # single blank line.\n> #\n> while ($#log_lines > -1) {\n> last if ($log_lines[0] ne \"\");\n> shift(@log_lines);\n> }\n> while ($#log_lines > -1) {\n> last if ($log_lines[$#log_lines] ne \"\");\n> pop(@log_lines);\n> }\n> for ($i = $#log_lines; $i > 0; $i--) {\n> if (($log_lines[$i - 1] eq \"\") && ($log_lines[$i] eq \"\")) {\n> \tsplice(@log_lines, $i, 1);\n> }\n> }\n>\n> # Check for an import command. This appears with files set as follows:\n> #\n> # files[0] - \"path/name\"\n> # files[1] - \"-\"\n> # files[2] - \"Imported\"\n> # files[3] - \"sources\"\n> #\n> if ($files[2] =~ /Imported/ && $files[3] =~ /sources/) {\n> local(@text);\n>\n> @text = ();\n> push(@text, &build_header());\n> push(@text, \"\");\n>\n> push(@text, \"Log message:\");\n> while ($#log_lines > -1) {\n> \tpush (@text, \" \" . $log_lines[0]);\n> \tshift(@log_lines);\n> }\n>\n> &mail_notification($mailto, \"Import $file[0]\", @text);\n>\n> if ($commitlog) {\n> \t&write_commitlog($commitlog, @text);\n> }\n>\n> exit 0;\n> }\n>\n> if ($debug) {\n> print STDERR \"Searching for log file index...\";\n> }\n> # Find an index to a log file that matches this log message\n> #\n> for ($i = 0; ; $i++) {\n> local(@text);\n>\n> last if (! 
-e \"$LOG_FILE.$i.$id\"); # the next available one\n> @text = &read_logfile(\"$LOG_FILE.$i.$id\", \"\");\n> last if ($#text == -1);\t# nothing in this file, use it\n> last if (join(\" \", @log_lines) eq join(\" \", @text)); # it's the same log message as another\n> }\n> if ($debug) {\n> print STDERR \" found log file at $i.$id, now writing tmp files.\\n\";\n> }\n>\n> # Spit out the information gathered in this pass.\n> #\n> &append_names_to_file(\"$CHANGED_FILE.$i.$id\", $dir, @changed_files);\n> &append_names_to_file(\"$ADDED_FILE.$i.$id\", $dir, @added_files);\n> &append_names_to_file(\"$REMOVED_FILE.$i.$id\", $dir, @removed_files);\n> &write_logfile(\"$LOG_FILE.$i.$id\", @log_lines);\n>\n> # Check whether this is the last directory. If not, quit.\n> #\n> if ($debug) {\n> print STDERR \"Checking current dir against last dir.\\n\";\n> }\n> $_ = &read_line(\"$LAST_FILE.$id\");\n>\n> if ($_ ne $cvsroot . \"/\" . $files[0]) {\n> if ($debug) {\n> \tprint STDERR sprintf(\"Current directory %s is not last directory %s.\\n\", $cvsroot . \"/\" .$files[0], $_);\n> }\n> exit 0;\n> }\n> if ($debug) {\n> print STDERR sprintf(\"Current directory %s is last directory %s -- all commits done.\\n\", $files[0], $_);\n> }\n>\n> #\n> #\tEnd Of Commits!\n> #\n>\n> # This is it. The commits are all finished. Lump everything together\n> # into a single message, fire a copy off to the mailing list, and drop\n> # it on the end of the Changes file.\n> #\n>\n> #\n> # Produce the final compilation of the log messages\n> #\n> @text = ();\n> @status_txt = ();\n> @subject_files = ();\n> push(@text, &build_header());\n> push(@text, \"\");\n>\n> for ($i = 0; ; $i++) {\n> last if (! -e \"$LOG_FILE.$i.$id\"); # we're done them all!\n> @lines = &read_logfile(\"$CHANGED_FILE.$i.$id\", \"\");\n> if ($#lines >= 0) {\n> \tpush(@text, \"Modified files:\");\n> \tpush(@text, &format_lists(@lines));\n> \tpush(@subject_files, &accum_subject(@lines));\n> }\n> @lines = &read_logfile(\"$ADDED_FILE.$i.$id\", \"\");\n> if ($#lines >= 0) {\n> \tpush(@text, \"Added files:\");\n> \tpush(@text, &format_lists(@lines));\n> \tpush(@subject_files, &accum_subject(@lines));\n> }\n> @lines = &read_logfile(\"$REMOVED_FILE.$i.$id\", \"\");\n> if ($#lines >= 0) {\n> \tpush(@text, \"Removed files:\");\n> \tpush(@text, &format_lists(@lines));\n> \tpush(@subject_files, &accum_subject(@lines));\n> }\n> if ($#text >= 0) {\n> \tpush(@text, \"\");\n> }\n> @lines = &read_logfile(\"$LOG_FILE.$i.$id\", \"\\t\");\n> if ($#lines >= 0) {\n> \tpush(@text, \"Log message:\");\n> \tpush(@text, @lines);\n> \tpush(@text, \"\");\n> }\n> if ($do_status) {\n> \tlocal(@changed_files);\n>\n> \t@changed_files = ();\n> \tpush(@changed_files, &read_logfile(\"$CHANGED_FILE.$i.$id\", \"\"));\n> \tpush(@changed_files, &read_logfile(\"$ADDED_FILE.$i.$id\", \"\"));\n> \tpush(@changed_files, &read_logfile(\"$REMOVED_FILE.$i.$id\", \"\"));\n>\n> \tif ($debug) {\n> \t print STDERR \"main: pre-sort changed_files = \", join(\":\", @changed_files), \".\\n\";\n> \t}\n> \tsort(@changed_files);\n> \tif ($debug) {\n> \t print STDERR \"main: post-sort changed_files = \", join(\":\", @changed_files), \".\\n\";\n> \t}\n>\n> \tforeach $dofile (@changed_files) {\n> \t if ($dofile =~ /\\/$/) {\n> \t\tnext;\t\t# ignore the silly \"dir\" entries\n> \t }\n> \t if ($debug) {\n> \t\tprint STDERR \"main(): doing status on $dofile\\n\";\n> \t }\n> \t open(STATUS, \"-|\") || exec 'cvs', '-n', 'status', '-Qqv', $dofile;\n> \t while (<STATUS>) {\n> \t\tchop;\n> \t\tpush(@status_txt, $_);\n> \t }\n> \t}\n> 
}\n> }\n>\n> $subject_txt = &compile_subject(@subject_files);\n>\n> # Write to the commitlog file\n> #\n> if ($commitlog) {\n> &write_commitlog($commitlog, @text);\n> }\n>\n> if ($#status_txt >= 0) {\n> push(@text, @status_txt);\n> }\n>\n> # Mailout the notification.\n> #\n> &mail_notification($mailto, $subject_txt, @text);\n>\n> # cleanup\n> #\n> if (! $debug) {\n> &cleanup_tmpfiles();\n> }\n>\n> exit 0;\n> ==================================================\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Fri, 12 Jan 2001 20:56:19 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CVS updates on committers list..."
},
{
"msg_contents": "The Hermit Hacker writes:\n\n> Done ... just tried adding white space to three files, in three\n> directories, and it seems to go through as one commit ...\n>\n> let's see what happens next time Tom makes a big change :)\n\nWhen Peter makes a small change, this happens:\n\n/home/projects/pgsql/cvsroot/CVSROOT/log_accum: not found\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 13 Jan 2001 03:24:23 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CVS updates on committers list..."
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n\n> The Hermit Hacker writes:\n> \n> > Done ... just tried adding white space to three files, in three\n> > directories, and it seems to go through as one commit ...\n> >\n> > let's see what happens next time Tom makes a big change :)\n> \n> When Peter makes a small change, this happens:\n> \n> /home/projects/pgsql/cvsroot/CVSROOT/log_accum: not found\n\nOdd that it would work once.\n\nIs log_accum listed in the CVSROOT/checkoutlist file?\n\nWhat files are in /home/projects/pgsql/cvsroot/CVSROOT on the CVS\nserver?\n\nIan\n",
"msg_date": "12 Jan 2001 18:28:15 -0800",
"msg_from": "Ian Lance Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CVS updates on committers list..."
},
{
"msg_contents": "Ian Lance Taylor writes:\n\n> > /home/projects/pgsql/cvsroot/CVSROOT/log_accum: not found\n>\n> Odd that it would work once.\n>\n> Is log_accum listed in the CVSROOT/checkoutlist file?\n\ncommit_prep Won't be able to do mail logging.\nlog_accum Won't be able to do mail logging.\n\n> What files are in /home/projects/pgsql/cvsroot/CVSROOT on the CVS\n> server?\n\nThere is only 'log_accum.pl,v', but nothing without the '.pl' or without\nthe ',v'.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 13 Jan 2001 03:59:38 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CVS updates on committers list..."
},
{
"msg_contents": "\nokay, got all of this fixed up ... I missed the 'checkoutlist' file, and\nthe mail program was pointing at the wrong place :(\n\ntesting now ...\n\nOn Sat, 13 Jan 2001, Peter Eisentraut wrote:\n\n> Ian Lance Taylor writes:\n>\n> > > /home/projects/pgsql/cvsroot/CVSROOT/log_accum: not found\n> >\n> > Odd that it would work once.\n> >\n> > Is log_accum listed in the CVSROOT/checkoutlist file?\n>\n> commit_prep Won't be able to do mail logging.\n> log_accum Won't be able to do mail logging.\n>\n> > What files are in /home/projects/pgsql/cvsroot/CVSROOT on the CVS\n> > server?\n>\n> There is only 'log_accum.pl,v', but nothing without the '.pl' or without\n> the ',v'.\n>\n> --\n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Fri, 12 Jan 2001 23:11:43 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CVS updates on committers list..."
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n\n> Ian Lance Taylor writes:\n> \n> > > /home/projects/pgsql/cvsroot/CVSROOT/log_accum: not found\n> >\n> > Odd that it would work once.\n> >\n> > Is log_accum listed in the CVSROOT/checkoutlist file?\n> \n> commit_prep Won't be able to do mail logging.\n> log_accum Won't be able to do mail logging.\n> \n> > What files are in /home/projects/pgsql/cvsroot/CVSROOT on the CVS\n> > server?\n> \n> There is only 'log_accum.pl,v', but nothing without the '.pl' or without\n> the ',v'.\n\nIt sounds like the file checked in was log_accum.pl. Either it should\nbe log_accum, or checkoutlist and loginfo should be changed to use\nlog_accum.pl instead of log_accum.\n\nIan\n",
"msg_date": "12 Jan 2001 19:22:51 -0800",
"msg_from": "Ian Lance Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CVS updates on committers list..."
}
]
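The thread above comes down to a naming mismatch between the CVSROOT control files and the script actually checked in: the hook has to appear in checkoutlist (so CVS keeps a plain working copy next to the ,v file) under exactly the same name that commitinfo/loginfo invoke (log_accum vs. log_accum.pl). A rough sketch of the entries involved; the paths, flags and mail address are illustrative, not the actual hub.org configuration:

```
# CVSROOT/checkoutlist -- keep checked-out copies of the hook scripts
#   format: <filename> <message printed if the checkout fails>
commit_prep     Won't be able to do mail logging.
log_accum       Won't be able to do mail logging.

# CVSROOT/commitinfo -- run the preparation script before each commit
ALL     $CVSROOT/CVSROOT/commit_prep

# CVSROOT/loginfo -- run the accumulator after each commit in each directory
ALL     $CVSROOT/CVSROOT/log_accum -m [email protected] %{sVv}
```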
|
[
{
"msg_contents": "I have just finished trudging through a bunch of code and trying to make\nit secure against being interrupted by die() at arbitrary instants.\nHowever, I am under no illusion that I have succeeded in making the\nworld safe for SIGTERM, and you shouldn't be either. There is just way\ntoo much code that is potentially invoked during proc_exit; even if we\nfixed every line of our code, there's C library code that's not under\nour control. For example, malloc/free are not interrupt-safe on many\nplatforms, last I heard. Do you want to put START/END_CRIT_SECTION\naround every memory allocation operation? I don't.\n\nI think we'd be lots better off to abandon the notion that we can exit\ndirectly from the SIGTERM interrupt handler, and instead treat SIGTERM\nthe same way we treat QueryCancel: set a flag that is inspected at\nspecific places where we know we are in a good state.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 17:01:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> \n> I have just finished trudging through a bunch of code and trying to make\n> it secure against being interrupted by die() at arbitrary instants.\n\nIt seems that START/END_CRIT_SECTION() is called in both\nexistent and newly added places.\nIsn't it appropriate to call a diffrent macro using a separate \nCriticalSectionCount variable in newly added places ?\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Sun, 14 Jan 2001 22:01:01 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Isn't it appropriate to call a diffrent macro using a separate \n> CriticalSectionCount variable in newly added places ?\n\nWhy? What difference do you see in the nature of the critical sections?\nThey all look the same to me: hold off cancel/die response.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Jan 2001 11:21:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Isn't it appropriate to call a diffrent macro using a separate \n> > CriticalSectionCount variable in newly added places ?\n> \n> Why? What difference do you see in the nature of the critical sections?\n> They all look the same to me: hold off cancel/die response.\n>\n\nI've thought that the main purpose of CRIT_SECTION is to\nforce redo recovery for any errors during the CRIT_SECTION\nto complete the critical operation e.g. bt_split(). Note that\nelog(ERROR/FATAL) is changed to elog(STOP) if Critical\nSectionCount > 0. Postgres 7.1 stll lacks an undo functionality\nand AbortTransaction() does little about rolling back the\ntransaction. PostgreSQL seems to have to retry the critical\noperation by running a redo recovery after killing all backends.\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Mon, 15 Jan 2001 02:03:08 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> Why? What difference do you see in the nature of the critical sections?\n>> They all look the same to me: hold off cancel/die response.\n\n> I've thought that the main purpose of CRIT_SECTION is to\n> force redo recovery for any errors during the CRIT_SECTION\n> to complete the critical operation e.g. bt_split().\n\nHow could it force redo? Rollback, maybe, but that should happen\nanyway.\n\n> Note that elog(ERROR/FATAL) is changed to elog(STOP) if Critical\n> SectionCount > 0.\n\nNot in current sources ;-).\n\nPerhaps Vadim will say that I broke his error scheme, but if so it's\nhis own fault for not documenting such delicate code at all. I believe\nhe's out of town this weekend, so let's wait till he gets back and then\ndiscuss it some more. Perhaps there is a need to distinguish xlog-\nrelated critical sections from other ones, or perhaps not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Jan 2001 12:21:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea "
},
{
"msg_contents": "Hmm, I've seen neither my posting nor your reply\non hackers ML.\n\nTom Lane wrote:\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> Why? What difference do you see in the nature of the critical sections?\n> >> They all look the same to me: hold off cancel/die response.\n> \n> > I've thought that the main purpose of CRIT_SECTION is to\n> > force redo recovery for any errors during the CRIT_SECTION\n> > to complete the critical operation e.g. bt_split().\n> \n> How could it force redo?\n\nDoesn't proc_exit(non-zero) force shuttdown recovery ?\nAFAIK, Postgres doesn't have a rollback recovery \nfunctionality yet.\n\n> Rollback, maybe, but that should happen\n> anyway.\n> \n> > Note that elog(ERROR/FATAL) is changed to elog(STOP) if Critical\n> > SectionCount > 0.\n> \n> Not in current sources ;-).\n>\n\nOh you removed the code 20 hours ago. AFAIK, the (equivalent)\ncode has lived there from the first appearance of CRIT_SECTION.\nIs there any reason to remove the code ?\n \nRegards.\nHiroshi Inoue\n",
"msg_date": "Mon, 15 Jan 2001 10:23:27 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad\n idea"
},
{
"msg_contents": "Hiroshi Inoue <[email protected]> writes:\n>>>> I've thought that the main purpose of CRIT_SECTION is to\n>>>> force redo recovery for any errors during the CRIT_SECTION\n>>>> to complete the critical operation e.g. bt_split().\n>> \n>> How could it force redo?\n\n> Doesn't proc_exit(non-zero) force shuttdown recovery ?\n\nIt forces a shutdown and restart, but that does not do anything good\nthat I can see. The WAL log entry hasn't been made, typically, so there\nis nothing to redo. If there *were* a log entry, and the redo failed\nagain (pretty likely), then we'd have an infinite crash/try to\nrestart/crash cycle, which is just about the worst possible behavior.\nSo I'm not seeing what the point is.\n\n> Oh you removed the code 20 hours ago. AFAIK, the (equivalent)\n> code has lived there from the first appearance of CRIT_SECTION.\n> Is there any reason to remove the code ?\n\nBecause I think turning an elog(ERROR) into a system-wide crash is\nnot a good idea ;-). If you are correct that this behavior is necessary\nfor WAL-related critical sections, then indeed we need two kinds of\ncritical sections, one that just holds off cancel/die response and one\nthat turns elog(ERROR) into a dangerous weapon. I'm going to wait and\nsee Vadim's response before I do anything ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Jan 2001 20:41:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <[email protected]> writes:\n> >>>> I've thought that the main purpose of CRIT_SECTION is to\n> >>>> force redo recovery for any errors during the CRIT_SECTION\n> >>>> to complete the critical operation e.g. bt_split().\n> >>\n> >> How could it force redo?\n> \n> > Doesn't proc_exit(non-zero) force shuttdown recovery ?\n> \n> It forces a shutdown and restart, but that does not do anything good\n> that I can see. The WAL log entry hasn't been made, typically, so there\n> is nothing to redo. If there *were* a log entry, and the redo failed\n> again (pretty likely), then we'd have an infinite crash/try to\n> restart/crash cycle, which is just about the worst possible behavior.\n> So I'm not seeing what the point is.\n> \n\nIt seems a nature of 7.1 recovery scheme.\nOnce a WAL log entry is made, recovery should \ncomplete the log in regardless of the cause of\nrecovery(elog, system error like SEGV etc).\n\nI've wondered why no one has asked how we could\nrecover from a recovery failure. Unfortunately,\nI don't know the answer. Recovery failure seems\nveeeeery serious because postmaster couldn't\nstart if the startup recovery fails.\nIn addtion I have another anxiety. I don't know\nhow robust WAL is against general bugs not\ndirectly related to WAL.\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Mon, 15 Jan 2001 11:57:06 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad\n idea"
}
]
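The two behaviours being debated in this thread are easy to conflate, so a compact sketch may help. This is illustrative C only, not the 7.1 source; the names follow the discussion above and the elog levels are stand-ins:

```c
/* Sketch of the two possible effects of a critical-section counter. */
enum ElogLevel { NOTICE, ERROR, FATAL, STOP };      /* stand-ins for elog levels */

static volatile int CritSectionCount = 0;

#define START_CRIT_SECTION()    (CritSectionCount++)
#define END_CRIT_SECTION()      (CritSectionCount--)

/* Effect 1 (what Tom wants to keep): deferred cancel/die requests are
 * only honoured while no critical section is open. */
#define SAFE_TO_INTERRUPT()     (CritSectionCount == 0)

/* Effect 2 (what Hiroshi describes, removed in current sources): inside a
 * critical section an ordinary error is escalated to a forced restart, so
 * that redo recovery can complete a half-done operation such as a btree
 * split whose WAL record has already been written. */
static enum ElogLevel
effective_elog_level(enum ElogLevel requested)
{
    if ((requested == ERROR || requested == FATAL) && CritSectionCount > 0)
        return STOP;
    return requested;
}
```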
|
[
{
"msg_contents": "> I think we'd be lots better off to abandon the notion that we can exit\n> directly from the SIGTERM interrupt handler, and instead treat SIGTERM\n> the same way we treat QueryCancel: set a flag that is inspected at\n> specific places where we know we are in a good state.\n> \n> Comments?\n\nThis will be much cleaner.\n\nVadim\n",
"msg_date": "Fri, 12 Jan 2001 14:09:05 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: SIGTERM -> elog(FATAL) -> proc_exit() is probably a\n\t bad idea"
},
{
"msg_contents": "> -----Original Message-----\n> From: Mikheev, Vadim\n> \n> > I think we'd be lots better off to abandon the notion that we can exit\n> > directly from the SIGTERM interrupt handler, and instead treat SIGTERM\n> > the same way we treat QueryCancel: set a flag that is inspected at\n> > specific places where we know we are in a good state.\n> > \n> > Comments?\n> \n> This will be much cleaner.\n> \n\nHmm, CancelQuery isn't so urgent an operation currently.\nFor example, VACUUM checks QueryCancel flag only\nonce per table.\n\nRegards.\nHiroshi Inoue \n",
"msg_date": "Sat, 13 Jan 2001 08:59:34 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea"
},
{
"msg_contents": ">> I think we'd be lots better off to abandon the notion that we can exit\n>> directly from the SIGTERM interrupt handler, and instead treat SIGTERM\n>> the same way we treat QueryCancel: set a flag that is inspected at\n>> specific places where we know we are in a good state.\n>> \n>> Comments?\n\n> This will be much cleaner.\n\nOK, here's a more detailed proposal.\n\nWe'll eliminate elog() directly from the signal handlers, except in the\nextremely limited case where the main line is waiting for a lock (the\nexisting \"cancel wait for lock\" mechanism for QueryCancel can be used\nfor SIGTERM as well, I think). Otherwise, both die() and\nQueryCancelHandler() will just set flags and return. handle_warn()\n(the SIGQUIT handler) should probably go away entirely; it does nothing\nthat's not done better by QueryCancel. I'm inclined to make SIGQUIT\ninvoke die() the same as SIGTERM does, unless someone has a better idea\nwhat to do with that signal.\n\nI believe we should keep the \"critical section count\" mechanism that\nVadim already created, even though it is no longer needed to discourage\nthe signal handlers themselves from aborting processing. With the\ncritical section counter, it is OK to have flag checks inside subroutines\nthat are safe to abort from in some contexts and not others --- the\ncontexts that don't want an abort just have to do START/END_CRIT_SECTION\nto ensure that QueryCancel/ProcDie won't happen in routines they call.\nSo the basic flag checks are like \"if (InterruptFlagSet &&\nCritSectionCounter == 0) elog(...);\".\n\nHaving done that, the $64 question is where to test the flags.\n\nObviously we can check for interrupts at all the places where\nQueryCancel is tested now, which is primarily the outer loops.\nI suggest we also check for interrupts in s_lock's wait loop (ie, we can\ncancel/die if we haven't yet got the lock AND we are not in a crit\nsection), as well as in END_CRIT_SECTION.\n\nI intend to devise a macro CHECK_FOR_INTERRUPTS() that embodies\nall the test code, rather than duplicating these if-tests in\nmany places.\n\nNote I am assuming that it's always reasonable to check for QueryCancel\nand ProcDie at the same places. I do not see any hole in that theory,\nbut if someone finds one, we could introduce additional flags/counters\nto distinguish safe-to-cancel from safe-to-die states.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 19:01:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Hmm, CancelQuery isn't so urgent an operation currently.\n> For example, VACUUM checks QueryCancel flag only\n> once per table.\n\nRight, we'll need to check in more places. See my just-posted proposal.\nChecking at any spinlock grab should ensure that we check reasonably\noften.\n\nI just realized I forgot to mention the case of SIGTERM while the main\nline is waiting for input from the frontend --- obviously we want to\nquit immediately in that case, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 19:03:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea "
}
]
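A bare-bones sketch of the scheme proposed above: the signal handlers only set flags, and the flags are acted on at known-safe spots through a single macro. Identifiers are illustrative, not the eventual PostgreSQL implementation:

```c
#include <signal.h>

static volatile sig_atomic_t InterruptPending = 0;
static volatile sig_atomic_t QueryCancelPending = 0;
static volatile sig_atomic_t ProcDiePending = 0;
static volatile int CritSectionCount = 0;

static void
die(int signo)                          /* SIGTERM (and SIGQUIT) handler */
{
    InterruptPending = 1;
    ProcDiePending = 1;                 /* no elog()/proc_exit() from here */
}

static void
cancel_query(int signo)                 /* QueryCancel handler */
{
    InterruptPending = 1;
    QueryCancelPending = 1;
}

static void
ProcessInterrupts(void)
{
    InterruptPending = 0;
    if (ProcDiePending)
    {
        ProcDiePending = 0;
        /* the real thing would elog(FATAL, ...) here and exit cleanly */
    }
    else if (QueryCancelPending)
    {
        QueryCancelPending = 0;
        /* the real thing would elog(ERROR, "Query was cancelled.") here */
    }
}

/* Sprinkled through outer loops, s_lock() wait loops, END_CRIT_SECTION, ... */
#define CHECK_FOR_INTERRUPTS() \
    do { \
        if (InterruptPending && CritSectionCount == 0) \
            ProcessInterrupts(); \
    } while (0)
```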
|
[
{
"msg_contents": "> > You know - this is *core* assumption. If drive lies about this then\n> > *nothing* will help you. Do you remember core rule of WAL?\n> > \"Changes must be logged *before* changed data pages written\".\n> > If this rule will be broken then data files will be inconsistent\n> > after crash recovery and you will not notice this, w/wo CRC in\n> > data blocks.\n> \n> You can include the data blocks' CRCs in the log entries.\n\nHow could it help?\n\nVadim\n",
"msg_date": "Fri, 12 Jan 2001 14:16:07 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: CRCs"
},
{
"msg_contents": "On Fri, Jan 12, 2001 at 02:16:07PM -0800, Mikheev, Vadim wrote:\n> > > You know - this is *core* assumption. If drive lies about this then\n> > > *nothing* will help you. Do you remember core rule of WAL?\n> > > \"Changes must be logged *before* changed data pages written\".\n> > > If this rule will be broken then data files will be inconsistent\n> > > after crash recovery and you will not notice this, w/wo CRC in\n> > > data blocks.\n> > \n> > You can include the data blocks' CRCs in the log entries.\n> \n> How could it help?\n\nIt wouldn't help you recover, but you would be able to report that \nyou cannot recover.\n\nTo be more specific, if the blocks referenced in the log are partially \nwritten, their CRCs will (probably) be wrong. If they are not \nphysically written at all, their CRCs will be correct but will \nnot match what is in the log. In either case the user will know \nimmediately that the database has been corrupted, and must fall \nback on a failover image or backup.\n\nIt would be no bad thing to include the CRC of the block referenced\nwherever in the file format that a block reference lives.\n\nNathan Myers\[email protected]\n",
"msg_date": "Fri, 12 Jan 2001 14:54:45 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRCs"
},
{
"msg_contents": "[email protected] (Nathan Myers) writes:\n>>>>>> \"Changes must be logged *before* changed data pages written\".\n>>>>>> If this rule will be broken then data files will be inconsistent\n>>>>>> after crash recovery and you will not notice this, w/wo CRC in\n>>>>>> data blocks.\n>>>> \n>>>> You can include the data blocks' CRCs in the log entries.\n>> \n>> How could it help?\n\n> It wouldn't help you recover, but you would be able to report that \n> you cannot recover.\n\nHow? The scenario Vadim is pointing out is where the disk drive writes\na changed data block in advance of the WAL log entry describing the\nchange. Then power drops and the WAL entry never gets made. At\nrestart, how will you realize that that data block now contains data you\ndon't want? There's not even a log entry telling you you need to look\nat it, much less one that tells you what should be in it.\n\nAFAICS, disk-block CRCs do not guard against mishaps involving intended\nwrites. They will help guard against data corruption that might creep\nin due to outside factors, however.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 18:06:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs "
},
{
"msg_contents": "On Fri, Jan 12, 2001 at 06:06:21PM -0500, Tom Lane wrote:\n> [email protected] (Nathan Myers) writes:\n> >>>>>> \"Changes must be logged *before* changed data pages written\".\n> >>>>>> If this rule will be broken then data files will be inconsistent\n> >>>>>> after crash recovery and you will not notice this, w/wo CRC in\n> >>>>>> data blocks.\n> >>>> \n> >>>> You can include the data blocks' CRCs in the log entries.\n> >> \n> >> How could it help?\n> \n> > It wouldn't help you recover, but you would be able to report that \n> > you cannot recover.\n> \n> How? The scenario Vadim is pointing out is where the disk drive writes\n> a changed data block in advance of the WAL log entry describing the\n> change. Then power drops and the WAL entry never gets made. At\n> restart, how will you realize that that data block now contains data you\n> don't want? There's not even a log entry telling you you need to look\n> at it, much less one that tells you what should be in it.\n\nOK. In that case, recent transactions that were acknowledged to user \nprograms just disappear. The database isn't corrupt, but it doesn't\ncontain what the user believes is in it.\n\nThe only way I can think of to guard against that is to have a sequence\nnumber in each acknowledgement sent to users, and also reported when the \ndatabase recovers. If users log their ACK numbers, they can be compared\nwhen the database comes back up.\n\nObviously it's better to configure the disk so that it doesn't lie about\nwhat's been written.\n\n> AFAICS, disk-block CRCs do not guard against mishaps involving intended\n> writes. They will help guard against data corruption that might creep\n> in due to outside factors, however.\n\nRight. \n\nNathan Myers\[email protected]\n",
"msg_date": "Fri, 12 Jan 2001 15:48:30 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRCs"
},
{
"msg_contents": "* Nathan Myers <[email protected]> [010112 15:49] wrote:\n> On Fri, Jan 12, 2001 at 06:06:21PM -0500, Tom Lane wrote:\n> > [email protected] (Nathan Myers) writes:\n> > >>>>>> \"Changes must be logged *before* changed data pages written\".\n> > >>>>>> If this rule will be broken then data files will be inconsistent\n> > >>>>>> after crash recovery and you will not notice this, w/wo CRC in\n> > >>>>>> data blocks.\n> > >>>> \n> > >>>> You can include the data blocks' CRCs in the log entries.\n> > >> \n> > >> How could it help?\n> > \n> > > It wouldn't help you recover, but you would be able to report that \n> > > you cannot recover.\n> > \n> > How? The scenario Vadim is pointing out is where the disk drive writes\n> > a changed data block in advance of the WAL log entry describing the\n> > change. Then power drops and the WAL entry never gets made. At\n> > restart, how will you realize that that data block now contains data you\n> > don't want? There's not even a log entry telling you you need to look\n> > at it, much less one that tells you what should be in it.\n> \n> OK. In that case, recent transactions that were acknowledged to user \n> programs just disappear. The database isn't corrupt, but it doesn't\n> contain what the user believes is in it.\n> \n> The only way I can think of to guard against that is to have a sequence\n> number in each acknowledgement sent to users, and also reported when the \n> database recovers. If users log their ACK numbers, they can be compared\n> when the database comes back up.\n> \n> Obviously it's better to configure the disk so that it doesn't lie about\n> what's been written.\n\nI thought WAL+fsync wasn't supposed to allow this to happen?\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Fri, 12 Jan 2001 16:10:36 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs"
},
{
"msg_contents": "On Fri, Jan 12, 2001 at 04:10:36PM -0800, Alfred Perlstein wrote:\n> Nathan Myers <[email protected]> [010112 15:49] wrote:\n> >\n> > Obviously it's better to configure the disk so that it doesn't\n> > lie about what's been written.\n> \n> I thought WAL+fsync wasn't supposed to allow this to happen?\n\nIt's an OS and hardware configuration matter; you only get correct\nWAL+fsync semantics if the underlying system is configured right. \nIDE disks are almost always configured wrong, to spoof benchmarks; \nSCSI disks sometimes are.\n\nIf they're configured wrong, then (now that we have a CRC in the \nlog entry) in the event of a power outage the database might come \nback with recently-acknowledged transaction results discarded.\nThat's a lot better than a corrupt database, but it's not \nindustrial-grade semantics. (Use a UPS.)\n\nNathan Myers\[email protected]\n",
"msg_date": "Fri, 12 Jan 2001 16:43:10 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRCs"
},
{
"msg_contents": "Nathan Myers wrote:\n> \n> It wouldn't help you recover, but you would be able to report that\n> you cannot recover.\n\nWhile this could help decting hardware problems, you still won't be able\nto detect some (many) memory errors because the CRC will be calculated\non the already corrupted data.\n\nOf course there are other situations where CRC will not match and\nappropriately logged is a reliable heads-up warning.\n\nBye!\n\n-- \n Daniele\n",
"msg_date": "Sat, 13 Jan 2001 01:41:53 +0000",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs"
},
{
"msg_contents": ">> AFAICS, disk-block CRCs do not guard against mishaps involving intended\n>> writes. They will help guard against data corruption that might creep\n>> in due to outside factors, however.\n\n> Right. \n\nGiven that we seem to have agreed on that, I withdraw my complaint about\ndisk-block-CRC not being in there for 7.1. I think we are still a ways\naway from the point where externally-induced corruption is a major share\nof our failure rate ;-). 7.2 or so will be time enough to add this\nfeature, and I'd really rather not force another initdb for 7.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 23:30:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs "
},
{
"msg_contents": "On Fri, Jan 12, 2001 at 11:30:30PM -0500, Tom Lane wrote:\n> >> AFAICS, disk-block CRCs do not guard against mishaps involving intended\n> >> writes. They will help guard against data corruption that might creep\n> >> in due to outside factors, however.\n> \n> > Right. \n> \n> Given that we seem to have agreed on that, I withdraw my complaint about\n> disk-block-CRC not being in there for 7.1. I think we are still a ways\n> away from the point where externally-induced corruption is a major share\n> of our failure rate ;-). 7.2 or so will be time enough to add this\n> feature, and I'd really rather not force another initdb for 7.1.\n\nMore to the point, putting CRCs on data blocks might have unintended\nconsequences for dump or vacuum processes. 7.1 is a monumental \naccomplishment even without corruption detection, and the sooner\nthe world has it, the better.\n\nNathan Myers\[email protected]\n",
"msg_date": "Sat, 13 Jan 2001 01:36:47 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRCs"
}
]
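For concreteness, the disk-block CRC being weighed in this thread amounts to something like the following: compute a checksum over the whole block with the checksum field itself zeroed, set it just before the block is written, and verify it right after the block is read. This is a from-scratch sketch with a hypothetical page layout and a plain CRC-32, not the scheme PostgreSQL later adopted:

```c
#include <stddef.h>
#include <stdint.h>

#define BLCKSZ 8192

/* Hypothetical page layout: the checksum lives in the first four bytes. */
typedef struct PageSketch {
    uint32_t pd_crc;
    char     pd_rest[BLCKSZ - sizeof(uint32_t)];
} PageSketch;

/* Plain reflected CRC-32 (polynomial 0xEDB88320), computed bit by bit. */
static uint32_t
crc32_buf(const unsigned char *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    size_t   i;
    int      bit;

    for (i = 0; i < len; i++)
    {
        crc ^= buf[i];
        for (bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

/* The checksum covers the whole block with the CRC field itself zeroed. */
static uint32_t
page_compute_crc(const PageSketch *page)
{
    PageSketch tmp = *page;             /* 8 kB stack copy; fine for a sketch */

    tmp.pd_crc = 0;
    return crc32_buf((const unsigned char *) &tmp, sizeof(tmp));
}

/* Set just before the block is written out ... */
static void
page_set_crc(PageSketch *page)
{
    page->pd_crc = page_compute_crc(page);
}

/* ... and verified right after it is read back.  A mismatch flags damage
 * to the stored block, but says nothing about the ordering of writes. */
static int
page_crc_ok(const PageSketch *page)
{
    return page->pd_crc == page_compute_crc(page);
}
```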
|
[
{
"msg_contents": "> > It wouldn't help you recover, but you would be able to report that \n> > you cannot recover.\n> \n> How? The scenario Vadim is pointing out is where the disk \n> drive writes a changed data block in advance of the WAL log entry\n> describing the change. Then power drops and the WAL entry never gets\n> made. At restart, how will you realize that that data block now\n> contains data you don't want? There's not even a log entry telling\n> you you need to look at it, much less one that tells you what should\n> be in it.\n> \n> AFAICS, disk-block CRCs do not guard against mishaps involving intended\n> writes. They will help guard against data corruption that might creep\n> in due to outside factors, however.\n\nI couldn't describe better -:)\n\nVadim\n",
"msg_date": "Fri, 12 Jan 2001 15:14:50 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: CRCs "
}
]
|
[
{
"msg_contents": "> > How? The scenario Vadim is pointing out is where the disk \n> > drive writes a changed data block in advance of the WAL log\n> > entry describing the change. Then power drops and the WAL\n> > entry never gets made. At restart, how will you realize that\n> > that data block now contains data you don't want? There's not\n> > even a log entry telling you you need to look at it, much less\n> > one that tells you what should be in it.\n> \n> OK. In that case, recent transactions that were acknowledged to user \n> programs just disappear. The database isn't corrupt, but it doesn't\n> contain what the user believes is in it.\n\nExample.\n\n1. Tuple was inserted into index.\n2. Looking for free buffer bufmgr decides to write index block.\n3. Following WAL core rule bufmgr first calls XLogFlush() to write\n and fsync log record related to index tuple insertion.\n4. *Beliving* that log record is on disk now (after successful fsync)\n bufmgr writes index block.\n\nIf log record was not really flushed on disk in 3. but on-disk image of\nindex block was updated in 4. and system crashed after this then after\nrestart recovery you'll have unlawful index tuple pointing to where?\nWho knows! No guarantee that corresponding heap tuple was flushed on\ndisk.\n\nIsn't database corrupted now?\n\nVadim\n",
"msg_date": "Fri, 12 Jan 2001 16:38:37 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: CRCs"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> If log record was not really flushed on disk in 3. but on-disk image of\n> index block was updated in 4. and system crashed after this then after\n> restart recovery you'll have unlawful index tuple pointing to where?\n> Who knows! No guarantee that corresponding heap tuple was flushed on\n> disk.\n\nThis example doesn't seem very convincing. Wouldn't the XLOG entry\ndescribing creation of the heap tuple appear in the log before the one\nfor the index tuple? Or are you assuming that both these XLOG entries\nare lost due to disk drive malfeasance?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Jan 2001 19:48:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs "
},
{
"msg_contents": "On Fri, Jan 12, 2001 at 04:38:37PM -0800, Mikheev, Vadim wrote:\n> Example.\n> 1. Tuple was inserted into index.\n> 2. Looking for free buffer bufmgr decides to write index block.\n> 3. Following WAL core rule bufmgr first calls XLogFlush() to write\n> and fsync log record related to index tuple insertion.\n> 4. *Believing* that log record is on disk now (after successful fsync)\n> bufmgr writes index block.\n> \n> If log record was not really flushed on disk in 3. but on-disk image of\n> index block was updated in 4. and system crashed after this then after\n> restart recovery you'll have unlawful index tuple pointing to where?\n> Who knows! No guarantee that corresponding heap tuple was flushed on\n> disk.\n> \n> Isn't database corrupted now?\n\nNote, I haven't read the WAL code, so much of what I've said is based \non what I know is and isn't possible with logging, rather than on \nVadim's actual choices. I know it's *possible* to implement a logging \ndatabase which can maintain consistency without need for strict write \nordering; but without strict write ordering, it is not possible to \nguarantee durable transactions. That is, after a power outage, such \na database may be guaranteed to recover uncorrupted, but some number \n(>= 0) of the last few acknowledged/committed transactions may be lost.\n\nVadim's implementation assumes strict write ordering, so that (e.g.) \nwith IDE disks a corrupt database is possible in the event of a power \noutage. (Database and OS crashes don't count; those don't keep the \nblocks from finding their way from on-disk buffers to disk.) This is \nno criticism; it is more efficient to assume strict write ordering, \nand a database that can lose (the last few) committed transactions \nhas limited value.\n\nTo achieve disk write-order independence is probably not a worthwhile \ngoal, but for systems that cannot provide strict write ordering (e.g., \nmost PCs) it would be helpful to be able to detect that the database \nhas become corrupted. In Vadim's example above, if the index were to\ncontain not only the heap blocks' numbers, but also their CRCs, then \nthe corruption could be detected when the index is used. When the \nblock is read in, its CRC is checked, and when it is referenced via \nthe index, the two CRC values are simply compared and the corruption\nis revealed. \n\nOn a machine that does provide strict write ordering, the CRCs in the \nindex might be unnecessary overhead, but they also provide cross-checks\nto help detect corruption introduced by bugs and whatnot.\n\nOr maybe I don't know what I'm talking about. \n\nNathan Myers\[email protected]\n",
"msg_date": "Sat, 13 Jan 2001 02:47:53 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRCs"
},
{
"msg_contents": "[email protected] (Nathan Myers) writes:\n> To achieve disk write-order independence is probably not a worthwhile \n> goal, but for systems that cannot provide strict write ordering (e.g., \n> most PCs) it would be helpful to be able to detect that the database \n> has become corrupted. In Vadim's example above, if the index were to\n> contain not only the heap blocks' numbers, but also their CRCs, then \n> the corruption could be detected when the index is used. When the \n> block is read in, its CRC is checked, and when it is referenced via \n> the index, the two CRC values are simply compared and the corruption\n> is revealed. \n\nA row-level CRC might be useful for this, but it would have to be on\nthe data only (not the tuple commit-status bits). It'd be totally\nimpractical with a block CRC, I think. To do it with a block CRC, every\ntime you changed *anything* in a heap page, you'd have to find all the\nindex items for each row on the page and update their copies of the\nheap block's CRC. That could easily turn one disk-write into hundreds,\nnot to mention the index search costs. Similarly, a check value that is\naffected by tuple status updates would enormously increase the cost of\nmarking tuples committed or dead.\n\nInstead of a partial row CRC, we could just as well use some other bit\nof identifying information, say the row OID. Given a block CRC on the\nheap page, we'll be pretty confident already that the heap page is OK,\nwe just need to guard against the possibility that it's older than the\nindex item. Checking that there is a valid tuple at the slot indicated\nby the index item, and that it has the right OID, should be a good\nenough (and cheap enough) test.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 13 Jan 2001 12:49:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs "
},
{
"msg_contents": "On Sat, Jan 13, 2001 at 12:49:34PM -0500, Tom Lane wrote:\n> [email protected] (Nathan Myers) writes:\n> > ... for systems that cannot provide strict write ordering (e.g., \n> > most PCs) it would be helpful to be able to detect that the database \n> > has become corrupted. In Vadim's example above, if the index were to\n> > contain not only the heap blocks' numbers, but also their CRCs, then \n> > the corruption could be detected when the index is used. ...\n> \n> A row-level CRC might be useful for this, but it would have to be on\n> the data only (not the tuple commit-status bits). It'd be totally\n> impractical with a block CRC, I think. ...\n\nI almost wrote about an indirect scheme to share the expected block CRC\nvalue among all the index entries that need it, but thought it would \ndistract from the correct approach:\n\n> Instead of a partial row CRC, we could just as well use some other bit\n> of identifying information, say the row OID. ...\n\nGood. But, wouldn't the TID be more specific? True, it would be pretty\nunlikely for a block to have an old tuple with the right OID in the same\nplace. Belt-and-braces says check both :-). Either way, the check seems \nindependent of block CRCs. Would this check be simple enough to be safe\nfor 7.1? \n\nNathan Myers\[email protected]\n",
"msg_date": "Sat, 13 Jan 2001 15:33:59 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRCs"
},
{
"msg_contents": "[email protected] (Nathan Myers) writes:\n>> Instead of a partial row CRC, we could just as well use some other bit\n>> of identifying information, say the row OID. ...\n\n> Good. But, wouldn't the TID be more specific?\n\nUh, the TID *is* the pointer from index to heap. There's no redundancy\nthat way.\n\n> Would this check be simple enough to be safe for 7.1? \n\nIt'd probably be safe, but adding OIDs to index tuples would force an\ninitdb, which I'd rather avoid at this stage of the cycle.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 13 Jan 2001 23:20:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs "
},
{
"msg_contents": "On Sunday 14 January 2001 04:49, Tom Lane wrote:\n\n> A row-level CRC might be useful for this, but it would have to be on\n> the data only (not the tuple commit-status bits). It'd be totally\n> impractical with a block CRC, I think. To do it with a block CRC, every\n> time you changed *anything* in a heap page, you'd have to find all the\n> index items for each row on the page and update their copies of the\n> heap block's CRC. That could easily turn one disk-write into hundreds,\n> not to mention the index search costs. Similarly, a check value that is\n> affected by tuple status updates would enormously increase the cost of\n> marking tuples committed or dead.\n\nAh, finally. Looks like we are moving in circles (or spirals ;-) )Remember \nthat some 3-4 months ago I requested help from this list several times \nregarding a trigger function that implements a crc only on the user defined \nattributes? I wrote one in pgtcl which was slow and had trouble with the C \nequivalent due to lack of documentation. I still believe this is that useful \nthat it should be an option in Postgresand not a user defined function.\n\nHorst\n",
"msg_date": "Sun, 14 Jan 2001 21:39:40 +1100",
"msg_from": "Horst Herb <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CRCs"
}
]
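The TID-plus-OID cross-check Tom suggests can be pictured with a toy data layout. Everything below is hypothetical scaffolding invented for the illustration, not actual backend structures:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct HeapSlotSketch {
    bool     in_use;            /* does a valid tuple currently live here? */
    uint32_t oid;               /* that tuple's OID */
} HeapSlotSketch;

typedef struct IndexItemSketch {
    uint32_t blkno;             /* TID part 1: heap block number */
    uint16_t offnum;            /* TID part 2: slot within that block */
    uint32_t expected_oid;      /* redundant row identifier kept in the index */
} IndexItemSketch;

/* Returns false when the heap page is apparently older than the index
 * entry pointing at it -- the unlogged-write scenario from this thread. */
static bool
index_item_matches_heap(const IndexItemSketch *item,
                        const HeapSlotSketch *slots, uint16_t nslots)
{
    if (item->offnum >= nslots)
        return false;                   /* slot does not even exist */
    if (!slots[item->offnum].in_use)
        return false;                   /* no valid tuple at that slot */
    return slots[item->offnum].oid == item->expected_oid;
}
```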
|
[
{
"msg_contents": "> > If log record was not really flushed on disk in 3. but \n> > on-disk image of index block was updated in 4. and system\n> > crashed after this then after restart recovery you'll have\n> > unlawful index tuple pointing to where? Who knows!\n> > No guarantee that corresponding heap tuple was flushed on\n> > disk.\n> \n> This example doesn't seem very convincing. Wouldn't the XLOG entry\n> describing creation of the heap tuple appear in the log before the one\n> for the index tuple? Or are you assuming that both these XLOG entries\n> are lost due to disk drive malfeasance?\n\nYes, that was assumed.\nWhen UNDO will be implemented and uncomitted tuples will be removed by\nrollback part of after crash recovery we'll get corrupted database without\nthat assumption.\n\nVadim\n",
"msg_date": "Fri, 12 Jan 2001 16:55:08 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: CRCs "
}
]
|
[
{
"msg_contents": "\n\n",
"msg_date": "Sat, 13 Jan 2001 00:45:14 -0800",
"msg_from": "\"Vadim Mikheev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Out of town these long weekends..."
}
]
|
[
{
"msg_contents": "Hi,\n\nsince I have limited bandwidth. Are the diffs between the different versions\navailable to use with patch instead of always downloading the whole package?\n\nKonstantin\n-- \nDipl-Inf. Konstantin Agouros aka Elwood Blues. Internet: [email protected]\nOtkerstr. 28, 81547 Muenchen, Germany. Tel +49 89 69370185\n----------------------------------------------------------------------------\n\"Captain, this ship will not sustain the forming of the cosmos.\" B'Elana Torres\n",
"msg_date": "Sat, 13 Jan 2001 12:38:39 +0100",
"msg_from": "Konstantinos Agouros <[email protected]>",
"msg_from_op": true,
"msg_subject": "diffs available?"
}
]
|
[
{
"msg_contents": "FYI...\n----- Forwarded message from Jordan Hubbard <[email protected]> -----\n\nFrom: Jordan Hubbard <[email protected]>\nSubject: Re: CVS Commit message generator... \nDate: Fri, 12 Jan 2001 19:50:33 -0800\nMessage-ID: <[email protected]>\nTo: Larry Rosenman <[email protected]>\n\nSure, it's all available from:\n\nftp://ftp.freebsd.org/pub/FreeBSD/development/FreeBSD-CVS/CVSROOT\n\nRegards,\n\n- Jordan\n\n> Jordan,\n> Would it be possible to get a copy of whatever files are necessary\n> to have a CVS server generate the commit messages like the FreeBSD\n> project commits generate? \n> \n> I'm involved with the PostgreSQL project and our commits generate\n> one message per directory, and would much prefer to see them move\n> towards the FreeBSD style. \n> \n> Thanks for any help. \n> \n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n----- End forwarded message -----\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 13 Jan 2001 05:47:57 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": true,
"msg_subject": "(forw) Re: CVS Commit message generator..."
}
]
|
[
{
"msg_contents": ">> 1. I cannot view russian text in russian when I use pgaccess. I set all\n>> the fonts in 'Preferences' to\n>> -cronyx-helvetica-*-*-*-*-17-*-*-*-*-*-koi8-* , but don't see russian\n>> letters in 'tables' and others windows. The texts are really in\n>> russian, DBENCODING is KOI8.\n>\n>Hm. We've had a couple of reports recently that suggest that there's\n>a character-set-translation problem between Postgres and recent Tcl/Tk\n>releases --- which seems to describe your problem as well. I have an\n>unproven suspicion that this is related to Tcl's changeover to UTF-8\n>internal representation, because the reports have all come from people\n>running Tcl 8.2 or later. But no one's done the work yet to understand\n>the problem in detail or propose a fix. Want to dig into it?\n>\n> regards, tom lane\n>\n\nI have used Postgres and Tcl/Tk for quite some time and yes, when 8.2 came\nout, I had trouble accessing ANYTHING because of the UTF-8 switch. My\nsolution was to upgrade my pgsql.tcl file with a new one. I tried it once\nand it worked but other events have prevented me from switching all of my\ncode yet. Pgsql.tcl is a tcl source only interface to postgres (as opposed\nto a .dll or .so). Whether the changes in there have made it into\nlibpgtcl.so/dll I don't know. I would be happy to forward it somewhere if\nyou would like to try it out.\n\nlen morgan\n\n",
"msg_date": "Sat, 13 Jan 2001 08:19:39 -0600",
"msg_from": "\"Len Morgan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgaccess: russian fonts && SQL window??? "
},
{
"msg_contents": "\"Len Morgan\" <[email protected]> writes:\n> I have used Postgres and Tcl/Tk for quite some time and yes, when 8.2 came\n> out, I had trouble accessing ANYTHING because of the UTF-8 switch. My\n> solution was to upgrade my pgsql.tcl file with a new one. I tried it once\n> and it worked but other events have prevented me from switching all of my\n> code yet. Pgsql.tcl is a tcl source only interface to postgres (as opposed\n> to a .dll or .so). Whether the changes in there have made it into\n> libpgtcl.so/dll I don't know. I would be happy to forward it somewhere if\n> you would like to try it out.\n\nYes, I'd like to see it. It'd be even more useful if you also have the\nold version, so I can see what was changed to fix the problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 13 Jan 2001 12:35:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgaccess: russian fonts && SQL window??? "
}
]
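If the Tcl 8.1+ switch to UTF-8 strings really is the culprit, the usual client-side workaround is to convert explicitly between the database's byte encoding and Tcl's internal representation. This is only a sketch, assuming the Tcl build lists koi8-r in [encoding names]; it has not been tested against pgaccess itself:

```tcl
# Convert KOI8 bytes coming from the backend into a Tcl string, and back.
proc koi8_to_tcl {bytes} {
    return [encoding convertfrom koi8-r $bytes]
}

proc tcl_to_koi8 {str} {
    return [encoding convertto koi8-r $str]
}

# Example: wrap values fetched with pg_select before putting them into
# Tk widgets, and convert user input back before building a query.
# pg_select $conn "select name from towns" row {
#     .lst insert end [koi8_to_tcl $row(name)]
# }
```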
|
[
{
"msg_contents": "If the year is very large, datetime formatting overflows its limits and\ngives very weird results. Either the formatting needs to be improved\nor there should be an upper bound on the year.\n\n\nbray=# select version();\n version \n------------------------------------------------------------------\n PostgreSQL 7.1beta1 on i686-pc-linux-gnu, compiled by GCC 2.95.3\n(1 row)\n\n\nbray=# select 'now'::datetime + '100000y'::interval;\n ?column? \n---------------------\n 102001-01-13 22:128\n(1 row)\n\nbray=# select 'now'::datetime + '1000000y'::interval;\n ?column? \n---------------------\n 1002001-01-13 22:32\n(1 row)\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Wherefore let him that thinketh he standeth take heed \n lest he fall.\" I Corinthians 10:12 \n\n\n",
"msg_date": "Sat, 13 Jan 2001 22:27:01 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug in datetime formatting for very large years"
}
]
|
[
{
"msg_contents": "Hi.\n\nDos any one know any sql sentence to Find primary keys in a table.\n\nI'm using postgresql v.7.0 (Mandrake 7.2)\n\n\n\n\n\n\n\nHi.\n \nDos any one know any sql sentence to Find primary \nkeys in a table.\n \nI'm using postgresql v.7.0 (Mandrake \n7.2)",
"msg_date": "Sat, 13 Jan 2001 18:16:48 -0500",
"msg_from": "\"Felipe Diaz Cardona\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "primary keys "
},
{
"msg_contents": "Here is a code extract from phpPgAdmin that dumps UNIQUE and PRIMARY\nconstraints. Feel free to use the query...\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Felipe Diaz Cardona\nSent: Sunday, January 14, 2001 7:17 AM\nTo: [email protected]\nSubject: [HACKERS] primary keys\n\n\nHi.\n\nDos any one know any sql sentence to Find primary keys in a table.\n\nI'm using postgresql v.7.0 (Mandrake 7.2)",
"msg_date": "Mon, 15 Jan 2001 09:37:07 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: primary keys "
}
]
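Since the query itself did not survive in the archive, here is one way to pull the same information out of the 7.0 system catalogs. This is a generic sketch written for this note, not the phpPgAdmin extract; 'mytable' is a placeholder:

```sql
-- Primary key columns of 'mytable': find the index flagged indisprimary
-- and list the attributes of that index relation in key order.
SELECT ic.relname AS primary_key_index,
       a.attname  AS key_column
FROM pg_class c,
     pg_index i,
     pg_class ic,
     pg_attribute a
WHERE c.relname  = 'mytable'
  AND i.indrelid = c.oid          -- indexes on our table
  AND i.indisprimary              -- ...that back the PRIMARY KEY
  AND ic.oid     = i.indexrelid   -- the index relation itself
  AND a.attrelid = ic.oid         -- its attributes are the key columns
  AND a.attnum   > 0
ORDER BY a.attnum;
```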
|
[
{
"msg_contents": "I have a question about Postgres:\n\nTake this update:\n\tupdate table set field = 'X' ;\n\n\nThis is a very expensive function when the table has millions of rows,\nit takes over an hour. If I dump the database, and process the data with\nperl, then reload the data, it takes minutes. Most of the time is used\ncreating indexes.\n\nI am not asking for a feature, I am just musing. \n\nI have a database update procedure which has to merge our data with that\nof more than one third party. It takes 6 hours to run.\n\nDo you guys know of any tricks that would allow postgres operate really\nfast with an assumption that it is operating on tables which are not\nbeing used. LOCK does not seem to make much difference.\n\nAny bit of info would be helpful.\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Sat, 13 Jan 2001 20:21:57 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Transactions vs speed."
},
{
"msg_contents": "* mlw <[email protected]> [010113 17:19] wrote:\n> I have a question about Postgres:\n> \n> Take this update:\n> \tupdate table set field = 'X' ;\n> \n> \n> This is a very expensive function when the table has millions of rows,\n> it takes over an hour. If I dump the database, and process the data with\n> perl, then reload the data, it takes minutes. Most of the time is used\n> creating indexes.\n> \n> I am not asking for a feature, I am just musing. \n\nWell you really haven't said if you've tuned your database at all, the\nway postgresql ships by default it doesn't use a very large shared memory\nsegment, also all the writing (at least in 7.0.x) is done syncronously.\n\nThere's a boatload of email out there that explains various ways to tune\nthe system. Here's some of the flags that I use:\n\n-B 32768 # uses over 300megs of shared memory\n-o \"-F\" # tells database not to call fsync on each update\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Sat, 13 Jan 2001 19:20:50 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Transactions vs speed."
},
{
"msg_contents": "Alfred Perlstein wrote:\n> \n> * mlw <[email protected]> [010113 17:19] wrote:\n> > I have a question about Postgres:\n> >\n> > Take this update:\n> > update table set field = 'X' ;\n> >\n> >\n> > This is a very expensive function when the table has millions of rows,\n> > it takes over an hour. If I dump the database, and process the data with\n> > perl, then reload the data, it takes minutes. Most of the time is used\n> > creating indexes.\n> >\n> > I am not asking for a feature, I am just musing.\n> \n> Well you really haven't said if you've tuned your database at all, the\n> way postgresql ships by default it doesn't use a very large shared memory\n> segment, also all the writing (at least in 7.0.x) is done syncronously.\n> \n> There's a boatload of email out there that explains various ways to tune\n> the system. Here's some of the flags that I use:\n> \n> -B 32768 # uses over 300megs of shared memory\n> -o \"-F\" # tells database not to call fsync on each update\n\nI have a good number of buffers (Not 32768, but a few), I have the \"-F\"\noption.\n\n\n-- \nhttp://www.mohawksoft.com\n",
"msg_date": "Sat, 13 Jan 2001 22:40:28 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Transactions vs speed."
},
{
"msg_contents": "* mlw <[email protected]> [010113 19:37] wrote:\n> Alfred Perlstein wrote:\n> > \n> > * mlw <[email protected]> [010113 17:19] wrote:\n> > > I have a question about Postgres:\n> > >\n> > > Take this update:\n> > > update table set field = 'X' ;\n> > >\n> > >\n> > > This is a very expensive function when the table has millions of rows,\n> > > it takes over an hour. If I dump the database, and process the data with\n> > > perl, then reload the data, it takes minutes. Most of the time is used\n> > > creating indexes.\n> > >\n> > > I am not asking for a feature, I am just musing.\n> > \n> > Well you really haven't said if you've tuned your database at all, the\n> > way postgresql ships by default it doesn't use a very large shared memory\n> > segment, also all the writing (at least in 7.0.x) is done syncronously.\n> > \n> > There's a boatload of email out there that explains various ways to tune\n> > the system. Here's some of the flags that I use:\n> > \n> > -B 32768 # uses over 300megs of shared memory\n> > -o \"-F\" # tells database not to call fsync on each update\n> \n> I have a good number of buffers (Not 32768, but a few), I have the \"-F\"\n> option.\n\nExplain a \"good number of buffers\" :)\n\nAlso, when was the last time you ran vacuum on this database?\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Sat, 13 Jan 2001 19:50:24 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Transactions vs speed."
},
{
"msg_contents": "mlw <[email protected]> writes:\n> Take this update:\n> \tupdate table set field = 'X' ;\n> This is a very expensive function when the table has millions of rows,\n> it takes over an hour. If I dump the database, and process the data with\n> perl, then reload the data, it takes minutes. Most of the time is used\n> creating indexes.\n\nHm. CREATE INDEX is well known to be faster than incremental building/\nupdating of indexes, but I didn't think it was *that* much faster.\nExactly what indexes do you have on this table? Exactly how many\nminutes is \"minutes\", anyway?\n\nYou might consider some hack like\n\n\tdrop inessential indexes;\n\tUPDATE;\n\trecreate dropped indexes;\n\n\"inessential\" being any index that's not UNIQUE (or even the UNIQUE\nones, if you don't mind finding out about uniqueness violations at\nthe end).\n\nMight be a good idea to do a VACUUM before rebuilding the indexes, too.\nIt won't save time in this process, but it'll be cheaper to do it then\nrather than later.\n\n\t\t\tregards, tom lane\n\nPS: I doubt transactions have anything to do with it.\n",
"msg_date": "Sat, 13 Jan 2001 23:11:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Transactions vs speed. "
}
]
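In SQL terms, the drop/update/rebuild recipe suggested in this thread looks roughly like the following; the table, column and index names are made up for the example:

```sql
-- 1. Drop the indexes that are not needed to enforce uniqueness.
DROP INDEX mytable_field_idx;
DROP INDEX mytable_other_idx;

-- 2. Run the bulk update without per-row index maintenance.
UPDATE mytable SET field = 'X';

-- 3. Reclaim the dead row versions left behind by the update.
VACUUM mytable;

-- 4. Rebuild the dropped indexes in one pass each.
CREATE INDEX mytable_field_idx ON mytable (field);
CREATE INDEX mytable_other_idx ON mytable (other_col);
```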
|
[
{
"msg_contents": "Well, I finally got a good build of 7.1beta3 in the RPM build environment. \nWoohoo.....\n\nMost regression tests pass -- 10 of 76 fail in serial mode. I'll be analyzing\nthe diffs tomorrow afternoon to see what's going on, then will be tidying up\nthe RPMset for release. Tidy or no, a release will happen before midday Monday\n-- if I am really patient I may upload the RPM's from home, but don't hold your\nbreath.\n\nThe documentation in README.rpm-dist will be needing an overhaul -- so, for the\nfirst beta RPM release that file will be INCORRECT unless I get really\nindustrious in the afternoon :-).\n\nThere are substantial differences -- and that's BEFORE I reorg the packages! \nBut, now's a good time to reorg, since whole directories are moving around....\n\nUpgrading from prior releases will be unsupported for this first beta.\n\nMore details later. Time to go to bed....\n--\nLamar Owen\nWGCR Internet Radi\n1 Peter 4:11\n",
"msg_date": "Sun, 14 Jan 2001 01:32:52 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "RPMS for 7.1beta3 in progress."
}
]
|
[
{
"msg_contents": "Hello\n\ni don't know, whether it is a real bug or what, has been fixed or not, but\ni can't find any info about it:\n\ni try to fill my table from a file using copy from stdin and postgresql\ncorrupt the table. This happen if before the <tab> or end of line there\nis word that has a non-standard letter like o with accent and then an\nordinary lette [a-z]. and now i reproduced this error if that o is at the\nand of the word. in these cases that word and the next one get into the\nsame field...\n\nwhy does it happen?\n\nthanks\ntRehak\n\n------------------------------------\nE-Mail: Tom Rehak <[email protected]>\n------------------------------------\n\n",
"msg_date": "Sun, 14 Jan 2001 18:26:35 +0100 (MET)",
"msg_from": "Rehak Tamas <[email protected]>",
"msg_from_op": true,
"msg_subject": "copy from stdin; bug?"
},
{
"msg_contents": "Rehak Tamas <[email protected]> writes:\n> i try to fill my table from a file using copy from stdin and postgresql\n> corrupt the table. This happen if before the <tab> or end of line there\n> is word that has a non-standard letter like o with accent and then an\n> ordinary lette [a-z]. and now i reproduced this error if that o is at the\n> and of the word. in these cases that word and the next one get into the\n> same field...\n\nSounds to me like a multibyte-character translation problem. What\nencoding do you have set for the database? What have you told it the\nclient encoding is?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Jan 2001 21:29:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy from stdin; bug? "
},
{
"msg_contents": "Hi\n\nOn Sun, 14 Jan 2001, Tom Lane wrote:\n> Sounds to me like a multibyte-character translation problem. What\n> encoding do you have set for the database? What have you told it the\n> client encoding is?\n\nsorry, i forgot to write:\ni use debian potato, i use the standard debian packege.\nversion 6.5.3\n\ni haven't set any encoding i think it must be the default (?)\ni used psql to copy the texts they are iso-8859-2 encoded...\n\nplease write if you need more information.\n\nUdv\ntRehak\n\n------------------------------------\nE-Mail: Tom Rehak <[email protected]>\n------------------------------------\n\n",
"msg_date": "Mon, 15 Jan 2001 03:37:27 +0100 (MET)",
"msg_from": "Rehak Tamas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: copy from stdin; bug? "
},
{
"msg_contents": "> On Sun, 14 Jan 2001, Tom Lane wrote:\n> > Sounds to me like a multibyte-character translation problem. What\n> > encoding do you have set for the database? What have you told it the\n> > client encoding is?\n> \n> sorry, i forgot to write:\n> i use debian potato, i use the standard debian packege.\n> version 6.5.3\n> \n> i haven't set any encoding i think it must be the default (?)\n> i used psql to copy the texts they are iso-8859-2 encoded...\n> \n> please write if you need more information.\n\nCan you show me your database name and the output from 'psql -l'?\n--\nTatasuo Ishii\n",
"msg_date": "Mon, 15 Jan 2001 12:54:36 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy from stdin; bug? "
},
{
"msg_contents": "Hello\n\nOn Mon, 15 Jan 2001, Tatsuo Ishii wrote:\n> Can you show me your database name and the output from 'psql -l'?\n\nyes, here are the output:\n\ndatname |datdba|encoding|datpath\n-----------------+------+--------+-----------------\ntemplate1 | 31| 5|template1\nmap | 1003| 5|map\nhelyes | 1003| 5|helyes\n\ni found that if i put a space behind the letters ([o with\naccent][a-z][\\t]) before the tab, it works correct... but without the\nspace it corrupt the database...\n\nUdv\ntRehak\n\n------------------------------------\nE-Mail: Tom Rehak <[email protected]>\n------------------------------------\n\n",
"msg_date": "Mon, 15 Jan 2001 17:11:38 +0100 (MET)",
"msg_from": "Rehak Tamas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: copy from stdin; bug? "
},
{
"msg_contents": "> yes, here are the output:\n> \n> datname |datdba|encoding|datpath\n> -----------------+------+--------+-----------------\n> template1 | 31| 5|template1\n> map | 1003| 5|map\n> helyes | 1003| 5|helyes\n> \n> i found that if i put a space behind the letters ([o with\n> accent][a-z][\\t]) before the tab, it works correct... but without the\n> space it corrupt the database...\n\nThe encoding of your databases are all UNICODE. So you need to input\ndata as UTF-8 in this case. I guess you are trying to input ISO-8859-1\nencoded data that is the source of the problem. Here are possible\nsolutions:\n\n1) input data as UTF-8\n\n2) crete a new databse using encoidng LATIN1. createdb -E LATIN1...\n\n3) upgrade to 7.1 that has the capability to do an automatic\n conversion between UTF-8 and ISO-8859-1.\n--\nTatsuo Ishii\n\n",
"msg_date": "Tue, 16 Jan 2001 10:04:12 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy from stdin; bug? "
},
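A minimal sketch of Tatsuo Ishii's second and third suggestions above; the new database name is hypothetical, and the LATIN2 remark is an assumption about which single-byte encodings the build offers (LATIN1 matches ISO-8859-1, LATIN2 matches ISO-8859-2):

    -- option 2: create a new database whose encoding matches the input data
    $ createdb -E LATIN1 helyes2      (LATIN2 would be the ISO-8859-2 match, if available)

    -- option 3, after upgrading to 7.1 built with --enable-multibyte and
    -- --enable-unicode-conversion: keep the UNICODE database and declare the
    -- client-side encoding, letting the server convert on the way in
    helyes=> SET CLIENT_ENCODING TO 'LATIN1';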
{
"msg_contents": "Re\n\nOn Tue, 16 Jan 2001, Tatsuo Ishii wrote:\n> The encoding of your databases are all UNICODE. So you need to input\n> data as UTF-8 in this case. I guess you are trying to input ISO-8859-1\n> encoded data that is the source of the problem. Here are possible\n> solutions:\n> 1) input data as UTF-8\n:)\n\n> 2) crete a new databse using encoidng LATIN1. createdb -E LATIN1...\nyes, this will be the sollution...\n\n> 3) upgrade to 7.1 that has the capability to do an automatic\n> conversion between UTF-8 and ISO-8859-1.\ni like to use deb packages and to use 7.1 i would have to upgrade to woody\n(or even sid)...\n\nthank you for your quick help!!!\n\nUdv\ntRehak\n\n------------------------------------\nE-Mail: Tom Rehak <[email protected]>\n------------------------------------\n\n",
"msg_date": "Wed, 17 Jan 2001 01:40:58 +0100 (MET)",
"msg_from": "Rehak Tamas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: copy from stdin; bug? "
},
{
"msg_contents": "On Wed, Jan 17, 2001 at 01:40:58AM +0100, Rehak Tamas wrote:\n> \n> > 3) upgrade to 7.1 that has the capability to do an automatic\n> > conversion between UTF-8 and ISO-8859-1.\n>\n> i like to use deb packages and to use 7.1 i would have to upgrade\n> to woody (or even sid)...\n\nNot true. There are Debian source packages, and taking the source\npackage from Debian 2.x, x>2 (woody/sid), you can easily build it \non Debian 2.2 (potato). \n\nIn fact, it seems likely that a 2.2 (potato) packaging of 7.1 should be \navailable from somebody else anyhow. Oliver, do you plan to make the \nwoody 7.1 package depend on any other package versions not in potato? \nIf not, you can just use the 7.1 package directly on your Debian 2.2 \nsystem. \n\n(Apologies to the rest for the Debian jargon.)\n\nNathan Myers\[email protected]\n",
"msg_date": "Wed, 17 Jan 2001 11:50:05 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: copy from stdin; bug?"
},
{
"msg_contents": "Nathan Myers wrote:\n >In fact, it seems likely that a 2.2 (potato) packaging of 7.1 should be \n >available from somebody else anyhow. Oliver, do you plan to make the \n >woody 7.1 package depend on any other package versions not in potato? \n\nIt will automatically depend on whatever is in my machine at the time \n(unstable updated at least weekly), e.g.: libc6 2.2.1.\n\nHowever, as we have with 7.0.3, we will build a version for potato on a \npotato machine and make that available for those who don't want to risk\nup-grading the rest of their systems.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"And, behold, I come quickly; and my reward is with me,\n to give every man according as his work shall be.\" \n Revelation 22:12 \n\n\n",
"msg_date": "Wed, 17 Jan 2001 21:07:00 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy from stdin; bug? "
},
{
"msg_contents": "Nathan Myers wrote:\n[ ... ]\n\n> Not true. There are Debian source packages, \n\nWhere are they ? I'm *quite* interested !\n\n> and taking the source\n> package from Debian 2.x, x>2 (woody/sid), you can easily build it\n> on Debian 2.2 (potato).\n> \n> In fact, it seems likely that a 2.2 (potato) packaging of 7.1 should be\n> available from somebody else anyhow. Oliver, do you plan to make the\n> woody 7.1 package depend on any other package versions not in potato?\n> If not, you can just use the 7.1 package directly on your Debian 2.2\n> system.\n\nOliver Elphick seems awfully busy and once said that 7.1 required a\n*lot* of packaging ... Better not bug him right now ...\n\n--\nEmmanuel Charpentier\n",
"msg_date": "Wed, 17 Jan 2001 22:34:45 +0100",
"msg_from": "Emmanuel Charpentier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy from stdin; bug?"
},
{
"msg_contents": "FYI, if you would like to build 7.1 with the UTF-8 <--> other\nencodings capability, you need to configure --enable-multibyte AND\n--enable-unicode-conversion, and that would add a 1MB conversion table\nlinked to the backend.\n--\nTatsuo Ishii\n\n> On Wed, Jan 17, 2001 at 01:40:58AM +0100, Rehak Tamas wrote:\n> > \n> > > 3) upgrade to 7.1 that has the capability to do an automatic\n> > > conversion between UTF-8 and ISO-8859-1.\n> >\n> > i like to use deb packages and to use 7.1 i would have to upgrade\n> > to woody (or even sid)...\n> \n> Not true. There are Debian source packages, and taking the source\n> package from Debian 2.x, x>2 (woody/sid), you can easily build it \n> on Debian 2.2 (potato). \n> \n> In fact, it seems likely that a 2.2 (potato) packaging of 7.1 should be \n> available from somebody else anyhow. Oliver, do you plan to make the \n> woody 7.1 package depend on any other package versions not in potato? \n> If not, you can just use the 7.1 package directly on your Debian 2.2 \n> system. \n> \n> (Apologies to the rest for the Debian jargon.)\n> \n> Nathan Myers\n> [email protected]\n",
"msg_date": "Thu, 18 Jan 2001 10:07:47 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy from stdin; bug?"
},
{
"msg_contents": "Emmanuel Charpentier wrote:\n >Oliver Elphick seems awfully busy and once said that 7.1 required a\n >*lot* of packaging ... Better not bug him right now ...\n\nI'm working on it at the moment.\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"For the eyes of the LORD run to and fro throughout the\n whole earth, to show himself strong in the behalf of \n them whose heart is perfect toward him...\" \n II Chronicles 16:9 \n\n\n",
"msg_date": "Thu, 18 Jan 2001 18:03:04 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy from stdin; bug? "
},
{
"msg_contents": "On Wed, Jan 17, 2001 at 10:34:45PM +0100, Emmanuel Charpentier wrote:\n> Nathan Myers wrote:\n> > Not true. There are Debian source packages, \n\n> Where are they ? I'm *quite* interested !\n\n> > and taking the source\n> > package from Debian 2.x, x>2 (woody/sid), you can easily build it\n> > on Debian 2.2 (potato).\n\nThey're as near as \"apt-get source postgresql\", once you have edited \n/etc/apt/sources.list to point to the package repositories you want,\nand have run \"apt-get update\" to synchronize with those repositories. \n\nUnder the best circumstances, \"apt-get source -b postgresql\" will \ndownload the sources and build a \".deb\" tailored for your system, \nmuch as in the BSD \"ports\" system. (I say \"best circumstances\"\nbecause the package or the build process may depend on tools and\n\"-dev\" packages you have not \"apt-get install\"'ed yet, and because\nyou might prefer to tinker with configuration options before building.)\n\nI'm sure once Oliver has prepared a 7.1 Debian package he will announce\nit here.\n\n(This is getting dangerously Debian-specific.)\n\nNathan Myers\[email protected]\n",
"msg_date": "Thu, 18 Jan 2001 14:27:09 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: copy from stdin; bug?"
},
{
"msg_contents": "Dear list,\n\nI have made some progress about the current state of the ODBC drivers.\n\nI have tried three ODBC drivers :\n\nThe original ODBC river, as compiled by Oliver Elphick in the Debian\n7.1beta4 packages : this one is utterly broken : trying to use it leads\nto nothing : no activity is loged neither in syslog nor in postgres.log\nwith -d2. Nick Gorham says it's because the driver and the driver\nmanager wait mutually for each other, IIRC.\n\nThe same driver patched (how ?) by Nick Gorham has some basic\nfunctionality : it can query the DB in arbitrary ways and is able to do\nother basic things. However, it has other problems. It displays only\ntables, not views, and has some serious limitations on system tables.\n\nNick Gorham's unixODBC driver. This ione has only basic functionality :\nit can connect and query the backend, but only with a hand-crafted\nquery. No way to get the list of tables, nor metadata.\n\nIn the first case, I can do nothing : I'm reluctant to try to rebuild\nthe Debian packages from source (I don't kniow how to do this from the\nsources and Oliver's patches). It follows that I can't do that for the\nsecond either.\n\nHowever, the problems exhibited by the second and third drivers are of\nthe same nature : the SQL queries sent by them to get thje metadata are\nno longer valid for 7.1, since the system tables have undergo a lot of\nchanges.\n\nI will try to fix the third and publish my result and changes, hoping to\nsee them ported on the first one.\n\nAny thoughs ?\n\nAnd, BTW, where can I find the docs of the 7.0 system tables ? I know\nwhere the 7.1 docs are ...\n\nSincerely yours,\n\n\t\t\t\t\tEmmanuel Charpentier\n",
"msg_date": "Tue, 27 Feb 2001 22:23:15 +0100",
"msg_from": "Emmanuel Charpentier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Current ODBC driver(s) problems with 7.1\u0003"
},
{
"msg_contents": "Emmanuel Charpentier wrote:\n >I have tried three ODBC drivers :\n >\n >The original ODBC river, as compiled by Oliver Elphick in the Debian\n >7.1beta4 packages : this one is utterly broken : trying to use it leads\n >to nothing : no activity is loged neither in syslog nor in postgres.log\n >with -d2. Nick Gorham says it's because the driver and the driver\n >manager wait mutually for each other, IIRC.\n \nI have been trying it; I get a segfault (7.1beta4), but haven't yet been\nable to determine the cause. Has anyone got any hints on debugging shared\nlibraries?\n\n...\n >In the first case, I can do nothing : I'm reluctant to try to rebuild\n >the Debian packages from source (I don't kniow how to do this from the\n >sources and Oliver's patches). It follows that I can't do that for the\n >second either.\n\nI make no changes to the ODBC library.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The LORD is my shepherd; I shall not want. He maketh \n me to lie down in green pastures: he leadeth me beside\n the still waters, he restoreth my soul...Surely\n goodness and mercy shall follow me all the days of my\n life; and I will dwell in the house of the LORD for\n ever.\" Psalms 23:1,2,6 \n\n\n",
"msg_date": "Thu, 01 Mar 2001 20:16:08 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Current ODBC driver(s) problems with 7.1 "
}
]
|
[
{
"msg_contents": "Stephan Szabo <[email protected]> writes:\n> Because of Access's brokenness, the parser or some other layer of the\n> code \"fixes\" explicit = NULL (ie, in the actually query string) into\n> IS NULL which is the correct way to check for nulls.\n> Because your original query was = $1, it doesn't do the mangling of the\n> SQL to change into IS NULL when $1 is NULL. The fact that we do that\n> conversion at all actually breaks spec a little bit but we have little\n> choice with broken clients.\n\nIt seems to me that we heard awhile ago that Access no longer generates\nthese non-spec-compliant queries --- ie, it does say IS NULL now rather\nthan the other thing. If so, it seems to me that we ought to remove the\nparser's = NULL hack, so that we have spec-compliant NULL behavior.\n\nAnyone recall anything about that? A quick search of my archives didn't\nturn up the discussion that I thought I remembered.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Jan 2001 14:00:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "MS Access vs IS NULL (was Re: [BUGS] Bug in SQL functions that use a\n\tNULL parameter directly)"
},
{
"msg_contents": "> Anyone recall anything about that? A quick search of my archives didn't\n> turn up the discussion that I thought I remembered.\n\nHmm. Maybe now we know what you dream about at night ;)\n\n - Thomas\n",
"msg_date": "Tue, 16 Jan 2001 04:54:25 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MS Access vs IS NULL (was Re: [BUGS] Bug in SQL functions that\n\tuse a NULL parameter directly)"
}
]
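For reference, the difference being argued over, as a small sketch (table and column are hypothetical):

    -- SQL-spec behaviour: an explicit "= NULL" comparison is never true, so this returns no rows
    SELECT * FROM t WHERE col = NULL;
    -- the spec-compliant way to test for missing values
    SELECT * FROM t WHERE col IS NULL;
    -- the parser hack under discussion silently rewrites the first form into the
    -- second, which is what keeps old Access-generated queries working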
|
[
{
"msg_contents": "I don't have Office 2000, but I can confirm Access 97 generates such \nqueries. The query-builder doesn't generate the 'key = NULL' query, but the \nuse of the Forms interface does.\n\nMike Mascari\[email protected]\n\n-----Original Message-----\nFrom:\tTom Lane [SMTP:[email protected]]\nSent:\tSunday, January 14, 2001 2:00 PM\nTo:\tStephan Szabo\nCc:\[email protected]\nSubject:\t[HACKERS] MS Access vs IS NULL (was Re: [BUGS] Bug in SQL \nfunctions that use a NULL parameter directly)\n\nStephan Szabo <[email protected]> writes:\n> Because of Access's brokenness, the parser or some other layer of the\n> code \"fixes\" explicit = NULL (ie, in the actually query string) into\n> IS NULL which is the correct way to check for nulls.\n> Because your original query was = $1, it doesn't do the mangling of the\n> SQL to change into IS NULL when $1 is NULL. The fact that we do that\n> conversion at all actually breaks spec a little bit but we have little\n> choice with broken clients.\n\nIt seems to me that we heard awhile ago that Access no longer generates\nthese non-spec-compliant queries --- ie, it does say IS NULL now rather\nthan the other thing. If so, it seems to me that we ought to remove the\nparser's = NULL hack, so that we have spec-compliant NULL behavior.\n\nAnyone recall anything about that? A quick search of my archives didn't\nturn up the discussion that I thought I remembered.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 14 Jan 2001 14:05:18 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: MS Access vs IS NULL (was Re: [BUGS] Bug in SQL functions that\n\tuse a NULL parameter directly)"
},
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n> I don't have Office 2000, but I can confirm Access 97 generates such \n> queries. The query-builder doesn't generate the 'key = NULL' query, but the \n> use of the Forms interface does.\n\nYes, it was broken as of a couple years ago. What I thought I\nremembered hearing was that the current release (which I guess would\nbe Office 2000?) is fixed. The first question is whether that is indeed\ntrue. The second question, assuming it's true, is how long to continue\nviolating the spec and confusing people in order to cater to old broken\nversions of Access. My vote is not too darn long, but others may differ.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Jan 2001 14:55:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MS Access vs IS NULL (was Re: [BUGS] Bug in SQL functions that\n\tuse a NULL parameter directly)"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Mike Mascari <[email protected]> writes:\n> > I don't have Office 2000, but I can confirm Access 97 generates such\n> > queries. The query-builder doesn't generate the 'key = NULL' query, but the\n> > use of the Forms interface does.\n> \n> Yes, it was broken as of a couple years ago. What I thought I\n> remembered hearing was that the current release (which I guess would\n> be Office 2000?) is fixed. The first question is whether that is indeed\n> true. The second question, assuming it's true, is how long to continue\n> violating the spec and confusing people in order to cater to old broken\n> versions of Access. My vote is not too darn long, but others may differ.\n\n</Lurking>\n\nI'm afraid so ... Lots of people (including large institutions) have\nOffice installed as a \"standard\" interchange tool, use Office 95 or 97,\nbut are tired of MS' endless \"upgrade\" tax and won't \"upgrade\" to Office\n2000, except at gunpoint (i. e. disappearance of volume license\nagreements for O95 or O97).\n\nSo I'm afraid these versions will have a loooong life ...\n\n<Lurking>\n\n\t\t\t\t\tEmmanuel Charpentier\n\n--\nEmmanuel Charpentier\n",
"msg_date": "Mon, 15 Jan 2001 07:55:35 +0100",
"msg_from": "Emmanuel Charpentier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MS Access vs IS NULL (was Re: [BUGS] Bug in SQL functions\n\tthat use a NULL parameter directly)"
}
]
|
[
{
"msg_contents": "Hi,\n\nIt was suggested that I post this patch here as no notice was taken of it\nwhen posted to interfaces!\n\nThis fixes problems with int8 columns which are reported by the driver as\nSQL_BIGINT rather than SQL_CHAR as per the ODBC v2 spec. Specifically, I\nhave had problems with MS ADO - any queries that contain an int8 column in\nthe resultset will *always* return an empty recordset.\n\nRegards,\n\nDave.\n\n*** pgtypes.c.orig Fri Dec 22 09:12:22 2000\n--- pgtypes.c Fri Dec 22 09:12:22 2000\n***************\n*** 217,223 ****\n case PG_TYPE_XID:\n case PG_TYPE_INT4: return SQL_INTEGER;\n\n! case PG_TYPE_INT8: return SQL_BIGINT;\n case PG_TYPE_NUMERIC: return SQL_NUMERIC;\n\n case PG_TYPE_FLOAT4: return SQL_REAL;\n--- 217,223 ----\n case PG_TYPE_XID:\n case PG_TYPE_INT4: return SQL_INTEGER;\n\n! case PG_TYPE_INT8: return SQL_CHAR;\n case PG_TYPE_NUMERIC: return SQL_NUMERIC;\n\n case PG_TYPE_FLOAT4: return SQL_REAL;\n",
"msg_date": "Mon, 15 Jan 2001 08:57:04 -0000",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": true,
"msg_subject": "ODBC Driver int8 Patch"
},
{
"msg_contents": "As I remember, the problem is that this makes us match the ODBC v2 spec,\nbut then we would not match the v3 spec. Is that correct?\n\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> Hi,\n> \n> It was suggested that I post this patch here as no notice was taken of it\n> when posted to interfaces!\n> \n> This fixes problems with int8 columns which are reported by the driver as\n> SQL_BIGINT rather than SQL_CHAR as per the ODBC v2 spec. Specifically, I\n> have had problems with MS ADO - any queries that contain an int8 column in\n> the resultset will *always* return an empty recordset.\n> \n> Regards,\n> \n> Dave.\n> \n> *** pgtypes.c.orig Fri Dec 22 09:12:22 2000\n> --- pgtypes.c Fri Dec 22 09:12:22 2000\n> ***************\n> *** 217,223 ****\n> case PG_TYPE_XID:\n> case PG_TYPE_INT4: return SQL_INTEGER;\n> \n> ! case PG_TYPE_INT8: return SQL_BIGINT;\n> case PG_TYPE_NUMERIC: return SQL_NUMERIC;\n> \n> case PG_TYPE_FLOAT4: return SQL_REAL;\n> --- 217,223 ----\n> case PG_TYPE_XID:\n> case PG_TYPE_INT4: return SQL_INTEGER;\n> \n> ! case PG_TYPE_INT8: return SQL_CHAR;\n> case PG_TYPE_NUMERIC: return SQL_NUMERIC;\n> \n> case PG_TYPE_FLOAT4: return SQL_REAL;\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 16 Jan 2001 11:50:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ODBC Driver int8 Patch"
}
]
|
[
{
"msg_contents": "\n> Instead of a partial row CRC, we could just as well use some other bit\n> of identifying information, say the row OID. Given a block CRC on the\n> heap page, we'll be pretty confident already that the heap page is OK,\n> we just need to guard against the possibility that it's older than the\n> index item. Checking that there is a valid tuple at the slot indicated\n> by the index item, and that it has the right OID, should be a good\n> enough (and cheap enough) test.\n\nI would hardly call an additional 4 bytes for OID per index entry cheap.\n\nAndreas\n",
"msg_date": "Mon, 15 Jan 2001 10:31:23 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: CRCs "
},
{
"msg_contents": "Andreas SB Zeugswetter wrote:\n> Tom Lane wrote:\n> > Instead of a partial row CRC, we could just as well use some other\n> > bit of identifying information, say the row OID. ... Checking that\n> > there is a valid tuple at the slot indicated by the index item,\n> > and that it has the right OID, should be a good enough (and cheap\n> > enough) test.\n> \n> I would hardly call an additional 4 bytes for OID per index entry\n> cheap.\n\n\"Cheap enough\" is very different from \"cheap\". Undetected corruption \nmay be arbitrarily expensive when it finally manifests itself. \n\nThat said, maybe storing just the low byte or two of the OID in the \nindex would be good enough. Also, maybe the OID would be there by \ndefault, but could be ifdef'd out if the size of the indices affects\nyou noticeably, and you know that your equipment (unlike most) really\ndoes implement strict write ordering.\n\nNathan Myers\[email protected]\n",
"msg_date": "Mon, 15 Jan 2001 15:45:27 -0800",
"msg_from": "[email protected] (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: CRCs"
}
]
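A rough sense of the trade-off in Nathan Myers' suggestion, under the assumption that a stale index entry is effectively random with respect to the stored identifier bytes (a back-of-the-envelope estimate, not a property of the actual code):

    stored check value      chance a stale entry passes the check
    1 byte of the OID       1 / 2^8   =~ 0.4%
    2 bytes of the OID      1 / 2^16  =~ 0.0015%
    full 4-byte OID         1 / 2^32  =~ 2.3e-10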
|
[
{
"msg_contents": "\n[Cc:ed to hackers list]\n\nFrom: Maneerat Sappaso <[email protected]>\nSubject: PostgreSQL 7.0.2 with thai locale.\nDate: Mon, 15 Jan 2001 16:42:44 -0700 (GMT)\nMessage-ID: <[email protected]>\n\n> \tDear sir,\n> \n> \tI'm a 4 years student in Burapha University,Thailand\n> \tI develop postgreSQL version 7.0.2 to work with thai locale because\n> \tmy project relate with sorting thai language in database of\n> \tpostgreSQl and I install PostgreSQL follow by README file.\n> \tfor configuration\n> \t\t./configure --enable-locale\n> \t\t./configure --enable -multibyte \n\nCurrent multi-byte implementaion does not support TIS620.\nHowever you could enable locale support. So you just type:\n\n\t./configure --enable-locale\n\n> \t\t[which encoding_syatem for thai language] \n> \tand then compile and install it.I add locale variables in\n> \t/etc/profile like this\n> \t\tLC_ALL=th_TH.TIS-620.2533\n> \t\tLC_COLLATE=th_TH.TIS-620.2533\n> \t\tLC_CTYPE=th_TH.TIS-620.2533\n> \t\tLC_MONETARY=th_TH.TIS-620.2533\n> \t\tLC_NUMERIC=th_TH.TIS-620.2533\n> \t\tLC_TIME=th_TH.TIS-620.2533\n> \twhen I test locale it doesn't work.Would you mild if I ask for\n> \tthe fight way to install PostgreSQL to work with thai locale.\n\nIf it's not still working, you have to make sure that your OS's locale\ndata for TIS620 is correct, since PostgreSQL's locale support depends\non OS's locale functionality. For example, writing a small C program\nusing strcoll() or whatever...\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 15 Jan 2001 21:58:05 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 7.0.2 with thai locale."
}
]
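A way to check the same thing from SQL rather than from a standalone C program, under the assumption that the postmaster was started with the th_TH locale settings in effect; the table and data are hypothetical:

    CREATE TABLE sort_check (w text);
    INSERT INTO sort_check VALUES ('...');   -- insert a handful of TIS-620 encoded Thai words
    INSERT INTO sort_check VALUES ('...');
    SELECT w FROM sort_check ORDER BY w;     -- with --enable-locale this comparison goes through
                                             -- strcoll(), so the order should follow the locale
                                             -- rather than raw byte values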
|
[
{
"msg_contents": "While below is ok:\n\nselect * from table_a a\n where (select data_a from table_a where id = a.id) >\n (select data_b from table_a where id = a.id);\n\nbut this fails:\n\nselect * from table_a a\n where ((select data_a from table_a where id = a.id) >\n (select data_b from table_a where id = a.id));\n\nERROR: parser: parse error at or near \">\"\n\nDoes anybody know why?\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 15 Jan 2001 22:04:24 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "subselect bug?"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> select * from table_a a\n> where ((select data_a from table_a where id = a.id) >\n> (select data_b from table_a where id = a.id));\n\n> ERROR: parser: parse error at or near \">\"\n\nUgh. The grammar does some pretty squirrely things with parentheses\naround selects, and I guess it's getting confused on this. Don't\nknow why offhand ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Jan 2001 10:06:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subselect bug? "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> select * from table_a a\n> where ((select data_a from table_a where id = a.id) >\n> (select data_b from table_a where id = a.id));\n> ERROR: parser: parse error at or near \">\"\n\nI think I finally got this right ... see if you can break the revised\ngrammar I just committed ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Jan 2001 15:38:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subselect bug? "
},
{
"msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > select * from table_a a\n> > where ((select data_a from table_a where id = a.id) >\n> > (select data_b from table_a where id = a.id));\n> > ERROR: parser: parse error at or near \">\"\n> \n> I think I finally got this right ... see if you can break the revised\n> grammar I just committed ...\n\nThanks. Works fine now.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 16 Jan 2001 11:34:45 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: subselect bug? "
}
]
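The report does not show table_a's definition; a minimal self-contained reproduction might look like this (column types are assumptions):

    CREATE TABLE table_a (id int4, data_a int4, data_b int4);
    INSERT INTO table_a VALUES (1, 10, 5);

    -- accepted form
    SELECT * FROM table_a a
     WHERE (SELECT data_a FROM table_a WHERE id = a.id) >
           (SELECT data_b FROM table_a WHERE id = a.id);

    -- form that drew the parse error before the grammar fix
    SELECT * FROM table_a a
     WHERE ((SELECT data_a FROM table_a WHERE id = a.id) >
            (SELECT data_b FROM table_a WHERE id = a.id));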
|
[
{
"msg_contents": "Well well well...\nHi everybody...\nI'm new to this Group and i woud like to ask:\nWho can design a secure database for my www project?\nWhat language to use?\nSugestions?\n\n\nthnks ... Greetings from (HOT Summer) Australia\n\n",
"msg_date": "Mon, 15 Jan 2001 23:26:48 +1000",
"msg_from": "Momo Mordacz <[email protected]>",
"msg_from_op": true,
"msg_subject": "who can design or recomend Database"
}
]
|
[
{
"msg_contents": "I use Postgres 7.1, FreeBSD 4.0\n\nI configure, build and install it with:\n\n./configure --enable-locale --enable-multibyte --with-perl\ngmake\ngmake install\n\ninitdb -E KOI8\n\nThe problem is: when database encoding and client encoding are\ndifferent then 'locale' features, such as 'upper' etc don't work. When these\ntwo encodings are equal - all is OK.\n\nExample, commets are marked by -->:\n\ntolik=# \\l\n List of databases\n Database | Owner | Encoding \n-----------+-------+----------\n cmw | cmw | ALT\n template0 | tolik | KOI8\n template1 | tolik | KOI8\n tolik | tolik | ALT --> database 'tolik' has ALT (one of\n russian) encoding\n(4 rows)\n\ntolik=# \\c\nYou are now connected to database tolik as user tolik.\ntolik=# \\encoding KOI8 --> I change client encoding to KOI8,\n another russian encoding\ntolik=# select upper ('О©╫О©╫О©╫О©╫О©╫'); --> argument is russian word in \n lowercase\n upper \n-------\n О©╫О©╫О©╫О©╫О©╫ --> result don't change\n(1 row)\n\ntolik=# \\encoding ALT --> I set client encoding equals\n to DB encoding\ntolik=# select upper ('О©╫О©╫О©╫О©╫О©╫');\n upper \n-------\n О©╫О©╫О©╫О©╫О©╫ --> Now it works, result is the\n same word in uppercase :(\n(1 row)\n\nI did'nt observe this feature in 6.* versions of Postgres.\n\nAny ideas? Or help?\n\n-- \nAnatoly K. Lasareff Email: [email protected] \nhttp://tolikus.hq.aaanet.ru:8080 Phone: (8632)-710071\n",
"msg_date": "15 Jan 2001 18:41:13 +0300",
"msg_from": "[email protected] (Anatoly K. Lasareff)",
"msg_from_op": true,
"msg_subject": "locale and multibyte together in 7.1"
},
{
"msg_contents": ">>>>> \"AKL\" == Anatoly K Lasareff <[email protected]> writes:\n\n\n AKL> The problem is: when database encoding and client encoding are\n AKL> different then 'locale' features, such as 'upper' etc don't work. When these\n AKL> two encodings are equal - all is OK.\n\nI have partyally win this feature with this 'magic' way. I configure,\nmake, install and initdb under locale, correspondings to database\nencoding. If I wont ALT encoding I use ru_RU.CP866 locale. Then all\n'locale' featureas are working. But this is some strange way.\n\nProbably there is nicer method?\n\n-- \nAnatoly K. Lasareff Email: [email protected] \nhttp://tolikus.hq.aaanet.ru:8080 Phone: (8632)-710071\n",
"msg_date": "16 Jan 2001 19:29:40 +0300",
"msg_from": "[email protected] (Anatoly K. Lasareff)",
"msg_from_op": true,
"msg_subject": "Re: locale and multibyte together in 7.1"
}
]
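Anatoly's workaround, spelled out as a sketch (the data directory path is hypothetical); the point is simply that the server's locale is chosen to match the database encoding, so case conversion and collation agree with the bytes actually stored:

    $ export LC_ALL=ru_RU.CP866              # locale matching the ALT (CP866) encoding
    $ initdb -D /usr/local/pgsql/data -E ALT
    $ createdb -E ALT tolik
    -- with the client encoding left equal to the database encoding,
    -- upper()/lower() and ORDER BY now follow the locale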
|
[
{
"msg_contents": "Hi\n\nWhat is the status of the pgsql_perl5? I see the README identify the module in\nthe perl5-directory as 1.8.0. At CPAN i find a newer release 1.9.0 dated\n4 apr 2000. \nIs the newest version the best to use with current sources? \nWill there be future development coordinated with Mr. Mergl?\n\nHelge Haugland\n-- \nReach me at [email protected] / [email protected] mob.tlf +47 90110224\nVisit my homepage at http://www.geocities.com/~helland/\n\n",
"msg_date": "Mon, 15 Jan 2001 22:35:46 +0100",
"msg_from": "Helge Haugland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Perl5 confusions"
}
]
|
[
{
"msg_contents": "Why does LockClassinfoForUpdate() insist on doing heap_mark4update?\nAs far as I can see, this accomplishes nothing except to break\nconcurrent index builds. If I do \n\n\tcreate index tenk1_s1 on tenk1(stringu1);\n\tcreate index tenk1_s2 on tenk1(stringu2);\n\nin two psqls at approximately the same time, the second one fails with\n\n\tERROR: LockStatsForUpdate couldn't lock relid 274157\n\nwhich is entirely unnecessary.\n\nI don't believe that the similar code in AlterTableDropColumn()\nand AlterTableCreateToastTable() is a good idea either. We do not\ndepend on \"SELECT FOR UPDATE\" on pg_class tuples for interlocking\nchanges to relations; we use exclusive locks on the relations themselves\nfor that. mark4update is unnecessary in this context.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Jan 2001 17:48:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why is LockClassinfoForUpdate()'s mark4update a good idea?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Why does LockClassinfoForUpdate() insist on doing heap_mark4update?\n\nBecause I want to guard the target pg_class tuple by myself.\nI don't think we could rely on the assumption that the lock on\nthe corresponding relation is held. For example, AlterTableOwner()\ndoesn't seem to open the corresponding relation.\n\n> As far as I can see, this accomplishes nothing except to break\n> concurrent index builds. If I do\n> \n> create index tenk1_s1 on tenk1(stringu1);\n> create index tenk1_s2 on tenk1(stringu2);\n> \n> in two psqls at approximately the same time, the second one fails with\n> \n> ERROR: LockStatsForUpdate couldn't lock relid 274157\n>\n\nThis is my fault. The error could be avoided by retrying \nto acquire the lock like \"SELECT FOR UPDATE\" does.\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Tue, 16 Jan 2001 09:50:37 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is LockClassinfoForUpdate()'s mark4update a good idea?"
},
{
"msg_contents": "Hiroshi Inoue <[email protected]> writes:\n> Tom Lane wrote:\n>> Why does LockClassinfoForUpdate() insist on doing heap_mark4update?\n\n> Because I want to guard the target pg_class tuple by myself.\n> I don't think we could rely on the assumption that the lock on\n> the corresponding relation is held. For example, AlterTableOwner()\n> doesn't seem to open the corresponding relation.\n\nPossibly AlterTableOwner is broken. Not sure that it matters though,\nbecause heap_update won't update a tuple anyway if another process\ncommitted an update first. That seems to me to be sufficient locking;\nexactly what is the mark4update adding?\n\n(BTW, I notice that a lot of heap_update calls don't bother to check\nthe result code, which is probably a bug ...)\n\n>> As far as I can see, this accomplishes nothing except to break\n>> concurrent index builds. If I do\n>> \n>> create index tenk1_s1 on tenk1(stringu1);\n>> create index tenk1_s2 on tenk1(stringu2);\n>> \n>> in two psqls at approximately the same time, the second one fails with\n>> \n>> ERROR: LockStatsForUpdate couldn't lock relid 274157\n\n> This is my fault. The error could be avoided by retrying \n> to acquire the lock like \"SELECT FOR UPDATE\" does.\n\nI have a more fundamental objection, which is that if you think that\nthis is necessary for index creation then it is logically necessary for\n*all* types of updates to system catalog tuples. I do not like that\nanswer, mainly because it will clutter the system considerably ---\nto no purpose. The relation-level locks are necessary anyway for schema\nupdates, and they are sufficient if consistently applied. Pre-locking\nthe target tuple is *not* sufficient, and I don't think it helps anyway\nif not consistently applied, which it certainly is not at the moment.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Jan 2001 20:14:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why is LockClassinfoForUpdate()'s mark4update a good idea? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <[email protected]> writes:\n> > Tom Lane wrote:\n> >> Why does LockClassinfoForUpdate() insist on doing heap_mark4update?\n> \n> > Because I want to guard the target pg_class tuple by myself.\n> > I don't think we could rely on the assumption that the lock on\n> > the corresponding relation is held. For example, AlterTableOwner()\n> > doesn't seem to open the corresponding relation.\n> \n> Possibly AlterTableOwner is broken. Not sure that it matters though,\n> because heap_update won't update a tuple anyway if another process\n> committed an update first. That seems to me to be sufficient locking;\n> exactly what is the mark4update adding?\n> \n\nI like neither unexpected errors nor doing the wrong\nthing by handling tuples which aren't guaranteed to\nbe up-to-date. After mark4update, the tuple is \nguaranteed to be up-to-date and heap_update won't\nfail even though some commands etc neglect to lock\nthe correspoding relation. Isn't it proper to guard\nmyself as much as possible ?\n\n> (BTW, I notice that a lot of heap_update calls don't bother to check\n> the result code, which is probably a bug ...)\n> \n> >> As far as I can see, this accomplishes nothing except to break\n> >> concurrent index builds. If I do\n> >>\n> >> create index tenk1_s1 on tenk1(stringu1);\n> >> create index tenk1_s2 on tenk1(stringu2);\n> >>\n> >> in two psqls at approximately the same time, the second one fails with\n> >>\n> >> ERROR: LockStatsForUpdate couldn't lock relid 274157\n> \n> > This is my fault. The error could be avoided by retrying\n> > to acquire the lock like \"SELECT FOR UPDATE\" does.\n> \n> I have a more fundamental objection, which is that if you think that\n> this is necessary for index creation then it is logically necessary for\n> *all* types of updates to system catalog tuples. I do not like that\n> answer, mainly because it will clutter the system considerably ---\n> to no purpose. The relation-level locks are necessary anyway for schema\n> updates, and they are sufficient if consistently applied. Pre-locking\n> the target tuple is *not* sufficient, and I don't think it helps anyway\n> if not consistently applied, which it certainly is not at the moment.\n> \n> regards, tom lane\n",
"msg_date": "Tue, 16 Jan 2001 11:19:33 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is LockClassinfoForUpdate()'s mark4update a good idea?"
},
{
"msg_contents": "Hiroshi Inoue <[email protected]> writes:\n> I like neither unexpected errors nor doing the wrong\n> thing by handling tuples which aren't guaranteed to\n> be up-to-date. After mark4update, the tuple is \n> guaranteed to be up-to-date and heap_update won't\n> fail even though some commands etc neglect to lock\n> the correspoding relation. Isn't it proper to guard\n> myself as much as possible ?\n\nIf one piece of the system \"guards itself\" and others do not, what have\nyou gained? Not much. What I want is a consistently applied coding\nrule that protects all commands; and the simpler that coding rule is,\nthe more likely it is to be consistently applied. I do not think that\nadding mark4update improves matters when seen in this light. The code\nto do it is bulky and error-prone, and I have no confidence that it will\nbe done right everywhere.\n\nIn fact, at the moment I'm not convinced that it's done right anywhere.\nThe uses of mark4update for system-catalog updates are all demonstrably\nbroken right now, and the ones in the executor make use of a hugely\ncomplex and probably buggy qualification re-evaluation mechanism. What\nis the equivalent of qual re-evaluation for a system catalog tuple,\nanyway?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Jan 2001 21:32:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why is LockClassinfoForUpdate()'s mark4update a good idea? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <[email protected]> writes:\n> > I like neither unexpected errors nor doing the wrong\n> > thing by handling tuples which aren't guaranteed to\n> > be up-to-date. After mark4update, the tuple is\n> > guaranteed to be up-to-date and heap_update won't\n> > fail even though some commands etc neglect to lock\n> > the correspoding relation. Isn't it proper to guard\n> > myself as much as possible ?\n> \n> If one piece of the system \"guards itself\" and others do not, what have\n> you gained? Not much. \n\n??? The system guarding itself won't gain bad result at least.\nIf one piece of system \"guards others\" and others do not, both\nmay gain bad results. Locking a class info by locking the\ncorrsponding relation is such a mechanism.\n\nHowever I don't think we could introduce this mechanism to all\nsystem catalogs. I implemented LockClassinfoForUpdate() by the\nfollowing reason.\n\n1) pg_class is the most significant relation.\n2) LockClassinfoForUpdate() adds few new conflicts \n by locking the pg_class tuple because locking the\n corresponding relation locks the pg_class entity\n implicitly unless some stuff neglects to lock\n corresponding relation.\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Tue, 16 Jan 2001 12:13:25 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is LockClassinfoForUpdate()'s mark4update a good idea?"
}
]
|
[
{
"msg_contents": "Uploading now. Should show up on ftp.postgresql.org soon.\n\nLook in /pub/dev/test-rpms.\n\nBETA TEST USE ONLY.\n\nTom, try out a PPC build on this one. I know of one problem that I have\nto fix -- postgresql-perl fails dependencies for libpq.so (I backed out\nthe patch to Makefile.shlib). A --nodeps install installs it OK, and the\ntest.pl script (/usr/share/perl5/test.pl) passes its tests.\n\nFixes include:\nplpgsql and pltcl are now in /usr/lib where they belong.\nThe includes in the devel RPM were split, now they are all in\n/usr/include/postgresql. This is a change from prior releases.\nBaggage from prior RPM's removed from spec file.\npg_config in -devel rpm.\npg_upgrade removed.\nAnd others -- see the changelog in the spec file.\n\nBETA TEST USE ONLY!\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 15 Jan 2001 21:50:01 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.1beta3-2 RPMset uploading."
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> Tom, try out a PPC build on this one. I know of one problem that I have\n> to fix -- postgresql-perl fails dependencies for libpq.so (I backed out\n> the patch to Makefile.shlib).\n\nThe backend seems to build OK, but the build fails in interfaces/perl5\nbecause libpq-fe.h isn't found. The compiler is getting passed\n-I/usr/include/postgresql, which might work if I'd already installed\nthe RPM, but that's tough when I haven't built it yet :-(\n\nI dunno how to get the RPM build process to bypass perl support, so I\ncan't get any further than that ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Jan 2001 02:04:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.1beta3-2 RPMset uploading. "
},
{
"msg_contents": "Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > Tom, try out a PPC build on this one. I know of one problem that I have\n> > to fix -- postgresql-perl fails dependencies for libpq.so (I backed out\n> > the patch to Makefile.shlib).\n \n> The backend seems to build OK, but the build fails in interfaces/perl5\n> because libpq-fe.h isn't found. The compiler is getting passed\n> -I/usr/include/postgresql, which might work if I'd already installed\n> the RPM, but that's tough when I haven't built it yet :-(\n\nWell, actually, the perl5 interface gets built a total of three times.\n:-/\nTwo times are done by the main Makefile system, and the third is done\nmanually to pass the right options. The -I _should_, on the third run\n(an 'install' run), get passed $RPM_BUILD_ROOT/usr/include/postgresql. \nThe main Make system builds it once, then the install phase builds it\nagain to get the proper libraries (which, in the buildroot environment\nget clobbered). I have to perform the third build (which bypasses the\nGNUmakefile/Makefile.pl mechanism and executes the (already setup for\ninstall phase) Makefile directly, with the proper options. Which of\ncourse forces another whole build.\n \n> I dunno how to get the RPM build process to bypass perl support, so I\n> can't get any further than that ...\n\nrpm --define 'perl 0' -ba .....\n\nor\n\nrpm --define 'perl 0' --rebuild ..... for that matter.\n\nThat portion needs a good cross-platform test too..... :-) I haven't\ngiven it a thorough test yet -- was planning on doing that Saturday\nmorning/afternoon, as I was going to go through an double-check\neverything, possibly releasing a -3 at that point if I have made any\nsubstantial changes. I don't forsee any at this early date, but I did\nwant to do some extensive build testing and upgrade testing. With the\nway it has to be done, it takes quite some time to thoroughly test.\n\nCan you email me a build log (need stdout AND stderr)?\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 19 Jan 2001 11:16:00 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7.1beta3-2 RPMset uploading."
}
]
|
[
{
"msg_contents": "Hi!\nDoes PostgreSQL support Dynamic SQL?\nIf so, when can i found documentation?\nThank's\n\n\n",
"msg_date": "Tue, 16 Jan 2001 09:22:26 +0200",
"msg_from": "\"Lark\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Does PostgreSQL support Dynamic SQL?"
}
]
|
[
{
"msg_contents": "Hi.\nI have a big problem with postgres because I need to know how I can see the\nrelations among the table like foreign-key.\nIt' s possible use some commands or graphic tool from wich I can see that\nrelations?\nDo you know some sites where i can found more information about this!\n\nThank you very much,\nand exuse me for my bad English!\n\n\nFabio\n\n\n",
"msg_date": "Tue, 16 Jan 2001 12:59:03 +0100",
"msg_from": "\"riccardo\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "View tables relationship"
},
{
"msg_contents": "Hi,\n\nThis question have been posted recently, please take a look to archive.\n\nI use this very simple query :\n\n SELECT a.tgargs\n FROM pg_class c, pg_trigger a\n WHERE a.tgname LIKE 'RI_ConstraintTrigger_%' AND c.relname='$table' AND\na.tgrelid = c.oid\n\nRegards,\n\nGilles DAROLD\n\nriccardo wrote:\n\n> Hi.\n> I have a big problem with postgres because I need to know how I can see the\n> relations among the table like foreign-key.\n> It' s possible use some commands or graphic tool from wich I can see that\n> relations?\n> Do you know some sites where i can found more information about this!\n>\n> Thank you very much,\n> and exuse me for my bad English!\n>\n> Fabio\n\n",
"msg_date": "Tue, 16 Jan 2001 19:30:37 +0100",
"msg_from": "Gilles DAROLD <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: View tables relationship"
}
]
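A usage sketch of Gilles' query with the placeholder filled in ('orders' is a hypothetical table name). The tgargs value packs the constraint's arguments (constraint name, the two table names, the match type and the key column pairs) separated by \000 bytes; that layout is stated from memory here, so verify it against your own output:

    SELECT a.tgargs
      FROM pg_class c, pg_trigger a
     WHERE a.tgname LIKE 'RI_ConstraintTrigger_%'
       AND c.relname = 'orders'
       AND a.tgrelid = c.oid;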
|
[
{
"msg_contents": "> I install postgresql with locale by\n> ./configure --enable-locale\n> \n> and try to test strcoll() and it work in my OS' locale.\n\nHow did you test strcoll() in your OS? Can you show me the source code\nof your test program?\n\n> I add indirectory /src/backend/main/main.c for\n> setlocale(LC_COLLATE,\"th_TH\");\n> \n> and then compile with gmake and install when I test to\n> sort in table it sort by ASCII and it does not correct.\n> I try to use strcoll() because I need to sort data follow by\n> dictionary.\n> \tDoes Postgresql sort by node or string?I add strcoll() in\n> \t/backend/lib/lispsort.c like this\n> #include <locale.h>\n> List *\n> \tlisp_qsort(List *the_list,int (*compare)())\n> {\n> \t\t/* sort array*/\n> pg_sort(nodearray, num, sizeof(List *), compare);\n> \t\tadd strcoll() replace compare()\n> \t\t[compare is a function to compare two node]\n> \tpg_sort(nodearray, num, sizeof(List *), strcoll); \n> \t\n> \t}\n> \tBut it does not work.I think lispsort.c doesn't relate with data\n> \tsorting in table.If you have other comment please guide to me.\n> \tNow,my problem is I don't know what file that relate with sorting\n> \tdata in table.Where I ought to add strcoll()?\n\nI don't think you need to do those kind of things. Everything is\nalready in PostgreSQL including calling to strcoll().\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 16 Jan 2001 23:24:01 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 7.0.2 with locale."
}
]
|
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: 16 January 2001 16:50\n> To: Dave Page\n> Cc: '[email protected]'\n> Subject: Re: [HACKERS] ODBC Driver int8 Patch\n> \n> \n> As I remember, the problem is that this makes us match the \n> ODBC v2 spec,\n> but then we would not match the v3 spec. Is that correct?\n> \n\nYes, the patch I supplied will make it correct for v2. As it stands it is\ncorrect for v3. However as the driver identifies itself as v2 (i believe now\nv2.5) compliant, ADO expects it to follow the v2 spec and then fails when it\ndoesn't. \n\nThe original problem was briefly discussed on the interfaces list under the\nthread '[INTERFACES] Problems with int8 and MS ADO/ODBC'\n\nRegards,\n\nDave.\n",
"msg_date": "Tue, 16 Jan 2001 17:07:36 -0000",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: ODBC Driver int8 Patch"
}
]
|
[
{
"msg_contents": "> > Note that elog(ERROR/FATAL) is changed to elog(STOP) if Critical\n> > SectionCount > 0.\n> \n> Not in current sources ;-).\n> \n> Perhaps Vadim will say that I broke his error scheme, but if so it's\n> his own fault for not documenting such delicate code at all. \n\nOk, it's my fault (though I placed NO ELOG(ERROR) comments everywhere\naround critical xlog-related sections of code).\n\n> I believe he's out of town this weekend, so let's wait till he gets\n> back and then discuss it some more. Perhaps there is a need to\n> distinguish xlog-related critical sections from other ones, or\n> perhaps not.\n\nPerhaps. For xlog-related code rule is simple - backend must not be\ninterrupted till changes are logged.\n\nVadim\n",
"msg_date": "Tue, 16 Jan 2001 09:18:56 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: SIGTERM -> elog(FATAL) -> proc_exit() is probably a\n\t bad idea"
}
]
|
[
{
"msg_contents": "> Because I think turning an elog(ERROR) into a system-wide crash is\n> not a good idea ;-). If you are correct that this behavior \n> is necessary for WAL-related critical sections, then indeed we need\n> two kinds of critical sections, one that just holds off cancel/die\n> response and one that turns elog(ERROR) into a dangerous weapon.\n> I'm going to wait and see Vadim's response before I do anything ...\n\nI've tried to move \"dangerous\" ops with non-zero probability of\nelog(ERROR) (eg new file block allocation) out of crit sections.\nAnyway we need in ERROR-->STOP for safety when changes aren't logged.\n\nVadim\n",
"msg_date": "Tue, 16 Jan 2001 09:28:45 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: SIGTERM -> elog(FATAL) -> proc_exit() is probably a\n\t bad idea"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>> Because I think turning an elog(ERROR) into a system-wide crash is\n>> not a good idea ;-). If you are correct that this behavior \n>> is necessary for WAL-related critical sections, then indeed we need\n>> two kinds of critical sections, one that just holds off cancel/die\n>> response and one that turns elog(ERROR) into a dangerous weapon.\n>> I'm going to wait and see Vadim's response before I do anything ...\n\n> I've tried to move \"dangerous\" ops with non-zero probability of\n> elog(ERROR) (eg new file block allocation) out of crit sections.\n> Anyway we need in ERROR-->STOP for safety when changes aren't logged.\n\nWhy is that safer than just treating an ERROR as an ERROR? It seems to\nme there's a real risk of a crash/restart loop if we force a restart\nwhenever we see an xlog-related problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Jan 2001 12:38:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea "
}
]
|
[
{
"msg_contents": "Dear Tom,\n\tI am writing to you because you are the maintainer of the\nquery optimizer and planner.\n\tI have found a very significant performance degradation\nbetween PostgreSQL 6.5.3 and 7.1beta3, which will severely impact two\nlarge applications that I have developed and maintain for several\nclients. The performance difference is seen with the use of indices in\nSELECT statements, whereby the current release does not make effective\nuse of the indices and 6.5.3 does. All of these tests were run on a SGI\nR10000 Indigo2 system running Irix 6.5. All the regression tests passed\nas expected for both versions.\n\tI have followed the discussion in pgsql-hackers over the previous\nmonths and others have noted some performance problems, and the response\nhas typically been to VACUUM the tables. Unfortunately, this is not a\npractical option for my applications. They are very large -- I have one\ntable that is 17GB in length, and the applications are used frequently.\nMore importantly, PostgreSQL 6.5.3 works very, very well without\nVACUUM'ing.\n\tIn order to assist you to diagnosing and correcting this\nproblem, I have prepared a test database that shows the problems. I\nwill attach three files; the test script, the log from running it on\nversion 6.5.3, and the log from running it on version 7.1beta3. In\naddition, I have setup an anonymous FTP directory on\nftp.congen.com:/pub/pg_perf which contains all of these files as well\nas the compressed table dumps used to build the test database. (When\nyou have finished copying the files, please let me know.)\n\tThe test script creates the database including the necessary\nindexing, and then runs EXPLAIN on each of the queries followed by\nactually executing the queries with \"timex\" commands to report elapsed\ntimes. The striking difference in the query plans is that 7.1 uses\nonly sequential searches for the SELECT's whereas 6.5.3 uses index\nscans. As a result, 7.1 is almost two orders of magnitude slower than\n6.5.3 with exactly the same data, schema, and queries. \n\n\tI plead with you to revisit this question of performance and\nfix PostgreSQL 7.1 to work as well as PostgreSQL 6.5.3 does. I depend\nupon PostgreSQL for much of my work, and I do not want to abandon it\nbecause of this performance problem which arose only recently. Thank\nyou.\n\n+----------------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | Phone: 609 737 6383 |\n| President, Congenomics, Inc. | Fax: 609 737 7528 |\n| 114 W Franklin Ave, Suite K1,4,5 | email: [email protected] |\n| P.O. 
Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+----------------------------------+------------------------------------+\n\n#!/bin/csh\n\ncreatedb perf_test\n\ngunzip <proteins.dmp.gz | timex psql -e perf_test\ngunzip <comparisons_4.dmp.gz | timex psql -e perf_test\ngunzip <concordance_91.dmp.gz | timex psql -e perf_test\n\npsql -e perf_test <<EOF\nexplain select * from comparisons_4 where name1 = 'HI0001';\nexplain select count(*) from comparisons_4 where code = 80003;\nexplain select p.name, p.purpose from comparisons_4 c, proteins p where c.name1 = 'HI0003' and c.name2 = p.name;\nexplain select c.target_name, c.matched_name, c.score, p.purpose from concordance_91 c, proteins p where c.matched_name = p.name;\nEOF\n\ntimex psql -e -c \"select * from comparisons_4 where name1 = 'HI0001'\" perf_test\ntimex psql -e -c \"select count(*) from comparisons_4 where code = 80003\" perf_test\ntimex psql -e -c \"select p.name, p.purpose from comparisons_4 c, proteins p where c.name1 = 'HI0003' and c.name2 = p.name\" perf_test\ntimex psql -e -c \"select c.target_name, c.matched_name, c.score, p.purpose from concordance_91 c, proteins p where c.matched_name = p.name\" perf_test\n\n\nCREATE TABLE \"proteins\" (\n\t\"name\" character varying(16),\n\t\"organism\" text,\n\t\"start_position\" int4,\n\t\"last_position\" int4,\n\t\"seq\" text,\n\t\"purpose\" text,\n\t\"alternate_key\" character varying(16),\n\t\"comment\" text,\n\t\"compared\" bool,\n\t\"complement\" bool,\n\t\"chromosome\" character varying(4),\n\t\"essentiality\" float8);\nQUERY: CREATE TABLE \"proteins\" (\n\t\"name\" character varying(16),\n\t\"organism\" text,\n\t\"start_position\" int4,\n\t\"last_position\" int4,\n\t\"seq\" text,\n\t\"purpose\" text,\n\t\"alternate_key\" character varying(16),\n\t\"comment\" text,\n\t\"compared\" bool,\n\t\"complement\" bool,\n\t\"chromosome\" character varying(4),\n\t\"essentiality\" float8);\nCOPY \"proteins\" FROM stdin;\nQUERY: COPY \"proteins\" FROM stdin;\nCREATE INDEX \"protein_names\" on \"proteins\" using btree ( \"name\" \"varchar_ops\" );\nQUERY: CREATE INDEX \"protein_names\" on \"proteins\" using btree ( \"name\" \"varchar_ops\" );\nCREATE INDEX \"protein_organism\" on \"proteins\" using btree ( \"organism\" \"text_ops\" );\nQUERY: CREATE INDEX \"protein_organism\" on \"proteins\" using btree ( \"organism\" \"text_ops\" );\nCREATE\nCREATE\nCREATE\nEOF\n\nreal 1:11.42\nuser 3.15\nsys 0.53\n\nCREATE TABLE \"comparisons_4\" (\n\t\"name1\" character varying(16),\n\t\"name2\" character varying(16),\n\t\"z_score\" float8,\n\t\"expected\" float8,\n\t\"local_overlap_ratio\" float8,\n\t\"local_overlap_count\" int4,\n\t\"overlap_ratio\" float8,\n\t\"code\" int4);\nQUERY: CREATE TABLE \"comparisons_4\" (\n\t\"name1\" character varying(16),\n\t\"name2\" character varying(16),\n\t\"z_score\" float8,\n\t\"expected\" float8,\n\t\"local_overlap_ratio\" float8,\n\t\"local_overlap_count\" int4,\n\t\"overlap_ratio\" float8,\n\t\"code\" int4);\nCOPY \"comparisons_4\" FROM stdin;\nQUERY: COPY \"comparisons_4\" FROM stdin;\nCREATE INDEX \"comparisons_4_name1\" on \"comparisons_4\" using btree ( \"name1\" \"varchar_ops\" );\nQUERY: CREATE INDEX \"comparisons_4_name1\" on \"comparisons_4\" using btree ( \"name1\" \"varchar_ops\" );\nCREATE INDEX \"comparisons_4_name2\" on \"comparisons_4\" using btree ( \"name2\" \"varchar_ops\" );\nQUERY: CREATE INDEX \"comparisons_4_name2\" on \"comparisons_4\" using btree ( \"name2\" \"varchar_ops\" );\nCREATE INDEX \"comparisons_4_code\" on 
\"comparisons_4\" using btree ( \"code\" \"int4_ops\" );\nQUERY: CREATE INDEX \"comparisons_4_code\" on \"comparisons_4\" using btree ( \"code\" \"int4_ops\" );\nCREATE\nCREATE\nCREATE\nCREATE\nEOF\n\nreal 16:42.13\nuser 5.86\nsys 0.96\n\nCREATE TABLE \"concordance_91\" (\n\t\"target_name\" character varying(16),\n\t\"matched_name\" character varying(16),\n\t\"score\" text);\nQUERY: CREATE TABLE \"concordance_91\" (\n\t\"target_name\" character varying(16),\n\t\"matched_name\" character varying(16),\n\t\"score\" text);\nREVOKE ALL on \"concordance_91\" from PUBLIC;\nQUERY: REVOKE ALL on \"concordance_91\" from PUBLIC;\nGRANT ALL on \"concordance_91\" to PUBLIC;\nQUERY: GRANT ALL on \"concordance_91\" to PUBLIC;\nCOPY \"concordance_91\" FROM stdin;\nQUERY: COPY \"concordance_91\" FROM stdin;\nCREATE\nCHANGE\nCHANGE\nEOF\n\nreal 0.30\nuser 0.02\nsys 0.04\n\nexplain select * from comparisons_4 where name1 = 'HI0001';\nQUERY: explain select * from comparisons_4 where name1 = 'HI0001';\nNOTICE: QUERY PLAN:\n\nIndex Scan using comparisons_4_name1 on comparisons_4 (cost=2.05 rows=1 width=64)\n\nexplain select count(*) from comparisons_4 where code = 80003;\nQUERY: explain select count(*) from comparisons_4 where code = 80003;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=2.05 rows=1 width=12)\n -> Index Scan using comparisons_4_code on comparisons_4 (cost=2.05 rows=1 width=12)\n\nexplain select p.name, p.purpose from comparisons_4 c, proteins p where c.name1 = 'HI0003' and c.name2 = p.name;\nQUERY: explain select p.name, p.purpose from comparisons_4 c, proteins p where c.name1 = 'HI0003' and c.name2 = p.name;\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=4.10 rows=1 width=36)\n -> Index Scan using comparisons_4_name1 on comparisons_4 c (cost=2.05 rows=1 width=12)\n -> Index Scan using protein_names on proteins p (cost=2.05 rows=36840 width=24)\n\nexplain select c.target_name, c.matched_name, c.score, p.purpose from concordance_91 c, proteins p where c.matched_name = p.name;\nQUERY: explain select c.target_name, c.matched_name, c.score, p.purpose from concordance_91 c, proteins p where c.matched_name = p.name;\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=2093.00 rows=36840 width=60)\n -> Seq Scan on concordance_91 c (cost=43.00 rows=1000 width=36)\n -> Index Scan using protein_names on proteins p (cost=2.05 rows=36840 width=24)\n\nEXPLAIN\nEXPLAIN\nEXPLAIN\nEXPLAIN\nEOF\nQUERY: select * from comparisons_4 where name1 = 'HI0001'\nname1 |name2 |z_score|expected|local_overlap_ratio|local_overlap_count|overlap_ratio| code\n------+---------------+-------+--------+-------------------+-------------------+-------------+-----\nHI0001|PDB2DBV_O | 1217.4| 0| 0.56716| 335| 0.560468|30012\nHI0001|PDB4DBV_O | 1207| 0| 0.56418| 335| 0.557523|30012\nHI0001|PDB2GD1_P | 1226.4| 0| 0.57015| 335| 0.563423|30012\nHI0001|PDB1GAE_O | 1861.8| 0| 0.83133| 332| 0.814164|30012\nHI0001|PDB4GPD_1 | 1357.8| 0| 0.64865| 333| 0.637169|30012\nHI0001|HP1346 | 850.3| 6.9e-41| 0.39222| 334| 0.386435|30005\nHI0001|TP0844 | 780.3| 5.8e-37| 0.46307| 352| 0.465716|30014\nHI0001|PDB1HDG_O | 1020.4| 0| 0.48024| 329| 0.466074|30012\nHI0001|SCPIR-DEBYG1 | 1405.2| 0| 0.6497| 334| 0.640117|30000\nHI0001|Rv1436 | 970.4| 0| 0.49558| 339| 0.49558|30010\nHI0001|PDB1CER_O | 949.7| 0| 0.47734| 331| 0.466075|30012\nHI0001|PDB1NLH_ | 935.1| 0| 0.46847| 333| 0.458825|30012\nHI0001|PDB1GGA_A | 918| 0| 0.52125| 353| 0.51397|30012\nHI0001|PDB1GAD_O | 1869.5| 0| 0.83434| 332| 0.817112|30012\nHI0001|PDB1GYP_A | 900.1| 0| 0.51275| 353| 0.505589|30012\nHI0001|MG301 | 
866.7| 0| 0.43155| 336| 0.427731|30004\nHI0001|SCSW-G3P1_YEAST| 1425.3| 0| 0.65868| 334| 0.648965|30000\nHI0001|ScTDH1 | 1424.6| 0| 0.65868| 334| 0.648965|30013\nHI0001|ScTDH2 | 1405.2| 0| 0.6497| 334| 0.640117|30013\nHI0001|SCSW-G3P3_YEAST| 1417.5| 0| 0.65868| 334| 0.648965|30000\nHI0001|ScTDH3 | 1416.8| 0| 0.65868| 334| 0.648965|30013\nHI0001|SCGP-3720 | 1416.8| 0| 0.66168| 334| 0.651921|30000\nHI0001|SCGP-E243731 | 1416.8| 0| 0.65868| 334| 0.648965|30000\nHI0001|SCSW-G3P2_YEAST| 1405.9| 0| 0.6497| 334| 0.640117|30000\nHI0001|SCGP-1008189 | 1424.6| 0| 0.65868| 334| 0.648965|30000\nHI0001|SCGP-3726 | 1398.7| 0| 0.6497| 334| 0.640117|30000\nHI0001|PDB3GPD_R | 1432.2| 0| 0.63772| 334| 0.628314|30012\nHI0001|HP0921 | 762.6| 5.6e-36| 0.40407| 344| 0.41003|30005\nHI0001|MJ1146 | 124.7| 1.9| 0.25094| 267| 0.195338|30007\nHI0001|SCGP-3724 | 1371.5| 0| 0.63772| 334| 0.628314|30000\n(30 rows)\n\n\nreal 0.18\nuser 0.02\nsys 0.03\n\nQUERY: select count(*) from comparisons_4 where code = 80003\ncount\n-----\n 3231\n(1 row)\n\n\nreal 0.34\nuser 0.02\nsys 0.03\n\nQUERY: select p.name, p.purpose from comparisons_4 c, proteins p where c.name1 = 'HI0003' and c.name2 = p.name\nname |purpose \n-------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nMG263 |hypothetical protein \nHP0652 |phosphoserine phosphatase \nMJ1594 |phosphoserine phosphatase \nMG125 |hypothetical protein \nTP0290 |conserved hypothetical protein \nHI1033 |phosphoserine phosphatase (o-phosphoserine phosphohydrolase) \nHI0597 |hypothetical protein \nRv3813c|(MTCY409.17), len: 273. Unknown, similar to many hypothetical proteins eg. YXEH_BACSU P54947 hypothetical 30.2 kd protein in idh-deor (270 aa), fasta results; opt: 329 z-score: 456.0 E(): 2.2e-18, 32.2% identity in 267 aa overlap \nRv3042c|(MTV012.57c), len: 409. The C-terminal domain (150-409) is highly similar to several SerB proteins e.g. P06862|SERB_ECOLI. N-terminus (1-150) shows no similarity, FASTA score: sp|P06862|SERB_ECOLI PHOSPHOSERINE PHOSPHATASE (EC 3.1 (322 aa) opt: 628 z-score: 753.3 E(): 0; 46.8%identity in 235 aa overlap. TBparse score is 0.884\nMG265 |hypothetical protein \n(10 rows)\n\n\nreal 0.24\nuser 0.02\nsys 0.03\n\nQUERY: select c.target_name, c.matched_name, c.score, p.purpose from concordance_91 c, proteins p where c.matched_name = p.name\ntarget_name|matched_name| score|purpose \n-----------+------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nECinfA |BSInfA |0.680556|initiation factor IF-1 \nECinfA |HI0548 | 0.80952|initiation factor IF-1 \nECinfA |HP1298 | 0.61111|translation initiation factor EF-1 \nECinfA |Rv3462c |0.684936|(MTCY13E12.15c), len: 73 aa. infA. Probable initiation factor IF-1. FASTA results: identical to IF1_MYCBO P45957 initiation factor if-1 (72 aa) \nECrpmA |BB0780 |0.635297|ribosomal protein L27 \nECrpmA |HI0879 | 0.87059|ribosomal protein L27 \nECrpmA |HP0297 |0.613632|ribosomal protein L27 \nECrpmA |Rv2441c |0.616278|(MTCY428.05), len: 86. 
Probable rpmA, similar to eg RL27_ECOLI P02427 50s ribosomal protein l27, (84 aa), fasta scores, opt: 328, E(): 7.1e-17, (64.2% identity in 81 aa overlap); contains PS00831 Ribosomal protein L27 signature\n(8 rows)\n\n\nreal 0.17\nuser 0.02\nsys 0.03\n\n\nCREATE DATABASE\nCREATE TABLE \"proteins\" (\n\t\"name\" character varying(16),\n\t\"organism\" text,\n\t\"start_position\" int4,\n\t\"last_position\" int4,\n\t\"seq\" text,\n\t\"purpose\" text,\n\t\"alternate_key\" character varying(16),\n\t\"comment\" text,\n\t\"compared\" bool,\n\t\"complement\" bool,\n\t\"chromosome\" character varying(4),\n\t\"essentiality\" float8);\nCREATE\nCOPY \"proteins\" FROM stdin;\nCREATE INDEX \"protein_names\" on \"proteins\" using btree ( \"name\" \"varchar_ops\" );\nCREATE\nCREATE INDEX \"protein_organism\" on \"proteins\" using btree ( \"organism\" \"text_ops\" );\nCREATE\n\nreal 1:04.49\nuser 3.14\nsys 0.57\n\nCREATE TABLE \"comparisons_4\" (\n\t\"name1\" character varying(16),\n\t\"name2\" character varying(16),\n\t\"z_score\" float8,\n\t\"expected\" float8,\n\t\"local_overlap_ratio\" float8,\n\t\"local_overlap_count\" int4,\n\t\"overlap_ratio\" float8,\n\t\"code\" int4);\nCREATE\nCOPY \"comparisons_4\" FROM stdin;\nCREATE INDEX \"comparisons_4_name1\" on \"comparisons_4\" using btree ( \"name1\" \"varchar_ops\" );\nCREATE\nCREATE INDEX \"comparisons_4_name2\" on \"comparisons_4\" using btree ( \"name2\" \"varchar_ops\" );\nCREATE\nCREATE INDEX \"comparisons_4_code\" on \"comparisons_4\" using btree ( \"code\" \"int4_ops\" );\nCREATE\n\nreal 7:04.43\nuser 5.87\nsys 1.03\n\nCREATE TABLE \"concordance_91\" (\n\t\"target_name\" character varying(16),\n\t\"matched_name\" character varying(16),\n\t\"score\" text);\nCREATE\nREVOKE ALL on \"concordance_91\" from PUBLIC;\nCHANGE\nGRANT ALL on \"concordance_91\" to PUBLIC;\nCHANGE\nCOPY \"concordance_91\" FROM stdin;\n\nreal 0.60\nuser 0.01\nsys 0.03\n\nexplain select * from comparisons_4 where name1 = 'HI0001';\nNOTICE: QUERY PLAN:\n\nSeq Scan on comparisons_4 (cost=0.00..15640.81 rows=5918 width=64)\n\nEXPLAIN\nexplain select count(*) from comparisons_4 where code = 80003;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=15655.61..15655.61 rows=1 width=0)\n -> Seq Scan on comparisons_4 (cost=0.00..15640.81 rows=5918 width=0)\n\nEXPLAIN\nexplain select p.name, p.purpose from comparisons_4 c, proteins p where c.name1 = 'HI0003' and c.name2 = p.name;\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=22495.22..23029.70 rows=2180283 width=36)\n -> Sort (cost=16011.62..16011.62 rows=5918 width=12)\n -> Seq Scan on comparisons_4 c (cost=0.00..15640.81 rows=5918 width=12)\n -> Sort (cost=6483.60..6483.60 rows=36840 width=24)\n -> Seq Scan on proteins p (cost=0.00..3247.40 rows=36840 width=24)\n\nEXPLAIN\nexplain select c.target_name, c.matched_name, c.score, p.purpose from concordance_91 c, proteins p where c.matched_name = p.name;\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=6553.43..7026.43 rows=368400 width=60)\n -> Sort (cost=69.83..69.83 rows=1000 width=36)\n -> Seq Scan on concordance_91 c (cost=0.00..20.00 rows=1000 width=36)\n -> Sort (cost=6483.60..6483.60 rows=36840 width=24)\n -> Seq Scan on proteins p (cost=0.00..3247.40 rows=36840 width=24)\n\nEXPLAIN\nselect * from comparisons_4 where name1 = 'HI0001'\n name1 | name2 | z_score | expected | local_overlap_ratio | local_overlap_count | overlap_ratio | code \n--------+-----------------+---------+----------+---------------------+---------------------+---------------+-------\n HI0001 | PDB1GAD_O | 1869.5 | 0 | 0.83434 | 332 | 
0.817112 | 30012\n HI0001 | PDB1GAE_O | 1861.8 | 0 | 0.83133 | 332 | 0.814164 | 30012\n HI0001 | PDB3GPD_R | 1432.2 | 0 | 0.63772 | 334 | 0.628314 | 30012\n HI0001 | SCSW-G3P1_YEAST | 1425.3 | 0 | 0.65868 | 334 | 0.648965 | 30000\n HI0001 | SCGP-1008189 | 1424.6 | 0 | 0.65868 | 334 | 0.648965 | 30000\n HI0001 | ScTDH1 | 1424.6 | 0 | 0.65868 | 334 | 0.648965 | 30013\n HI0001 | SCSW-G3P3_YEAST | 1417.5 | 0 | 0.65868 | 334 | 0.648965 | 30000\n HI0001 | ScTDH3 | 1416.8 | 0 | 0.65868 | 334 | 0.648965 | 30013\n HI0001 | SCGP-3720 | 1416.8 | 0 | 0.66168 | 334 | 0.651921 | 30000\n HI0001 | SCGP-E243731 | 1416.8 | 0 | 0.65868 | 334 | 0.648965 | 30000\n HI0001 | SCSW-G3P2_YEAST | 1405.9 | 0 | 0.6497 | 334 | 0.640117 | 30000\n HI0001 | ScTDH2 | 1405.2 | 0 | 0.6497 | 334 | 0.640117 | 30013\n HI0001 | SCPIR-DEBYG1 | 1405.2 | 0 | 0.6497 | 334 | 0.640117 | 30000\n HI0001 | SCGP-3726 | 1398.7 | 0 | 0.6497 | 334 | 0.640117 | 30000\n HI0001 | SCGP-3724 | 1371.5 | 0 | 0.63772 | 334 | 0.628314 | 30000\n HI0001 | PDB4GPD_1 | 1357.8 | 0 | 0.64865 | 333 | 0.637169 | 30012\n HI0001 | PDB2GD1_P | 1226.4 | 0 | 0.57015 | 335 | 0.563423 | 30012\n HI0001 | PDB2DBV_O | 1217.4 | 0 | 0.56716 | 335 | 0.560468 | 30012\n HI0001 | PDB4DBV_O | 1207 | 0 | 0.56418 | 335 | 0.557523 | 30012\n HI0001 | PDB1HDG_O | 1020.4 | 0 | 0.48024 | 329 | 0.466074 | 30012\n HI0001 | Rv1436 | 970.4 | 0 | 0.49558 | 339 | 0.49558 | 30010\n HI0001 | PDB1CER_O | 949.7 | 0 | 0.47734 | 331 | 0.466075 | 30012\n HI0001 | PDB1NLH_ | 935.1 | 0 | 0.46847 | 333 | 0.458825 | 30012\n HI0001 | PDB1GGA_A | 918 | 0 | 0.52125 | 353 | 0.51397 | 30012\n HI0001 | PDB1GYP_A | 900.1 | 0 | 0.51275 | 353 | 0.505589 | 30012\n HI0001 | MG301 | 866.7 | 0 | 0.43155 | 336 | 0.427731 | 30004\n HI0001 | HP1346 | 850.3 | 6.9e-41 | 0.39222 | 334 | 0.386435 | 30005\n HI0001 | TP0844 | 780.3 | 5.8e-37 | 0.46307 | 352 | 0.465716 | 30014\n HI0001 | HP0921 | 762.6 | 5.6e-36 | 0.40407 | 344 | 0.41003 | 30005\n HI0001 | MJ1146 | 124.7 | 1.9 | 0.25094 | 267 | 0.195338 | 30007\n(30 rows)\n\n\nreal 22.68\nuser 0.01\nsys 0.03\n\nselect count(*) from comparisons_4 where code = 80003\n count \n-------\n 3231\n(1 row)\n\n\nreal 21.49\nuser 0.01\nsys 0.03\n\nselect p.name, p.purpose from comparisons_4 c, proteins p where c.name1 = 'HI0003' and c.name2 = p.name\n name | purpose \n---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HI0597 | hypothetical protein\n HI1033 | phosphoserine phosphatase (o-phosphoserine phosphohydrolase)\n HP0652 | phosphoserine phosphatase \n MG125 | hypothetical protein\n MG263 | hypothetical protein\n MG265 | hypothetical protein\n MJ1594 | phosphoserine phosphatase\n Rv3042c | (MTV012.57c), len: 409. The C-terminal domain (150-409) is highly similar to several SerB proteins e.g. P06862|SERB_ECOLI. N-terminus (1-150) shows no similarity, FASTA score: sp|P06862|SERB_ECOLI PHOSPHOSERINE PHOSPHATASE (EC 3.1 (322 aa) opt: 628 z-score: 753.3 E(): 0; 46.8%identity in 235 aa overlap. TBparse score is 0.884\n Rv3813c | (MTCY409.17), len: 273. Unknown, similar to many hypothetical proteins eg. 
YXEH_BACSU P54947 hypothetical 30.2 kd protein in idh-deor (270 aa), fasta results; opt: 329 z-score: 456.0 E(): 2.2e-18, 32.2% identity in 267 aa overlap\n TP0290 | conserved hypothetical protein\n(10 rows)\n\n\nreal 23.13\nuser 0.01\nsys 0.03\n\nselect c.target_name, c.matched_name, c.score, p.purpose from concordance_91 c, proteins p where c.matched_name = p.name\n target_name | matched_name | score | purpose \n-------------+--------------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n ECrpmA | BB0780 | 0.635297 | ribosomal protein L27\n ECinfA | BSInfA | 0.680556 | initiation factor IF-1\n ECinfA | HI0548 | 0.80952 | initiation factor IF-1\n ECrpmA | HI0879 | 0.87059 | ribosomal protein L27\n ECrpmA | HP0297 | 0.613632 | ribosomal protein L27 \n ECinfA | HP1298 | 0.61111 | translation initiation factor EF-1 \n ECrpmA | Rv2441c | 0.616278 | (MTCY428.05), len: 86. Probable rpmA, similar to eg RL27_ECOLI P02427 50s ribosomal protein l27, (84 aa), fasta scores, opt: 328, E(): 7.1e-17, (64.2% identity in 81 aa overlap); contains PS00831 Ribosomal protein L27 signature\n ECinfA | Rv3462c | 0.684936 | (MTCY13E12.15c), len: 73 aa. infA. Probable initiation factor IF-1. FASTA results: identical to IF1_MYCBO P45957 initiation factor if-1 (72 aa)\n(8 rows)\n\n\nreal 11.16\nuser 0.01\nsys 0.03",
"msg_date": "Tue, 16 Jan 2001 13:27:28 -0500 (EST)",
"msg_from": "[email protected] (Robert E. Bruccoleri)",
"msg_from_op": true,
"msg_subject": "Performance degradation in PostgreSQL 7.1beta3 vs 6.5.3"
},
{
"msg_contents": "[email protected] (Robert E. Bruccoleri) writes:\n> \tI have followed the discussion in pgsql-hackers over the previous\n> months and others have noted some performance problems, and the response\n> has typically been to VACUUM the tables. Unfortunately, this is not a\n> practical option for my applications. They are very large -- I have one\n> table that is 17GB in length, and the applications are used frequently.\n\nYou can't afford to run a VACUUM ANALYZE even once in the lifetime of\nthe table?\n\n> More importantly, PostgreSQL 6.5.3 works very, very well without\n> VACUUM'ing.\n\n6.5 effectively assumes that \"foo = constant\" will select exactly one\nrow, if it has no statistics to prove otherwise. I don't regard that\nas a well-chosen default, even if it does happen to work OK for your\napplication. Selecting an indexscan when a seqscan is needed is just\nas evil as doing the reverse; what's much worse is that 6.5 will\npick incredibly bad join plans (ie, nested loops) because it thinks\nthat very little data is coming out of the scans.\n\nIf you want to revert to the 6.5 behavior without doing a VACUUM, you\ncould probably get pretty close with\n\tupdate pg_attribute set attdispersion = -1.0;\n\nStats-gathering and planning certainly does need a great deal of\nadditional work, but I'm afraid that none of that will happen before\n7.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Jan 2001 13:59:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation in PostgreSQL 7.1beta3 vs 6.5.3 "
},
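A minimal sketch of the two options Tom describes above, as they would be typed into psql against a 7.1-era catalog. The table name comparisons_4 is taken from the test case earlier in the thread; treat the exact statements as illustrative rather than as a recommendation from the thread itself.

-- Gather real statistics for the one table that matters (what Tom asks about):
VACUUM ANALYZE comparisons_4;

-- Or approximate the 6.5 selectivity default without a VACUUM, per Tom's hint
-- (this touches every column's dispersion estimate in the 7.1 catalog):
UPDATE pg_attribute SET attdispersion = -1.0;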
{
"msg_contents": "\"Robert E. Bruccoleri\" wrote:\n\nYou can try starting postmaster with the \"-o -fs\" option. This will disable sequential scans if there is an index. There is also an environment variable you can set, prior to the operation. I have run into this same problem.\n\n\n> Dear Tom,\n> I am writing to you because you are the maintainer of the\n> query optimizer and planner.\n> I have found a very significant performance degradation\n> between PostgreSQL 6.5.3 and 7.1beta3, which will severely impact two\n> large applications that I have developed and maintain for several\n> clients. The performance difference is seen with the use of indices in\n> SELECT statements, whereby the current release does not make effective\n> use of the indices and 6.5.3 does. All of these tests were run on a SGI\n> R10000 Indigo2 system running Irix 6.5. All the regression tests passed\n> as expected for both versions.\n> I have followed the discussion in pgsql-hackers over the previous\n> months and others have noted some performance problems, and the response\n> has typically been to VACUUM the tables. Unfortunately, this is not a\n> practical option for my applications. They are very large -- I have one\n> table that is 17GB in length, and the applications are used frequently.\n> More importantly, PostgreSQL 6.5.3 works very, very well without\n> VACUUM'ing.\n> In order to assist you to diagnosing and correcting this\n> problem, I have prepared a test database that shows the problems. I\n> will attach three files; the test script, the log from running it on\n> version 6.5.3, and the log from running it on version 7.1beta3. In\n> addition, I have setup an anonymous FTP directory on\n> ftp.congen.com:/pub/pg_perf which contains all of these files as well\n> as the compressed table dumps used to build the test database. (When\n> you have finished copying the files, please let me know.)\n> The test script creates the database including the necessary\n> indexing, and then runs EXPLAIN on each of the queries followed by\n> actually executing the queries with \"timex\" commands to report elapsed\n> times. The striking difference in the query plans is that 7.1 uses\n> only sequential searches for the SELECT's whereas 6.5.3 uses index\n> scans. As a result, 7.1 is almost two orders of magnitude slower than\n> 6.5.3 with exactly the same data, schema, and queries.\n>\n> I plead with you to revisit this question of performance and\n> fix PostgreSQL 7.1 to work as well as PostgreSQL 6.5.3 does. I depend\n> upon PostgreSQL for much of my work, and I do not want to abandon it\n> because of this performance problem which arose only recently. Thank\n> you.\n>\n> +----------------------------------+------------------------------------+\n> | Robert E. Bruccoleri, Ph.D. | Phone: 609 737 6383 |\n> | President, Congenomics, Inc. | Fax: 609 737 7528 |\n> | 114 W Franklin Ave, Suite K1,4,5 | email: [email protected] |\n> | P.O. 
Box 314 | URL: http://www.congen.com/~bruc |\n> | Pennington, NJ 08534 | |\n> +----------------------------------+------------------------------------+\n>\n> ------------------------------------------------------------------------\n> #!/bin/csh\n>\n> createdb perf_test\n>\n> gunzip <proteins.dmp.gz | timex psql -e perf_test\n> gunzip <comparisons_4.dmp.gz | timex psql -e perf_test\n> gunzip <concordance_91.dmp.gz | timex psql -e perf_test\n>\n> psql -e perf_test <<EOF\n> explain select * from comparisons_4 where name1 = 'HI0001';\n> explain select count(*) from comparisons_4 where code = 80003;\n> explain select p.name, p.purpose from comparisons_4 c, proteins p where c.name1 = 'HI0003' and c.name2 = p.name;\n> explain select c.target_name, c.matched_name, c.score, p.purpose from concordance_91 c, proteins p where c.matched_name = p.name;\n> EOF\n>\n> timex psql -e -c \"select * from comparisons_4 where name1 = 'HI0001'\" perf_test\n> timex psql -e -c \"select count(*) from comparisons_4 where code = 80003\" perf_test\n> timex psql -e -c \"select p.name, p.purpose from comparisons_4 c, proteins p where c.name1 = 'HI0003' and c.name2 = p.name\" perf_test\n> timex psql -e -c \"select c.target_name, c.matched_name, c.score, p.purpose from concordance_91 c, proteins p where c.matched_name = p.name\" perf_test\n>\n> ------------------------------------------------------------------------\n> CREATE TABLE \"proteins\" (\n> \"name\" character varying(16),\n> \"organism\" text,\n> \"start_position\" int4,\n> \"last_position\" int4,\n> \"seq\" text,\n> \"purpose\" text,\n> \"alternate_key\" character varying(16),\n> \"comment\" text,\n> \"compared\" bool,\n> \"complement\" bool,\n> \"chromosome\" character varying(4),\n> \"essentiality\" float8);\n> QUERY: CREATE TABLE \"proteins\" (\n> \"name\" character varying(16),\n> \"organism\" text,\n> \"start_position\" int4,\n> \"last_position\" int4,\n> \"seq\" text,\n> \"purpose\" text,\n> \"alternate_key\" character varying(16),\n> \"comment\" text,\n> \"compared\" bool,\n> \"complement\" bool,\n> \"chromosome\" character varying(4),\n> \"essentiality\" float8);\n> COPY \"proteins\" FROM stdin;\n> QUERY: COPY \"proteins\" FROM stdin;\n> CREATE INDEX \"protein_names\" on \"proteins\" using btree ( \"name\" \"varchar_ops\" );\n> QUERY: CREATE INDEX \"protein_names\" on \"proteins\" using btree ( \"name\" \"varchar_ops\" );\n> CREATE INDEX \"protein_organism\" on \"proteins\" using btree ( \"organism\" \"text_ops\" );\n> QUERY: CREATE INDEX \"protein_organism\" on \"proteins\" using btree ( \"organism\" \"text_ops\" );\n> CREATE\n> CREATE\n> CREATE\n> EOF\n>\n> real 1:11.42\n> user 3.15\n> sys 0.53\n>\n> CREATE TABLE \"comparisons_4\" (\n> \"name1\" character varying(16),\n> \"name2\" character varying(16),\n> \"z_score\" float8,\n> \"expected\" float8,\n> \"local_overlap_ratio\" float8,\n> \"local_overlap_count\" int4,\n> \"overlap_ratio\" float8,\n> \"code\" int4);\n> QUERY: CREATE TABLE \"comparisons_4\" (\n> \"name1\" character varying(16),\n> \"name2\" character varying(16),\n> \"z_score\" float8,\n> \"expected\" float8,\n> \"local_overlap_ratio\" float8,\n> \"local_overlap_count\" int4,\n> \"overlap_ratio\" float8,\n> \"code\" int4);\n> COPY \"comparisons_4\" FROM stdin;\n> QUERY: COPY \"comparisons_4\" FROM stdin;\n> CREATE INDEX \"comparisons_4_name1\" on \"comparisons_4\" using btree ( \"name1\" \"varchar_ops\" );\n> QUERY: CREATE INDEX \"comparisons_4_name1\" on \"comparisons_4\" using btree ( \"name1\" \"varchar_ops\" );\n> CREATE INDEX 
\"comparisons_4_name2\" on \"comparisons_4\" using btree ( \"name2\" \"varchar_ops\" );\n> QUERY: CREATE INDEX \"comparisons_4_name2\" on \"comparisons_4\" using btree ( \"name2\" \"varchar_ops\" );\n> CREATE INDEX \"comparisons_4_code\" on \"comparisons_4\" using btree ( \"code\" \"int4_ops\" );\n> QUERY: CREATE INDEX \"comparisons_4_code\" on \"comparisons_4\" using btree ( \"code\" \"int4_ops\" );\n> CREATE\n> CREATE\n> CREATE\n> CREATE\n> EOF\n>\n> real 16:42.13\n> user 5.86\n> sys 0.96\n>\n> CREATE TABLE \"concordance_91\" (\n> \"target_name\" character varying(16),\n> \"matched_name\" character varying(16),\n> \"score\" text);\n> QUERY: CREATE TABLE \"concordance_91\" (\n> \"target_name\" character varying(16),\n> \"matched_name\" character varying(16),\n> \"score\" text);\n> REVOKE ALL on \"concordance_91\" from PUBLIC;\n> QUERY: REVOKE ALL on \"concordance_91\" from PUBLIC;\n> GRANT ALL on \"concordance_91\" to PUBLIC;\n> QUERY: GRANT ALL on \"concordance_91\" to PUBLIC;\n> COPY \"concordance_91\" FROM stdin;\n> QUERY: COPY \"concordance_91\" FROM stdin;\n> CREATE\n> CHANGE\n> CHANGE\n> EOF\n>\n> real 0.30\n> user 0.02\n> sys 0.04\n>\n> explain select * from comparisons_4 where name1 = 'HI0001';\n> QUERY: explain select * from comparisons_4 where name1 = 'HI0001';\n> NOTICE: QUERY PLAN:\n>\n> Index Scan using comparisons_4_name1 on comparisons_4 (cost=2.05 rows=1 width=64)\n>\n> explain select count(*) from comparisons_4 where code = 80003;\n> QUERY: explain select count(*) from comparisons_4 where code = 80003;\n> NOTICE: QUERY PLAN:\n>\n> Aggregate (cost=2.05 rows=1 width=12)\n> -> Index Scan using comparisons_4_code on comparisons_4 (cost=2.05 rows=1 width=12)\n>\n> explain select p.name, p.purpose from comparisons_4 c, proteins p where c.name1 = 'HI0003' and c.name2 = p.name;\n> QUERY: explain select p.name, p.purpose from comparisons_4 c, proteins p where c.name1 = 'HI0003' and c.name2 = p.name;\n> NOTICE: QUERY PLAN:\n>\n> Nested Loop (cost=4.10 rows=1 width=36)\n> -> Index Scan using comparisons_4_name1 on comparisons_4 c (cost=2.05 rows=1 width=12)\n> -> Index Scan using protein_names on proteins p (cost=2.05 rows=36840 width=24)\n>\n> explain select c.target_name, c.matched_name, c.score, p.purpose from concordance_91 c, proteins p where c.matched_name = p.name;\n> QUERY: explain select c.target_name, c.matched_name, c.score, p.purpose from concordance_91 c, proteins p where c.matched_name = p.name;\n> NOTICE: QUERY PLAN:\n>\n> Nested Loop (cost=2093.00 rows=36840 width=60)\n> -> Seq Scan on concordance_91 c (cost=43.00 rows=1000 width=36)\n> -> Index Scan using protein_names on proteins p (cost=2.05 rows=36840 width=24)\n>\n> EXPLAIN\n> EXPLAIN\n> EXPLAIN\n> EXPLAIN\n> EOF\n> QUERY: select * from comparisons_4 where name1 = 'HI0001'\n> name1 |name2 |z_score|expected|local_overlap_ratio|local_overlap_count|overlap_ratio| code\n> ------+---------------+-------+--------+-------------------+-------------------+-------------+-----\n> HI0001|PDB2DBV_O | 1217.4| 0| 0.56716| 335| 0.560468|30012\n> HI0001|PDB4DBV_O | 1207| 0| 0.56418| 335| 0.557523|30012\n> HI0001|PDB2GD1_P | 1226.4| 0| 0.57015| 335| 0.563423|30012\n> HI0001|PDB1GAE_O | 1861.8| 0| 0.83133| 332| 0.814164|30012\n> HI0001|PDB4GPD_1 | 1357.8| 0| 0.64865| 333| 0.637169|30012\n> HI0001|HP1346 | 850.3| 6.9e-41| 0.39222| 334| 0.386435|30005\n> HI0001|TP0844 | 780.3| 5.8e-37| 0.46307| 352| 0.465716|30014\n> HI0001|PDB1HDG_O | 1020.4| 0| 0.48024| 329| 0.466074|30012\n> HI0001|SCPIR-DEBYG1 | 1405.2| 0| 0.6497| 334| 
0.640117|30000\n> HI0001|Rv1436 | 970.4| 0| 0.49558| 339| 0.49558|30010\n> HI0001|PDB1CER_O | 949.7| 0| 0.47734| 331| 0.466075|30012\n> HI0001|PDB1NLH_ | 935.1| 0| 0.46847| 333| 0.458825|30012\n> HI0001|PDB1GGA_A | 918| 0| 0.52125| 353| 0.51397|30012\n> HI0001|PDB1GAD_O | 1869.5| 0| 0.83434| 332| 0.817112|30012\n> HI0001|PDB1GYP_A | 900.1| 0| 0.51275| 353| 0.505589|30012\n> HI0001|MG301 | 866.7| 0| 0.43155| 336| 0.427731|30004\n> HI0001|SCSW-G3P1_YEAST| 1425.3| 0| 0.65868| 334| 0.648965|30000\n> HI0001|ScTDH1 | 1424.6| 0| 0.65868| 334| 0.648965|30013\n> HI0001|ScTDH2 | 1405.2| 0| 0.6497| 334| 0.640117|30013\n> HI0001|SCSW-G3P3_YEAST| 1417.5| 0| 0.65868| 334| 0.648965|30000\n> HI0001|ScTDH3 | 1416.8| 0| 0.65868| 334| 0.648965|30013\n> HI0001|SCGP-3720 | 1416.8| 0| 0.66168| 334| 0.651921|30000\n> HI0001|SCGP-E243731 | 1416.8| 0| 0.65868| 334| 0.648965|30000\n> HI0001|SCSW-G3P2_YEAST| 1405.9| 0| 0.6497| 334| 0.640117|30000\n> HI0001|SCGP-1008189 | 1424.6| 0| 0.65868| 334| 0.648965|30000\n> HI0001|SCGP-3726 | 1398.7| 0| 0.6497| 334| 0.640117|30000\n> HI0001|PDB3GPD_R | 1432.2| 0| 0.63772| 334| 0.628314|30012\n> HI0001|HP0921 | 762.6| 5.6e-36| 0.40407| 344| 0.41003|30005\n> HI0001|MJ1146 | 124.7| 1.9| 0.25094| 267| 0.195338|30007\n> HI0001|SCGP-3724 | 1371.5| 0| 0.63772| 334| 0.628314|30000\n> (30 rows)\n>\n> real 0.18\n> user 0.02\n> sys 0.03\n>\n> QUERY: select count(*) from comparisons_4 where code = 80003\n> count\n> -----\n> 3231\n> (1 row)\n>\n> real 0.34\n> user 0.02\n> sys 0.03\n>\n> QUERY: select p.name, p.purpose from comparisons_4 c, proteins p where c.name1 = 'HI0003' and c.name2 = p.name\n> name |purpose\n> -------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> MG263 |hypothetical protein\n> HP0652 |phosphoserine phosphatase\n> MJ1594 |phosphoserine phosphatase\n> MG125 |hypothetical protein\n> TP0290 |conserved hypothetical protein\n> HI1033 |phosphoserine phosphatase (o-phosphoserine phosphohydrolase)\n> HI0597 |hypothetical protein\n> Rv3813c|(MTCY409.17), len: 273. Unknown, similar to many hypothetical proteins eg. YXEH_BACSU P54947 hypothetical 30.2 kd protein in idh-deor (270 aa), fasta results; opt: 329 z-score: 456.0 E(): 2.2e-18, 32.2% identity in 267 aa overlap\n> Rv3042c|(MTV012.57c), len: 409. The C-terminal domain (150-409) is highly similar to several SerB proteins e.g. P06862|SERB_ECOLI. N-terminus (1-150) shows no similarity, FASTA score: sp|P06862|SERB_ECOLI PHOSPHOSERINE PHOSPHATASE (EC 3.1 (322 aa) opt: 628 z-score: 753.3 E(): 0; 46.8%identity in 235 aa overlap. 
TBparse score is 0.884\n> MG265 |hypothetical protein\n> (10 rows)\n>\n> real 0.24\n> user 0.02\n> sys 0.03\n>\n> QUERY: select c.target_name, c.matched_name, c.score, p.purpose from concordance_91 c, proteins p where c.matched_name = p.name\n> target_name|matched_name| score|purpose\n> -----------+------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> ECinfA |BSInfA |0.680556|initiation factor IF-1\n> ECinfA |HI0548 | 0.80952|initiation factor IF-1\n> ECinfA |HP1298 | 0.61111|translation initiation factor EF-1\n> ECinfA |Rv3462c |0.684936|(MTCY13E12.15c), len: 73 aa. infA. Probable initiation factor IF-1. FASTA results: identical to IF1_MYCBO P45957 initiation factor if-1 (72 aa)\n> ECrpmA |BB0780 |0.635297|ribosomal protein L27\n> ECrpmA |HI0879 | 0.87059|ribosomal protein L27\n> ECrpmA |HP0297 |0.613632|ribosomal protein L27\n> ECrpmA |Rv2441c |0.616278|(MTCY428.05), len: 86. Probable rpmA, similar to eg RL27_ECOLI P02427 50s ribosomal protein l27, (84 aa), fasta scores, opt: 328, E(): 7.1e-17, (64.2% identity in 81 aa overlap); contains PS00831 Ribosomal protein L27 signature\n> (8 rows)\n>\n> real 0.17\n> user 0.02\n> sys 0.03\n>\n> ------------------------------------------------------------------------\n> CREATE DATABASE\n> CREATE TABLE \"proteins\" (\n> \"name\" character varying(16),\n> \"organism\" text,\n> \"start_position\" int4,\n> \"last_position\" int4,\n> \"seq\" text,\n> \"purpose\" text,\n> \"alternate_key\" character varying(16),\n> \"comment\" text,\n> \"compared\" bool,\n> \"complement\" bool,\n> \"chromosome\" character varying(4),\n> \"essentiality\" float8);\n> CREATE\n> COPY \"proteins\" FROM stdin;\n> CREATE INDEX \"protein_names\" on \"proteins\" using btree ( \"name\" \"varchar_ops\" );\n> CREATE\n> CREATE INDEX \"protein_organism\" on \"proteins\" using btree ( \"organism\" \"text_ops\" );\n> CREATE\n>\n> real 1:04.49\n> user 3.14\n> sys 0.57\n>\n> CREATE TABLE \"comparisons_4\" (\n> \"name1\" character varying(16),\n> \"name2\" character varying(16),\n> \"z_score\" float8,\n> \"expected\" float8,\n> \"local_overlap_ratio\" float8,\n> \"local_overlap_count\" int4,\n> \"overlap_ratio\" float8,\n> \"code\" int4);\n> CREATE\n> COPY \"comparisons_4\" FROM stdin;\n> CREATE INDEX \"comparisons_4_name1\" on \"comparisons_4\" using btree ( \"name1\" \"varchar_ops\" );\n> CREATE\n> CREATE INDEX \"comparisons_4_name2\" on \"comparisons_4\" using btree ( \"name2\" \"varchar_ops\" );\n> CREATE\n> CREATE INDEX \"comparisons_4_code\" on \"comparisons_4\" using btree ( \"code\" \"int4_ops\" );\n> CREATE\n>\n> real 7:04.43\n> user 5.87\n> sys 1.03\n>\n> CREATE TABLE \"concordance_91\" (\n> \"target_name\" character varying(16),\n> \"matched_name\" character varying(16),\n> \"score\" text);\n> CREATE\n> REVOKE ALL on \"concordance_91\" from PUBLIC;\n> CHANGE\n> GRANT ALL on \"concordance_91\" to PUBLIC;\n> CHANGE\n> COPY \"concordance_91\" FROM stdin;\n>\n> real 0.60\n> user 0.01\n> sys 0.03\n>\n> explain select * from comparisons_4 where name1 = 'HI0001';\n> NOTICE: QUERY PLAN:\n>\n> Seq Scan on comparisons_4 (cost=0.00..15640.81 rows=5918 width=64)\n>\n> EXPLAIN\n> explain select count(*) from comparisons_4 where code = 80003;\n> NOTICE: QUERY PLAN:\n>\n> Aggregate (cost=15655.61..15655.61 rows=1 width=0)\n> -> Seq Scan on comparisons_4 (cost=0.00..15640.81 rows=5918 width=0)\n>\n> 
EXPLAIN\n> explain select p.name, p.purpose from comparisons_4 c, proteins p where c.name1 = 'HI0003' and c.name2 = p.name;\n> NOTICE: QUERY PLAN:\n>\n> Merge Join (cost=22495.22..23029.70 rows=2180283 width=36)\n> -> Sort (cost=16011.62..16011.62 rows=5918 width=12)\n> -> Seq Scan on comparisons_4 c (cost=0.00..15640.81 rows=5918 width=12)\n> -> Sort (cost=6483.60..6483.60 rows=36840 width=24)\n> -> Seq Scan on proteins p (cost=0.00..3247.40 rows=36840 width=24)\n>\n> EXPLAIN\n> explain select c.target_name, c.matched_name, c.score, p.purpose from concordance_91 c, proteins p where c.matched_name = p.name;\n> NOTICE: QUERY PLAN:\n>\n> Merge Join (cost=6553.43..7026.43 rows=368400 width=60)\n> -> Sort (cost=69.83..69.83 rows=1000 width=36)\n> -> Seq Scan on concordance_91 c (cost=0.00..20.00 rows=1000 width=36)\n> -> Sort (cost=6483.60..6483.60 rows=36840 width=24)\n> -> Seq Scan on proteins p (cost=0.00..3247.40 rows=36840 width=24)\n>\n> EXPLAIN\n> select * from comparisons_4 where name1 = 'HI0001'\n> name1 | name2 | z_score | expected | local_overlap_ratio | local_overlap_count | overlap_ratio | code\n> --------+-----------------+---------+----------+---------------------+---------------------+---------------+-------\n> HI0001 | PDB1GAD_O | 1869.5 | 0 | 0.83434 | 332 | 0.817112 | 30012\n> HI0001 | PDB1GAE_O | 1861.8 | 0 | 0.83133 | 332 | 0.814164 | 30012\n> HI0001 | PDB3GPD_R | 1432.2 | 0 | 0.63772 | 334 | 0.628314 | 30012\n> HI0001 | SCSW-G3P1_YEAST | 1425.3 | 0 | 0.65868 | 334 | 0.648965 | 30000\n> HI0001 | SCGP-1008189 | 1424.6 | 0 | 0.65868 | 334 | 0.648965 | 30000\n> HI0001 | ScTDH1 | 1424.6 | 0 | 0.65868 | 334 | 0.648965 | 30013\n> HI0001 | SCSW-G3P3_YEAST | 1417.5 | 0 | 0.65868 | 334 | 0.648965 | 30000\n> HI0001 | ScTDH3 | 1416.8 | 0 | 0.65868 | 334 | 0.648965 | 30013\n> HI0001 | SCGP-3720 | 1416.8 | 0 | 0.66168 | 334 | 0.651921 | 30000\n> HI0001 | SCGP-E243731 | 1416.8 | 0 | 0.65868 | 334 | 0.648965 | 30000\n> HI0001 | SCSW-G3P2_YEAST | 1405.9 | 0 | 0.6497 | 334 | 0.640117 | 30000\n> HI0001 | ScTDH2 | 1405.2 | 0 | 0.6497 | 334 | 0.640117 | 30013\n> HI0001 | SCPIR-DEBYG1 | 1405.2 | 0 | 0.6497 | 334 | 0.640117 | 30000\n> HI0001 | SCGP-3726 | 1398.7 | 0 | 0.6497 | 334 | 0.640117 | 30000\n> HI0001 | SCGP-3724 | 1371.5 | 0 | 0.63772 | 334 | 0.628314 | 30000\n> HI0001 | PDB4GPD_1 | 1357.8 | 0 | 0.64865 | 333 | 0.637169 | 30012\n> HI0001 | PDB2GD1_P | 1226.4 | 0 | 0.57015 | 335 | 0.563423 | 30012\n> HI0001 | PDB2DBV_O | 1217.4 | 0 | 0.56716 | 335 | 0.560468 | 30012\n> HI0001 | PDB4DBV_O | 1207 | 0 | 0.56418 | 335 | 0.557523 | 30012\n> HI0001 | PDB1HDG_O | 1020.4 | 0 | 0.48024 | 329 | 0.466074 | 30012\n> HI0001 | Rv1436 | 970.4 | 0 | 0.49558 | 339 | 0.49558 | 30010\n> HI0001 | PDB1CER_O | 949.7 | 0 | 0.47734 | 331 | 0.466075 | 30012\n> HI0001 | PDB1NLH_ | 935.1 | 0 | 0.46847 | 333 | 0.458825 | 30012\n> HI0001 | PDB1GGA_A | 918 | 0 | 0.52125 | 353 | 0.51397 | 30012\n> HI0001 | PDB1GYP_A | 900.1 | 0 | 0.51275 | 353 | 0.505589 | 30012\n> HI0001 | MG301 | 866.7 | 0 | 0.43155 | 336 | 0.427731 | 30004\n> HI0001 | HP1346 | 850.3 | 6.9e-41 | 0.39222 | 334 | 0.386435 | 30005\n> HI0001 | TP0844 | 780.3 | 5.8e-37 | 0.46307 | 352 | 0.465716 | 30014\n> HI0001 | HP0921 | 762.6 | 5.6e-36 | 0.40407 | 344 | 0.41003 | 30005\n> HI0001 | MJ1146 | 124.7 | 1.9 | 0.25094 | 267 | 0.195338 | 30007\n> (30 rows)\n>\n> real 22.68\n> user 0.01\n> sys 0.03\n>\n> select count(*) from comparisons_4 where code = 80003\n> count\n> -------\n> 3231\n> (1 row)\n>\n> real 21.49\n> user 0.01\n> sys 0.03\n>\n> select p.name, 
p.purpose from comparisons_4 c, proteins p where c.name1 = 'HI0003' and c.name2 = p.name\n> name | purpose\n> ---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HI0597 | hypothetical protein\n> HI1033 | phosphoserine phosphatase (o-phosphoserine phosphohydrolase)\n> HP0652 | phosphoserine phosphatase\n> MG125 | hypothetical protein\n> MG263 | hypothetical protein\n> MG265 | hypothetical protein\n> MJ1594 | phosphoserine phosphatase\n> Rv3042c | (MTV012.57c), len: 409. The C-terminal domain (150-409) is highly similar to several SerB proteins e.g. P06862|SERB_ECOLI. N-terminus (1-150) shows no similarity, FASTA score: sp|P06862|SERB_ECOLI PHOSPHOSERINE PHOSPHATASE (EC 3.1 (322 aa) opt: 628 z-score: 753.3 E(): 0; 46.8%identity in 235 aa overlap. TBparse score is 0.884\n> Rv3813c | (MTCY409.17), len: 273. Unknown, similar to many hypothetical proteins eg. YXEH_BACSU P54947 hypothetical 30.2 kd protein in idh-deor (270 aa), fasta results; opt: 329 z-score: 456.0 E(): 2.2e-18, 32.2% identity in 267 aa overlap\n> TP0290 | conserved hypothetical protein\n> (10 rows)\n>\n> real 23.13\n> user 0.01\n> sys 0.03\n>\n> select c.target_name, c.matched_name, c.score, p.purpose from concordance_91 c, proteins p where c.matched_name = p.name\n> target_name | matched_name | score | purpose\n> -------------+--------------+----------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> ECrpmA | BB0780 | 0.635297 | ribosomal protein L27\n> ECinfA | BSInfA | 0.680556 | initiation factor IF-1\n> ECinfA | HI0548 | 0.80952 | initiation factor IF-1\n> ECrpmA | HI0879 | 0.87059 | ribosomal protein L27\n> ECrpmA | HP0297 | 0.613632 | ribosomal protein L27\n> ECinfA | HP1298 | 0.61111 | translation initiation factor EF-1\n> ECrpmA | Rv2441c | 0.616278 | (MTCY428.05), len: 86. Probable rpmA, similar to eg RL27_ECOLI P02427 50s ribosomal protein l27, (84 aa), fasta scores, opt: 328, E(): 7.1e-17, (64.2% identity in 81 aa overlap); contains PS00831 Ribosomal protein L27 signature\n> ECinfA | Rv3462c | 0.684936 | (MTCY13E12.15c), len: 73 aa. infA. Probable initiation factor IF-1. FASTA results: identical to IF1_MYCBO P45957 initiation factor if-1 (72 aa)\n> (8 rows)\n>\n> real 11.16\n> user 0.01\n> sys 0.03\n\n",
"msg_date": "Tue, 16 Jan 2001 14:01:19 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation in PostgreSQL 7.1beta3 vs 6.5.3"
},
{
"msg_contents": "Dear Tom,\n\n> You can't afford to run a VACUUM ANALYZE even once in the lifetime of\n> the table?\n\nNot very often at best, and certainly not routinely. Some of my\ntables exceed 10GB and have multiple indices.\n\nHowever, to test your suggestion, I modified my performance test\nscript to \"VACUUM ANALYZE\" all the tables prior to invoking EXPLAIN,\nand it improves all of the searches except this one (EXPLAIN output\nalso included):\n\nexplain select count(*) from comparisons_4 where code = 80003;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=15659.29..15659.29 rows=1 width=0)\n -> Seq Scan on comparisons_4 (cost=0.00..15640.81 rows=7391 width=0)\n\nEXPLAIN\n\nThe choice of sequential scan here takes 30x longer.\n\n> \n> > More importantly, PostgreSQL 6.5.3 works very, very well without\n> > VACUUM'ing.\n> \n> 6.5 effectively assumes that \"foo = constant\" will select exactly one\n> row, if it has no statistics to prove otherwise. I don't regard that\n> as a well-chosen default, even if it does happen to work OK for your\n> application. Selecting an indexscan when a seqscan is needed is just\n> as evil as doing the reverse; what's much worse is that 6.5 will\n> pick incredibly bad join plans (ie, nested loops) because it thinks\n> that very little data is coming out of the scans.\n\nI've tuned my applications to work well with these defaults (and\ndemonstrate to my peers that PostgreSQL is comparable to Oracle in\nperformance for these types of queries). I am willing to make changes\nto my applications to make them work as well with 7.1, but the\nperformance of the query above worries me. I think the current planner\nwill make the wrong decision more often than the right one. To test\nthis further on this table, I went through the comparisons_4 table and\nfound that code 13 appears the most (73912) out of 591825 rows. In\nthis case, 6.5.3 takes 8.56 seconds to return its answer, whereas 7.1\ntakes 12.11 seconds. Even in the worst case for this table, the\nindexed scan is faster, but the optimizer decides on the sequential\nscan. It appears that the decision point for the switch to sequential\nscans isn't set properly. To me, this is a bug. \n\n> If you want to revert to the 6.5 behavior without doing a VACUUM, you\n> could probably get pretty close with\n> \tupdate pg_attribute set attdispersion = -1.0;\n\nDoes VACUUM ANALYZE set this column to its calculated value? What\nkinds of queries would not give 6.5 behavior if I set this column as\nyou suggest?\n\nAlternatively, how hard would it be to add another SET variable like\nUSE_6_5_PLANNING_RULES? Personally, that would be most helpful from an\napplication development viewpoint because I could switch to PostgreSQL\n7.1 without destroying the performance of my applications, and then\ntest new versions with the 7.1 planner with less potential for service\ndisruption.\n\n> Stats-gathering and planning certainly does need a great deal of\n> additional work, but I'm afraid that none of that will happen before\n> 7.1.\n\nAs I said above, I've put a lot of effort into making my applications\nwork quickly with Postgres, and I'm looking forward to using the new\nfeatures that are available with version 7.1. However, I'm very\nconcerned that I will not be able to achieve the same performance\nwithout detailed knowledge of the internals. Shouldn't I be assured\nthat I will improve the performance of a query by creating a index on\nthe fields used for selecting the row? 
That is not the case for the\nquery above.\n\nFinally, I apologize for being a little strident here. I've been\nadvocating for and using Postgres for four years, and it's frustrating\nwhen a new version results in a serious and noticeable performance\ndegradation.\n\nSincerely,\nBob\n\n+----------------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | Phone: 609 737 6383 |\n| President, Congenomics, Inc. | Fax: 609 737 7528 |\n| 114 W Franklin Ave, Suite K1,4,5 | email: [email protected] |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+----------------------------------+------------------------------------+\n",
"msg_date": "Tue, 16 Jan 2001 20:48:10 -0500 (EST)",
"msg_from": "[email protected] (Robert E. Bruccoleri)",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation in PostgreSQL 7.1beta3 vs 6.5.3"
},
{
"msg_contents": "Dear Hannu,\n> \n> \"Robert E. Bruccoleri\" wrote:\n> > \n> > explain select count(*) from comparisons_4 where code = 80003;\n> > NOTICE: QUERY PLAN:\n> > \n> > Aggregate (cost=15659.29..15659.29 rows=1 width=0)\n> > -> Seq Scan on comparisons_4 (cost=0.00..15640.81 rows=7391 width=0)\n> > \n> > EXPLAIN\n> \n> What is the type of field \"code\" ?\n\nint4\n\nDo you think that should make a difference?\n\n+----------------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | Phone: 609 737 6383 |\n| President, Congenomics, Inc. | Fax: 609 737 7528 |\n| 114 W Franklin Ave, Suite K1,4,5 | email: [email protected] |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+----------------------------------+------------------------------------+\n",
"msg_date": "Wed, 17 Jan 2001 10:30:46 -0500 (EST)",
"msg_from": "[email protected] (Robert E. Bruccoleri)",
"msg_from_op": true,
"msg_subject": "Re: Re: Performance degradation in PostgreSQL 7.1beta3 vs"
},
{
"msg_contents": "Dear Hannu,\n> \n> \"Robert E. Bruccoleri\" wrote:\n> > \n> > Dear Hannu,\n> > >\n> > > \"Robert E. Bruccoleri\" wrote:\n> > > >\n> > > > explain select count(*) from comparisons_4 where code = 80003;\n> > > > NOTICE: QUERY PLAN:\n> > > >\n> > > > Aggregate (cost=15659.29..15659.29 rows=1 width=0)\n> > > > -> Seq Scan on comparisons_4 (cost=0.00..15640.81 rows=7391 width=0)\n> > > >\n> > > > EXPLAIN\n> > >\n> > > What is the type of field \"code\" ?\n> > \n> > int4\n> > \n> > Do you think that should make a difference?\n> \n> Probably not here.\n> \n> Sometimes it has made difference if the system does not recognize \n> the other side of comparison (80003) as being of the same type as \n> the index.\n> \n> what are the cost estimates when you run explain with seqscan disabled ?\n> do => SET ENABLE_SEQSCAN TO OFF;\n> see:\n> (http://www.postgresql.org/devel-corner/docs/admin/runtime-config.htm#RUNTIME-CONFIG-OPTIMIZER)\n\nHere's the result from EXPLAIN:\n\nAggregate (cost=19966.21..19966.21 rows=1 width=0)\n -> Index Scan using comparisons_4_code on comparisons_4 (cost=0.00..19947.73 rows=7391 width=0)\n\nThe estimates are too high.\n\n--Bob\n\n+----------------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | Phone: 609 737 6383 |\n| President, Congenomics, Inc. | Fax: 609 737 7528 |\n| 114 W Franklin Ave, Suite K1,4,5 | email: [email protected] |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+----------------------------------+------------------------------------+\n",
"msg_date": "Wed, 17 Jan 2001 11:57:57 -0500 (EST)",
"msg_from": "[email protected] (Robert E. Bruccoleri)",
"msg_from_op": true,
"msg_subject": "Re: Re: Performance degradation in PostgreSQL 7.1beta3 vs"
},
{
"msg_contents": "\"Robert E. Bruccoleri\" wrote:\n> \n> explain select count(*) from comparisons_4 where code = 80003;\n> NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=15659.29..15659.29 rows=1 width=0)\n> -> Seq Scan on comparisons_4 (cost=0.00..15640.81 rows=7391 width=0)\n> \n> EXPLAIN\n\nWhat is the type of field \"code\" ?\n\n---------------\nHannu\n",
"msg_date": "Wed, 17 Jan 2001 17:22:09 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Performance degradation in PostgreSQL 7.1beta3 vs\n 6.5.3"
},
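For completeness, Hannu's question ("what is the type of field code?") can be answered straight from the system catalogs when a \d listing is not at hand. This is a generic catalog lookup, not a query posted in the thread:

SELECT a.attname, t.typname
  FROM pg_class c, pg_attribute a, pg_type t
 WHERE c.relname = 'comparisons_4'
   AND a.attrelid = c.oid
   AND a.attname  = 'code'
   AND a.atttypid = t.oid;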
{
"msg_contents": "\"Robert E. Bruccoleri\" wrote:\n> \n> Dear Hannu,\n> >\n> > \"Robert E. Bruccoleri\" wrote:\n> > >\n> > > explain select count(*) from comparisons_4 where code = 80003;\n> > > NOTICE: QUERY PLAN:\n> > >\n> > > Aggregate (cost=15659.29..15659.29 rows=1 width=0)\n> > > -> Seq Scan on comparisons_4 (cost=0.00..15640.81 rows=7391 width=0)\n> > >\n> > > EXPLAIN\n> >\n> > What is the type of field \"code\" ?\n> \n> int4\n> \n> Do you think that should make a difference?\n\nProbably not here.\n\nSometimes it has made difference if the system does not recognize \nthe other side of comparison (80003) as being of the same type as \nthe index.\n\nwhat are the cost estimates when you run explain with seqscan disabled ?\ndo => SET ENABLE_SEQSCAN TO OFF;\nsee:\n(http://www.postgresql.org/devel-corner/docs/admin/runtime-config.htm#RUNTIME-CONFIG-OPTIMIZER)\n-----------------\nHannu\n",
"msg_date": "Wed, 17 Jan 2001 17:58:54 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Performance degradation in PostgreSQL 7.1beta3 vs"
},
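Spelled out, the experiment Hannu asks for looks like the following psql session (a sketch built from his message; the SET syntax is the one in the documentation he links to):

SET ENABLE_SEQSCAN TO OFF;
EXPLAIN SELECT count(*) FROM comparisons_4 WHERE code = 80003;
SET ENABLE_SEQSCAN TO ON;   -- restore the default before running anything else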
{
"msg_contents": "\"Robert E. Bruccoleri\" wrote:\n> \n> >\n> > what are the cost estimates when you run explain with seqscan disabled ?\n> > do => SET ENABLE_SEQSCAN TO OFF;\n> > see:\n> > (http://www.postgresql.org/devel-corner/docs/admin/runtime-config.htm#RUNTIME-CONFIG-OPTIMIZER)\n> \n> Here's the result from EXPLAIN:\n> \n> Aggregate (cost=19966.21..19966.21 rows=1 width=0)\n> -> Index Scan using comparisons_4_code on comparisons_4 (cost=0.00..19947.73 rows=7391 width=0)\n> \n> The estimates are too high.\n\nYou could try experimenting with \n\nSET RANDOM_PAGE_COST TO x.x;\n\nfrom the page above\n\nRANDOM_PAGE_COST (floating point)\n\n Sets the query optimizer's estimate of the cost of a\nnonsequentially fetched disk page. \n this is measured as a multiple of the cost of a sequential page\nfetch. \n\n Note: Unfortunately, there is no well-defined method of\ndetermining ideal values for \n the family of \"COST\" variables that were just described. You are\nencouraged to\n experiment and share your findings.\n\n\n-------------\nHannu\n",
"msg_date": "Wed, 17 Jan 2001 22:10:32 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Performance degradation in PostgreSQL 7.1beta3 vs"
}
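A sketch of the tuning experiment Hannu suggests: lower RANDOM_PAGE_COST until the planner's index-scan estimate drops below its seq-scan estimate, then re-check the plan and the actual run time. The value 2.0 is only a starting point for experimentation, not a recommended setting.

SET RANDOM_PAGE_COST TO 2.0;
EXPLAIN SELECT count(*) FROM comparisons_4 WHERE code = 80003;
-- repeat with other values, then time the query as in the original test script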
]
|
[
{
"msg_contents": "\nJust trying to summarize some traffic stats, and am either running the\nquery wrong, or you can't do this?\n\nThe query is:\n\n SELECT CASE WHEN to_ip << '216.126.84.0/24' THEN to_ip ELSE from_ip END AS LocalAddr,\n sum(bytes) as TotalBytes, date_trunc('day', runtime) AS Day\n FROM stat_log\n WHERE date_trunc('day', runtime) = '2001-01-02 00:00:00-05'\nGROUP BY LocalAddr, Day;\n\nreturns:\n\n localaddr | totalbytes | day\n-----------------+------------+------------------------\n 24.6.125.174 | 13716 | 2001-01-02 00:00:00-05\n 24.43.137.113 | 13140 | 2001-01-02 00:00:00-05\n 24.128.201.128 | 14376 | 2001-01-02 00:00:00-05\n 64.39.38.43 | 14232 | 2001-01-02 00:00:00-05\n 128.11.44.16 | 25050 | 2001-01-02 00:00:00-05\n 130.149.17.13 | 14316 | 2001-01-02 00:00:00-05\n 142.177.197.180 | 179676 | 2001-01-02 00:00:00-05\n 151.164.30.54 | 13260 | 2001-01-02 00:00:00-05\n 166.84.192.39 | 13614 | 2001-01-02 00:00:00-05\n 192.67.198.32 | 13872 | 2001-01-02 00:00:00-05\n 192.245.12.7 | 14676 | 2001-01-02 00:00:00-05\n 193.228.80.12 | 13092 | 2001-01-02 00:00:00-05\n 194.126.24.131 | 21642 | 2001-01-02 00:00:00-05\n 194.209.182.36 | 14448 | 2001-01-02 00:00:00-05\n 195.46.202.129 | 73518 | 2001-01-02 00:00:00-05\n 195.117.86.253 | 13056 | 2001-01-02 00:00:00-05\n 196.38.110.24 | 15012 | 2001-01-02 00:00:00-05\n 202.160.254.40 | 38178 | 2001-01-02 00:00:00-05\n 207.123.82.5 | 15240 | 2001-01-02 00:00:00-05\n 207.136.80.247 | 25290 | 2001-01-02 00:00:00-05\n 208.158.96.110 | 17940 | 2001-01-02 00:00:00-05\n 209.47.145.10 | 2881400 | 2001-01-02 00:00:00-05\n 209.47.148.2 | 3263955 | 2001-01-02 00:00:00-05\n 209.223.182.2 | 222180 | 2001-01-02 00:00:00-05\n 212.43.217.25 | 22974 | 2001-01-02 00:00:00-05\n 216.126.72.6 | 1265472 | 2001-01-02 00:00:00-05\n 216.126.72.30 | 94615 | 2001-01-02 00:00:00-05\n 216.126.84.1 | 201733744 | 2001-01-02 00:00:00-05\n 216.126.84.10 | 151665 | 2001-01-02 00:00:00-05\n 216.126.84.11 | 103630 | 2001-01-02 00:00:00-05\n 216.126.84.14 | 752305 | 2001-01-02 00:00:00-05\n\nYet:\n\nselect * from stat_log_holding where from_ip << '216.126.84.0/24';\n\nreturns what I'd expect:\n\n from_ip | to_ip | port | bytes | runtime\n----------------+-----------------+------+----------+------------------------\n 216.126.84.1 | 212.7.160.126 | 873 | 16091760 | 2001-01-16 10:53:14-05\n 216.126.84.28 | 195.176.0.212 | 80 | 10247530 | 2001-01-16 10:53:14-05\n 216.126.84.73 | 193.172.127.85 | 80 | 7856477 | 2001-01-16 10:53:14-05\n 216.126.84.73 | 195.149.181.21 | 80 | 6343572 | 2001-01-16 10:53:14-05\n 216.126.84.1 | 216.126.84.253 | 53 | 4401161 | 2001-01-16 10:53:14-05\n 216.126.84.28 | 195.230.44.100 | 80 | 3157811 | 2001-01-16 10:53:14-05\n 216.126.84.95 | 194.206.159.140 | 80 | 3140439 | 2001-01-16 10:53:14-05\n\n\nSo, am I doing something wrong here, as far as that CASE statement is\nconcerned, or is this a bug in v7.0.3 that is fixed in v7.1?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n\n",
"msg_date": "Tue, 16 Jan 2001 14:39:56 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "CASE inet << inet ... "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Just trying to summarize some traffic stats, and am either running the\n> query wrong, or you can't do this?\n\nI can't tell if there's anything wrong with that or not. You didn't\nshow us the input data being used by the CASE expression ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Jan 2001 19:09:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CASE inet << inet ... "
},
{
"msg_contents": "\nah shit, trying to come up with an example, I figured out what I did ...\nsome of the records don't fall into that range, so of course, to_ip isn't,\nso it just displays teh fron_ip *sigh*\n\nI'm going back to sleep now ... :(\n\n\nOn Tue, 16 Jan 2001, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Just trying to summarize some traffic stats, and am either running the\n> > query wrong, or you can't do this?\n>\n> I can't tell if there's anything wrong with that or not. You didn't\n> show us the input data being used by the CASE expression ...\n>\n> \t\t\tregards, tom lane\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Tue, 16 Jan 2001 21:22:42 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CASE inet << inet ... "
}
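Given what Marc found (rows where neither address is in 216.126.84.0/24 fall through to the ELSE branch and get labelled with from_ip), one way to make the intent explicit is to keep only rows that touch the local net and let the CASE pick whichever side that is. This is an untested sketch against the same stat_log table, not a query from the thread:

SELECT CASE WHEN to_ip << '216.126.84.0/24' THEN to_ip ELSE from_ip END AS LocalAddr,
       sum(bytes) AS TotalBytes, date_trunc('day', runtime) AS Day
  FROM stat_log
 WHERE date_trunc('day', runtime) = '2001-01-02 00:00:00-05'
   AND (to_ip << '216.126.84.0/24' OR from_ip << '216.126.84.0/24')
 GROUP BY LocalAddr, Day;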
]
|
[
{
"msg_contents": "mv global.bki global.bki.old; mv template1.bki template1.bki.old\ncat global.bki.old | sed s/\" ame\"/\" name\"/ > global.bki\ncat template1.bki.old | sed s/\" ame\"/\" name\"/ > global.bki\n\nSolution is pretty simple actually (did figure this one out). I did find \nother people complaining about this, but no solutions. But I just did the \ninstall on an older slackware system and diffed the bki files to find some \nas 'ame' and others as 'name' - so I used the lines above and managed to get \nit to work just fine. Quite odd really... I am using the latest stable \nversions of all of the GNU tools straight from their site.\n\nProblem appears to be solved though, but maybe a check in the Makefile is in \norder. It seems (through my google search, as well as a search of this \narchive) that I am not the only one with this problem.\n\nCheers;\n--\nMike\n\n\n>From: Tom Lane <[email protected]>\n>To: \"Mike Miller\" <[email protected]>\n>CC: [email protected]\n>Subject: Re: INIT DB FAILURE\n>Date: Tue, 16 Jan 2001 13:18:04 -0500\n>\n>\"Mike Miller\" <[email protected]> writes:\n> > Fixing permissions on pre-existing data directory /usr/pgsql/data\n> > Creating database system directory /usr/pgsql/data/base\n> > Creating database XLOG directory /usr/pgsql/data/pg_xlog\n> > Creating template database in /usr/pgsql/data/base/template1\n> > ERROR: Error: unknown type 'ame'.\n>\n> > ERROR: Error: unknown type 'ame'.\n>\n> > Creating global relations in /usr/pgsql/data/base\n>\n>Now that I look, there is a similar report in the pgsql-bugs archives\n>from last August, but no information about any resolution.\n>\n>Please try the initdb with --debug added, and send me the resulting\n>output (it'll probably be kinda bulky, so don't send to the list).\n>\n>\t\t\tregards, tom lane\n\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n",
"msg_date": "Tue, 16 Jan 2001 19:06:23 ",
"msg_from": "\"Mike Miller\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: INIT DB FAILURE"
},
{
"msg_contents": "\"Mike Miller\" <[email protected]> writes:\n> mv global.bki global.bki.old; mv template1.bki template1.bki.old\n> cat global.bki.old | sed s/\" ame\"/\" name\"/ > global.bki\n> cat template1.bki.old | sed s/\" ame\"/\" name\"/ > global.bki\n\n> Solution is pretty simple actually (did figure this one out). I did find \n> other people complaining about this, but no solutions. But I just did the \n> install on an older slackware system and diffed the bki files to find some \n> as 'ame' and others as 'name' - so I used the lines above and managed to get \n> it to work just fine.\n\nOK, so the breakage is not in the bootstrap parser but in the generation\nof the .bki files. This is done by the shell script\nsrc/backend/catalog/genbki.sh, and in looking it over, I notice with\nsuspicion the step\n\nsed -e \"s/;[ \t]*$//g\" \\\n -e \"s/^[ \t]*//\" \\\n -e \"s/[ \t]Oid/\\ oid/g\" \\\n -e \"s/[ \t]NameData/\\ name/g\" \\\n -e \"s/^Oid/oid/g\" \\\n -e \"s/^NameData/\\name/g\" \\\n -e \"s/(NameData/(name/g\" \\\n -e \"s/(Oid/(oid/g\" \\\n -e \"s/NAMEDATALEN/$NAMEDATALEN/g\" \\\n -e \"s/INDEX_MAX_KEYS\\*2/$INDEXMAXKEYS2/g\" \\\n -e \"s/INDEX_MAX_KEYS\\*4/$INDEXMAXKEYS4/g\" \\\n -e \"s/INDEX_MAX_KEYS/$INDEXMAXKEYS/g\" \\\n -e \"s/FUNC_MAX_ARGS\\*2/$INDEXMAXKEYS2/g\" \\\n -e \"s/FUNC_MAX_ARGS\\*4/$INDEXMAXKEYS4/g\" \\\n -e \"s/FUNC_MAX_ARGS/$INDEXMAXKEYS/g\" \\\n| $AWK '\n\nIn particular that \"\\name\" looks pretty bogus. Would you try removing\nthat backslash and see if the script works then? I'll betcha that some\nversions of sed convert the \\n to a newline ...\n\nBTW, what version of sed do you have, exactly?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Jan 2001 16:21:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: INIT DB FAILURE "
}
]
|
[
{
"msg_contents": "> Instead of a partial row CRC, we could just as well use some other bit\n> of identifying information, say the row OID. Given a block CRC on the\n> heap page, we'll be pretty confident already that the heap page is OK,\n> we just need to guard against the possibility that it's older than the\n> index item. Checking that there is a valid tuple at the slot \n> indicated by the index item, and that it has the right OID, should be\n> a good enough (and cheap enough) test.\n\nThis would work in 7.1 but not in 7.2 anyway (assuming UNDO and true\ntransaction rollback to be implemented). There will be no permanent\npg_log and after crash recovery any heap tuple with unknown t_xmin status\nwill be assumed as committed. Rollback will remove tuples inserted by\nuncommitted transactions but this will be possible only for *logged*\nmodifications.\n\nOne should properly configure disk drives instead of hacking arround\nthis problem. \"Log before modifying data pages\" is *rule* for any WAL\nsystem like Oracle, Informix and dozen others.\n\nVadim\n",
"msg_date": "Tue, 16 Jan 2001 11:36:20 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: CRCs "
}
]
|
[
{
"msg_contents": "> > I've tried to move \"dangerous\" ops with non-zero probability of\n> > elog(ERROR) (eg new file block allocation) out of crit sections.\n> > Anyway we need in ERROR-->STOP for safety when changes \n> > aren't logged.\n> \n> Why is that safer than just treating an ERROR as an ERROR? \n> It seems to me there's a real risk of a crash/restart loop if we\n> force a restart whenever we see an xlog-related problem.\n\nWhy don't we elog(ERROR) in assert checking but abort?\nConsider elog(STOP) on any errors inside critical sections\nas assert checking. Rule is simple - validate operation before\napplying it to permanent storage - and it's better to force\nany future development to follow this rule by any means.\nIt's very easy to don't notice ERROR - it's just transaction\nabort and transaction abort is normal thing, - but errors inside\ncritical sections are *unexpected* things which mean that something\ntotally wrong in code.\n\nAs for crash/restart loop, Hiroshi rised this issue ~month ago and I\nwas going to avoid elog(STOP) in AM-specific redo functions and do\nelog(LOG) instead, wherever possible, but was busy with CRC/backup stuff\n- ok, I'll look there soon.\n\nVadim\n",
"msg_date": "Tue, 16 Jan 2001 12:51:21 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: SIGTERM -> elog(FATAL) -> proc_exit() is probably a\n\t bad idea"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> It's very easy to don't notice ERROR - it's just transaction\n> abort and transaction abort is normal thing, - but errors inside\n> critical sections are *unexpected* things which mean that something\n> totally wrong in code.\n\nOkay. That means we do need two kinds of critical sections, then,\nbecause the crit sections I've just sprinkled everywhere are not that\ncritical ;-). They just want to hold off cancel/die interrupts.\n\nI'll take care of fixing what I broke, but does anyone have suggestions\nfor good names for the two concepts? The best I could come up with\noffhand is BEGIN/END_CRIT_SECTION and BEGIN/END_SUPER_CRIT_SECTION,\nbut I'm not pleased with that... Ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Jan 2001 16:11:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea "
},
{
"msg_contents": "\n>I'll take care of fixing what I broke, but does anyone have suggestions\n>for good names for the two concepts? The best I could come up with\n>offhand is BEGIN/END_CRIT_SECTION and BEGIN/END_SUPER_CRIT_SECTION,\n>but I'm not pleased with that... Ideas?\n\nLet CRITICAL be critical. If the other section are there just to be \ncautious. Then the name should represent that. While I like the \nBEGIN/END_OH_MY_GOD_IF_THIS_GETS_INTERRUPTED_YOU_DONT_WANT_TO_KNOW \nmarkers.. They are a little hard to work with.\n\nPossibly try demoting the NON_CRITICAL_SECTIONS to something like the \nfollowing.\n\nBEGIN/END_CAUTION_SECTION,\nBEGIN/END_WATCH_SECTION\n\n",
"msg_date": "Wed, 17 Jan 2001 17:55:09 -0600",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad\n idea"
}
]
|
[
{
"msg_contents": "> I'll take care of fixing what I broke, but does anyone have \n> suggestions for good names for the two concepts? The best I could\n> come up with offhand is BEGIN/END_CRIT_SECTION\n\nHOLD_INTERRUPTS\nRESUME_INTERRUPTS\n\n> and BEGIN/END_SUPER_CRIT_SECTION, but I'm not pleased with that...\n\nBEGIN_CRIT_SECTION\nEND_CRIT_SECTION\n\n- as it was -:)\n\nVadim\n",
"msg_date": "Tue, 16 Jan 2001 13:22:26 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: SIGTERM -> elog(FATAL) -> proc_exit() is probably a\n\t bad idea"
},
{
"msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> HOLD_INTERRUPTS\n> RESUME_INTERRUPTS\n> BEGIN_CRIT_SECTION\n> END_CRIT_SECTION\n\nWorks for me...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Jan 2001 17:28:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea "
}
]
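To make the distinction settled in this thread concrete, here is a minimal C sketch of the two kinds of sections. It is only an illustration: the counter variables, the macro bodies, and the toy elog() are assumptions made for the sketch, not the actual backend code. The idea is that holding off interrupts merely defers cancel/die handling, while a critical section escalates any ERROR into a hard stop, because a transaction abort cannot back out a half-applied change to permanent storage.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative counters; the names follow the proposal above, the bodies
 * are assumptions made for this sketch. */
static volatile int InterruptHoldoffCount = 0;  /* cancel/die held off     */
static volatile int CritSectionCount = 0;       /* no ERROR allowed here   */

#define HOLD_INTERRUPTS()    (InterruptHoldoffCount++)
#define RESUME_INTERRUPTS()  (InterruptHoldoffCount--)

#define BEGIN_CRIT_SECTION() (CritSectionCount++)
#define END_CRIT_SECTION()   (CritSectionCount--)

#define ERROR 1
#define STOP  2

/* Toy elog(): an ERROR raised inside a critical section is escalated to
 * STOP, since aborting the transaction cannot undo the change. */
static void
elog(int level, const char *msg)
{
    if (level == ERROR && CritSectionCount > 0)
        level = STOP;
    if (level == STOP)
    {
        fprintf(stderr, "STOP: %s\n", msg);
        abort();                /* a real backend shuts down and recovers */
    }
    fprintf(stderr, "ERROR: %s\n", msg);
    /* a real backend would longjmp back to the main loop here */
}

int main(void)
{
    HOLD_INTERRUPTS();          /* e.g. while touching shared state */
    RESUME_INTERRUPTS();

    BEGIN_CRIT_SECTION();       /* e.g. while applying a logged change */
    elog(ERROR, "unexpected failure inside critical section");
    END_CRIT_SECTION();         /* not reached in this toy example */
    return 0;
}

The point of keeping the two mechanisms separate shows up in the sketch: the first pair only defers interrupt handling, so an ERROR inside it is still an ordinary transaction abort; only the second pair turns errors into a forced stop.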
|
[
{
"msg_contents": "GNU sed version 3.02.80\n\n\n>From: Tom Lane <[email protected]>\n>To: \"Mike Miller\" <[email protected]>\n>CC: [email protected], [email protected]\n>Subject: Re: Re: INIT DB FAILURE\n>Date: Tue, 16 Jan 2001 16:21:31 -0500\n>\n>\"Mike Miller\" <[email protected]> writes:\n> > mv global.bki global.bki.old; mv template1.bki template1.bki.old\n> > cat global.bki.old | sed s/\" ame\"/\" name\"/ > global.bki\n> > cat template1.bki.old | sed s/\" ame\"/\" name\"/ > global.bki\n>\n> > Solution is pretty simple actually (did figure this one out). I did \n>find\n> > other people complaining about this, but no solutions. But I just did \n>the\n> > install on an older slackware system and diffed the bki files to find \n>some\n> > as 'ame' and others as 'name' - so I used the lines above and managed to \n>get\n> > it to work just fine.\n>\n>OK, so the breakage is not in the bootstrap parser but in the generation\n>of the .bki files. This is done by the shell script\n>src/backend/catalog/genbki.sh, and in looking it over, I notice with\n>suspicion the step\n>\n>sed -e \"s/;[ \t]*$//g\" \\\n> -e \"s/^[ \t]*//\" \\\n> -e \"s/[ \t]Oid/\\ oid/g\" \\\n> -e \"s/[ \t]NameData/\\ name/g\" \\\n> -e \"s/^Oid/oid/g\" \\\n> -e \"s/^NameData/\\name/g\" \\\n> -e \"s/(NameData/(name/g\" \\\n> -e \"s/(Oid/(oid/g\" \\\n> -e \"s/NAMEDATALEN/$NAMEDATALEN/g\" \\\n> -e \"s/INDEX_MAX_KEYS\\*2/$INDEXMAXKEYS2/g\" \\\n> -e \"s/INDEX_MAX_KEYS\\*4/$INDEXMAXKEYS4/g\" \\\n> -e \"s/INDEX_MAX_KEYS/$INDEXMAXKEYS/g\" \\\n> -e \"s/FUNC_MAX_ARGS\\*2/$INDEXMAXKEYS2/g\" \\\n> -e \"s/FUNC_MAX_ARGS\\*4/$INDEXMAXKEYS4/g\" \\\n> -e \"s/FUNC_MAX_ARGS/$INDEXMAXKEYS/g\" \\\n>| $AWK '\n>\n>In particular that \"\\name\" looks pretty bogus. Would you try removing\n>that backslash and see if the script works then? I'll betcha that some\n>versions of sed convert the \\n to a newline ...\n>\n>BTW, what version of sed do you have, exactly?\n>\n>\t\t\tregards, tom lane\n\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n",
"msg_date": "Tue, 16 Jan 2001 21:36:29 ",
"msg_from": "\"Mike Miller\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: INIT DB FAILURE"
}
]
|
[
{
"msg_contents": "Create three tables and start four transactions, then do:\n\nXACT 1: LOCK TABLE A;\n\nXACT 2: LOCK TABLE B IN ROW SHARE MODE;\n\nXACT 3: LOCK TABLE B IN ROW EXCLUSIVE MODE;\n\nXACT 4: LOCK TABLE C;\n\nXACT 2: LOCK TABLE C;\n\nXACT 3: LOCK TABLE C;\n\nXACT 1: LOCK TABLE B IN SHARE MODE;\n\n<< wait at least 1 second here >>\n\nXACT 4: LOCK TABLE A;\n\nThe system is now deadlocked: 4 waits for 1, 1 waits for 3, 3 waits for\n4 ... but DeadLockCheck doesn't notice. Deadlock *is* detected if LOCK\nTABLE C is issued in xact 3 before xact 2, however. The problem is that\nafter a recursive invocation of DeadLockCheck, the code checks to see\nif the waitProc is blocked by thisProc, and assumes no deadlock if not.\nBut each waitProc will only be visited once, so if we happen to reach it\nfirst by way of a proc that is not blocking it, we fail to notice that\nthere really is a deadlock. In this example, we reach xact 1 by way of\nxact 2 (which is hit first in the scan of waiters for lock C); xact 2 is\nwaiting on the same lock as xact 3, but only xact 3 blocks xact 1.\n\nI have been studying DeadLockCheck for most of a day now, and I doubt\nthat this is the only bug lurking in it. I think that we really ought\nto throw it away and start over, because it doesn't look to me at all\nlike a standard deadlock-detection algorithm. The standard way of doing\nthis is that you trace outward along waits-for relationships and see if\nyou come back to your starting point or not. This is not doing that.\nOne reason is that the existing locktable structure makes it extremely\npainful to discover just which processes are holding a given lock.\nYou can only find which ones are *waiting* for a given lock. That's not\na killer problem, we can simply trace the waits-for relationships in\nreverse, but it's not really doing that either. In particular, it's\nonly doing the tracing in a literal sense for conflicts between locks\nrequested and locks already held. It's not doing things that way for\nthe case where A is waiting for B because they are requesting (not\nholding) conflicting locks and A is behind B in the queue for the lock.\nI think there are bugs associated with that case too, though I don't\nhave an example yet.\n\n\nAnyway, the bottom line is that I want to start over using a standard\ndeadlock-check algorithm, along the following lines:\n\n1. Make it relatively cheap to find all the procs already holding a\ngiven lock, by extending the lock datastructures so that there's a list\nof already-granted holder objects for each lock, not only pending holders.\n(In other words, each HOLDER object will now be in two linked lists, one\nfor its proc and one for its lock, not only one list. This will also\nmake it a lot easier to write a useful lock-state-dumping routine,\nwhenever someone gets around to that.)\n\n2. Rewrite DeadLockCheck to recurse outwards to procs that the current\nproc is waiting for, rather than vice versa, and consider both hard\nblocking (he's already got a conflicting lock) and priority blocking\n(he's got a conflicting request and is ahead of me in the wait queue).\n\n3. As now, if a cycle is found that includes a priority block, try to\nbreak the cycle by awakening the lower-priority waiter out of order,\nrather than aborting a transaction.\n\n4. Clean up DeadLockCheck so that it can be started from any proc,\nnot only MyProc. This is needed for point #5.\n\n5. Change ProcLockWakeup to address Hiroshi's concern about possible\nstarvation if we awaken waiters out-of-order. 
If we find a wakable\nwaiter behind one or more nonwakable waiters, awaken it only if\nDeadLockCheck says it is deadlocked. Under normal circumstances that\nwon't happen and we won't break priority-of-arrival ordering, but if\nit's necessary to avoid deadlock then we will. (We could avoid excess\ncomputation here by only running DeadLockCheck if the waiting process\nhas already done so on its own behalf; otherwise don't, just let it\nhappen if the waiter is still blocked when its timeout occurs. That\nwould be easy if we add a bool I've-run-DeadLockCheck flag to the PROC\nstructures.)\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Jan 2001 17:27:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "DeadLockCheck is buggy"
},
{
"msg_contents": "> I have been studying DeadLockCheck for most of a day now, and I doubt\n> that this is the only bug lurking in it. I think that we really ought\n> to throw it away and start over, because it doesn't look to me at all\n> like a standard deadlock-detection algorithm. The standard way of doing\n\nGo ahead. Throw away my code. *sniff* :-)\n\nNo really, feel free to rip it out and start over. It caught most\ncases, and I was glad it worked at all, seeing I had never done deadlock\ndetection before, nor read anything about it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 16 Jan 2001 19:34:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DeadLockCheck is buggy"
}
]
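The "standard" approach described in this thread amounts to cycle detection in a directed waits-for graph: starting from the blocked process, follow edges to every process it waits for and see whether the search ever comes back to the starting process. Below is a minimal C sketch of that idea. The data structures are assumptions made for illustration and are not the real PROC/LOCK layout; the edges are meant to cover both hard blocking (a conflicting lock already held) and priority blocking (a conflicting request queued ahead of us).

#include <stdbool.h>
#include <stdio.h>

/* Illustrative structures only; not the real PROC/LOCK layout. */
#define MAX_EDGES 8

typedef struct Proc
{
    const char  *name;
    int          nblockers;            /* how many procs this one waits for */
    struct Proc *blockers[MAX_EDGES];  /* procs holding, or queued ahead with,
                                        * a conflicting lock request */
    bool         visited;              /* DFS mark */
} Proc;

/* Can we get back to 'start' by following waits-for edges from 'p'? */
static bool
reaches(Proc *p, Proc *start)
{
    if (p == start)
        return true;
    if (p->visited)
        return false;
    p->visited = true;
    for (int i = 0; i < p->nblockers; i++)
        if (reaches(p->blockers[i], start))
            return true;
    return false;
}

/* Deadlock iff some proc we wait for can, transitively, wait for us. */
static bool
deadlock_check(Proc *procs, int nprocs, Proc *me)
{
    for (int i = 0; i < nprocs; i++)
        procs[i].visited = false;
    for (int i = 0; i < me->nblockers; i++)
        if (reaches(me->blockers[i], me))
            return true;
    return false;
}

int main(void)
{
    Proc p[4] = {{"xact1"}, {"xact2"}, {"xact3"}, {"xact4"}};

    /* The example above: 4 waits for 1, 1 waits for 3, 3 waits for 4,
     * and 2 waits for 4 (queued behind it on lock C). */
    p[3].blockers[p[3].nblockers++] = &p[0];
    p[0].blockers[p[0].nblockers++] = &p[2];
    p[2].blockers[p[2].nblockers++] = &p[3];
    p[1].blockers[p[1].nblockers++] = &p[3];

    printf("xact4 deadlocked: %s\n", deadlock_check(p, 4, &p[3]) ? "yes" : "no");
    printf("xact2 deadlocked: %s\n", deadlock_check(p, 4, &p[1]) ? "yes" : "no");
    return 0;
}

Run on the four-transaction example, the cycle 4 -> 1 -> 3 -> 4 is reported as a deadlock no matter which branch of the search is explored first, which is exactly the property the existing DeadLockCheck lacks; xact 2, which merely waits on the cycle without being part of it, is not flagged.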
|
[
{
"msg_contents": "I'm running into a problem where I have to create an index with a name\nthat doesn't conflict with any existing index. \n\nCurrently, its not possible to do in postgres. \n\nIt'd be nice if either of 3 were implemented:\n1) alter index to rename it\n\n2) alter table would rename index with some option(?)\n\n3) index namespace should be constricted to the table on which it is\nindexed, since no commands to my knowledge manipulate the index without\nalso specifying the table. I.E. in such a way, I will have index a on\ntable foo, and index a on table bar without a conflcit.\n\nIs there a specification of current postgres behaviour anywhere in SQL\nstandard? (i.e. index namespace being global?)\n\n-alex\n\n",
"msg_date": "Tue, 16 Jan 2001 18:51:39 -0500 (EST)",
"msg_from": "Alex Pilosov <[email protected]>",
"msg_from_op": true,
"msg_subject": "renaming indices?"
},
{
"msg_contents": "ALTER TABLE RENAME works on indexes in 7.1; I forget about 7.0.\n\nI think you're right that SQL expects indexes to have a separate\nnamespace from tables, but until we have schema/namespace support\nit's pointless to worry about that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Jan 2001 19:06:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: renaming indices? "
},
{
"msg_contents": "Alex Pilosov wrote:\n> \n> 3) index namespace should be constricted to the table on which it is\n> indexed, since no commands to my knowledge manipulate the index without\n> also specifying the table.\n\nHow about DROP INDEX ... ?\n\nI'm not sure if this is standard SQL, maybe we should have \nALTER TABLE ... DROP INDEX ...\n\n-----------------\nHannu\n",
"msg_date": "Wed, 17 Jan 2001 16:08:57 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: renaming indices?"
}
]
|
[
{
"msg_contents": "Hello All,\n\nThis is my first post (so be gentle with me)...\n\nIs there a searchable archive?\n\nI would like suggestions and examples of adding SQL-92\ndata type BIT compatibility to a PostgreSQL schema.\n\n From the doc's I gather you can \"CREATE TYPE bit\"\nwith storage int or int4... but I don't know\nabout the input/output for zero and one.\n\nShould SQL (ODBC) be able to ask \"WHERE bitfield;\"\nor should it ask \"WHERE bitfield = 1;\" ?\n\nAny response gratefully recognized...\n\n\n\n\nKeith\n",
"msg_date": "Wed, 17 Jan 2001 12:29:28 +1100",
"msg_from": "Keith Gray <[email protected]>",
"msg_from_op": true,
"msg_subject": "Boolean and Bit"
},
{
"msg_contents": "> I would like suggestions and examples of adding SQL-92\n> data type BIT compatibility to a PostgreSQL schema.\n\nYou will probably do best by looking at the 7.1beta and the bit types\nimplemented there. They probably form a good starting point if they do\nnot do everything you need already.\n\nGood luck!\n\n - Thomas\n",
"msg_date": "Wed, 17 Jan 2001 03:56:55 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Boolean and Bit"
}
]
|
[
{
"msg_contents": "I see on Slashdot that:\n\n Slashcode 2.0 (\"Bender\") is officially in beta. We now have themes,\n plugins, an abstacted database layer (MySQL support is beta, PostreSQL\n is alpha, so finally the rivalry can be settled ;) \n\nSo, while Sourceforge has moved to PostgreSQL, seems like Slashdot may\nwell too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 16 Jan 2001 21:54:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slashdot and PostgreSQL"
},
{
"msg_contents": "I don't think they're moving the actual Slashdot site to PostgreSQL... They\npaid a bunch of money to the mySQL folks to add replication support to\nmySQL...\n\nI think other sites based on Slashcode wanted to be able to use PostgreSQL\nthough...\n\nHunter\n\n> From: Bruce Momjian <[email protected]>\n> Date: Tue, 16 Jan 2001 21:54:37 -0500 (EST)\n> To: PostgreSQL-general <[email protected]>\n> Cc: PostgreSQL-development <[email protected]>\n> Subject: [GENERAL] Slashdot and PostgreSQL\n> \n> I see on Slashdot that:\n> \n> Slashcode 2.0 (\"Bender\") is officially in beta. We now have themes,\n> plugins, an abstacted database layer (MySQL support is beta, PostreSQL\n> is alpha, so finally the rivalry can be settled ;)\n> \n> So, while Sourceforge has moved to PostgreSQL, seems like Slashdot may\n> well too.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n",
"msg_date": "Tue, 16 Jan 2001 19:03:57 -0800",
"msg_from": "Hunter Hillegas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slashdot and PostgreSQL"
},
{
"msg_contents": "> I don't think they're moving the actual Slashdot site to PostgreSQL... They\n> paid a bunch of money to the mySQL folks to add replication support to\n> mySQL...\n> \n> I think other sites based on Slashcode wanted to be able to use PostgreSQL\n> though...\n\nYou are probably correct, but I never expected Sourceforge to move to\nPostgreSQL either.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 16 Jan 2001 22:08:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slashdot and PostgreSQL"
},
{
"msg_contents": "Hunter Hillegas wrote:\n\n> I don't think they're moving the actual Slashdot site to PostgreSQL...\n\nSo do I.\n\n> I think other sites based on Slashcode wanted to be able to use PostgreSQL\n> though...\n\nThat's what I will do as soon as possible, and I am trying to be\ninvolved as much as possible in the current development. I am also\nwaiting for 7.1 to have a cleaner environment to test it.\n\n-- \nAlessio F. Bragadini\t\[email protected]\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Wed, 17 Jan 2001 09:53:35 +0200",
"msg_from": "Alessio Bragadini <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slashdot and PostgreSQL"
},
{
"msg_contents": "On Wednesday 17 January 2001 02:53, Alessio Bragadini wrote:\n> Hunter Hillegas wrote:\n> > I don't think they're moving the actual Slashdot site to PostgreSQL...\n>\n> So do I.\n>\n> > I think other sites based on Slashcode wanted to be able to use\n> > PostgreSQL though...\n>\n> That's what I will do as soon as possible, and I am trying to be\n> involved as much as possible in the current development. I am also\n> waiting for 7.1 to have a cleaner environment to test it.\n\nI made a board with php and postgresql. It's *terrible* code but is working \nat www.comptechnews.com. If anyone is interested in playing with it, I can \nmake it available. Who knows, the code might have bugs that are very \ncompromising! :) People might like to improve it. It consists of one php \nfile and three sql files (tables, data, & procedures). It uses PL/pgSQL and \nPL/TcL. You just run the tables sql, load data, then run procedures sql. \nPut the php file in a directory and change the pg_pconnect line to connect to \nthe right db. The php file is 3638 lines. It tries fairly hard to be \nautomatically moderated and to have good protection from users trying to do \nbad things. Code in the php and in the trigger procs provide two layers of \nlogic that tries to ensure only correct things happen. It takes good \nadvantage of transactions. The RAISE EXCEPTION PL/pgSQL call is used to \nrollback/abort things that shouldn't happen ... stuff like that. The trigger \nprocs do recursive stuff to manage the threaded messages and topics. Again \nthe php code is an embarrassment, but I don't care! :)\n\n\n-- \n-------- Robert B. Easter [email protected] ---------\n-- CompTechNews Message Board http://www.comptechnews.com/ --\n-- CompTechServ Tech Services http://www.comptechserv.com/ --\n---------- http://www.comptechnews.com/~reaster/ ------------\n",
"msg_date": "Wed, 17 Jan 2001 16:46:57 -0500",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slashdot and PostgreSQL"
},
{
"msg_contents": "On Tue, 16 Jan 2001 21:54:37 -0500 (EST), you wrote:\n\n>So, while Sourceforge has moved to PostgreSQL, seems like Slashdot may\n>well too.\n\nPlease, ask sourceforge to correct this web page\nhttp://sourceforge.net/docman/display_doc.php?docid=755&group_id=1\n\n:-)\n\n-- \[email protected]\n",
"msg_date": "Fri, 19 Jan 2001 20:52:16 +0100",
"msg_from": "Giulio Orsero <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slashdot and PostgreSQL"
}
]
|
[
{
"msg_contents": " I am trying to store a binary file with Visual Basic 6.0 and ADO and I\nuse the oid data type. The same code with Oracle and the clob type works\nbut with PostgreSQL I receive an error saying \"Multiple Step Operation\ngenerated errors. Check each status value.\".\n I am using the ODBC drivers and I assign a String to the Oid but it\nfails. I have also tried with a bytes array and with variant data type\nbut the same error happen.\n If I use Java and JDBC it works with byte array reading from a stream.\nThank you.\n\n",
"msg_date": "Wed, 17 Jan 2001 09:20:17 +0100",
"msg_from": "Haritz Elosegi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Storing a binary file with Visual Basic and ADO"
}
]
|
[
{
"msg_contents": "\n> > > Yes, the annoyance is, that localtime works for dates before 1970\n> > > but mktime doesn't. Best would probably be to assume no DST before\n> > > 1970 on AIX and IRIX.\n> > \n> > That seems like a reasonable answer to me, especially since we have\n> > other platforms that behave that way. How can we do this --- just\n> > test for isdst = -1 after the call, and assume that means failure?\n\nAre you working on this, or can you point me to the parts of the code, \nthat would need change ?\n\nAndreas\n",
"msg_date": "Wed, 17 Jan 2001 09:58:03 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: Re: tinterval - operator problems on AIX "
}
]
|
[
{
"msg_contents": "\n> > More importantly, PostgreSQL 6.5.3 works very, very well without\n> > VACUUM'ing.\n> \n> 6.5 effectively assumes that \"foo = constant\" will select exactly one\n> row, if it has no statistics to prove otherwise.\n\nI thought we had agreed upon a default that would still use\nthe index in the above case when no statistics are present.\nWasn't it something like a 5% estimate ? I did check\nthat behavior, since I was very concerned about that issue. \nNow, what is so different in his case?\n\nAndreas\n",
"msg_date": "Wed, 17 Jan 2001 10:10:23 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Re: Performance degradation in PostgreSQL 7.1beta3\n\tvs 6.5.3"
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> More importantly, PostgreSQL 6.5.3 works very, very well without\n> VACUUM'ing.\n>> \n>> 6.5 effectively assumes that \"foo = constant\" will select exactly one\n>> row, if it has no statistics to prove otherwise.\n\n> I thought we had agreed upon a default that would still use\n> the index in the above case when no statistics are present.\n> Wasn't it something like a 5% estimate ? I did check\n> that behavior, since I was very concerned about that issue. \n> Now, what is so different in his case?\n\nThe current estimate is 0.01 (1 percent). That seems sufficient to\ncause an indexscan on small to moderate-size tables, but apparently\nit is not small enough to do so for big tables. I have been thinking\nabout decreasing the default estimate some more, maybe to 0.005.\n(The reason the table size matters even if you haven't done a VACUUM\nANALYZE is that both plain VACUUM and CREATE INDEX will update the\ntable-size stats. So the planner may know the correct table size but\nstill have to rely on a default selectivity estimate. The cost\nfunctions are nonlinear, so what's \"small enough\" can depend on table\nsize.)\n\nBruce, if you'd like to experiment, try setting the attdispersion\nvalue in pg_attribute to various values, eg\n\nupdate pg_attribute set attdispersion = 0.005\nwhere attname = 'foo' and\nattrelid = (select oid from pg_class where relname = 'bar');\n\nPlease report back on how small a number seems to be needed to cause\nindexscans on your tables.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 17 Jan 2001 09:59:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: Performance degradation in PostgreSQL 7.1beta3 vs 6.5.3 "
}
]
|
[
{
"msg_contents": "> > > > Yes, the annoyance is, that localtime works for dates before 1970\n> > > > but mktime doesn't. Best would probably be to assume no DST before\n> > > > 1970 on AIX and IRIX.\n> > > \n> > > That seems like a reasonable answer to me, especially since we have\n> > > other platforms that behave that way. How can we do this --- just\n> > > test for isdst = -1 after the call, and assume that means failure?\n> \n> Are you working on this, or can you point me to the parts of the code, \n> that would need change ?\n\nHere is a patch that should make AIX and IRIX more happy.\nIt changes all checks for tm_isdst to (tm_isdst > 0) and fixes \nthe expected horology file for AIX.\n\nI just now realized, that the new expected file (while still bogous) is more correct \nthan the old one.\nThanks to Tom for mentioning that the hour should stay the same when subtracting\ndays from a timestamp.\n\nPlease apply.\n\nAndreas",
"msg_date": "Wed, 17 Jan 2001 11:22:12 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: Re: tinterval - operator problems on AIX "
}
]
|
[
{
"msg_contents": "> > > > > Yes, the annoyance is, that localtime works for dates before 1970\n> > > > > but mktime doesn't. Best would probably be to assume no DST before\n> > > > > 1970 on AIX and IRIX.\n> > > > \n> > > > That seems like a reasonable answer to me, especially since we have\n> > > > other platforms that behave that way. How can we do this --- just\n> > > > test for isdst = -1 after the call, and assume that means failure?\n\nHere is a patch that is incremental to the previous patch and makes AIX\nnot use DST before 1970. The results are now consistent with other \nno-DST-before-1970 platforms.\n\nThe down side is, that I did not do a configure test, and did not incooperate\nIRIX, since I didn't know what define to check.\n\nThe correct thing to do instead of the #if defined (_AIX) would be to use\nsomething like #ifdef NO_NEGATIVE_MKTIME and set that with a configure.\nThomas, are you volunteering ?\n\nAndreas",
"msg_date": "Wed, 17 Jan 2001 12:23:50 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: Re: tinterval - operator problems on AIX "
},
{
"msg_contents": "> The correct thing to do instead of the #if defined (_AIX) would be to use\n> something like #ifdef NO_NEGATIVE_MKTIME and set that with a configure.\n> Thomas, are you volunteering ?\n\nActually, I can volunteer to be supportive of your efforts ;) I'm\ntraveling at the moment, and don't have the original thread(s) which\ndescribe in detail what we need to do for platforms I don't have.\n\nIf Peter E. would be willing to do a configure test for this mktime()\nproblem, then you or I can massage the actual code. Peter, is this\nsomething you could pick up?\n\nI do not have the original thread where Andreas describes the behavior\nof mktime() on his machine. Andreas, can you suggest a simple configure\ntest to be used?\n\n - Thomas\n",
"msg_date": "Wed, 17 Jan 2001 14:14:16 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Re: tinterval - operator problems on AIX"
},
{
"msg_contents": "Zeugswetter Andreas SB writes:\n > The down side is, that I did not do a configure test, and did not\n > incooperate IRIX, since I didn't know what define to check.\n > \n > The correct thing to do instead of the #if defined (_AIX) would be\n > to use something like #ifdef NO_NEGATIVE_MKTIME and set that with a\n > configure.\n\nI agree that configure is the way to go. What if someone has\ninstalled a third party library to provide a better mktime() and\nlocaltime()?\n\nBut to answer your question, #if defined (__sgi) is a good test for\nIRIX, at least with the native compiler. I can't speak for gcc.\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWesternGeco -./\\.- by myself and does not represent\[email protected] -./\\.- opinion of Schlumberger, Baker\nhttp://www.crosswinds.net/~petef -./\\.- Hughes or their divisions.\n",
"msg_date": "Thu, 18 Jan 2001 11:26:42 +0000",
"msg_from": "Pete Forman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Re: tinterval - operator problems on AIX "
},
{
"msg_contents": "Pete Forman writes:\n\n> I agree that configure is the way to go. What if someone has\n> installed a third party library to provide a better mktime() and\n> localtime()?\n\nExactly. What if someone has a binary PostgreSQL package installed, then\nupdates his time library to something supposedly binary compatible and\nfinds out that PostgreSQL still doesn't use the enhanced capabilities?\nRuntime behaviour checks are done at runtime, it's as simple as that.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 19 Jan 2001 17:23:00 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Re: tinterval - operator problems on AIX"
},
{
"msg_contents": "> Exactly. What if someone has a binary PostgreSQL package installed, then\n> updates his time library to something supposedly binary compatible and\n> finds out that PostgreSQL still doesn't use the enhanced capabilities?\n> Runtime behaviour checks are done at runtime, it's as simple as that.\n\nI'm not sure I fully appreciate the distinction here. configure will\ncheck for \"behavior\" of various kinds, including \"behavior assumptions\"\nin the PostgreSQL code.\n\nSo the issue for this case is simply whether it is appropriate to check\nfor behavior at run time on all platforms, even those known to \"never\"\nexhibit this problematic result, or whether we put in a configure-time\ncheck to save the day for the (two) platforms known to have trouble --\nwithout the other supported platforms taking the hit by having to check\neven though the check will never fail.\n\nAndreas, Pete and I were advocating the latter view: that the majority\nof platforms which happen to be well behaved should run code optimized\nfor them, while the bad actors can be supported without \"#ifdef __AIX__\n|| __IRIX__\" in our code, but rather with a more helpful \"#ifdef\nNO_DST_BEFORE_1970\" or whatever.\n\n - Thomas\n",
"msg_date": "Fri, 19 Jan 2001 19:37:32 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Re: tinterval - operator problems on AIX"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> I'm not sure I fully appreciate the distinction here. configure will\n> check for \"behavior\" of various kinds, including \"behavior assumptions\"\n> in the PostgreSQL code.\n\nThere are two general categories of platform dependencies: One is\ncompilation system features, for example presence of certain functions,\nlibraries, header files, argument types, compiler flags. These sort of\nthings must be determined before compilation can begin. Autoconf is used\nto test for these things.\n\nThe other category is run-time behaviour, which is the present case of\ntesting whether mktime() behaves a certain way when given certain\narguments. Another item that has been thrown around over the years is\nwhether the system supports Unix domain sockets (essentially a run-time\nbehaviour check of socket()). You do not need to check these things in\nconfigure; you can just do it when the program runs and adjust\naccordingly.\n\nMore importantly, you *should* *not* do these tests in configure because\nthese tests will be unreliable in a cross-compilation situation.\nCross-compilation in this context does not only mean compiling between\ncompletely different platforms, but it includes any setup where the build\nsystem is configured differently from the system you're going to run the\nsystem on, including building on a noexec file system, misconfigured\nrun-time linkers, different user id, or just a different file system\nlayout on an otherwise identical platform.\n\nI'm not making these things up. Just yesterday there was a message on\nthis list where someone ran into this problem and his configure run\nmisbehaved badly. (PostgreSQL currently violates these rules, but that\ndoes not mean that we should make it harder now to clean it up at some\nlater date.) Admittedly, these situations sound somewhat exotic, but that\ndoes not mean they do not happen, it may merely mean that PostgreSQL is\nnot fit to perform in these situations.\n\n> So the issue for this case is simply whether it is appropriate to check\n> for behavior at run time on all platforms, even those known to \"never\"\n> exhibit this problematic result,\n\nYou can make the run-time check #ifdef one or the other platform. Or you\ndon't check at all and just postulate the misfeature for a set of\nplatforms. We have done this in at least two cases: The list of\nplatforms known not to support Unix domain sockets (HAVE_UNIX_SOCKETS),\nand the list of platforms known to have a peculiar bug in the accept()\nimplementation (happen to be both from SCO). I think this is an\nappropriate solution in cases where the affected systems can be classified\neasily. But as soon as some versions of some of these systems start\nimplementing Unix sockets (which is indeed the situation we're currently\nfacing) or SCO fixes their kernels we're going to have to think harder.\n\nSo I would currently suggest that you define\n\n#ifdef AIX || IRIX\n# define NO_DST_BEFORE_1970\n#endif\n\nbut when these systems get fixed you might have to run a simple check once\nwhen the system starts up. This is not really a \"hit\".\n\n> Andreas, Pete and I were advocating the latter view: that the majority\n> of platforms which happen to be well behaved should run code optimized\n> for them, while the bad actors can be supported without \"#ifdef __AIX__\n> || __IRIX__\" in our code, but rather with a more helpful \"#ifdef\n> NO_DST_BEFORE_1970\" or whatever.\n\nYeah.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 19 Jan 2001 23:17:55 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Re: tinterval - operator problems on AIX"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n\n> More importantly, you *should* *not* do these tests in configure because\n> these tests will be unreliable in a cross-compilation situation.\n> Cross-compilation in this context does not only mean compiling between\n> completely different platforms, but it includes any setup where the build\n> system is configured differently from the system you're going to run the\n> system on, including building on a noexec file system, misconfigured\n> run-time linkers, different user id, or just a different file system\n> layout on an otherwise identical platform.\n\nAn approach I've followed in the past is to use three-way logic. If\nconfiguring for a native system, compile and run a program which\nprovides a yes or no answer. When using cross-configuration, set the\nconfiguration variable to ``don't know'' (or, since this a database\ngroup, NULL).\n\nPull all code which needs to make this test into a single routine.\nLet it check the configuration variable. If the variable is ``don't\nknow,'' then compile in a static variable, and run the required test\nonce, at run time, and set the static variable accordingly. Then test\nthe static variable in all future calls.\n\nThis is the approach I used in my UUCP code to look for a bad version\nof ftime on some versions of SCO Unix--the ftime result would\nsometimes run backward.\n\nIan\nCo-author of GNU Autoconf, Automake, and Libtool\n",
"msg_date": "19 Jan 2001 14:26:53 -0800",
"msg_from": "Ian Lance Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Re: tinterval - operator problems on AIX"
},
{
"msg_contents": "Ian Lance Taylor writes:\n\n> An approach I've followed in the past is to use three-way logic. If\n> configuring for a native system, compile and run a program which\n> provides a yes or no answer. When using cross-configuration, set the\n> configuration variable to ``don't know'' (or, since this a database\n> group, NULL).\n\nThis would seem to be the right answer, but unfortunately Autoconf is not\nsmart enough to detect marginal cross-compilation cases in all situations.\nSomeone had zlib installed in a location where gcc would find it (compiles\nokay) but the run-time linker would not (does not run). This is not\ndetected when AC_PROG_CC runs, but only later on after you have checked\nfor the libraries.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 20 Jan 2001 00:24:15 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Re: tinterval - operator problems on AIX"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n\n> Ian Lance Taylor writes:\n> \n> > An approach I've followed in the past is to use three-way logic. If\n> > configuring for a native system, compile and run a program which\n> > provides a yes or no answer. When using cross-configuration, set the\n> > configuration variable to ``don't know'' (or, since this a database\n> > group, NULL).\n> \n> This would seem to be the right answer, but unfortunately Autoconf is not\n> smart enough to detect marginal cross-compilation cases in all situations.\n> Someone had zlib installed in a location where gcc would find it (compiles\n> okay) but the run-time linker would not (does not run). This is not\n> detected when AC_PROG_CC runs, but only later on after you have checked\n> for the libraries.\n\nHmmm. I would not describe that as a cross-compilation case at all.\nThe build machine and the host machine are the same. I would describe\nthat as a case where the compiler search path and the run time library\nsearch path are not the same.\n\nThe autoconf tests don't use any extra libraries, so any discrepancies\nof this sort should not matter to them. Autoconf tests whether it can\ncompile and run a simple program; if it can, it assumes that it is not\nin a cross-compilation situation.\n\nClearly differences between search paths matter, but they are not the\nsame as cross-compilation.\n\nIan\n",
"msg_date": "19 Jan 2001 21:39:59 -0800",
"msg_from": "Ian Lance Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Re: tinterval - operator problems on AIX"
},
{
"msg_contents": "Ian Lance Taylor writes:\n\n> > This would seem to be the right answer, but unfortunately Autoconf is not\n> > smart enough to detect marginal cross-compilation cases in all situations.\n> > Someone had zlib installed in a location where gcc would find it (compiles\n> > okay) but the run-time linker would not (does not run). This is not\n> > detected when AC_PROG_CC runs, but only later on after you have checked\n> > for the libraries.\n>\n> Hmmm. I would not describe that as a cross-compilation case at all.\n> The build machine and the host machine are the same.\n\nOnly for small values of \"same\". ;-) Sameness is not defined by the names\nbeing spelled identically or the physical coincidence of the hardware.\nThere are a million things you can do to a system that supposedly preserve\nbinary compatibility, such as installing vendor patches or changing a\nsetting in /etc. But if you run a test program is such a situation you're\ntesting the wrong system.\n\n> I would describe that as a case where the compiler search path and the\n> run time library search path are not the same.\n\nThe assumption is surely that the user would set LD_LIBRARY_PATH or\nconfigure his linker before running the program. But nothing guarantees\nthat he'll actually set the search path to /usr/local/lib, which is what\ngcc was searching in this situation.\n\n> Clearly differences between search paths matter, but they are not the\n> same as cross-compilation.\n\nIt's not the same as the classical Canadian Cross, but it's still cross\nenough to concern me. ;-)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 20 Jan 2001 16:56:12 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Re: tinterval - operator problems on AIX"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n\n> > > This would seem to be the right answer, but unfortunately Autoconf is not\n> > > smart enough to detect marginal cross-compilation cases in all situations.\n> > > Someone had zlib installed in a location where gcc would find it (compiles\n> > > okay) but the run-time linker would not (does not run). This is not\n> > > detected when AC_PROG_CC runs, but only later on after you have checked\n> > > for the libraries.\n> >\n> > Hmmm. I would not describe that as a cross-compilation case at all.\n> > The build machine and the host machine are the same.\n> \n> Only for small values of \"same\". ;-) Sameness is not defined by the names\n> being spelled identically or the physical coincidence of the hardware.\n> There are a million things you can do to a system that supposedly preserve\n> binary compatibility, such as installing vendor patches or changing a\n> setting in /etc. But if you run a test program is such a situation you're\n> testing the wrong system.\n\nI believe that terminology is important in technical issues, and in\nany case I'm pedantic by nature. So I am going to disagree. When you\nbuild a program on the system on which it is going to run, that is not\ncross-compilation. I agree that if you build a program on one system,\nand then copy it (either physically or via something like NFS) to\nanother system, and run it on that other system, then that may be\ncross-compilation.\n\nIt may be that the program in this case was being moved from one\nmachine to another, and I missed it. However, unless that was the\ncase, a difference in search path between the compiler and the runtime\nlinker is not a cross-compilation issue. A difference in where a\nlibrary appears from one machine to another is a cross-compilation\nissue.\n\nI believe that it is important to describe problems correctly in order\nto understand how to fix them correctly. If the problem is ``autoconf\ndoes not correctly diagnose a case of cross-compilation,'' that\nsuggests one sort of fix. If the problem is ``the compiler and the\nruntime linker have different search paths,'' that suggests a\ndifferent sort of fix.\n\n\nTo return to the issue which started this thread, I believe it was an\nissue concerning the behaviour of the mktime library call. The\nbehaviour of that library call is not going to be affected by the\nlocation of a shared library. The approach I suggested should suffice\nto address the particular problem at hand. Do you disagree?\n\n\nThe issue in which the compiler and the runtime linker use different\nsearch paths is fixed by either changing the compiler search path, to\nby default only choose shared libraries from directories which the\nruntime linker will search, or by changing the library installation\nprocedure, either put the library in /usr/lib, or on systems which\nsupport ld.so.conf arrange to put the directory in ld.so.conf.\n\nNote that if you link with -lz, at least some versions of the GNU\nlinker will issue a warning that the library will not be found by the\nruntime linker.\n\nIan\n",
"msg_date": "20 Jan 2001 15:27:15 -0800",
"msg_from": "Ian Lance Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Re: tinterval - operator problems on AIX"
},
{
"msg_contents": "Peter Eisentraut writes:\n > What if someone has a binary PostgreSQL package installed, then\n > updates his time library to something supposedly binary compatible\n > and finds out that PostgreSQL still doesn't use the enhanced\n > capabilities?\n\nYou are too generous. If someone downloads a binary package it should\nnot be expected to be able to take advantage of non standard features\nof the platform. It is reasonable that they should compile it from\nsource to get the most from it.\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWesternGeco -./\\.- by myself and does not represent\[email protected] -./\\.- opinion of Schlumberger, Baker\nhttp://www.crosswinds.net/~petef -./\\.- Hughes or their divisions.\n",
"msg_date": "Mon, 22 Jan 2001 10:33:35 +0000",
"msg_from": "Pete Forman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Re: tinterval - operator problems on AIX"
}
]
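A minimal C sketch of the cached run-time probe suggested in this thread might look like the following. The probe value mirrors the configure test quoted elsewhere in this discussion, but the function name and the tri-state caching are assumptions made for illustration, not code from the tree.

#include <stdio.h>
#include <stdbool.h>
#include <time.h>

/* One-time, cached run-time probe: can mktime() round-trip a date before
 * 1970?  The tri-state cache follows the suggestion in the thread. */
static bool
mktime_handles_pre_1970(void)
{
    static int cached = -1;          /* -1 = not probed, 0 = no, 1 = yes */

    if (cached < 0)
    {
        time_t     probe = (time_t) -50000000;   /* mid-1968 */
        struct tm *tm = localtime(&probe);

        cached = (tm != NULL && mktime(tm) == probe) ? 1 : 0;
    }
    return cached == 1;
}

int main(void)
{
    printf("mktime() handles pre-1970 dates: %s\n",
           mktime_handles_pre_1970() ? "yes" : "no");
    return 0;
}

Callers branch on the cached answer instead of a configure-time symbol, so a prebuilt binary adapts to whichever C library it actually runs against, which is the concern raised about configure-time checks.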
|
[
{
"msg_contents": "Hi,\n\nafter getting GiST works we're trying to use RD-Tree in\nour fulltext search application. We have universe of lexems\n(words in dictionaries) which is rather large, so\nwe need some compression to effectively use RD-Tree.\nWhen we did index support for int arrays we compressed\nset by range sets but it's not applicable if cardinality of\nuniverse set is very high. We're thinking about algorithm of\ncreating good signature for set of integers. This signature\nmust follow several rules:\n\n1). if set A is contained in set B, then sig(A) is also contained in sig(B)\n2). if set C is a union of set A and set B, then sig(C) is union of\n sig(A) and sig(B)\n\nAlso, signature should be good for effective tree construction (RD-Tree),\ni.e. it should be not degenerated for set size about 10^6 .\n\nWe need 1) for search operation and 2) for tree contructing.\n\nRight now we implementing so-called \"superimposed coding\" technique\n(D. Knuth, vol.3) which is based on idea to hash attribute values\ninto random k-bit codes in a b-bit field and to superimpose the codes for\neach attribute value in a record. This technique was proposed by Sven Helmer\n(\"Index Structures for Databases Containing Data Items with Set-valued Attr\nibutes\",1997, Sven Helmer, paper is available from my gist page)\nto represent sets in the index structures. This technique is great because\nof fixed length and great speed of calculation (used only bit operations).\nIt follows rules 1 and 2, but it's not good for big sets, because for\ninternal nodes and especially for root (a union of sets) we get signature\nfully consisting of 1.\nWe couldn't use arbitrarily long signature, because we have 8Kb limit of\nindex page size. For signature of variable size length we don't know\nhow to define 1) and 2)\n\nWhile we 're investigating the problem, I'd be glad\nto know some references, ideas.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Wed, 17 Jan 2001 15:44:59 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "SIGNATURE for int sets (need advise)"
}
]
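A minimal C sketch of the superimposed-coding scheme described above: each element is hashed to k pseudo-random bit positions in a b-bit field, and a set's signature is the OR of its elements' codes. Both rules then hold by construction: unioning two sets ORs their signatures, and a subset can never have a bit set that its superset lacks. The signature width, bits-per-element, and hash function below are illustrative assumptions, not values from any actual implementation.

#include <stdint.h>
#include <stdio.h>

#define SIG_BITS      2048              /* b: signature width (assumption)    */
#define SIG_WORDS     (SIG_BITS / 64)
#define BITS_PER_ELEM 4                 /* k: bits set per element (assumption) */

typedef struct { uint64_t w[SIG_WORDS]; } Signature;

/* Simple multiplicative hash; illustrative only. */
static uint32_t
hash_int(uint32_t x, uint32_t seed)
{
    x ^= seed * 0x9e3779b9u;
    x *= 2654435761u;
    x ^= x >> 16;
    return x;
}

/* Superimpose one element: set k pseudo-random bits. */
static void
sig_add(Signature *s, uint32_t elem)
{
    for (int i = 0; i < BITS_PER_ELEM; i++)
    {
        uint32_t bit = hash_int(elem, i) % SIG_BITS;
        s->w[bit / 64] |= UINT64_C(1) << (bit % 64);
    }
}

/* Rule 2: sig(A union B) = sig(A) | sig(B). */
static void
sig_union(Signature *dst, const Signature *src)
{
    for (int i = 0; i < SIG_WORDS; i++)
        dst->w[i] |= src->w[i];
}

/* Rule 1 as a filter: "maybe contained" when every bit of a is also in b. */
static int
sig_maybe_contained(const Signature *a, const Signature *b)
{
    for (int i = 0; i < SIG_WORDS; i++)
        if (a->w[i] & ~b->w[i])
            return 0;
    return 1;
}

int main(void)
{
    Signature a = {{0}}, b = {{0}}, c = {{0}};

    for (uint32_t x = 0; x < 10; x++)  sig_add(&a, x);   /* A = {0..9}  */
    for (uint32_t x = 5; x < 100; x++) sig_add(&b, x);   /* B = {5..99} */

    sig_union(&c, &a);                                   /* C = A union B */
    sig_union(&c, &b);

    printf("sig(A) contained in sig(A union B): %d\n", sig_maybe_contained(&a, &c));
    printf("sig(B) contained in sig(A):         %d\n", sig_maybe_contained(&b, &a));
    return 0;
}

The saturation problem mentioned in the message is also visible here: as more elements are superimposed, the signature fills with 1s, so upper tree levels (unions of many sets) stop being selective unless b grows, which collides with the 8Kb index page limit.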
|
[
{
"msg_contents": "\nGreetings! We have a script updating our database with thousands of\nentries on a daily basis. To speed up processing, we drop a\nconsistency check trigger before the update and recreate it\nafterwards. Occasionally, we get the following, even though the\ndatabase has no other live connections, and the trigger drop is the\nfirst statement:\n\ndrop trigger rprices_insupdel on rprices;\nDROP\nERROR: RelationClearRelation: relation 160298 modified while in use\n\n\nAny pointers most appreciated!\n\n-- \nCamm Maguire\t\t\t \t\t\[email protected]\n==========================================================================\n\"The earth is but one country, and mankind its citizens.\" -- Baha'u'llah\n",
"msg_date": "17 Jan 2001 09:58:08 -0500",
"msg_from": "Camm Maguire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Mysterious 7.0.3 error"
},
{
"msg_contents": "Camm Maguire <[email protected]> writes:\n> Greetings! We have a script updating our database with thousands of\n> entries on a daily basis. To speed up processing, we drop a\n> consistency check trigger before the update and recreate it\n> afterwards. Occasionally, we get the following, even though the\n> database has no other live connections, and the trigger drop is the\n> first statement:\n\n> drop trigger rprices_insupdel on rprices;\n> DROP\n> ERROR: RelationClearRelation: relation 160298 modified while in use\n\nAre you doing other schema changes (like other instances of this script)\nin parallel? Or vacuums of system tables? Those are the cases I recall\nthat might trigger this problem.\n\n> Any pointers most appreciated!\n\nLive with it until 7.1 :-(.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 17 Jan 2001 10:31:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mysterious 7.0.3 error "
},
{
"msg_contents": "Greetings, and thanks for your reply!\n\nTom Lane <[email protected]> writes:\n\n> Camm Maguire <[email protected]> writes:\n> > Greetings! We have a script updating our database with thousands of\n> > entries on a daily basis. To speed up processing, we drop a\n> > consistency check trigger before the update and recreate it\n> > afterwards. Occasionally, we get the following, even though the\n> > database has no other live connections, and the trigger drop is the\n> > first statement:\n> \n> > drop trigger rprices_insupdel on rprices;\n> > DROP\n> > ERROR: RelationClearRelation: relation 160298 modified while in use\n> \n> Are you doing other schema changes (like other instances of this script)\n> in parallel? Or vacuums of system tables? Those are the cases I recall\n> that might trigger this problem.\n> \n\nNo, just one job, this job, at a time.\n\n> > Any pointers most appreciated!\n> \n> Live with it until 7.1 :-(.\n> \n\nWill do. Thanks!\n\n> \t\t\tregards, tom lane\n> \n> \n\n-- \nCamm Maguire\t\t\t \t\t\[email protected]\n==========================================================================\n\"The earth is but one country, and mankind its citizens.\" -- Baha'u'llah\n",
"msg_date": "17 Jan 2001 13:49:20 -0500",
"msg_from": "Camm Maguire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Mysterious 7.0.3 error"
}
]
|
[
{
"msg_contents": "> > > The correct thing to do instead of the #if defined (_AIX) would be to use\n> > > something like #ifdef NO_NEGATIVE_MKTIME and set that with a configure.\n> > ...Andreas, can you suggest a simple configure\n> > test to be used?\n> #include <time.h>\n> int main()\n> {\n> struct tm tt, *tm=&tt;\n> int i = -50000000;\n> tm = localtime (&i);\n> i = mktime (tm);\n> if (i != -50000000) /* on AIX this check could also be (i == -1) */\n> {\n> printf(\"ERROR: mktime(3) does not correctly support datetimes before 1970\\n\");\n> return(1);\n> }\n> }\n\nOn my Linux box, where the test passes, the compiler is happier if \"i\"\nis declared as time_t. Any problem on other platforms if we change this?\n\n - Thomas\n",
"msg_date": "Wed, 17 Jan 2001 15:26:18 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: tinterval - operator problems on AIX"
},
{
"msg_contents": "\n> > The correct thing to do instead of the #if defined (_AIX) would be to use\n> > something like #ifdef NO_NEGATIVE_MKTIME and set that with a configure.\n> > Thomas, are you volunteering ?\n> \n> Actually, I can volunteer to be supportive of your efforts ;) I'm\n> traveling at the moment, and don't have the original thread(s) which\n> describe in detail what we need to do for platforms I don't have.\n> \n> If Peter E. would be willing to do a configure test for this mktime()\n> problem, then you or I can massage the actual code. Peter, is this\n> something you could pick up?\n> \n> I do not have the original thread where Andreas describes the behavior\n> of mktime() on his machine. Andreas, can you suggest a simple configure\n> test to be used?\n\n#include <time.h>\nint main()\n{\n struct tm tt, *tm=&tt;\n int i = -50000000;\n tm = localtime (&i);\n i = mktime (tm);\n if (i != -50000000) /* on AIX this check could also be (i == -1) */\n {\n printf(\"ERROR: mktime(3) does not correctly support datetimes before 1970\\n\");\n return(1);\n }\n}\n\nAndreas\n",
"msg_date": "Wed, 17 Jan 2001 16:50:37 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": false,
"msg_subject": "AW: AW: AW: AW: Re: tinterval - operator problems on AI\n\tX"
},
{
"msg_contents": "Zeugswetter Andreas SB writes:\n\n> > I do not have the original thread where Andreas describes the behavior\n> > of mktime() on his machine. Andreas, can you suggest a simple configure\n> > test to be used?\n>\n> #include <time.h>\n> int main()\n> {\n> struct tm tt, *tm=&tt;\n> int i = -50000000;\n> tm = localtime (&i);\n> i = mktime (tm);\n> if (i != -50000000) /* on AIX this check could also be (i == -1) */\n> {\n> printf(\"ERROR: mktime(3) does not correctly support datetimes before 1970\\n\");\n> return(1);\n> }\n> }\n\nYou don't need to put this check into configure, you can just do the check\nafter mktime() is used.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 17 Jan 2001 17:53:03 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: AW: Re: tinterval - operator problems on\n AI X"
}
]
|
[
{
"msg_contents": "> > I have been studying DeadLockCheck for most of a day now, \n> > and I doubt that this is the only bug lurking in it.\n> > I think that we really ought to throw it away and start\n> > over, because it doesn't look to me at all like a standard\n> > deadlock-detection algorithm. The standard way of doing\n> \n> Go ahead. Throw away my code. *sniff* :-)\n\nAnd my changes from the days of 6.5 -:)\n\nVadim\n",
"msg_date": "Wed, 17 Jan 2001 09:25:49 -0800",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: DeadLockCheck is buggy"
}
]
|
[
{
"msg_contents": "Cursors are not supported in PL/pgSQL. I don't see a TODO item to fix\nthis.\n\nFixing the syntax to support cursors is easy. The problem then is\nthat PL/pgSQL uses SPI, and SPI does not support cursors. In spi.c\nthere is a bit of code for cursor support, with the comment\n\t/* Don't work currently */\n\nIs adding cursor support to SPI a bad idea? Is adding cursor support\nto PL/pgSQL undesirable?\n\nCan anybody sketch the problems which would arise when adding cursor\nsupport to SPI?\n\nThanks.\n\nIan\n",
"msg_date": "17 Jan 2001 12:49:00 -0800",
"msg_from": "Ian Lance Taylor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cursors in PL/pgSQL"
}
]
|
[
{
"msg_contents": "Wow, this looks great, and it worked the first time too. I will commit\nif no one makes objects.\n\n\n> > > > Is there a way to relate this to the names of the databases? Why the\n> > > > change? Or am I missing something key here..\n> > >\n> > > See the thread on the renaming in the archives. In short, this is part\n> > > of Vadim's work on WAL -- the new naming makes certain things easier for\n> > > WAL.\n> > >\n> > > Utilities to relate the new names to the actual database/table names\n> > > _do_ need to be written, however. The information exists in one of the\n> > > system catalogs now -- it just has to be made accessible.\n> >\n> > Yes, I am hoping to write this utility before 7.1 final. Maybe it will\n> > have to be in /contrib.\n> \n> I just finished writing such an app. Take a look. It's in a format\n> that can be put in /contrib. Let me know if you want any changes made,\n> etc. Feel free to use any of the code you wish.\n> \n> http://www.crimelabs.net/postgresql.shtml\n> \n> - Brandon\n> \n> b. palmer, [email protected]\n> pgp: www.crimelabs.net/bpalmer.pgp5\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Jan 2001 17:49:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: $PGDATA/base/???"
},
{
"msg_contents": "On Wed, Jan 17, 2001 at 05:49:36PM -0500, Bruce Momjian wrote:\n> Wow, this looks great, and it worked the first time too. I will commit\n> if no one makes objects.\n> \n\nI object. The code displays oids and tablenames or relnames. Oid is just\nthe initial, default filename for tables, and may change to something other\nthan the oid. Currently, the reindex code is the only place that could change\nthe relfilenode without changing the oid, but I think there may be more\nin the future.\n\nHere's a patch to Brandon's code (completely untested, BTW):\n\nRoss",
"msg_date": "Wed, 17 Jan 2001 17:27:59 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: $PGDATA/base/???"
},
{
"msg_contents": "> I object. The code displays oids and tablenames or relnames. Oid is just\n> the initial, default filename for tables, and may change to something other\n> than the oid. Currently, the reindex code is the only place that could change\n> the relfilenode without changing the oid, but I think there may be more\n> in the future.\n\nLooks great, and I agree. Did not know that little piece of information.\nI have made the changed to my code, here's the new version. I have\ntested this one and updated the web page.\n\n- brandon\n\n\nb. palmer, [email protected]\npgp: www.crimelabs.net/bpalmer.pgp5",
"msg_date": "Wed, 17 Jan 2001 18:58:44 -0500 (EST)",
"msg_from": "bpalmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: $PGDATA/base/???"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> I object. The code displays oids and tablenames or relnames. Oid is just\n> the initial, default filename for tables, and may change to something other\n> than the oid. Currently, the reindex code is the only place that could change\n> the relfilenode without changing the oid, but I think there may be more\n> in the future.\n\nRight, relfilenode is the thing to look at, not OID. I believe we are\nthinking of using relfilenode updates for a number of things in the\nfuture --- CLUSTER and faster index rebuilds in VACUUM are two thoughts\nthat come to mind ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 17 Jan 2001 20:44:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: $PGDATA/base/??? "
},
{
"msg_contents": "I have added this to /contrib for 7.1.\n\n> > I object. The code displays oids and tablenames or relnames. Oid is just\n> > the initial, default filename for tables, and may change to something other\n> > than the oid. Currently, the reindex code is the only place that could change\n> > the relfilenode without changing the oid, but I think there may be more\n> > in the future.\n> \n> Looks great, and I agree. Did not know that little piece of information.\n> I have made the changed to my code, here's the new version. I have\n> tested this one and updated the web page.\n> \n> - brandon\n> \n> \n> b. palmer, [email protected]\n> pgp: www.crimelabs.net/bpalmer.pgp5\n> \nContent-Description: \n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Jan 2001 22:39:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: $PGDATA/base/???"
},
{
"msg_contents": "On Tue, 23 Jan 2001, Bruce Momjian wrote:\n\n> I have added this to /contrib for 7.1.\n>\nNot sure if you know this, but you checked in the code compiled and w/\nthe .o file...\n\nFYI.\n\nb. palmer, [email protected]\npgp: www.crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Tue, 23 Jan 2001 23:19:56 -0500 (EST)",
"msg_from": "bpalmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: $PGDATA/base/???"
},
{
"msg_contents": "Thanks. Removed.\n\n> On Tue, 23 Jan 2001, Bruce Momjian wrote:\n> \n> > I have added this to /contrib for 7.1.\n> >\n> Not sure if you know this, but you checked in the code compiled and w/\n> the .o file...\n> \n> FYI.\n> \n> b. palmer, [email protected]\n> pgp: www.crimelabs.net/bpalmer.pgp5\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 00:06:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: $PGDATA/base/???"
}
]
|